Tag: HPE

  • Red Tuesday for Dell and HPE. Analysts prophesy the end of the IT buying eldorado

    The IT hardware sector, which had been riding the wave of artificial intelligence promises for the past few quarters, collided with hard market reality on Tuesday. Morgan Stanley analysts, in a note that immediately cooled sentiment on Wall Street, downgraded the entire industry to ‘cautious’. This is a clear signal to CFOs and investors that the period of easy gains is over, and 2026 could bring a painful review of sales plans.

    Investment bank experts warn of what they have termed a ‘perfect storm’, made up of three critical factors: a marked slowdown in demand, resurgent component cost inflation and overstretched valuations of technology companies. The market reaction was instantaneous. Shares of infrastructure giants such as Hewlett Packard Enterprise, Dell Technologies and NetApp dived by around 5 per cent, dragging the entire industry index down with them. Logitech also came under pressure, with its recommendation cut to ‘underweight’.

    For business decision-makers, however, the most relevant information comes not from the stock prices themselves, but from the hard data underlying this discount. A recent Morgan Stanley survey indicates that corporate IT leaders plan to increase hardware budgets in 2026 by just a token 1 per cent, the weakest reading in 15 years, excluding the anomaly of the pandemic period. What’s more, surveys of sales intermediaries (VARs) suggest that between 30 and 60 per cent of business customers are prepared to drastically reduce planned purchases of servers, PCs and storage if manufacturers continue to pass on rising component costs.

    While investment in AI-dedicated infrastructure remains a bright spot on the spending map, it cannot fully offset broader macroeconomic concerns. Uncertainty is compounded by the Donald Trump administration’s tariff announcements and rising memory prices, as highlighted by Citigroup in its separate analysis. Analysts conclude that with such elastic demand and rigid production costs, the risk of a downward earnings revision for 2026 is now higher than ever. For companies, this means they need to revise their purchasing strategies and prepare for tougher negotiations with technology suppliers, who will struggle to maintain margins.

  • Are the giants catching up? Ubiquiti and Huawei grow in the shadow of WLAN market leaders

    The wireless market continues to grow, but the euphoria of the start of the year is clearly subsiding. Although the third quarter of 2025 again closed solidly higher, the real revolution is happening not in the overall sales columns but inside the leaderboard. While the established players are holding their positions or losing share, the “contenders” are posting results that the partner channel cannot ignore. Are we witnessing a permanent reshuffling of forces in the network industry?

    As recently as the first half of 2025, the Enterprise WLAN market was still posting double-digit growth, whetting the appetites of integrators and distributors. However, the third quarter brought a slight cooling off. According to the latest IDC data, the sector grew by 7.8% year-on-year, generating sales of $2.7 billion. This is still a solid result, but clearly lower than the 10.5% or 13.4% recorded in the first and second quarters respectively.

    Looking only at overall sales volumes can be misleading, however. Beneath the surface of stable growth, a fierce battle for customers is raging, and in it the former hegemons increasingly have to watch their backs.

    Stabilisation at the top, explosion in the “second line”

    The biggest surprise of the past quarter is not how much the market has grown, but who has gained the most from this growth. The vendor landscape has clearly polarised.

    Cisco still reigns supreme at the top. With a market share of 37.4%, the US giant remains the default choice for the largest corporations. However, its revenues in the period under review fell by 3% year-on-year (to US$992.4 million). This may suggest some saturation in the premium segment or prolonged decision cycles at the largest customers, who are holding back on further investments.

    In second place on the podium, with a 19.3% market share, is HPE. The company, which finalised its acquisition of Juniper in July 2025, saw modest revenue growth of 1.6%. While the merger may change the balance of power in the long term, for now we are seeing stabilisation rather than synergies that would catapult sales upwards.

    However, the real dynamics can only be seen behind the leaders. The title of ‘dark horse’ of the third quarter unquestionably belongs to Ubiquiti. The manufacturer recorded impressive revenue growth of as much as 47.1%, achieving sales of over USD 300 million and capturing 11.3% of the market. This is a clear sign that customers – especially in the SME and mid-market sectors – are increasingly looking for solutions that offer better value for money, abandoning expensive licences and complex enterprise ecosystems in favour of simpler-to-use platforms.

    Huawei is performing equally impressively. Despite global challenges and trade barriers, the Chinese giant increased WLAN revenues by 33.7% and now controls 9% of the market. The list is rounded off by CommScope (Ruckus Networks), which also boasts a strong result – its 18% growth confirms that in specific verticals (e.g. hospitality or high-density deployments), Ruckus still has a loyal customer base.

    Europe buys the most

    For the Polish IT channel, however, it is not only global brands that are key, but also the geography of sales. Here we have excellent news. It is the EMEA region (Europe, Middle East, Africa) that is currently driving the global WLAN market.

    In the third quarter, the market in our region grew by as much as 12.8% year-on-year. By comparison, the Americas, traditionally the bastion of the largest investments, grew by “only” 6%, and the Chinese market even contracted (-1.3%).

    What does this mean for Polish resellers and integrators? Europe is in a phase of intensive infrastructure modernisation. Companies are catching up technologically and opening up budgets for new networks more readily than their US or Asian counterparts. This is a moment worth seizing, especially by offering upgrades to the latest standards.

    Wi-Fi 7: It’s no longer a novelty, it’s the standard

    Where is this demand coming from if the market is slowing slightly at the global level? The answer is simple: the 6 GHz band. Business customers have realised that it is no longer possible to work efficiently in the congested 2.4 GHz and 5 GHz bands.

    Adoption of the new standards is progressing rapidly. Wi-Fi 7 already accounted for 31.3% of market revenue in Q3 2025 (a jump from 21% a quarter earlier!). If we add in the Wi-Fi 6E standard (24.5% share), we find that more than half of the money spent on wireless networks goes to devices that support the 6 GHz band.

    Companies have stopped treating Wi-Fi 7 as ‘tomorrow’s technology’. It has become the ‘for today’ standard. The tripling of bandwidth and new radio spectrum capabilities are arguments that convince IT departments to replace their access point fleets, even if the previous generation has not yet been fully depreciated.

    What is the modern customer looking for?

    Analysing market data and the opinions of IDC experts, it can be concluded that the very criteria for choosing a supplier are changing. The era of “buying boxes” (Access Points) is definitely over.

    The modern organisation is not looking for connectivity alone – it is looking for an intelligent platform. Market experts point out that the key to customers’ wallets today is a holistic approach: systems that offer:

    • Deep integration: WLAN must be part of a larger network stack, not a separate island.
    • AI and automation: With increasing network complexity, administrators need tools that detect and fix problems themselves (AIOps).
    • Built-in security: Wi-Fi becomes the first line of defence and security functions must be integrated into the access point.

    Perhaps this approach is the secret of the success of companies like Ubiquiti – they offer sufficiently advanced management in a model that is easy to deploy and maintain, without the need to maintain an army of certified engineers.

  • HPE accelerates integration after Juniper acquisition. Goal: Autonomous network

    Just five months after finalising its high-profile acquisition of Juniper Networks, Hewlett Packard Enterprise (HPE) is demonstratively proving that it has no intention of wasting time on lengthy merger processes. At the Discover Barcelona 2025 conference, the company laid its cards on the table, presenting a unified network strategy. The giant’s main goal is to move beyond traditional infrastructure management towards fully autonomous networks, driven by the integrated artificial intelligence mechanisms of both companies.

    A key element of the new offering is the deep integration of the Aruba Networking Central and Juniper Mist platforms. Rather than maintaining two separate entities, HPE is creating a common AIOps operational layer. In practice, this means that the Large Experience Model (LEM) from Juniper, trained on billions of data points, will now power Aruba’s analytics. In the other direction, Aruba’s Agentic Mesh technology will give Mist deeper pattern recognition. For CIOs and channel partners, this signals that HPE is moving towards a model where the network not only reports errors but fixes them itself before they affect the end user.

    HPE’s ambitions go beyond software, however. The company is launching dedicated hardware for demanding AI workloads. The new QFX5250 switch, which is liquid-cooled and offers a throughput of 102.4 Tbps, is a direct response to the growing demand from data centres building ‘AI factories’. Completing the portfolio is the MX301 edge router, designed to process inference traffic closer to the data source. This move, combined with the expansion of the partnership with Nvidia, positions HPE as a provider of a complete backbone for artificial intelligence infrastructure, capable of supporting distributed computing clusters.

    The entire ecosystem is tied together by the OpsRamp platform, which is designed to act as a nerve centre, integrating signals from servers, storage and networking into a single management view. To accelerate adoption of these solutions in the partner channel, HPE Financial Services has launched aggressive financing programmes, including a zero interest rate option for AIOps licences. The speed with which HPE is combining Aruba and Juniper technologies suggests a determination to dominate the hybrid IT market before competitors have time to react to the new balance of power.

  • HPE Cray brings AI and HPC together. New generation of supercomputers relies on liquid cooling and choice architecture

    Hewlett Packard Enterprise unveiled the next generation of its supercomputing solutions yesterday (13 November), making a clear strategic bet. In an era of resource-intensive AI models that are redefining data centres, HPE is unifying its HPE Cray architecture to serve both new AI workloads and traditional scientific simulation (HPC). This is a direct response to growing demand from research labs, government agencies and enterprises that no longer want to maintain separate, costly silos for the two worlds.

    The company has announced that the HPE Cray Supercomputing GX5000 platform, introduced in October, has already won key customers. The German supercomputing centres, HLRS in Stuttgart (the ‘Herder’ system) and LRZ in Bavaria (the ‘Blue Lion’ system), have chosen it for their next-generation machines. Their motivations are clear: they need a platform that seamlessly combines simulation with AI and is extremely energy-efficient at the same time. Prof. Dieter Kranzlmüller from the LRZ emphasises that direct liquid cooling (up to 40°C) will allow the campus to reuse waste heat.

    At the core of Thursday’s announcement are three new liquid-cooled compute modules. HPE is betting on flexibility and partnership here, offering configurations based both on the next generation of NVIDIA Rubin GPUs and Vera CPUs (in the GX440n module) and on competing AMD Instinct MI430X accelerators and ‘Venice’ EPYC processors (in the GX350a and GX250 modules). Compute density and full liquid cooling are key to addressing the growing energy challenges.

    The supercomputer is not just about computing power, however. HPE is upgrading the entire platform. The HPE Slingshot 400 network is expected to provide the 400 Gbps throughput needed to scale AI jobs across thousands of GPUs. New HPE Cray K3000 storage systems, based on ProLiant servers and open-source DAOS (Distributed Asynchronous Object Storage) software, are in turn expected to address data access bottlenecks, which is critical for AI models.

    The whole is tied together by updated HPE Supercomputing Management Software, emphasising management of multi-tenant environments, virtualisation and detailed control of energy consumption across the system.

    While the announcement is strategically significant and secures HPE’s position in future multi-year contracts, IT market watchers will have to be patient. Most of the unveiled compute modules (with Rubin and MI430X chips) and software updates will not be available until “early 2027”. Slightly earlier, in “early 2026”, the K3000 storage is expected to arrive. This is a clear indication that yesterday’s announcement is primarily a roadmap presentation and a response to competitors’ plans, rather than a launch of products that companies can order in the coming quarters.

  • Quantum computers on a massive scale. Nobel laureate and HPE announce groundbreaking plan

    John M. Martinis, a recent winner of the Nobel Prize in Physics (2025) and one of the architects of Google’s breakthrough in ‘quantum supremacy’, is starting a new chapter. This time his goal is not a laboratory record, but the creation of a practical, mass-produced quantum supercomputer. On Monday, he announced the formation of the Quantum Scaling Alliance, bringing in the heavy artillery: supercomputing giant HPE and key players in the semiconductor supply chain.

    The initiative is a direct response to the industry’s biggest pain point. Quantum computers, promising a revolution in chemistry or medicine, remain largely one-off, hand-built machines. As Martinis put it, since the 1980s quantum chips have been produced “in an artisanal way”. The Quantum Scaling Alliance aims to change this by moving the production of qubits from laboratories to factories.

    That’s why the presence in the alliance of Applied Materials, a supplier of chip-making machines, and Synopsys, a leader in chip-design software (EDA), is crucial. The idea is to use the same sophisticated tools that today produce millions of processors for smartphones and AI servers to build quantum systems. This signals the industry’s desire to move “to a more standardised, professional model”.

    However, building stable qubits at scale is only half the battle. The real challenge, the partners emphasise, lies in integration and scaling. Masoud Mohseni, head of the quantum team at HPE, tones down the enthusiasm, noting that moving from hundreds to thousands of qubits raises entirely new issues. “People naively think [scaling] is linear. This is simply not true,” Mohseni stated.

    HPE’s task will primarily be to integrate delicate quantum circuits with classical supercomputers. These classical systems are to manage the quantum hardware in real time and handle the crucial error-correction process, without which qubits are useless. The consortium also includes specialised companies such as Riverlane and 1QBit (responsible for error correction) and Quantum Machines (control systems), which shows that the aim is to build a complete, commercial technology stack.
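
    Error correction is worth a moment of explanation, since it is the reason classical supercomputers sit so close to the quantum hardware. The toy sketch below illustrates the underlying idea with a classical 3-bit repetition code and majority-vote decoding; real quantum error correction (surface codes and the like) is far more involved, and this is not the alliance’s actual method, merely the intuition behind it.

    ```python
    # Toy illustration of the error-correction idea: a classical 3-bit
    # repetition code with majority-vote decoding. It shows why redundancy
    # plus fast classical decoding matters as qubit counts grow.
    import random

    def encode(bit: int) -> list[int]:
        """Encode one logical bit as three physical bits."""
        return [bit] * 3

    def apply_noise(bits: list[int], p: float) -> list[int]:
        """Flip each physical bit independently with probability p."""
        return [b ^ (random.random() < p) for b in bits]

    def decode(bits: list[int]) -> int:
        """Majority vote recovers the logical bit if at most one bit flipped."""
        return int(sum(bits) >= 2)

    def logical_error_rate(p: float, trials: int = 100_000) -> float:
        errors = sum(decode(apply_noise(encode(0), p)) for _ in range(trials))
        return errors / trials

    # With a 5% physical error rate, the logical error rate falls to roughly
    # 3*p^2 (about 0.7%) -- redundancy buys reliability, at the cost of scale.
    print(f"logical error rate: {logical_error_rate(0.05):.2%}")
    ```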

  • HPE promises AI, but shows declines. Wall Street is not buying this narrative

    Hewlett Packard Enterprise is announcing a new chapter in its strategy – and causing concern on Wall Street at the same time. The company unveiled forecasts for fiscal 2026 that turned out to be cooler than analysts had expected. The announced revenue growth in the 5-10% range contrasts with the earlier market consensus of more than 17%. HPE shares were down nearly 8.5% in yesterday’s after-hours trading.

    The reason? HPE is entering a phase of fundamental redevelopment. From next year, the company will combine its key segments – servers, hybrid cloud and financial services – into a single pillar: Cloud & AI. It is a move that is expected to reposition HPE to compete for the growing artificial intelligence infrastructure market. As CEO Antonio Neri stressed, the new structure is expected to create “more profitable growth” and deliver higher shareholder value.

    Central to the transformation is the acquisition of Juniper Networks, which is expected to double the scale of HPE’s networking business and narrow the gap with Cisco. However, the integration will not come without costs – the company has announced $240 million in job cuts. This signals that HPE is not only investing, but also looking for room for efficiency.

    Despite ambitions in AI and networking, earnings forecasts disappointed the market. Adjusted earnings per share of USD 2.20-2.40 remain below expectations. Similarly, free cash flow, estimated at USD 1.5-2 billion, hovers close to the market median.

    The market for AI infrastructure – from data centres to high-bandwidth networks – is growing rapidly, fuelled by Big Tech investments in generative models. But HPE is starting from the position of a ‘traditional’ player that needs to convince investors it can rewrite its DNA faster than Nvidia’s CUDA ecosystem and Microsoft’s cloud ambitions are growing.

    This transformation could prove to be HPE’s biggest test since its spin-off from Hewlett-Packard, with the company gambling on moving from being a ‘legacy’ infrastructure provider to becoming a partner of the AI age. The share price suggests that Wall Street will wait to applaud – until it sees actual financial results, not just a reorganisation.

  • Maciej Bocian new director of data and storage at HPE Polska

    Hewlett Packard Enterprise is betting on the development of data services and storage infrastructure, entrusting the leadership of this area to Maciej Bocian, a manager with more than 20 years of experience in the data centre and digital transformation industry. His joining HPE’s Polish office signals not only a strengthening of the local strategy, but also the growing importance of data in the group’s offering.

    Bocian has spent years building competence at companies such as VAST Data, Pure Storage, NetApp and Bull, and before that at Cisco and IBM, where he was in charge of IT architecture and enterprise sales. He managed teams in the Polish market and in the CEE region, focusing on scaling the business and introducing new data-driven service models.

    Interestingly, his career has come full circle – as he himself points out, he started almost 30 years ago as an intern at what was then Hewlett-Packard. His return to HPE comes at a time when the infrastructure market is undergoing accelerated change driven by artificial intelligence and the growing need for real-time computing.

    For HPE, it is also a step towards strengthening its position in the data services segment – an area that is increasingly defined not by ‘capacity’ but by its ability to support AI, automation and operational resilience. Globally, the company is investing in the HPE GreenLake platform and all-flash solutions, targeting customers upgrading data centres and building their own AI models.

  • Storage for data centres – 5 leaders in 2025

    Today’s data centre has ceased to be just a back-office IT facility; it has become an engine that drives innovation. The explosion of data, driven largely by generative artificial intelligence, is forcing a fundamental re-evaluation of the storage infrastructure.

    In 2025, choosing a vendor is a strategic decision that goes beyond a hardware purchase. Market leaders are no longer competing on petabytes alone; they are delivering platforms that offer a cloud operating model, guaranteed cyber resilience and an architecture built with AI requirements in mind.

    New evaluation criteria: megatrends shaping the market

    To understand which vendors will dominate in 2025, it is necessary to identify the key market forces. Four megatrends define new criteria for evaluating storage platforms:

    • Ubiquitous artificial intelligence: AI workloads, especially training large language models (LLMs), require powerful GPU-based infrastructure and high-performance, low-latency storage. Demand is focused on all-flash NVMe architectures and scale-out designs that can power GPUs without creating bottlenecks. At the same time, vendors are integrating AIOps mechanisms directly into their platforms, automating management and enabling natural language administration.
    • Sustainability mandate: Data centres are forecast to consume more than 1,000 TWh of energy by 2026, leading to energy supply constraints and increased operating costs. Energy efficiency is becoming a key factor in total cost of ownership (TCO). These pressures are driving innovations such as the adoption of QLC flash memory, business models that reduce e-waste and the standard use of liquid cooling.
    • The cyber resilience imperative: The rise of ransomware attacks is making the storage layer the last line of defence. The focus is shifting from simple backup to end-to-end resilience, including real-time threat detection, immutable snapshots and guaranteed, fast recovery.
    • Cloud operating model: the market is increasingly clamouring for a cloud-like experience for on-premises infrastructure, as seen in the rise in popularity of Storage-as-a-Service (STaaS) offerings. IT teams want to manage infrastructure from a single, centralised console in the cloud, automate resource allocation and consume resources on demand, regardless of physical location.

    Analysis of the top 5 suppliers

    Based on the new criteria, five suppliers stand out as leaders ready to meet the challenges of 2025.

    Dell Technologies: market leader with a comprehensive portfolio

    Dell maintains its leadership position in terms of external storage market share. Its strategy is to leverage scale and a comprehensive portfolio to support a wide range of traditional and modern workloads.

    Key products:

    • PowerStore: The flagship mid-range all-flash platform, which recently gained support for the Nutanix Cloud Platform, offering customers an alternative to VMware.
    • PowerFlex: a software-defined infrastructure (SDS) platform designed for performance and scalability, ideal for consolidating diverse corporate workloads.
    • PowerScale: a scale-out NAS solution built to handle unstructured data, making it a key component in AI/ML workflows.

    A differentiator for 2025: Breadth of offering and an established market position. Dell is the choice for large enterprises that need a single supplier to support diverse workloads, from the network edge to the cloud.

    Pure Storage: an innovator in simplicity and sustainability

    Pure Storage’s strategy is based on operational simplicity, customer experience and a subscription model that eliminates the traditional storage lifecycle.

    Key elements:

    • Evergreen® model: A subscription that provides uninterrupted software and hardware updates, directly impacting TCO and reducing e-waste.
    • Energy efficiency: Pure’s architecture delivers up to 85% lower power consumption compared with competing all-flash arrays.
    • Pure1 AI Copilot: AI assistant for storage management, allowing administrators to use natural language for troubleshooting and planning.

    Distinction for 2025: A focus on customer experience, delivered through the Evergreen model and simple management. Pure is the choice for organisations that prioritise TCO, operational simplicity and sustainability.

    NetApp: champion of the hybrid multi-cloud

    NetApp’s strategy is to dominate the hybrid, multi-cloud data fabric through a software-first approach.

    Key technologies:

    • ONTAP software: the heart of the NetApp ecosystem, delivering unified data services that run consistently locally and natively across the three largest public clouds (AWS, Azure, Google Cloud).
    • Cyber resilience: NetApp offers autonomous ransomware protection with guaranteed recovery from snapshots, using AI/ML to detect anomalies in real time.
    • BlueXP: A unified management console based on AIOps that allows the entire data infrastructure to be managed from a single interface.

    Distinction for 2025: Software. No other vendor provides a more seamless and consistent data management experience across the hybrid multi-cloud landscape. NetApp is the choice for organisations strategically committed to a hybrid architecture.

    Hewlett Packard Enterprise (HPE): On-premises cloud architect

    HPE’s strategy is to deliver its entire IT infrastructure as a service through the HPE GreenLake platform, providing a cloud operational experience in a local environment.

    Key technologies:

    • HPE GreenLake: A platform that underpins HPE’s strategy, providing a unified, cloud-based console for managing and consuming IT resources in a pay-per-use model.
    • HPE Alletra Storage MP: A hardware platform with a disaggregated scale-out architecture, meaning that compute resources and capacity can be scaled independently, providing flexibility and cost efficiency.
    • Availability guarantee: HPE guarantees 100% data availability for its critical Alletra systems.

    A differentiator for 2025: HPE’s vision is a radical departure from traditional infrastructure sales. It is the choice for enterprises that are fully committed to an on-premises cloud strategy and want to manage their entire infrastructure through a single as-a-service platform.

    IBM: a bastion of corporate resilience and security

    IBM’s storage strategy focuses on unparalleled cyber resilience, performance and integration in complex, often highly regulated IT environments.

    Key technologies:

    • FlashCore modules (FCMs): Unlike competitors using standard SSDs, IBM designs its own modules that handle tasks such as compression, encryption and real-time threat detection at the drive level, without affecting performance.
    • Ransomware detection: FlashSystem uses machine learning models running on FCMs to detect anomalies indicative of ransomware in less than a minute.
    • Safeguarded Copy: A function that creates immutable, isolated snapshots that cannot be modified or deleted during an attack.

    Distinction for 2025: In an era of escalating cyber threats, IBM’s deep focus on security engineering is key. It is the choice for large enterprises in regulated industries where data integrity and recoverability are non-negotiable.

    Choosing a partner, not just a product

    The storage decision in 2025 is not about which device has the best specifications, but which platform best fits an organisation’s strategic goals for AI, cloud and resilience.

    The market has evolved from selling hardware to providing end-to-end intelligent platforms. The right choice is a supplier that acts as a partner, offering a platform that reduces complexity, mitigates risk and provides a basis for future innovation.

  • AI boom drives HPE results. Company raises forecasts

    Hewlett Packard Enterprise (HPE) is clearly benefiting from the ongoing artificial intelligence boom, as reflected in its third quarter financial results.

    The company not only beat analysts’ expectations, but also significantly raised its annual forecasts, signalling that its strategic investments in AI servers and network infrastructure are starting to pay off.

    The main driver of HPE’s performance was its server segment, whose revenue grew 16% year-on-year to $4.9bn. This jump is directly related to the growing demand for the computing power needed to train and deploy generative AI models.

    HPE is successfully capitalising on this trend by offering AI-optimised systems with the latest Nvidia GPUs.

    Even more impressive growth was seen in the networking division, where revenues shot up 54% to $1.7bn. This is a result of the recent $14bn acquisition of Juniper Networks, which was finalised in July.

    The deal has significantly strengthened HPE’s position in the rapidly growing networking sector, which is growing faster than the traditional server hardware market.

    For the third quarter ended 31 July, HPE’s total revenue was $9.14bn, against a market expectation of $8.53bn. The company also calmed nerves internally by reaching an agreement with influential activist investor Elliott Investment Management.

    The settlement resulted in industry veteran Robert Calderoni joining the board.

    Strong results and stability have translated into strong optimism. HPE forecasts fourth quarter revenue of between $9.7bn and $10.1bn, above analyst consensus ($9.54bn).

    More importantly, the company has raised its full-year revenue growth forecast for fiscal 2025 from the previous 7-9% to 14-16%. This is a clear signal that HPE intends to take full advantage of the favourable boom in the AI and networking markets.

  • AI is not just about the cloud. The success of Dell and HPE is evidence of the renaissance of private server rooms

    The initial phase of the AI revolution solidified a simple and clear picture of the market. At the top was Nvidia, providing the technological ‘shovel’ in the form of GPUs, with cloud giants Amazon, Microsoft and Google just below offering access to ‘goldfields’ of computing power.

    However, the latest financial results from traditional hardware vendors such as Dell and HPE show that this picture was incomplete. The centre of gravity in the key enterprise segment is beginning to shift towards private infrastructure, signalling that the market is entering a new, more mature phase in which control, security and cost are becoming the most valuable currency.

    The hard financial figures leave no illusions. Dell’s server and network segment grew by an impressive 69% in the last quarter, an absolutely exceptional result in such a mature sector.

    This jump translated into record revenues across the company of $29.8bn. At the same time, Hewlett Packard Enterprise reports that AI-dedicated systems generated $1.6bn in revenue, and its entire server segment grew solidly by 16%.

    We’re not talking about selling standard machines. We’re talking about advanced, high-margin systems, saturated with the latest GPUs, ultra-fast interconnects and huge amounts of memory.

    These systems are the driving force behind the increases and a clear indication of where companies are now placing their largest technology budgets.

    Behind this fundamental market shift is primarily a pragmatic calculation and strategic course correction. The ‘cloud-first’ model that has dominated IT thinking for the past decade is evolving towards a more sustainable ‘cloud-smart’ approach or, to put it simply, towards a hybrid architecture.

    While the public cloud remains an indispensable environment for rapid prototyping, experimentation and scaling of variable workloads, large-scale production AI deployments have highlighted its structural limitations.

    The motivation to invest in one’s own equipment is based on three pillars that have become critical.

    Firstly, the issue of data security and sovereignty has come to the fore. In an era of regulations such as the GDPR in Europe, processing sensitive corporate data – be it intellectual property, financial data or customer information – on external, shared infrastructure carries legitimate and often unacceptable risks.

    For many industries, from finance to healthcare, the ability to physically control data is not an option, but a legal requirement. The concept of ‘data gravity’ is becoming a reality: it is easier to attract computing power to massive corporate datasets than to transfer petabytes of information to the cloud.

    Secondly, businesses have begun to look closely at total cost of ownership (TCO). While the initial capital outlay to purchase their own servers is high, the operational cost of renting cloud resources to support sustained, intensive AI workloads can be astronomical and unpredictable in the long term.

    For companies that train and operate models continuously, having their own infrastructure offers much better financial predictability and a lower cost over a 3-5 year cycle.
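
    The break-even logic behind that calculation can be made concrete. The sketch below compares owning an AI server against renting comparable cloud capacity around the clock; every price in it is an illustrative assumption, not a vendor quote, and the real answer depends entirely on utilisation and negotiated rates.

    ```python
    # Back-of-the-envelope TCO comparison. All figures are illustrative
    # assumptions: an 8-GPU server bought outright versus renting a
    # comparable cloud instance 24/7.
    def on_prem_tco(years: float,
                    capex: float = 300_000,        # assumed server purchase price
                    opex_per_year: float = 40_000  # assumed power, cooling, support
                    ) -> float:
        return capex + opex_per_year * years

    def cloud_tco(years: float,
                  hourly_rate: float = 60.0  # assumed on-demand rate
                  ) -> float:
        return hourly_rate * 24 * 365 * years

    for years in (1, 3, 5):
        print(f"{years} yr: on-prem ${on_prem_tco(years):>9,.0f}"
              f"  cloud ${cloud_tco(years):>9,.0f}")
    # Under these assumptions ownership wins within the first year at full
    # utilisation; at low utilisation the cloud's pay-per-use model wins instead.
    ```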

    Thirdly, performance and personalisation requirements cannot be ignored. Latency-sensitive AI applications, crucial in industrial automation, autonomous vehicle systems or banking, require millisecond processing.

    Even the minimal latency associated with transferring data to and from the cloud can be unacceptable in such scenarios. Proprietary hardware also allows for deep optimisation and customisation of the entire architecture – from hardware to software – to the specific needs of the model, which is often impossible in standardised cloud environments.

    In this new landscape, Dell and HPE find themselves perfectly placed. Their advantage is not just in the technology, but in the deep understanding of the enterprise market that they have built up over decades.

    It is not just commercial relationships, but knowledge of procurement cycles, the ability to provide global technical support (SLA) and experience in integrating new solutions with existing complex IT systems. What’s more, they hit exceptionally fertile ground.

    It is estimated that up to 70% of servers in companies are older-generation hardware, which is not only insufficient for AI tasks, but also extremely energy inefficient. The pressure to upgrade is therefore twofold: on the one hand, the need for power; on the other, rising energy costs and sustainability goals (ESG).

    The artificial intelligence market is entering a new, more sustainable phase. This does not mean the end of the cloud, but a redefinition of its role as one of the key elements in a broader, hybrid strategy. The experimentation phase is coming to an end and the time for strategic, long-term deployments is beginning.

    In this game, it is the providers that can offer security, performance and cost predictability in their own data centre that are taking the lead.

  • Maciej Kalisiak, board member of HPE Polska, moves to ApexIT

    After 18 years at Hewlett Packard Enterprise, Maciej Kalisiak, former board member and sales manager for Data Services solutions, is joining the ApexIT team. There, he will take up the newly created position of Business Development Manager.

    This is a strategic reinforcement for the Polish integrator and signals a market reshuffle among experienced managers.

    Maciej Kalisiak has brought to a close a long career at HPE Poland, where he spent his entire professional life to date.

    Starting as a trainee in the pre-sales department, he was promoted through the ranks, eventually taking on the role of manager responsible for the company’s key data services segment and becoming a member of the board of directors of the Polish branch of the corporation.

    His departure closes an era at the company with which he had been associated from the very start of his career.

    His new employer is Apex.IT, a Polish technology company and one of HPE’s key partners in the country. The transfer is no coincidence – Kalisiak has worked closely with the integrator’s team for years on joint projects.

    In his new role as Business Development Manager, he will be responsible for business development and expanding the company’s market horizons. His task will be to use his in-depth knowledge of the technology portfolio of global vendors and his many years of corporate experience to strengthen Apex.IT’s position in the market.

    Behind the transfer are key figures at Apex.IT – Bernard Krawczyk, Malgorzata Krasuska and Artur Kaminski – who are counting on his expertise as the company develops further.

    For ApexIT, this is a significant staff enhancement, allowing it to strengthen its relationship with a key technology partner while gaining a manager with unique insight into the market strategies of one of the world’s largest IT players.

  • HPE Juniper and AI: What’s new in the Mist platform?

    HPE is expanding its HPE Juniper Networking portfolio with innovations in the Mist AI platform. The main goal is to transform network management – from a reactive to a proactive model.

    This is to be made possible by agent-based artificial intelligence (AIOps), which is intended not only to analyse problems but also to resolve them autonomously.

    The company aims to make networks more autonomous and intelligent, capable of operating without human intervention, which should significantly relieve the burden on IT departments and reduce operational costs.

    At the heart of the changes is the Marvis AI assistant, whose conversational capabilities have been enhanced to make it easier for administrators to resolve issues in real time.

    The platform has also gained a new, expanded Marvis Actions dashboard, which allows more incidents, such as port misconfigurations and bandwidth issues, to be remediated independently.

    The most interesting element, however, is the development of the Large Experience Model (LEM). It analyses data from popular collaboration apps such as Zoom and Teams, and through digital twins (Marvis Minis) is able to simulate the digital experience of users.

    This allows potential performance issues to be anticipated and eliminated before they actually affect the team. The technology is designed to let networks adapt proactively even before users launch an application.

    The new AIOps features also extend to data centres. Marvis Assistant integrates with the Apstra platform’s contextual database, providing the analytics needed for autonomous infrastructure management.

    These innovations are part of HPE’s broader strategy and feed into its GreenLake Intelligence platform, which uses specialised AI agents across the entire IT architecture – from networking to storage to compute resources. In an era of increasing complexity in hybrid and multi-cloud environments, companies are looking for tools to automate management.

    The development of agent-based AI in the Juniper Mist platform is a step towards fully autonomous networks, capable of predicting and resolving incidents autonomously, often before users even notice them.

  • New IT infrastructure: how companies are designing AI environments without energy compromises

    Performance is not enough for IT infrastructures to meet the demands of generative AI. Increasing workloads, higher power consumption and the need for scalability are forcing a new approach to the design of AI-ready environments.

    Companies no longer want ‘more power’ – they want more balance: between computing power, energy efficiency, scalability and operational costs. This shifts the burden of decision-making from pure hardware specifications to a systems approach to IT infrastructure.

    AI is changing organisations’ priorities

    The increase in the demand for computing power by AI systems is changing the way organisations approach data centre planning. “The development of artificial intelligence is one of the key factors redefining IT architecture and approaches to energy efficiency in data centres. Organisations are increasingly looking for solutions that deliver high performance with reduced energy requirements, while enabling scalability and support for advanced workloads. A key element of this transformation is the use of innovative technologies such as Direct Liquid Cooling (DLC), which is playing an increasingly important role in AI-ready architectures,” says Karolina Solecka, Compute Sales Director at Hewlett Packard Enterprise Poland.

    Karolina Solecka, HPE

    Until recently, maximum efficiency was the main criterion. Today, companies increasingly want efficiency without energy overload. This is due to both rising energy costs and environmental pressures (ESG). Efficiency is no longer an add-on – it is becoming a requirement.

    Infrastructure design starts with cooling

    AI not only increases the energy demand – above all, it generates heat that can no longer be dissipated by classical methods. Traditional air cooling is no longer sufficient with increasing computing density.

    “AI systems require enormous computing power, which generates significant amounts of heat. Traditional cooling methods, based on air exchange, are no longer effective at high computing densities,” explains the HPE expert.

    This means that the physical architecture of the infrastructure – from cabinet spacing to air circulation – must be redesigned. Cooling becomes a starting point rather than an addition to the infrastructure.

    DLC: key transformation technology

    A solution that is gaining in importance is Direct Liquid Cooling (DLC) – liquid cooling applied directly to components such as the CPU and GPU. Compared with traditional methods, DLC significantly reduces energy consumption and increases the efficiency of computing environments.

    “This is why HPE is investing so heavily in the development of liquid cooling technology, which enables direct heat removal from components such as CPUs and GPUs,” Karolina Solecka emphasises.

    Energy savings can be as high as 30-40% compared with air cooling. But that is not all – DLC also allows for a more compact data centre design, which is especially important for companies with limited space or those planning edge deployments.

    “DLC not only increases energy efficiency, reducing energy consumption by up to 30-40%, but also allows for a more compact data centre design.”
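
    To see what the quoted 30-40% savings can mean in money, consider a minimal estimate for a single dense rack. The load, energy price and PUE values below are assumptions chosen purely for illustration, not HPE figures.

    ```python
    # Rough annual energy-cost estimate for one AI rack under two cooling
    # regimes. All inputs are illustrative assumptions.
    RACK_IT_LOAD_KW = 40       # assumed IT load of a dense AI rack
    HOURS_PER_YEAR = 24 * 365
    PRICE_PER_KWH = 0.15       # assumed energy price, EUR/kWh

    def annual_energy_cost(pue: float) -> float:
        """Total facility energy cost for the rack at a given PUE."""
        return RACK_IT_LOAD_KW * pue * HOURS_PER_YEAR * PRICE_PER_KWH

    air = annual_energy_cost(pue=1.6)    # assumed air-cooled facility
    dlc = annual_energy_cost(pue=1.15)   # assumed DLC facility
    print(f"air: {air:,.0f} EUR/yr, DLC: {dlc:,.0f} EUR/yr, "
          f"saving: {1 - dlc / air:.0%}")  # ~28% in this example
    ```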

    New customer questions: how to optimise rather than maximise

    The way customers approach the purchase and deployment of IT infrastructure is also changing. The focus is no longer just on ‘maximum power’, but on sustainability, controlling consumption and optimising TCO (total cost of ownership).

    “Customers are increasingly driven not only by maximising performance, but also by energy efficiency and sustainability,” notes the HPE representative.

    This shift in priorities is linked to increasing pressure for environmental reporting, but also to real operational needs – companies don’t want to pay for energy they can’t control. They want environments that can be monitored, scaled and optimised for actual usage.

    The service model supports an energy-efficient approach

    In this context, “as-a-service” models are gaining importance – flexible, billed on the basis of actual resource consumption. Such solutions allow customers to avoid oversizing their environment and thus reduce unnecessary energy and cooling consumption.

    “Customers are increasingly driven not only by maximising performance, but also by energy efficiency and sustainability. Technologies such as DLC achieve this balance while providing support for advanced AI workloads. Businesses also appreciate the flexibility of ‘as-a-service’ models, such as HPE GreenLake, which allow infrastructure to adapt to changing business needs while minimising operational costs,” Solecka says.

    This not only makes the technology more accessible, but also more efficient – both energetically and financially.

    AI-ready architecture from the ground up

    The development of artificial intelligence requires the whole approach to IT infrastructure to change. It is no longer about adding faster processors to an existing server room. It is about new design principles that start with energy efficiency, include cooling as an integral part of the system and end with an operating model that allows growth without wasting resources.

    “The development of AI technologies requires a new approach to IT infrastructure design. At HPE, we believe that technologies such as liquid cooling are the key to efficient and sustainable data centre development that meets the demands of the future,” Karolina Solecka concludes.

  • Obsolete? Not for IT. Tape is back in the data centre and gaining popularity

    Today, few technological solutions from the 1980s can boast growing popularity. Magnetic tape – often considered a relic of the past – has not only survived, but has seen a 15 per cent increase in shipment volume, reaching 176.5 exabytes in 2024. And while it’s hard to find it at AI startup presentations, it still holds an important place in many data centres.

    Persistence of the myth of “obsolete technology”

    For decades, tape has functioned in the collective consciousness as a declining technology – slow, cumbersome, physically demanding. Compared to direct-access storage (SSD, NVMe, HDD) or cloud solutions, its limitations are obvious: access times in seconds or minutes, linear read and write, lack of flexibility. But tape was never meant to compete with operating media. Its strength has always been elsewhere – in longevity, cost and security.

    Why does tape still pay off?

    From an IT infrastructure management point of view, tape’s biggest advantage remains the incomparably low cost of data storage. At large scale – tens or hundreds of petabytes – the difference in unit cost between tape and SSD can be up to an order of magnitude. What’s more, data stored on tape requires no power or cooling, resulting in lower power consumption and real operational savings.

    Tape also wins in terms of data longevity. Well-stored LTO cartridges can maintain data integrity for 30 years or more, without the risk of mechanical wear and tear that occurs with spinning disks. This makes them an attractive option for storing regulatory archives, multimedia material or historical research data.

    Finally, security. Tape cartridges can be physically separated from the network, virtually eliminating the risk of them being encrypted by ransomware. At a time when cyber threats are affecting even the best-secured production environments, this level of ‘air gap’ is a significant asset.

    Where is the tape used?

    While it is difficult to imagine tape being used to support real-time applications, there are still many use cases where it performs brilliantly. The most obvious example is ‘cold data’ – rarely read collections that nevertheless cannot be deleted: backups, archive data, financial records, medical images, scientific collections.

    The public sector, large research institutions, the media industry and some companies in the industrial sector all operate on huge data sets that need to be stored for decades. In such cases, the cost and energy intensity of the public cloud prove uneconomic. Hence the decision to retain tape as a permanent archive layer.

    It is worth noting that many modern data centres today operate in a tiered model. The most frequently used data sits on high-speed SSD or NVMe storage, active data on classic HDDs, and archive data on tape. This allows for both cost and operational optimisation of the entire IT environment, as the sketch below illustrates.
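
    A minimal sketch of such a placement policy, with arbitrary thresholds chosen purely for illustration (real tiering engines use far richer heat maps and per-dataset policies):

    ```python
    # Schematic tier-placement policy: hot data on NVMe/SSD, warm on HDD,
    # cold on tape. Thresholds are arbitrary illustrative assumptions.
    from datetime import date, timedelta

    def pick_tier(last_access: date, reads_per_month: int, today: date) -> str:
        age = today - last_access
        if reads_per_month > 100 and age < timedelta(days=7):
            return "NVMe/SSD"   # hot: frequently read, latency-sensitive
        if age < timedelta(days=180):
            return "HDD"        # warm: active but not performance-critical
        return "LTO tape"       # cold: backups, archives, compliance data

    today = date(2025, 11, 20)
    print(pick_tier(date(2025, 11, 18), 500, today))  # NVMe/SSD
    print(pick_tier(date(2025, 8, 1), 3, today))      # HDD
    print(pick_tier(date(2022, 1, 10), 0, today))     # LTO tape
    ```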

    LTO technology: evolution without revolution

    Tape development has not stood still. The LTO (Linear Tape-Open) standard, developed by IBM, HPE and Quantum, has undergone significant evolution in recent years. The latest generation – LTO-10 – offers 36 TB of capacity per cartridge, double that of the previous version. The standard also provides backward compatibility (read to two generations back and write to one), making it easy to migrate data without having to replace the entire infrastructure.

    The roadmap to LTO-14, which is expected to offer up to 576 TB per cartridge, is already on the horizon. Although the pace of development remains moderate (a new generation every three to four years), the trend is clear – tape is growing in capacity and continues to meet growing demand, as the quick projection below shows.
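
    Stated as code, the roadmap's doubling pattern runs as follows; only the LTO-10 and LTO-14 figures appear in the text, and the intermediate values simply follow from doubling each generation:

    ```python
    # LTO capacity projection: per-cartridge capacity doubles each generation.
    capacity_tb = 36  # LTO-10, native capacity per cartridge
    for gen in range(10, 15):
        print(f"LTO-{gen}: {capacity_tb} TB")
        capacity_tb *= 2  # doubling reaches 576 TB at LTO-14
    ```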

    Will the tape survive for decades to come?

    The demand for long-term information storage is not diminishing. On the contrary, storage systems are being forced to process and archive data at a rate that seemed unattainable just a few years ago.

    Tape can be a very effective component of current and future data architectures. As long as there is a demand for low-cost, durable and secure information storage, tape will continue to play its niche but vital role in the data centre ecosystem.

  • HPE and AMD join forces to simplify virtualisation upgrades

    Hewlett Packard Enterprise is extending its Morpheus VM Essentials virtualisation platform to support the latest fifth-generation AMD EPYC processors. This move is not just a technical integration – it also sends a clear message: together, HPE and AMD want to fight for customers upgrading their IT infrastructure towards greater efficiency and hybrid cloud readiness.

    Morpheus VM Essentials is a simplified, vendor-neutral virtual machine management platform. It offers features such as clustering, VM migration and backup, as well as unified management of VMware and HPE environments. The platform operates on a per-socket licence model, which simplifies costing, especially for medium-sized organisations (see the sketch below). It now gains another advantage – support for energy-efficient, high-performance AMD chips.
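
    A small sketch of why per-socket licensing is easy to budget; the list price used here is a hypothetical placeholder, as HPE's actual pricing is not given in the article.

    ```python
    # Per-socket licence cost: independent of core counts or VM density,
    # which is what makes budgeting simple. Price is a hypothetical placeholder.
    def licence_cost(hosts: int, sockets_per_host: int,
                     price_per_socket: float = 800.0) -> float:
        return hosts * sockets_per_host * price_per_socket

    # A 10-host, dual-socket cluster costs the same whether each CPU has
    # 16 or 96 cores -- the variable a per-core model would penalise.
    print(licence_cost(hosts=10, sockets_per_host=2))  # 16000.0
    ```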

    It is the energy-economics argument that could prove crucial. AMD claims that its EPYC processors consume 27 per cent less power and are 28 per cent cheaper in five-year TCO than competing Intel chips. In an era of pressure on data centre energy efficiency, especially as AI workloads increase, this could have a real impact on purchasing decisions.

    From a business perspective, HPE’s alliance with AMD is also a way to build an alternative to the dominant VMware-Intel duo. While Morpheus still works with VMware, it gives users more flexibility and HPE can thus become independent of vendors with an uncertain future (like VMware after its acquisition by Broadcom).

    This is another step in HPE’s broader strategy, which for months has focused on simplified cloud offerings and tools for managing hybrid environments. The Morpheus Enterprise version also supports containerisation and multi-cloud – VM Essentials could therefore be the first step in adopting a more advanced architecture.

    The partnership with AMD also fits well with the trend of diversifying IT infrastructure. In 2024, the share of AMD processors in x86 servers exceeded 30 per cent, and in some segments (e.g. HPC and AI) their popularity is growing even faster. Customers increasingly expect choice – also at the CPU level.

    For HPE and AMD, it is a move that is defensive and offensive at the same time: a response to growing cost pressure and an attempt to capitalise on the turmoil among competitors. For customers, it is a chance to start upgrading their own infrastructure with more open and efficient components, without huge investments.

  • HPE acquires Juniper Networks. What’s next for the networking market?

    The closing of the acquisition of Juniper Networks by Hewlett Packard Enterprise is more than just the consolidation of two networking companies. It signals that HPE is putting everything on the line: an IT future that will be defined by the convergence of hybrid cloud, security and artificial intelligence. The deal, worth $14 billion, has the potential to change the balance of power in the IT infrastructure market – and beyond.

    HPE’s new position in the network market

    The acquisition doubles the scale of HPE’s networking business, whose strength to date has been the Aruba brand in the campus solutions segment. Juniper, meanwhile, brings expertise in data centres, service providers and native AI solutions. The result is a portfolio that covers the entire cross-section of needs: from edge networks to data centres to cloud-based distributed environments.

    This is not just product consolidation – it is a shift in strategic alignment. HPE is moving from being an IT infrastructure provider to being an integrator of modern network environments that are ‘built with and for artificial intelligence’. Positioned in this way, the offering is intended to meet the needs of customers deploying AI generative models, where networking becomes as critical as computing power.

    The network as the foundation of the AI era

    Artificial intelligence, especially the generative version, requires huge amounts of data and low latency. Traditional approaches to traffic and security management are no longer sufficient. The new architecture must be dynamic, scalable and – crucially – intelligent.

    Juniper brings here its heritage in the area of so-called AI-native networking, with the Mist platform as an example of operations supported by machine learning. HPE intends to integrate this approach with GreenLake services and its own operational model to create a unified network and security management platform.

    Against this backdrop, HPE strongly emphasises the uniqueness of its proposition: agent-based AI management in multi-tenant environments, a consistent user experience (UX) and operator experience, and a security engine integrated into the network rather than added externally.

    A game for higher margins and a bigger market

    The acquisition of Juniper is also a financially sound move. Juniper’s high-margin business is expected to increase the share of higher-profitability activities in HPE’s structure. In practice, this means shifting the focus from traditional server and storage products to software, security and network services – areas with faster growth and better operating margins.

    The new HPE Networking segment, led by former Juniper CEO Rami Rahim, is expected to account for more than 50% of the company’s future operating income. For HPE, this is a significant change in operating model – closer to Cisco than Dell.

    Challenges: integration, positioning, competition

    While the strategic benefits are obvious, the challenges are not small either. First and foremost, HPE must seamlessly integrate the Juniper team – both operationally and culturally – while maintaining continuity of support for both companies’ customers. Added to this is the need to clearly position the new portfolio against the Aruba offering to avoid internal cannibalisation.

    In the market, meanwhile, the competition is not sleeping. Cisco is already investing in AI across its network management and cybersecurity platforms. Dell is going deeper into partnerships with Nvidia and Broadcom. Challengers such as Arista and Arrcus are gaining traction in cloud environments. HPE will have to prove not only its technological edge but also build new channels to reach customers – especially those who have so far opted for more ‘software-based’ providers.

    What next?

    HPE today stands at the threshold of a transformation that could redefine its role in the IT ecosystem. The combination with Juniper Networks creates one of the most complete networking stacks on the market, ready to support cloud and AI needs. The key, however, will be how the company integrates technology, people and processes. The success of this operation could put HPE in a whole new league – not as an infrastructure provider, but as an integrator of intelligent IT environments.

  • HPE rebuilds hybrid cloud from the ground up

    HPE rebuilds hybrid cloud from the ground up

    Hewlett Packard Enterprise is redefining the concept of IT operations in hybrid environments with the introduction of its new GreenLake Intelligence platform, a comprehensive agent-based artificial intelligence (AI) system designed to simplify the management of complex infrastructure and accelerate the adoption of AI-based solutions.

    HPE’s new approach is based on AIOps agents that run across all layers of infrastructure – from storage to networks and cloud costs – enabling automated problem detection, analysis and proposed corrective actions in real time. This is not just a technical innovation – it is an attempt to build a unified operating model to meet the growing demands of companies looking to deploy native AI without replacing their entire infrastructure.
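
    To make the idea of layer-spanning AIOps agents more tangible, the sketch below shows, in schematic form, the loop such an agent could follow: observe a metric, detect an anomaly, analyse it and propose a corrective action. It is a minimal, hypothetical illustration in Python – the metric, thresholds and suggested remediation are invented for the example and do not represent HPE’s OpsRamp or GreenLake APIs.

```python
# Minimal, hypothetical sketch of an AIOps-style agent loop:
# observe a metric, detect an anomaly, analyse it and propose a fix.
# Names, thresholds and the remediation hint are invented for illustration.
import random
import statistics
import time

def read_storage_latency_ms() -> float:
    """Stand-in for a real telemetry source (e.g. array or fabric metrics)."""
    return random.gauss(2.0, 0.4)

def detect_anomaly(history: list[float], current: float, sigma: float = 3.0) -> bool:
    """Flag values more than `sigma` standard deviations above the rolling mean."""
    if len(history) < 30:
        return False
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1e-9
    return current > mean + sigma * stdev

def propose_action(metric: str, value: float) -> str:
    """Toy 'analysis' step: map the anomaly to a suggested corrective action."""
    return (f"{metric} spiked to {value:.1f} ms - "
            "suggest rebalancing volumes or checking fabric congestion")

def agent_loop(iterations: int = 200) -> None:
    history: list[float] = []
    for _ in range(iterations):
        value = read_storage_latency_ms()
        if detect_anomaly(history, value):
            # A real platform would open a ticket or trigger automation here;
            # this sketch only reports the proposal and leaves the decision to a human.
            print(propose_action("storage latency", value))
        history.append(value)
        history = history[-500:]  # keep a bounded observation window
        time.sleep(0.01)

if __name__ == "__main__":
    agent_loop()
```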

    “HPE is redefining hybrid IT in its own distinctive way: moving companies from an era of hybrid complexity to an era of cloud operations based on agent-based artificial intelligence,” said Antonio Neri, president and CEO of HPE. “HPE’s new vision for hybrid IT solutions is based on agent-based artificial intelligence at every level of the infrastructure, enabling companies to realise their boldest ambitions and achieve previously impossible levels of IT productivity and operational efficiency.”

    It is worth noting that HPE is positioning GreenLake not as a classic cloud, but as the foundation of an intelligent hybrid IT architecture. Along with this approach come new components: the OpsRamp agent-based operational copilot, automated FinOps tools, predictive sustainability analytics, and Alletra X10000 intelligent storage ready for MCP (Model Context Protocol) servers.
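
    The reference to MCP deserves a word of explanation: the Model Context Protocol is an open standard that lets AI assistants call external systems as ‘tools’. The sketch below uses the protocol’s Python SDK to expose an invented storage-utilisation query as an MCP tool. It illustrates only the general pattern an ‘MCP-ready’ storage system could follow – the pool names and figures are made up, and this is not HPE’s actual Alletra integration.

```python
# Minimal MCP server sketch using the Model Context Protocol Python SDK
# (package `mcp`). The storage figures are invented for illustration; this
# shows the general pattern only, not HPE's Alletra X10000 integration.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("storage-demo")

# Fake inventory standing in for a real array's management API.
FAKE_POOLS = {
    "pool-a": {"capacity_tb": 120, "used_tb": 84},
    "pool-b": {"capacity_tb": 60, "used_tb": 12},
}

@mcp.tool()
def pool_utilisation(pool: str) -> str:
    """Return the utilisation of a (fictional) storage pool as a percentage."""
    data = FAKE_POOLS.get(pool)
    if data is None:
        return f"unknown pool: {pool}"
    pct = 100 * data["used_tb"] / data["capacity_tb"]
    return f"{pool}: {pct:.0f}% used ({data['used_tb']} of {data['capacity_tb']} TB)"

if __name__ == "__main__":
    # Runs the server over stdio so an MCP-capable assistant can call the tool.
    mcp.run()
```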

    Of particular interest is the expansion of HPE Aruba Networking to include a mesh of AI agents run from within the network copilot. This is another step towards a self-optimising infrastructure, where network traffic and security management becomes the job of the automated system rather than the administrator.

    In terms of its deployment model, HPE is clearly moving towards a service-based approach. The CloudOps platform, combining OpsRamp, Zerto and Morpheus, is available as both software and a managed service. It is accompanied by flexible financing programmes and new purchasing models (Cloud Commit) to convince companies to move to agent-based cloud without high upfront costs.

    While GreenLake Intelligence is still an early-stage initiative – most of the new features will not reach customers until the second half of 2025 – HPE’s direction is clear. The company is building a future where IT operations will not be managed, but orchestrated by intelligent agents, running in the background and reporting only when a human is needed.

  • HPE and NVIDIA simplify entry into the era of AI factories

    HPE and NVIDIA simplify entry into the era of AI factories

    Hewlett Packard Enterprise is significantly expanding its offering for enterprises building and scaling artificial intelligence factories. Together with NVIDIA, the company is unveiling the next generation of HPE Private Cloud AI, a turnkey infrastructure environment designed to support advanced AI workloads across generative, agentic and physical models.

    The new solutions are part of a broader trend of industry standardisation of AI infrastructure – from off-the-shelf racks to multi-tenant architectures to the management of physically separated (air-gapped) environments. This is in response to the growing needs of both hyperscalers and model-building companies, as well as public organisations concerned with digital sovereignty.

    “HPE and NVIDIA provide the most comprehensive approach combining best-in-class AI infrastructure and services, enabling organisations to realise their ambitions and create sustainable business value,” said Antonio Neri, president and CEO of HPE.

    At the heart of HPE’s offering is Private Cloud AI, a full technology stack based on NVIDIA Blackwell GPUs and a federated architecture that allows GPUs to be flexibly shared between teams and projects. Together with the new HPE Alletra Storage MP X10000 storage system, optimised for unstructured data, the platform is expected to support more than 75 AI use cases.
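
    The notion of a federated architecture that shares GPUs between teams and projects can be pictured as a quota-based allocator over a common pool. The sketch below is a purely hypothetical illustration – the team names, quotas and allocation policy are invented and say nothing about how Private Cloud AI actually schedules its Blackwell GPUs.

```python
# Purely illustrative quota-based GPU allocator: shows the general idea of
# sharing a fixed GPU pool between teams, not HPE's actual scheduler.
from dataclasses import dataclass, field

@dataclass
class GpuPool:
    total_gpus: int
    quotas: dict[str, int]                      # maximum GPUs per team
    allocated: dict[str, int] = field(default_factory=dict)

    def request(self, team: str, gpus: int) -> bool:
        """Grant GPUs only if both the team quota and the shared pool allow it."""
        used_by_team = self.allocated.get(team, 0)
        used_total = sum(self.allocated.values())
        if used_by_team + gpus > self.quotas.get(team, 0):
            return False                        # would exceed the team's quota
        if used_total + gpus > self.total_gpus:
            return False                        # would exceed the physical pool
        self.allocated[team] = used_by_team + gpus
        return True

    def release(self, team: str, gpus: int) -> None:
        """Return GPUs to the shared pool."""
        self.allocated[team] = max(0, self.allocated.get(team, 0) - gpus)

# Invented example: 16 GPUs shared between two project teams.
pool = GpuPool(total_gpus=16, quotas={"genai-team": 10, "vision-team": 8})
print(pool.request("genai-team", 8))   # True
print(pool.request("vision-team", 8))  # True
print(pool.request("genai-team", 2))   # False - only 16 GPUs in the pool
```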

    HPE also focuses on easy deployments: the new solutions are prefabricated, verified and ready to run as soon as they are delivered to the customer. Compared with AI environments built from scratch, HPE’s approach shortens the time to business value, which can be crucial for companies in fast-moving sectors such as financial services or manufacturing.

    For organisations with specific requirements, HPE also offers a new line of AI Factories for Sovereigns – AI factories with features that guarantee full control over data and technology, tailored to the realities of government institutions.

    Also new to the portfolio are AI factory design and financing services and a try-and-buy programme in Equinix data centres. HPE Financial Services, on the other hand, enables lower entry costs into AI, including through leasing and reusing existing infrastructure.

    HPE’s move shows that the era of ‘AI factories’ is entering a phase of consolidation – not just as a technology concept, but as a finished, scalable product. What matters in this race is not just GPU performance, but also ease of deployment, regulatory compliance and flexible funding models.

  • HPE acquires Juniper Networks for $14 billion. Justice Department gives green light

    HPE acquires Juniper Networks for $14 billion. Justice Department gives green light

    The US Department of Justice has approved the acquisition of Juniper Networks by Hewlett Packard Enterprise for $14 billion. While the finalisation of the deal appears to be a foregone conclusion, the settlement includes several significant commitments that shed light on the regulator’s real concerns about increasing consolidation in the US networking sector.

    The case was due to go to trial on 9 July, but the settlement filed on Friday evening averted it. The terms? HPE must sell its Instant On wireless business and license the source code for the Mist AI software used in Juniper’s wireless solutions.

    In practice, this means the deal can go ahead, but without overly concentrating the business Wi-Fi market. Mist AI is one of the pillars of Juniper’s offering – an artificial intelligence-based system that automates WLAN management. Opening up this technology at the licensing level is a nod to competitors, ensuring their path to the market is not closed off.

    The Department of Justice argued in January that a merger between HPE and Juniper could limit competition, de facto leaving the field to just two players – Cisco and the combined HPE-Juniper – which would control more than 70% of the US networking equipment market. The concern is not unfounded: data from IDC and the Dell’Oro Group shows that Cisco holds around 45% of the enterprise networking market, with Juniper and HPE together accounting for over 25%.

    However, the settlement shows that the regulator also sees the other side of the coin. The market is no longer the same playing field it was a decade ago. Cloud solutions, the rise of software-defined network control (SDN), and integration with AI workloads are changing the priorities of enterprise customers. For HPE, the acquisition of Juniper is not just a matter of expanding its portfolio, but more importantly – building a modern, integrated platform that responds to AI-native workloads.

    In this way, HPE is seeking to enter the premier league of AI infrastructure providers, where Nvidia, AMD, Broadcom and, of course, hyperscalers currently dominate. Juniper, with its specialisation in network traffic management and automation, could be an important piece in this puzzle.

    From the perspective of the IT sales channel and integrators, the DOJ’s decision signals that regulators will still intervene when consolidation verges on market dominance – but will allow growth where competitive conditions are safeguarded. This is particularly important for HPE and Juniper partners, who will now have to analyse how the change in structure will affect product availability, discount policies and technology roadmaps.

    The settlement therefore does not just end a legal dispute. It opens a new phase: the battle for position in the era of AI-designed networks.

  • Florian Bettges, HPE: “Everything we do at HPE at the end refers to HPE GreenLake”.

    Florian Bettges, HPE: “Everything we do at HPE at the end refers to HPE GreenLake”.

    “The cloud is not a destination, it’s an experience,” says Florian Bettges, HPE GreenLake Category Lead, Central Europe. It is hard to disagree, given the momentum behind HPE GreenLake – the brand that epitomises HPE’s transformation into a cloud platform provider with an as-a-service offering at its core, as Bettges discusses in the interview.

    Klaudia Ciesielska, Brandsit: Today, GreenLake is a huge service group with 900 partners and 65,000 customers. However, such a gigantic brand was not created overnight. When did you see the biggest growth in terms of HPE GreenLake’s development?

    Florian Bettges, HPE: Everything we do at HPE at the end refers to HPE GreenLake. It is difficult to pinpoint a single moment of breakthrough. HPE GreenLake is growing all the time. Since the very beginning in 2018, we have seen impressive progress, resulting in today’s number of partners and customers. Since then, we have been growing strongly in double digits year on year, and I am of the opinion that the peak of the growth rate is still ahead of us.

    Klaudia Ciesielska, Brandsit: Is the continued dynamic growth of HPE GreenLake services influenced by the high level of digitisation of SMEs in recent years?

    Florian Bettges, HPE: A huge number of SME companies are already betting on digital solutions in the as-a-service model, and the number of customers in this sector is growing strongly. This is an opportunity for us to expand our partner network, which confirms that the biggest growth is still to come.

    Klaudia Ciesielska, Brandsit: HPE GreenLake is a brand that HPE is successively developing. Why?

    Florian Bettges, HPE: HPE GreenLake is of great importance to the company. When we ask customers “What do you think about what HPE does?”, we often hear answers that reference the success of our product brands, such as HPE ProLiant or HPE Alletra – “You are a storage provider, you are a server provider, you make good hardware” and so on. Let me be clear – HPE has transformed itself into a completely different company over the last three years. We are among the world’s leading IT infrastructure providers, and this infrastructure is at the core of our as-a-service offering. We are a cloud services company, and HPE GreenLake is the brand that reflects this change. HPE GreenLake encapsulates the entire vision, perspective and strategy of our business – everything from services to cloud to sustainability.

    However, it is worth remembering that HPE GreenLake is also the name of a cloud services platform. This platform enables monitoring and management of multi-cloud environments, including public cloud instances. We have been saying for years that hybrid cloud is the future. Now we see that this future is becoming the present, and we are ready for it.

    Klaudia Ciesielska, Brandsit: What are the key cloud challenges?

    Florian Bettges, HPE: The cloud is not a destination, it is an experience. So we don’t talk about where the hardware sits – in the customer’s data centre, the public cloud or a co-location facility. We talk about capabilities and the user experience – how people engage with IT services. HPE provides these capabilities in the cloud so that the service user can scale, control data and benefit from a pay-per-consumption model.

    However, the challenges that organisations face are quite common – including access to know-how and resources, costs and data sovereignty. Against this backdrop, we are already seeing a lot of repatriation in the market – that is, moving instances from the public cloud back to an on-premises environment. In the long term, running services in the public cloud can become problematic because of rising costs, gaps in knowledge and a shortage of qualified specialists. Such companies therefore start looking for someone to manage the cloud and reduce its costs. And this is where HPE GreenLake comes to the rescue, allowing the public cloud to be combined with other solutions – something customers often find very difficult on their own.

  • How to achieve higher levels of digital transformation in a hybrid IT world (interview)

    How to achieve higher levels of digital transformation in a hybrid IT world (interview)

    Digital transformation is a new priority for many organisations. However, the pace of technological change and, at the same time, the complexity of today’s hybrid environments are such that harnessing the opportunities available and aligning them with business strategy is a major challenge. The support of experts equipped with proven methods and tools allows you to lead the transformation quickly yet securely. We talk about the biggest challenges of digitalisation and IT transformation and the WOW effect with Maciej Toroszewski and Krzysztof Chibowski from the Advisory & Professional Services department at Hewlett Packard Enterprise (HPE) Poland.

    What is the genesis of the Advisory & Professional Services department at HPE?


    Krzysztof Chibowski [K.C.]: Companies deciding to take on the serious challenge of digitisation lack both competence and tools. HPE recognised this a long time ago and therefore acquired Cloud Technology Partners in 2018. In doing so, we acquired both. These competences were built on the basis of successfully running public cloud migration projects over a period of 10 years. These were the beginnings of the Advisory & Professional Services department in its current form. Today, we have a significant number of experts, methods and tools at our disposal to speed up the implementation of such projects, prevent common mistakes and ensure final success. As an organisation, we have nearly a thousand migrations to our credit. These practices have been developed over many years of experience, are used when supporting new customers and are updated with each successive project. This allows us to offer a repeatable process, rather than improvisation or starting from scratch each time.


    Maciej Toroszewski [M.T.]: What is one of the biggest challenges for many companies today? As they try to drive digital transformation and migrate to the public cloud, they run into issues of data sovereignty, regulatory compliance, cost management and resource consumption itself. After a period of fascination, they begin to see that, in most cases, it is not possible to simply lift infrastructure from their own data centre and move it to the public cloud. It turns out that even top-notch specialists with vast experience in running local data centres cannot simply transfer that experience to the cloud. The differences are too great. This is why people are the foundation of a successful transformation. So in addition to tools and hard skills, change management – organisational and operational – becomes critical. And here, too, we offer our customers proven methods for transitioning from a traditional IT management model to one that fully exploits the capabilities of new technologies. At HPE, we call this model the Edge 2 Cloud Adoption Framework (E2CAF), and it is a complete offering that supports all stages of the transformation to the cloud. It stays close to the technology – we are, after all, a technology company – but is based on a flexible, agile approach, in line with the expectations and needs of the business.

    So cloud computing is the biggest challenge at the moment?

    K.C.: No, it is just one of the challenges. Others include data management and reducing CO₂ emissions. This is embedded in both IT department strategies and overall business plans. The traditional way of running projects was to secure the complete infrastructure within a certain timeframe, so customers bought all the equipment at the beginning of the project. Only part of it worked in production, with the rest acting as a buffer to allow for operations in the event of an increase in power or capacity requirements. The whole estate had to be not only powered but also kept in the right conditions, including the right temperature. All of this significantly increases energy consumption relative to actual, current needs, and in the context of the aforementioned reduction of CO₂ emissions, optimisation in this area is very much needed. For example, investments in data centre cooling equipment made 10 years ago can now be replaced by more efficient and economical equipment. The result? Savings of up to 70-80 per cent. The situation is similar with old-generation servers.

    So customers today expect more from a technology supplier than the product itself…

    K.C.: Yes, definitely. The hardware platform is slowly disappearing from the centre of attention. Containerised solutions such as HPE Ezmeral enable us to use the infrastructure we already have more efficiently. Few companies today buy only hardware. On their own, they find it difficult to find their way through the maze of new technologies and services. Some lack the time, others the competence or experience. This is why most opt for solutions and technology they already know. Some need a partner with whom they can discuss everything in detail and who can show them the full spectrum of available options. Invariably, they all want results – fast, spectacular, cost-effective, making the most of what they already have.

    HPE Advisory & Professional Services is a high-level consultancy offering, but with direct reference to specific technologies. At the moment, the main driver of change is data – its growth and the information it brings with it, but that’s a topic for a longer conversation. The bottom line is that we are able to guide the client from the very beginning to the end of their transformation.

    M.T.: One of our differentiators is proactivity. We analyse publicly available company strategies and prepare concrete solution proposals for clients. We suggest how business objectives can be achieved on the basis of existing technologies, advise on the order of projects comprising the transformation, which projects are worth undertaking and which should be omitted – due to risk, cost or scale of difficulty. There is no cost to the client for such a meeting. This is a major change in the approach to IT solution discussions because, as a technology provider, we want to make sure that we are delivering an optimal solution that supports the strategic and business goals of the service recipient within the customer’s organisation.

    What can customers expect from APS services?

    M.T.: Take the Digital Next Advisory service, which is about creating a detailed transformation map. This is based on a framework developed over years and hundreds of projects around the world that makes it easy to link IT projects to business objectives. We show precisely what, when and how to implement. We sometimes joke that we walk into a client and generate a WOW effect. However, there is something in that. We show clients new possibilities, opportunities and threats, pitfalls they hadn’t thought of before, and they gain a whole new perspective as a result. This clarifies for them what they want and what they can achieve, and at the same time what they need to do so.

    K.C.: It’s worth mentioning that APS’s services combine perfectly with HPE’s GreenLake offering, which makes it easier for customers to achieve their goals without having to move applications and data to the public cloud. Whatever the reason they don’t want to, or can’t, do that, we can offer them a cloud experience in their own data centre. This is hugely important in an organisation-wide digital transformation. It does not always make sense to move all of a customer’s systems to the cloud. There is currently quite a strong trend of going back to on-prem solutions and building private clouds. As the saying goes, it may be good everywhere, but it is best at home – in this case, in one’s own data centre.

    M.T.: Finally, I would like to add that our services are independent of specific technologies or cloud service providers – we always recommend to our customers what is optimal for them in their specific situation.