Tag: Infrastructure

  • Marriage of convenience – How is IT infrastructure forcing a new dialogue between CIO and CFO?

    For years, the relationship between the CIO and CFO resembled a long-established marriage, communicating mainly through laconic notes left on the fridge. The CIO would ask for budgets for ‘solutions that no one but him understands’, and the CFO would respond with a question about cost optimisation, treating the server room as a necessary evil – an expensive black box that would be best moved entirely to the cloud and forgotten about.

    This model is about to become history. The latest Deloitte report, based on a survey of leaders from more than 500 US corporations, leaves no illusions: a financial tsunami is coming that cannot be waited out in a silo.

    The projected tripling of AI infrastructure budgets by 2028 is the critical moment when the technology becomes too expensive, too energy-intensive and, most importantly, too strategic to leave its oversight solely in the hands of engineers. When spending on computing power quadruples in a few years, it ceases to be an issue for the IT department and becomes a matter of sovereignty and survival for the entire organisation.

    The blurring of boundaries is a painful but fascinating process. The CFO’s spreadsheet and the CIO’s hybrid architecture diagram are no longer two different documents. It’s time to abandon translators and diplomatic protocols – the leaders of tomorrow must become bilingual, because a communication error between the ‘boardroom floor’ and the ‘server room’ could cost a fortune.

    Financial culture shock

    For the past decade, the mantra of CFOs has been ‘OpEx above all else’. The public cloud was supposed to be the cure for every ill – a flexible cost that could be scaled up or down, avoiding the expensive upkeep of an in-house server room. However, artificial intelligence, with its insatiable appetite for computing power, is brutally verifying this optimism.

    There is a clear conclusion from the Deloitte report: the traditional IT spending model, based on one-off upgrade spurts, is becoming a thing of the past. Instead of cyclical ‘fleet replacement’ projects, IT departments are moving to a model of constant, high and growing annual spending. After all, AI is not a sprint after which you can rest; it is an arms race in which the fuel – i.e. computing power – gets more expensive with each new deployment.

    We are also seeing a fascinating twist: the return to favour of the CapEx model. Companies that not long ago were aiming for total ‘hardwarelessness’ are now queuing up for their own GPUs and TPUs. Why? Because at the scale Deloitte is talking about – where the volume of tokens being processed doubles every year – renting ‘power’ in the cloud is simply becoming economically inefficient.

    For CFOs, this is a real culture shock. They have to accept that having their own physical AI infrastructure becomes a strategic asset, not just an operational ballast. An in-house hybrid server room becomes an insurance policy for the future. Companies stop asking ‘how much is it going to cost us this month’ and start calculating how much computing power they need to own so that their models don’t get stuck in a queue at hyperscalers.
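
    To see why the conversation shifts from monthly bills to owned capacity, a back-of-the-envelope calculation helps. The sketch below is purely illustrative – every price and utilisation figure is an assumption, not a number from the Deloitte report – but it shows the mechanics of the break-even point between renting GPU hours and owning the hardware.

    ```python
    # Illustrative rent-vs-own comparison for a single accelerator.
    # Every figure is a hypothetical placeholder, not data from the report.

    CLOUD_RATE_PER_GPU_HOUR = 3.00   # assumed on-demand price, USD
    OWNED_GPU_CAPEX = 30_000.00      # assumed purchase price, USD
    OWNED_OPEX_PER_HOUR = 0.60       # assumed power, cooling and staff cost, USD
    UTILISATION = 0.70               # fraction of the year the card is actually busy

    busy_hours_per_year = 365 * 24 * UTILISATION

    cloud_cost_per_year = CLOUD_RATE_PER_GPU_HOUR * busy_hours_per_year
    owned_cost_year_one = OWNED_GPU_CAPEX + OWNED_OPEX_PER_HOUR * busy_hours_per_year

    # Ownership starts paying off once the rental premium has repaid the CapEx.
    break_even_hours = OWNED_GPU_CAPEX / (CLOUD_RATE_PER_GPU_HOUR - OWNED_OPEX_PER_HOUR)

    print(f"Cloud, one year of use:  ${cloud_cost_per_year:,.0f}")
    print(f"Owned, first year:       ${owned_cost_year_one:,.0f}")
    print(f"Break-even after ~{break_even_hours:,.0f} busy GPU-hours")
    ```

    At these made-up rates, ownership overtakes renting after roughly 12,500 busy GPU-hours – about two years at the assumed utilisation – which is precisely the kind of calculation the report expects CFOs to start running.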

    The ’30 pilots’ trap, or where the money is running away

    The figure of ’30 pilot projects’ sounds impressive in an annual report and looks great on shareholder slides. However, for the CIO-CFO duo, this statistic is first and foremost a wake-up call. Deloitte indicates that by 2028, almost 70% of companies will be conducting such extensive AI trials. The problem is that, with soaring infrastructure costs, spreading resources across thirty different fronts is a straightforward way to cultivate so-called ‘innovation theatre’.

    There is plenty of activity in this model: dozens of prototypes are being developed, but none of them gets beyond the experimental phase to feed realistically into the profit and loss account. With giants such as Anthropic reserving gigawatts of power years in advance, smaller players have to demonstrate downright surgical precision in resource allocation.

    This is where the new role of management manifests itself: the CIO and the CFO must jointly act as ‘silicon guardians’. Their job is no longer just to check that the budget balances, but to build an absolute hierarchy of importance. Each of the 30 pilots should pass through the sieve of hard ROI analysis: does this model really optimise the process, or is it just a technological curiosity?

    Any decision to allocate resources to a particular project is a de facto decision about which area the company wants to gain a competitive advantage in and which it is letting go. The real art of management in 2028 will not be how many AI projects can get off the ground, but how many of them can be killed off early enough for the most promising ones to have something to work on.

    New business grammar: Tokens instead of man-hours

    “The boundary between business and technology isn’t just blurring – it’s ceasing to exist” – these words from Chris Thomas of Deloitte should be engraved above the entrance to every modern conference room. The traditional grammar of business, based on man-hours, licences per user or the number of ‘seats’ in a CRM system, is giving way to a new currency: tokens.

    For CFOs, understanding what a token is and how it affects the balance sheet becomes as critical as analysing operating margins. Tokens are the blood in the veins of AI models, and their volume directly translates into computing power requirements. If, as the report predicts, their volume in corporate processes is set to double or triple in the next three years, then the infrastructure discussion is no longer a debate about ‘buying hardware’. It is a debate about the capacity of the entire enterprise and its ability to generate value.
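
    A toy projection makes the stakes tangible. The growth multipliers below echo the report’s ‘double or triple’ framing; the starting volume and the unit price are invented purely for illustration.

    ```python
    # Toy projection: what a doubling or tripling of monthly token volume over
    # three years does to the annual bill. Starting volume and unit price are
    # invented placeholders, not figures from the report.

    starting_tokens_per_month = 50_000_000_000   # assumed current monthly volume
    cost_per_million_tokens = 2.0                # assumed blended USD rate

    for total_growth in (2.0, 3.0):              # the report's "double or triple"
        future_volume = starting_tokens_per_month * total_growth
        annual_cost = future_volume * 12 / 1_000_000 * cost_per_million_tokens
        print(f"x{total_growth:.0f} over three years -> "
              f"{future_volume / 1e9:.0f}B tokens/month, "
              f"~${annual_cost / 1e6:.1f}M per year at today's unit price")
    ```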

    In this new deal, AI infrastructure is promoted from the role of a quiet back office to that of a leading actor on the front line of the battle for customers. Companies that are able to effectively manage their own ‘computing portfolio’ – skilfully combining closed, open and proprietary on-premise models – gain a flexibility that competitors relying solely on off-the-shelf SaaS services can only dream of.

    Strategic advantage in 2028 will not come from having the best marketing slogans, but from optimising the cost of generating a single intelligent operation. Infrastructure becomes the foundation of innovation: it determines how quickly a company can implement new functions and how deeply it can automate its structures. He who controls access to processors and optimises their use de facto controls the rate at which his business can grow. This is the new economy of scale, in which hardware becomes the hardest of the hard currencies of business.

  • Attacks on US critical infrastructure. How Iran exploited flaws in OT systems

    The false sense of security of modern infrastructure is shattered not by sophisticated algorithms, but by mundane negligence, which in the hands of state actors gains the status of a strategic weapon. Incidents targeting US operational technology systems prove that the weakest link of a digital superpower can be a lack of elementary network hygiene, turning a routine configuration choice into a critical point for the stability of the state.

    While the public debate revolves around mythical zero-day tools and sophisticated cyber-espionage, the reality turned out to be painfully trivial. The key to physical process control systems was not a new generation of digital lockpicks, but an open door that no one saw fit to close.

    Fundamental to this problem is the methodological regression of the aggressors. Traditionally, we view state-sponsored hacking groups as digital laboratories creating unique code with huge market value. Meanwhile, actions targeting the water or energy sectors reveal a shift towards an operational model based on cost efficiency.

    Instead of investing millions of dollars in finding unknown software vulnerabilities, the attackers used widely available tools for scanning internet-exposed assets. In this new doctrine of ‘cyber-pragmatism’, it is not the hacker who adapts to the target, but the target that is chosen for its public visibility and its lack of elementary barriers such as unique passwords or multi-factor authentication.

    This situation exposes a profound crisis in the concept of air-gapping, the physical isolation of operational technology (OT) systems from external networks. For decades, the belief in the security of programmable logic controllers (PLCs) or SCADA systems was based on their supposed inaccessibility. However, the Industry 4.0 paradigm, which enforces a constant flow of analytical data and the need to service devices remotely, has quietly and effectively crushed this wall.

    In many cases, systems that were listed as isolated in the documentation actually had active connections to the internet, configured on an ad hoc basis for the convenience of administrators or external providers. This ‘digital convenience’ has become the most effective ally of foreign intelligence.

    Operational technology has specific characteristics that make it extremely vulnerable to simple attacks. Unlike the dynamic world of IT, where the hardware life cycle closes within a few years, industrial infrastructure is designed for decades. Many of the controllers currently in operation date back to a time when communication protocols such as Modbus were built with performance in mind, completely ignoring security. In that world, trust was the default.

    Today, these same devices, lacking encryption or identity verification mechanisms, are rendered defenceless against anyone who can establish a communication session with them. This is not a bug in the code; it is a bug in the very design philosophy of systems that have suddenly gained global connectivity.

    An analytical look at the timing of these attacks allows us to see them as a form of digital signal diplomacy. These incidents occurred at a sensitive moment of international tensions, suggesting that their main objective was not total physical destruction, but a demonstration of capability. Hitting the municipal sector, often seen as less protected than military systems, allows the aggressor to dose the pressure with precision. It is a kind of proof of access – proof of having access to the critical switches of the state, which can be used as a bargaining chip at the negotiating table. Such a strategy allows operating below the threshold of open armed conflict, while creating real social and political unrest.

    It should be noted that attribution in cyberspace always remains subject to a degree of uncertainty, which favours a strategy of so-called plausible deniability. The use of simple tools and known vulnerabilities means that traces left by attackers can mimic the actions of amateur hacking groups or common cyber criminals. For the targeted state, this creates a doctrinal dilemma: how to respond to an incident that is technically primitive but strategically strikes at the heart of citizen security.

    The lessons learned are harsh for existing risk management models. Focusing resources on combating the most advanced threats while ignoring digital hygiene in the OT sphere is akin to building an armoured door in a house with open windows. The challenge is no longer simply to purchase more expensive AI-based defence systems, but to return to rigorous network segmentation and auditing of the simplest access settings.

  • The CIO’s dilemma: How to reconcile speed of development with maximum protection?

    Business architecture resembles a complex organism in which the flow of information determines survival and growth. For decades, those in charge of technology strategy in companies have operated within a paradigm that today is becoming not only inefficient, but downright risky. The traditional division of roles, in which one group of specialists built efficient data buses and another – often in some isolation – sought to secure them, is becoming a thing of the past.

    When security is tacked on as the final piece of the puzzle, it ceases to serve its purpose. It becomes a brake, a generator of unnecessary costs and, worst of all, a source of a false sense of control.

    Historically, the primary responsibility of CIOs has been to ensure operational and process continuity. Protecting digital assets has been treated as a necessary but secondary add-on, often implemented in response to emerging threats. Today’s regulatory landscape, boardroom pressures and unprecedented technological fragmentation, however, have forced a complete reversal of this order.

    Security is no longer a finish line to aim for, but a foundation without which modern business cannot take off at all. Accepting the premise that security must be an integral part of the design phase is not just a technical requirement, but above all a sign of business maturity.

    For years, IT directors have been grappling with a classic dilemma: how to accelerate digital transformation while raising the security bar, all within strictly defined budgets. In the traditional view, these two objectives appear mutually exclusive. Any additional security control is seen as a layer that adds latency, and any attempt to speed up the network is seen as a risky lowering of the guard.

    This tension, however, is largely an illusion resulting from managing the two disciplines as independent mechanisms. The problem lies not in the sheer desire to be fast and safe at the same time, but in the architectural fragmentation that makes these systems constantly compete with each other instead of working together.

    Complexity has become the silent enemy of efficiency. For years, enterprises have been amassing point solutions from different vendors, building ecosystems consisting of dozens of independent consoles, agents and rule sets. Each new piece of this puzzle, while theoretically enhancing a particular slice of protection, actually generated more operational friction.

    Deadlocks were created and IT teams wasted time manually correlating data from multiple incompatible sources. In such an environment, business agility becomes a purely theoretical concept, as every attempt to change the configuration or implement a new service requires painstaking reconciliation of conflicting security and network policies.

    The solution to this crisis is convergence, i.e. adopting an operational model based on unified platforms that integrate network and security into a single, consistent data source. When these two worlds begin to speak the same language, the conflict of interest disappears. Security ceases to be an external filter and becomes a native function of the infrastructure itself.

    This allows for unprecedented operational clarity, even in the most distributed environments, from local data centres to public clouds and remote access points. With this approach, it is possible to drastically reduce the time it takes to detect anomalies and stop incidents before they can have a real impact on the company’s bottom line.

    When security is natively built into the network fabric, optimisation occurs that cannot be achieved by layer-by-layer methods. Systems respond more smoothly because the need for multiple inspections of the same packets by separate devices is eliminated. At the same time, policy consistency becomes a reality – the same access and protection rules apply whether an employee logs in from the company’s head office or home office.

    It is also worth noting that no platform, even the most advanced, can replace human intelligence, but it can significantly multiply its capabilities. The talent deficit in the area of cyber security is a structural challenge faced by almost every industry. In this context, artificial intelligence and automation are becoming key tools in the hands of the CIO.

    Properly integrated into the operations platform, this technology allows for instant analysis of patterns, summarising alerts and taking over repetitive, tedious tasks. This allows highly skilled professionals to focus on strategic operations and creative problem-solving, rather than getting lost in a thicket of false alerts.

    The evolution of the IT director’s role today is shifting from managing technology to building business resilience. Unified architectures are becoming the most important ally in this process. They allow regulatory requirements and compliance issues to be transformed from an onerous obligation into a natural, automated process. Instead of a constant race against time and attempts to patch more vulnerabilities, the organisation gains a solid foundation that supports innovation.

    Security, approached this way, is akin to the assistance systems in a modern racing car. They are not installed to make the driver go slower, but so that he or she can drive at maximum speed with complete confidence in the machine, certain that in a critical situation the systems will react faster and more precisely than any human could.

  • Mac mini in the company: Why does it pay off more than a PC?

    For years, the choice of hardware infrastructure was based on a dichotomy between pragmatism and prestige. Apple’s solutions, while prized for their refinement and aesthetics, have often been relegated to the budget margins as an expensive privilege reserved for niche creative departments. However, 2026 brings a fundamental shift in perspective. The Mac mini, equipped with M4-generation chips and the upcoming M5 units, has become the most precise tool in the hands of finance and technology executives. It turns out that the device with the lowest threshold of entry into the Apple ecosystem can generate the highest return on investment.

    Revisiting the cost myth through the lens of TCO

    The foundation of scepticism towards macOS deployment in a business environment has always been the unit purchase price. However, this is a short-sighted perspective that ignores the real life-cycle costs of the product. An economic analysis covering a four-year period shows that the initial, slightly higher investment in the Mac mini is rapidly amortised through a dramatic reduction in operating costs. The stability of the architecture, based on Apple’s proprietary silicon, means that support departments are seeing a reduction in incidents by nearly half. The reduced failure rate not only saves IT professionals man-hours, but eliminates costly downtime for operations teams.

    Another pillar of this profitability is the residual value. Unlike standard PCs, which often lose almost all their market value after four years of use, the Mac mini remains a highly liquid asset. The ability to recoup a significant amount of capital when replacing the fleet with a newer generation drastically alters the bottom line, making this device a de facto cheaper solution than theoretically cost-effective alternatives.
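
    The argument can be reduced to a simple model. The sketch below uses placeholder figures – none of them come from the article – but it shows how support load and residual value, rather than the sticker price, end up dominating a four-year TCO comparison.

    ```python
    # Minimal four-year TCO sketch contrasting a higher purchase price with a
    # lower support load and a higher residual value. All figures are
    # placeholder assumptions, not data from the article.

    def four_year_tco(purchase, incidents_per_year, cost_per_incident, residual_value):
        """Total cost of ownership over four years, net of resale value."""
        support = 4 * incidents_per_year * cost_per_incident
        return purchase + support - residual_value

    mac_mini = four_year_tco(purchase=800, incidents_per_year=1.0,
                             cost_per_incident=120, residual_value=250)
    typical_pc = four_year_tco(purchase=650, incidents_per_year=2.0,
                               cost_per_incident=120, residual_value=50)

    print(f"Mac mini, 4-year TCO:   ${mac_mini:,.0f}")
    print(f"Typical PC, 4-year TCO: ${typical_pc:,.0f}")
    ```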

    Data sovereignty and the local power of AI

    Today’s businesses face the challenge of integrating artificial intelligence into everyday processes while maintaining strict privacy standards and compliance with GDPR. This is where the Mac mini reveals its second, strategic face. Thanks to the Neural Engine, Apple Intelligence processes and agents like OpenClaw can operate locally, without the need to transfer sensitive corporate data to external cloud servers. Transforming a workstation into a private AI server allows calendar management, correspondence sorting or documentation analysis to be automated in a secure, isolated environment.

    An investment in M4 and M5 architecture is therefore an investment in digital sovereignty. The ability to process complex language models directly on the employee’s desk not only increases the speed of work, but also minimises the legal and reputational risks associated with potential data leaks from the cloud. In an age of increasing cybercrime, chip-integrated hardware security is a barrier that, when implemented in distributed PC environments, often requires the purchase of additional, expensive licences and filtering software.

    Resource recovery through deployment automation

    Managing a computer fleet in a large organisation can sometimes be a logistical nightmare, draining the energy of skilled technical staff. Mac mini, supported by the Apple Business Manager ecosystem, introduces the Zero-Touch Deployment standard, which redefines the role of the administrator. The process, in which the device goes directly from the vendor to the user and configures itself automatically when it first connects to the network, eliminates the need to manually prepare system images or install drivers.

    The lack of hardware fragmentation – the fact that the same manufacturer is responsible for the processor, motherboard, and operating system – results in the near elimination of system conflicts. In an environment where stability is synonymous with profit, the predictability of the Mac mini becomes a key advantage. Freed from having to put out bug fires after operating system updates, IT departments can focus on higher-value-added projects, which translates directly into company-wide innovation.

    Performance psychology and employee well-being

    The human aspect is often overlooked when discussing corporate equipment, even though it determines the ultimate efficiency of processes. The choice of work tools is a clear signal sent to the team about the organisational culture and respect for the employee’s time. The Mac mini, with its silent operation even under heavy load and impeccable aesthetics, promotes an ergonomic and modern working environment.

    High satisfaction with the equipment in use translates into talent retention, especially in sectors requiring high digital competence. An employee who has a tool that is responsive, reliable and integrated with modern SaaS solutions not only works faster, but also with greater engagement. From the CIO’s perspective, ensuring the smoothness of the interface and stability of connections to the cloud via standards such as Wi-Fi 7 or Thunderbolt 5 is a form of nurturing the continuity of business processes.

    A new standard of pragmatism

    The paradox of the Mac mini is that a device perceived as a ‘premium’ product because of its brand actually promotes a lean approach. Maximising impact while minimising unnecessary resources – both time and money – makes this unit an ideal building block for a scalable business. With the upcoming M5 generation placing an even stronger bet on the autonomy of artificial intelligence, choosing this platform seems the most logical step for organisations aspiring to lead in digital transformation.

  • Storage versus artificial intelligence. How to avoid bottlenecks in IT?

    While advanced artificial intelligence algorithms reach the heights of popularity, the attention of decision-makers often overlooks the foundation on which the entire digital transformation rests. Storage, for years treated like a digital basement for the mindless accumulation of information, has undergone a fundamental revolution. Modern storage is the intelligent nervous system on which operational fluidity and the ability to compete in a rapidly changing marketplace depend.

    Awakening from technological lethargy

    Technological development is inextricably linked to the exponential growth of information. According to forecasts by market analysts from IDC, global data generation will reach a dizzying volume of almost four hundred zettabytes by 2028. This is a volume that traditional infrastructure, designed for the realities of a decade ago, simply cannot cope with.

    The gap between tight IT department budgets and growing business expectations is becoming increasingly apparent. Treating storage purely as generic hardware, whose capacity is increased by mechanically adding more drives, is today an anachronistic approach. Organisations that remain with this model are themselves creating structural barriers that block their own flexibility and innovation.

    From a passive warehouse to an analytical centre

    There is now a clear paradigm shift in IT architecture. Modern data storage platforms are sophisticated systems that integrate machine learning and analytics directly at the infrastructure level. They are transforming into autonomous environments capable of independently predicting potential bottlenecks and dynamically optimising resource allocation.

    A perfect illustration of this phenomenon is the e-commerce sector. An advanced storage platform in a large retail company allows processes to be prioritised intelligently and automatically in real time. Systems managing stock levels or loyalty programmes run smoothly, while analysts work seamlessly to personalise offers. This relieves technology teams of tedious maintenance tasks and allows them to devote themselves fully to strategic initiatives.

    Stable foundation

    Today’s technological reality is all about hybrid and distributed environments. Enterprises are constantly seeking the optimum balance between private clouds, which guarantee maximum control, and public clouds, which offer unparalleled scalability.

    In this complex multi-cloud ecosystem, it is intelligent storage that acts as the glue connecting the distributed silos. It ensures that corporate resources remain secure, consistent and instantly available, regardless of their physical or virtual location. This issue becomes particularly important in the context of restrictive European regulations.

    Centralised management of security policies and compliance at the level of the storage itself avoids competency chaos and protects organisations from the harsh consequences of audits.

    Invisible artificial intelligence engine

    The loud enthusiasm around artificial intelligence is sometimes illusory if the basic laws of systems architecture are forgotten. Even the most sophisticated language model or predictive algorithm becomes useless in the face of data latency.

    In machine learning-based projects, modern storage acts as a seamless information highway, eliminating congestion that could drastically slow down the training of models. The importance of this throughput is best seen in the healthcare sector.

    The use of artificial intelligence to analyse high-resolution medical images requires microsecond response times. This speed translates directly into the accuracy of early diagnoses and the optimisation of patient care. On a purely corporate level, the same mechanism of instant access to information determines the ability to stay ahead of market rivals.

    New investment perspective

    The evolution of data storage systems is a perfect reflection of the deeper transformation of the entire digital business. The moment has come when the discussion at board level must change its vector. Instead of focusing on the per-unit cost of maintaining a terabyte of information, decision-makers should analyse how data architecture accelerates new product deployments and minimises operational risk.

    Upgrading storage is no longer just a routine administrative task. It is now a fully strategic investment to turn raw, disorderly collections of information into a smoothly functioning mechanism that generates real profit and business stability.

  • The value of IT M&A. Tech giants invest in AI foundations

    Artificial intelligence (AI) has dominated the technology discourse, making its way from a market curiosity to the most expensive ticket to the global business premier league. The year 2025 closed in the technology, media and telecommunications sector with an astronomical $903 billion spent on mergers and acquisitions. Behind the scenes of the fascination with new applications, however, another, much more brutal game is being played. It is a battle for physical infrastructure, computing power and chips. Those controlling the technological foundations will dictate the terms throughout the digital world in the coming decade.

    The figures from GlobalData’s analysis leave no illusions. The 76 per cent jump in the value of global TMT deals compared to the previous year is a clear signal that the market has moved into a completely new phase. Generative artificial intelligence has ceased to be regarded as a purely speculative technology. It has become a firm foundation on which key investment decisions of major corporations are now based. Although the attention of the mainstream media is still focused on innovative software and new end-user functionalities, the real battle for influence is taking place at the infrastructure layer.

    Anatomy of a hundred billion dollars

    When analysing the structure of spending, a clear shift in emphasis becomes apparent. Deals directly related to artificial intelligence alone accounted for $117 billion last year, an impressive 125 per cent year-on-year increase. Application software continues to generate a massive volume of capital, reaching a ceiling of $169 billion across almost two hundred deals, but it is the strategic moves on the technology back-end that will define the future balance of power.

    This landscape is being shaped by decisions of unprecedented scale. The record-breaking acquisition of Platform X by x.ai for $45 billion is a classic example of the consolidation of massive data sets needed to train sophisticated language models. Equally important are the powerful minority partnerships that allow the giants to build a back office without immediately alarming antitrust authorities. Microsoft and Nvidia’s $15 billion investment in Anthropic and Meta Platforms’ $14 billion acquisition of a 49 per cent stake in Scale AI are strategic moves on the chessboard to secure access to the most innovative algorithms and outstanding engineering talent.

    Bottleneck syndrome and new oil

    Understanding these phenomena requires looking at AI through the lens of physical constraints. Computing power has become the new oil, and leading AI chip companies and state-of-the-art data centres are now the most desirable investment targets. The demand for the resources required to support complex models is growing exponentially, exposing the industry-wide bottleneck syndrome.

    Building infrastructure from scratch is an extremely slow and capital-intensive process. Faced with a limited supply of equipment and an acute shortage of skilled professionals, mergers and acquisitions remain the fastest way to secure resources. The consequence of this race is an increasing oligarchisation of the market. The scale of the required financial outlay means that only the organisations with the deepest pockets remain on the battlefield. Smaller players are inevitably relegated to the role of customers forced to rely on external infrastructure, which in the long term exacerbates the risk of technological dependence on a single supplier for entire sectors of the economy.

    A year of operationalising and seeking returns

    Despite the record results, analysts are predicting sluggish transaction activity in the current year, 2026. This projected stagnation, however, does not mean a retreat from innovation. Rather, it is the natural reaction of corporate bodies to the need to integrate giant acquisitions. The pace of further deals is also bound to be affected by unstable macroeconomic conditions and increasing pressure from regulators, who are looking increasingly closely at consolidation in the technology sector.

    The observed decline in merger dynamics is a clear signal of structural change. The market is moving from a phase of aggressive resource aggregation to a phase of operationalisation. The winners of the coming months will not be those making yet another spectacular acquisition, but those organisations that most effectively implement the acquired technologies into their own bloodstream and demonstrate a real return on these astronomical investments.

    Strategic implications for decision-makers

    Access to cutting-edge tools based on artificial intelligence will soon take the form of a fully commercialised service, almost entirely dominated by a narrow range of providers. Understanding this is fundamental to planning long-term operational strategies. The arms race currently taking place at the foundations of infrastructure will ultimately define market standards, pricing models and digital security paradigms for the entire coming decade. Awareness of these processes allows for better risk management and more prudent strategic relationships in a world where physical access to computing power is becoming the most important market advantage.

  • China is arming itself with AI chips. Hua Hong implements 7nm technology

    It has become clear to the global semiconductor sector that Beijing does not intend to wait for sanctions relief from Washington. While analysts’ attention has so far focused almost exclusively on SMIC, a serious domestic competitor has quietly emerged. The Hua Hong Group, China’s second-largest chipmaker, has made significant advances in 7-nanometre (nm) technology – a critical turn in the race for the Middle Kingdom’s technological self-sufficiency.

    According to sources close to the matter, the group’s subsidiary Huali Microelectronics is preparing a production line at its Shanghai Fab 6 facility, where work is underway to deploy the 7nm process that has so far been the domain of SMIC alone in the local market. Although officially Fab 6 operates at the 22nm and 28nm nodes, behind-the-scenes partnerships with domestic equipment suppliers, such as Huawei-backed SiCarrier, suggest that China is building its own manufacturing ecosystem, isolated from Western supply chains.

    From a business perspective, a key player in this puzzle is Huawei. The giant is not only working with Hua Hong to develop lithographic processes, but is actively supporting local hardware manufacturers. This strategy is starting to bring tangible benefits to smaller chip designers. One example is Biren, a Chinese graphics processing unit (GPU) developer, which, after being cut off from TSMC’s production capacity in 2023, is now expected to use Huali’s lines to test prototypes of its AI chips.

    The investment is not just a show of strength, but a real capital move. Hua Hong Semiconductor has announced plans to take control of Huali and raise more than $1 billion for technological upgrades. The goal is clear: to achieve a capacity of several thousand silicon wafers per month by the end of the year.

    Although the manufacturing yields of Chinese companies’ advanced processes still lag behind leaders such as ASML and TSMC, Beijing’s determination to build an alternative AI infrastructure is progressing faster than expected. Hua Hong is ceasing to be a mere backdrop to SMIC and is becoming a full-fledged pillar of Chinese digital independence.

  • Sisyphean work in Silicon Valley. Physics teaches humility about AI

    Cloud computing has for years effectively hidden the physical dimension of the technology, creating the illusion of infinite and seamlessly scalable resources. Generative AI is brutally tearing down this curtain. With the increasing complexity of models and the popularity of artificial intelligence, software development inevitably collides with the hard laws of physics and thermodynamics. Why do hardware engineers today resemble the mythical Sisyphus, and what does the looming technological token explosion mean for the operational strategies and cloud budgets of today’s enterprises?

    An end to the illusion of limitless computing space

    The early popularisation phase of generative artificial intelligence shaped an image in the market mindset of a technology that was lightweight, ubiquitous and almost free. However, consumer chatbots, efficiently generating lines or editing email correspondence, were merely an impressive display window. As analysis shows, the real business revolution, and the only way to generate a return on trillions of dollars of investment, lies in an entirely different area. The world of technology is moving inexorably towards a reality in which agent-based artificial intelligence becomes the operational foundation of businesses.

    The shift from simple text assistants to autonomous agents is a fundamental paradigm shift. It marks an evolution from single user queries to continuous multi-step inference and the execution of complex workflows in the background. Enterprises will soon be making tens of thousands of system calls to large language models every day. This phenomenon is no longer just a fascinating scientific experiment, but is becoming a process of scale and gravity typical of heavy industry, where process optimisation plays a central role.

    The brutal mathematics of floating point operations

    Understanding the challenges ahead requires looking under the hood of powerful language models. Each word generated, or more precisely each token, carries a measurable physical computational cost. The architecture of today’s systems typically requires roughly two floating-point operations per model parameter for every token generated. The scale of this is striking when you consider that the most advanced market models operate on one to two trillion parameters. This means that, even with highly sophisticated optimisation techniques, generating a single token forces the real-time processing of between one hundred and two hundred billion values.

    What’s more, the industry is dynamically shifting towards models based on deep reasoning, in which the contextual window is dramatically expanded. Agent-based artificial intelligence analyses problems multithreadedly, searching for optimal solution paths before formulating a final answer and executing an action. As a result, the number of tokens per query increases exponentially, often by a factor of ten or more. Referring to this phenomenon as a token explosion is not a literary exaggeration, but a chilling description of the digital reality to come.
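
    The arithmetic behind these claims fits in a few lines. The sketch below applies the rule of thumb quoted above – roughly two floating-point operations per active parameter per generated token – to an illustrative parameter count and an assumed tenfold reasoning multiplier.

    ```python
    # Back-of-the-envelope compute cost per token. The 2-FLOPs-per-parameter
    # rule of thumb comes from the text; the parameter count, reply length and
    # reasoning multiplier are illustrative assumptions.

    active_parameters = 200e9            # assumed parameters engaged per token
    flops_per_token = 2 * active_parameters

    plain_reply_tokens = 500             # assumed short chat-style answer
    reasoning_multiplier = 10            # the "factor of ten or more" token explosion
    agent_reply_tokens = plain_reply_tokens * reasoning_multiplier

    print(f"FLOPs per token:          {flops_per_token:.1e}")
    print(f"Plain reply:              {plain_reply_tokens * flops_per_token:.1e} FLOPs")
    print(f"Agentic, deep reasoning:  {agent_reply_tokens * flops_per_token:.1e} FLOPs")
    ```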

    Energy consumption as a new unit of account in business

    The consequence of the aforementioned data growth is a return to the fundamentals of economics, where energy intensity becomes the main barrier. According to market analysts, the energy consumed per single query directly determines the profitability of the entire technology sector. The generative-AI business model is unique in this respect: the target net margin depends as much on ingenious code as on the cooling costs of the server room and a stable power supply.

    Currently, these costs are largely absorbed by model developers, leading to a situation where it is not uncommon for technology giants to subsidise query processing, relying on capital from investors. This model is not likely to stand the test of time in a mature market. The real beneficiaries of the ongoing investment boom today are not the developers of intelligent algorithms, but infrastructure providers, advanced chip manufacturers and data centre builders. The owners of language models do not have a profit machine, but a powerful mechanism in which capital burns in anticipation of the moment when massive use at the corporate level will offset the astronomical cost of maintaining servers.

    The myth of Sisyphus in the modern server room

    The market situation is forcing an unprecedented effort on the part of hardware manufacturers. The semiconductor industry is operating in a state of constant mobilisation, striving to increase the cost efficiency of graphics processing units, developing ever higher bandwidth memories and optimising the network architecture of cluster systems. Despite these colossal efforts, engineers working on hardware development today resemble the mythical Sisyphus.

    This phenomenon can be likened to a kind of Jevons paradox transposed to the digital world. Whenever the technological boulder is successfully rolled to the top of the mountain by creating a new, faster and more energy-efficient generation of processors, software developers immediately increase the complexity of their models. The boulder crashes back down to the foot of the mountain and the work begins again. As artificial intelligence continues to expand its analytical and operational capabilities, full cost optimisation seems a horizon that is constantly receding. Computational requirements are growing faster than the ability to serve them cheaply, representing an uncompromising clash between unlimited ambition and the limits imposed by semiconductor physics.

    Survival architecture, or cost engineering as an operational priority

    Awareness of the technological and physical considerations described is crucial for planning long-term business strategy. The end of the era of free experimentation means that target implementations of artificial intelligence systems in the corporate environment will have to be subject to rigorous financial and architectural evaluation. The implementation of agent-based systems will bring organisations leaps in productivity by automating complex workflows, but these benefits will be wiped out in a fraction of a second if the toll on computing resources gets out of hand.

    Modern IT infrastructure management will be inextricably linked to the implementation of advanced cloud cost engineering. Instead of routing every trivial task to the most resource-intensive models with trillions of parameters, organisations will be forced to design agile hybrid architectures. Intelligent process routing will involve delegating simple operations to much smaller, highly specialised and energy-efficient models. The costly computing power of the largest market systems will in turn be precisely reserved exclusively for tasks requiring the highest level of abstract inference.
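
    A minimal sketch of that routing idea is shown below. The model tiers, their prices and the complexity flag are invented placeholders; the point is simply that the choice of which model serves a request becomes an explicit, cost-aware piece of code rather than a default.

    ```python
    # Cost-aware routing sketch: trivial requests go to a small, cheap model,
    # and only tasks flagged as needing deep reasoning reach the most
    # expensive tier. Names and prices are invented placeholders.

    from dataclasses import dataclass

    @dataclass
    class ModelTier:
        name: str
        cost_per_1k_tokens: float   # assumed USD price

    SMALL = ModelTier("small-specialised", 0.0004)
    LARGE = ModelTier("frontier-reasoning", 0.06)

    def route(task_complexity: str, estimated_tokens: int) -> tuple[ModelTier, float]:
        """Pick a tier and return it together with the estimated cost."""
        tier = LARGE if task_complexity == "deep-reasoning" else SMALL
        return tier, estimated_tokens / 1000 * tier.cost_per_1k_tokens

    for complexity, tokens in [("routine", 800), ("deep-reasoning", 12_000)]:
        tier, cost = route(complexity, tokens)
        print(f"{complexity:>14}: {tier.name:<20} ~${cost:.4f}")
    ```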

    Understanding the physical, energy and economic limits of technology is becoming the new foundation for market advantage. Only those organisations that can harmoniously combine a bold vision of advanced automation with a cool, rigorous calculation of every watt consumed and token generated in the background will succeed in the target phase of artificial intelligence development.

  • AI infrastructure crisis: Lack of electricians and engineers a major brake on the digital revolution

    In the common perception of executives, artificial intelligence appears as an ethereal, almost metaphysical entity. We see it through the prism of algorithmic elegance and the infinite scalability of the cloud, forgetting that every query sent to a language model initiates a cascade of events in the most material world possible. The latest market data forces us to brutally revise this digital idealism. For it turns out that the biggest brake on the modern economy is not a shortage of creative programmers, but hard infrastructure constraints: a lack of copper, a shortage of power in transmission networks and, most acutely, a dramatic shortage of manpower in professions that have so far rarely been on the agenda of technology company board meetings.

    The scale of this challenge is illustrated by the dynamics of energy forecasts. When, in just seven months, BloombergNEF analysts revise projected energy demand for data centres upwards by more than a third, it becomes clear that strategic planning in the IT sector has entered a terrain of high uncertainty. The projected 106 gigawatts of power consumption in the US infrastructure alone by 2035 is not just an engineering challenge, it heralds a new era in which computing power will become a scarce good, rationed by the physical capacity of transformers and the availability of technical staff.

    We are entering a period where the ‘fluidity’ of digital innovation is colliding with the ‘stickiness’ of real-world investment processes. Although the construction of AI data centres is progressing at an unprecedented pace, developers are encountering a glass ceiling that cannot be broken through with code optimisation. This problem is analysed by IEEE Spectrum, among others, pointing to a dangerous skills gap. While the labour market has been saturated with abstraction-layer specialists for years, the real technology base – server rooms, cooling systems and high-voltage networks – has begun to suffer from a chronic shortage of qualified structural, mechanical and electrical engineers.

    This paradigm shift is redefining the concept of ‘IT talent’. The traditional battle for developers is giving way to a much tougher battle for multi-tasking infrastructure operators. Data from the AFCOM report suggests that, for more than half of data centre managers, it is operations staff and physical security specialists who are the bottleneck to growth today. We need experts who can manage critical high-density liquid cooling systems with the same agility as their software colleagues manage databases. Unfortunately, the need for these competencies is growing at a time when the global electricity grid is undergoing its most serious upgrade in decades, leaving the AI sector to compete with the renewable energy and industrial construction industries for the same engineers.

    In response to these deficits, technology hegemons such as Microsoft, Google and Amazon are beginning to take on roles traditionally assigned to state education systems. The creation of their own academies and partnership programmes with technical schools is not a sign of philanthropy, but a pragmatic attempt to secure the competence supply chain. There is a lesson here for medium-sized market players about the need for a deep review of business resilience strategies. The success of AI deployment will increasingly depend on the ability to secure the physical resources and technical competencies that guarantee the continuity of systems in a world with rising energy and water costs.

    Ultimately, the issue of sustainability ceases to be the domain of PR departments and becomes the foundation of risk analysis. The increasing consumption of water to cool servers and the drastic differences in the carbon footprint of different geographical regions make the choice of infrastructure partner an ethical and financial decision. A lack of awareness regarding where the energy powering our AI models is coming from and who is looking after their physical performance can become a costly oversight. The future of business belongs to those leaders who can look beyond the monitor screen and see that their digital ambitions are inextricably intertwined with the fate of the engineer working on high-voltage systems.

    For years, we have lived in a paradigm where software has ‘eaten the world’, suggesting that hardware is merely a cheap and replaceable base. The AI revolution is reversing this vector. Today, it is the availability of physical infrastructure that dictates the pace of digital innovation. For business leaders, this means going back to the roots of operational planning: securing scarce resources, investing in people with specific physical skills and taking responsibility for the entire technology lifecycle – from the water intake of the cooling systems to the energy mix of the local grid. It is a lesson in humility towards the physical world that will ultimately determine who emerges victorious from the race for supremacy in the age of algorithms.

  • IT declares death, business counts profits. Why does the mainframe still rule the world?

    Every morning, millions of people around the world perform the same, almost mechanical action: bringing their payment card close to a terminal, checking their balance on a mobile app or booking a train ticket to the other end of the country. All this is done in the aesthetically pleasing, responsive interfaces that we associate with modernity. Few people realise, however, that underneath this shiny layer of ‘front-end’ beats the heart of a technology that was already labelled an open-air museum in the 1990s.

    The mainframe and the COBOL language – for that is what we are talking about – are the cornerstones of the global economy. Although the IT world cultivates a cult of novelty, business reality keeps contradicting the ‘death of the mainframe’ narrative. Today, we must ask ourselves: are these systems really the ballast of the past, or are they the most solid insurance policy available to modern business?

    The foundation of stability: Why don’t the giants go away?

    In the technology sector, myths die a slow death. One of the most persistent is the belief that modern distributed architecture (microservices, cloud) can seamlessly replace the mainframe monolith. Meanwhile, banks, insurance companies, public administration systems and logistics giants still base their critical processes on COBOL. Why?

    The answer is transactional performance, which cannot be easily faked. The mainframe was designed for one purpose – to handle a gigantic number of real-time input/output operations while maintaining almost 100 per cent availability. In a cloud architecture, latency resulting from communication between distributed servers can become an insurmountable barrier when processing thousands of transactions per second. The mainframe is a ‘money machine’ in the literal sense – it is the one that settles pensions, taxes and interbank transfers, with a stability that many modern platforms can only dream of.

    The economics of code: When the cloud becomes a trap

    Many business leaders look at the mainframe through the prism of the cost of maintaining their own infrastructure and licences (CapEx). Moving to a cloud model (OpEx) seems an enticing promise of savings and flexibility. However, the reality can be brutal on the wallet.

    In a mainframe environment, every instruction has a measurable price. CPU consumption, database operations, working time – all of this translates into monthly invoices. This is why traditional COBOL programmers were (and are) masters of optimisation. Every millisecond saved is profit for the company.

    By moving the same, often suboptimal processes to the cloud in a pay-as-you-go model, companies fall into a trap. Without deep code optimisation, the dynamic scaling of the cloud makes bills grow exponentially. Often, escaping the ‘IBM monopoly’ ends in an even more expensive dependency on cloud providers, where the cost of data transfer and computing power at massive transaction scale exceeds the budget for maintaining an in-house mainframe. Unsurprisingly, some organisations, after costly migration trials, are coming back down from the cloud and meekly returning to proven on-premise solutions.

    Risk management: The skills gap as a real threat

    The real threat to business is not mainframe technology itself, but what sociologists call the ‘silver tsunami’. The experts who have been building and maintaining these systems for the last 30-40 years are retiring.

    For decades, COBOL has been removed from university curricula as an ‘unattractive’ language. Young programmers prefer JavaScript or Python frameworks, which offer instant visual gratification, code autocompletion and modern development environments. Working on a mainframe, where the compiler is austere and points out errors with merciless precision, is not ‘sexy’.

    For business, this is a critical situation. Unless there is a generational change, the systems that drive the economy will be left unattended. This is an operational risk greater than any hacking attack. The lack of specialists capable of optimising code and understanding the architecture of legacy systems could paralyse financial institutions within the next decade. Knowing how the ‘heart’ of a system works is now becoming a rarer and more valuable commodity than knowing the latest mobile app development framework.

    A strategy for tomorrow: Modernisation instead of revolution

    Instead of a radical and risky migration, more and more organisations are choosing the middle way – the hybrid model. This involves keeping a stable, optimised core in COBOL and encapsulating it with modern middleware layers. This allows the ‘old’ mainframe to communicate securely with new mobile applications or AI systems via APIs.
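
    In practice, this encapsulation usually takes the form of a thin API facade in front of the legacy core. The sketch below is a hypothetical illustration only: FastAPI is used here merely as an example web layer, and call_legacy_transaction is a placeholder for whatever connector or middleware the organisation actually runs – it is not a real product API.

    ```python
    # Hypothetical facade pattern: a modern REST endpoint delegating to a
    # legacy mainframe transaction through a placeholder integration function.
    from fastapi import FastAPI, HTTPException

    app = FastAPI(title="Account facade over the COBOL core")

    def call_legacy_transaction(transaction_id: str, account_id: str) -> dict:
        # Placeholder: in a real deployment this would invoke the mainframe
        # transaction via the chosen integration layer (message queue,
        # connector, terminal gateway, etc.).
        return {"account_id": account_id, "balance": "1024.50", "currency": "EUR"}

    @app.get("/accounts/{account_id}/balance")
    def get_balance(account_id: str) -> dict:
        result = call_legacy_transaction("BALINQ", account_id)
        if not result:
            raise HTTPException(status_code=404, detail="Account not found")
        return result
    ```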

    Modernisation does not necessarily mean demolishing foundations. It can mean strengthening them. Investing in training for existing IT teams, valuing mature talent (mentoring) and opening up to cross-functional collaboration on critical systems is the only way to maintain business continuity.

    A heart that must beat

    The mainframe does not need our pity or nostalgia. It is a technology that defends itself – with performance, stability and scale. But as business leaders, we need to stop treating it as an ’embarrassing secret’ hidden in the server room.

    Recognising the value of these systems is the first step to securing the future. The mainframe is not a technology debt that needs to be repaid as soon as possible. It is a powerful, undervalued insurance policy. But in order for it to continue to protect our transactions and data, we need to nurture a new generation of ‘digital mechanics’ who will not be afraid to get their hands dirty in COBOL code. Because when the heart stops beating, even the most beautiful organism – which is the modern corporation – simply ceases to exist.

  • Another fibre optic cable damaged in the Baltic Sea. Critical infrastructure under the magnifying glass of investigators

    The issue of digital security in the Baltic Sea has once again become the number one topic for telecom operators and state services. Just a few days after the incident between Finland and Estonia, there was another damage to undersea infrastructure – this time off the coast of Latvia.

    Authorities in Riga have confirmed that a fibre-optic cable belonging to a private operator was ruptured on 2 January. The incident occurred near Liepāja, Latvia’s third-largest city. Prime Minister Evika Silina, communicating via the X platform, indicated that there was physical damage, and preliminary findings by the services suggest the involvement of a vessel. While the incident did not cause noticeable service interruptions for Latvian consumers – indicating effective network redundancy – the situation is being treated as a priority by law enforcement authorities.

    The details of the investigation shed interesting light on the mechanics of this type of incident. According to an analysis of data from the Latvian Navy, the suspected vessel followed a trajectory that first crossed the line of the now-defunct cable before changing course towards the active infrastructure. Investigators boarded the vessel, which is currently docked in the port of Liepāja. The crew, who have submitted to questioning, are cooperating with the police, and no arrests have been made at this time. The services are trying to establish whether this was an unfortunate navigational accident or gross negligence.

    What is of concern to the industry, however, is the frequency of these incidents. This is the second such incident in just one week. On New Year’s Eve, a data link between Estonia and Finland was damaged. In that case, the Finnish authorities took stronger measures, arresting a vessel that was found with its anchor chain down, directly linking it to the failure.

  • Patriotism or cold calculation? Why IT is going back to its roots (and local servers)

    Amid growing geopolitical uncertainty, the mantra of unconditionally moving resources to the global cloud is losing relevance, giving way to the urgent need to build digital independence. Infrastructure and operations (I&O) leaders need to prepare for a year in which physical data localisation and supplier diversification will become not so much a technological option as a key component of business survival strategies.

    For the past decade, the IT strategy of many businesses has been based on a simple premise: a global hyperscaler will do it better, cheaper and more securely. Local data centres were treated as a relic of the past, and the notion of digital sovereignty was reduced to the need to meet GDPR requirements. Today, this paradigm is being rapidly eroded. A tough question is increasingly being asked in CIOs’ offices: what happens if global digital supply chains are disrupted?

    Geopatriation: A strategy for the times of ‘decoupling’

    The notion of geopatriation, which is beginning to dominate trend analyses for the coming quarters, is sometimes mistakenly equated in the IT community with simple local economic patriotism. This is a cognitive error that can cost companies their stability. In reality, geopatriation is a reaction to the global trend of ‘decoupling’, i.e. the separation of economic and technological blocs.

    Modern I&O cannot ignore the fact that the public cloud is not an ethereal entity, but physical infrastructure under the jurisdiction of specific powers. Relocating workloads from global platforms to regional or national solutions ceases to be a matter of ideology and becomes part of systemic risk management.

    The key shift is from data sovereignty (where the files sit) to operational sovereignty. IT leaders need to ask themselves: in the event of sanctions, regulatory changes in the US or Asia, or physical disruption of cross-border links, will my business retain operational capability? Geopatriation is essentially building a technical insurance policy. It reduces geopolitical risk and makes critical business processes independent of decisions made on other continents.

    Composability: How to escape the “Vendor Lock-in” trap

    Critics of the local approach rightly point out that abandoning the global cloud could mean being cut off from innovation. Regional providers rarely have the R&D budgets of the Silicon Valley giants. The solution to this dilemma is a new approach to hybrid computing.

    Hybridisation in 2025 is not about stitching an old server room to the cloud with a VPN. It is a philosophy of composable, extensible architecture. I&O managers must build systems from interchangeable building blocks: coordinating compute, storage and networking in such a way that resources can be freely swapped between providers.

    If a global provider becomes risky (politically or cost-wise), the company should be technically able to move processes to local infrastructure without rewriting applications. This approach forces I&O leaders to change their thinking about architecture – from monolithic deployments to flexible, containerised architectures that ‘float’ between different environments. This is where the real business value is born: in the ability to adapt quickly, rather than in simply owning the servers.
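
    What ‘interchangeable building blocks’ can mean at the code level is sketched below: the application depends only on an abstract storage interface, and the concrete backend – a hypothetical global object store or an equally hypothetical local one – is selected at deployment time. All names and types here are illustrative assumptions, not real vendor SDKs.

    ```rust
    // Sketch of provider-agnostic storage: the application depends only on the trait,
    // so swapping the backend does not require rewriting application code.
    use std::collections::HashMap;

    trait BlobStore {
        fn put(&mut self, key: &str, data: &[u8]);
        fn get(&self, key: &str) -> Option<Vec<u8>>;
    }

    /// Stand-in for a global hyperscaler's object storage (would wrap its SDK in reality).
    struct GlobalCloudStore { objects: HashMap<String, Vec<u8>> }

    /// Stand-in for a regional or on-premises store behind the same interface.
    struct LocalStore { objects: HashMap<String, Vec<u8>> }

    impl BlobStore for GlobalCloudStore {
        fn put(&mut self, key: &str, data: &[u8]) { self.objects.insert(key.to_string(), data.to_vec()); }
        fn get(&self, key: &str) -> Option<Vec<u8>> { self.objects.get(key).cloned() }
    }

    impl BlobStore for LocalStore {
        fn put(&mut self, key: &str, data: &[u8]) { self.objects.insert(key.to_string(), data.to_vec()); }
        fn get(&self, key: &str) -> Option<Vec<u8>> { self.objects.get(key).cloned() }
    }

    /// Application logic written once, against the abstraction.
    fn archive_invoice(store: &mut dyn BlobStore, id: &str, pdf: &[u8]) {
        store.put(id, pdf);
    }

    fn main() {
        // The choice of backend becomes a deployment decision, not a rewrite.
        let mut store: Box<dyn BlobStore> = if std::env::var("PREFER_LOCAL").is_ok() {
            Box::new(LocalStore { objects: HashMap::new() })
        } else {
            Box::new(GlobalCloudStore { objects: HashMap::new() })
        };
        archive_invoice(store.as_mut(), "invoice-42", b"%PDF-...");
        println!("stored: {}", store.get("invoice-42").is_some());
    }
    ```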

    Crisis of confidence and defence of identity

    The proliferation of infrastructure (Edge, local cloud, global cloud) brings with it a new threat: the erosion of trust. In an environment where data travels across multiple jurisdictions and systems, verifying what is true becomes an engineering challenge.

    Therefore, security against disinformation is becoming an integral part of the new I&O strategy. We are not talking about PR and image protection, but about hard technologies for digital identity verification. In the era of deepfakes and software supply chain attacks, companies need to implement mechanisms that guarantee that a piece of code, a command or a user is what it claims to be.

    For operations departments, this means implementing systems that validate the authenticity of communications at every stage. Protecting brand reputation starts deep at the infrastructure layer – from securing the identity of administrators to cryptographically signing application containers.

    The economics of independence: Energy efficiency as a necessity

    Building a sovereign, hybrid infrastructure is more expensive than renting computing power on a pay-as-you-go model from a giant. This is a fact that CFOs often do not want to discuss. However, I&O managers have a new argument in hand: energy-efficient computing.

    New technologies and practices that reduce the carbon footprint are not just a nod to ESG. They are a way to fund independence. The use of neuromorphic systems, optical computing or simply radical energy optimisation of data centres reduces the operating costs of in-house and co-located infrastructure.

    In this way, ‘Green IT’ ceases to be a marketing add-on and becomes the foundation of the hybrid model’s profitability. I&O leaders who combine the geopatriation trend with an aggressive energy efficiency strategy will be able to prove to management what is most important: operational security while maintaining budgetary discipline.

    From administrator to strategist

    The infrastructure and operations areas are entering a phase of strategic maturity. The role of the head of I&O is evolving from a provider of resources (‘give me a server’) to an architect of state and business continuity.

    Understanding the impact of geopatriation and implementing a model where a company is not held hostage to one provider or one jurisdiction is the most pressing task for the coming months. Those who treat this trend as a trivial throwback to the past may wake up to the reality that they have no control over their own digital destiny.

  • Lenovo targets technology debt. New offensive in the area of storage and HCI

    Lenovo targets technology debt. New offensive in the area of storage and HCI

    In mid-December 2025, the IT infrastructure market received a clear signal from Lenovo that the Chinese giant intends to aggressively address the gap between the growing ambitions of AI and the ageing hardware base of enterprises. The company announced a major refresh of its ThinkSystem and ThinkAgile portfolios, addressing two of the most pressing concerns of today’s CIOs: insufficient storage performance for AI workloads and strategic uncertainty around virtualisation.

    The decision to introduce the new solutions is no accident; it is a direct result of hard market data. According to IDC analysts, as much as 80 per cent of the storage deployed in the last five years is still based on traditional spinning disks (HDDs). In the era of generative AI, such infrastructure becomes a bottleneck, effectively stifling innovation. Lenovo is responding with its new ThinkSystem DS series of disk arrays: all-flash systems designed for SAN environments to eliminate data latency, while offering a simplicity of deployment that is often lacking in enterprise-class solutions.

    Equally important, the new offering is a response to the market turmoil around virtualisation platforms. Stuart McRae, executive director at Lenovo, directly points to the “unclear virtualisation strategy” in many organisations as a barrier to modernisation. The answer is to be found in the new release of hyperconverged infrastructure (HCI) from the ThinkAgile FX family. A key differentiator of these systems is their open architecture, allowing seamless migration between VMware and Nutanix solutions without replacing the hardware layer. For the partner channel, this is a strong sales argument, offering end customers real security against vendor lock-in and flexibility in their choice of software provider.

    The portfolio is complemented by solutions targeting the Microsoft and Nvidia ecosystem. The ThinkAgile MX series, integrated with Microsoft Azure Local and equipped with NVIDIA RTX Pro 6000 GPUs, clearly positions Lenovo as an infrastructure provider for edge AI processing. And for customers who prefer a Nutanix environment, there is the ThinkAgile HX series with the Nutanix Enterprise AI suite, which is expected to reduce the time to deploy machine learning models from weeks to minutes.

    Complementing the hardware offensive is an expanded services layer. Mindful of the Gartner statistic that 63% of companies do not have adequate data management procedures for AI, Lenovo is emphasising consulting and implementation services. The whole offering is bundled with the TruScale model, part of the market shift away from one-off CapEx outlays towards a flexible consumption model. The December launch is Lenovo’s attempt to get ahead of the market – the company does not want to be just a ‘box’ supplier, but the architect of a transformation in which hardware ceases to be a brake on business aspirations.

  • A billion dollars for clean server room water. Vertiv closes strategic acquisition of PurgeRite

    A billion dollars for clean server room water. Vertiv closes strategic acquisition of PurgeRite

    Vertiv Holdings Co, a global provider of critical digital infrastructure, has successfully completed its previously announced acquisition of Purge Rite Intermediate LLC (“PurgeRite”), a leading provider of mechanical flushing, venting and filtration services for liquid cooling systems in data centres and other business-critical facilities. The approximately US$1 billion transaction expands Vertiv’s capabilities in thermal management services and strengthens the company’s position as a global leader across the entire service chain for next-generation liquid cooling systems.

    – ‘We are delighted to officially welcome PurgeRite to Vertiv, which will allow us to further develop our competence in liquid cooling services,’ said Gio Albertazzi, CEO of Vertiv. – ‘PurgeRite’s expertise in liquid management perfectly complements our current portfolio and enhances our ability to provide comprehensive support to Vertiv customers who operate high-density, AI-driven computing environments, where effective thermal management is crucial for performance and reliability.’

    High-performance computing (HPC) systems and AI factories require liquid cooling to operate, and a clean coolant circuit is crucial to maximising their efficiency. Achieving this starts with ensuring optimal flow from the commissioning stage, by making sure the fluid is ultra-pure, free of air and particulates, and chemically stable. That balance then has to be preserved in order to sustain high performance throughout the lifecycle of the system.

    The integration of PurgeRite’s expertise into Vertiv’s existing portfolio of thermal management solutions will bring significant benefits to customers. Improved heat transfer and equipment performance will result in more efficient systems. Increased operational excellence will reduce the risk of downtime. The scale of services supporting Vertiv customers’ global operations will also be expanded, with consistent quality.

    Headquartered in Houston, Texas, USA, PurgeRite is an industry leader in the mechanical flushing, venting and filtration of liquid cooling systems, and its key customers are hyperscale and Tier 1 colocation providers managing critical data centre environments. It brings to Vertiv engineering expertise, proprietary technologies and the scale to cope with demanding deployment schedules in data centre projects. It will also enable the deployment of complex liquid cooling solutions across the entire thermal chain – from chillers to coolant distribution units. Its services will be integrated with Vertiv’s existing range of liquid cooling solutions to provide comprehensive temperature management for entire facilities and individual rooms, as well as whole rows of server racks and individual units.

    Source: Vertiv

  • Oracle vs. market rumours: Is the infrastructure for OpenAI facing barriers?

    Oracle vs. market rumours: Is the infrastructure for OpenAI facing barriers?

    Friday’s trading session became a litmus test for investor sentiment around the artificial intelligence sector. Oracle, the tech giant trying to catch up with cloud leaders, was faced with a Bloomberg News report suggesting serious delays in building infrastructure for OpenAI. According to the report, labour and material shortages were said to be postponing the finalisation of key data centres until 2028. The company’s response was immediate and decisive. Oracle spokesperson Michael Egbert denied any slippage in a statement to Reuters, assuring that all “milestones remain on track” and that the company is fully meeting its commitments to the ChatGPT developer.

    Despite the denial, market nervousness was evident. Oracle shares lost nearly 3% during the session, dragging down other beneficiaries of the AI boom such as Nvidia, AMD and Arm Holdings. This sell-off, however, is not solely the result of a single article. Investors are increasingly wary of Oracle’s aggressive strategy, after the company entered the fray with a massive $300 billion deal with OpenAI. To fund this arms race, the company has been forced to significantly increase its debt, which in a high interest rate environment raises legitimate concerns. On Thursday, the cost of insuring the company’s debt against default reached its highest level in five years.

    The situation sheds light on a wider industry problem. Bottlenecks are shifting from chip manufacturing to mundane infrastructure issues: energy availability and the pace of construction work. Physical constraints are now becoming as much of a risk factor as the technological capabilities of the algorithms themselves. The market, which until recently uncritically rewarded every announcement of AI spending, is beginning to demand specifics and profitability.

    Warning signals are also coming from elsewhere. Broadcom shares fell by more than 11 per cent after the company warned that growing sales of custom AI processors – while impressive in volume – were weighing on margins. This shows that the ‘blank cheque’ era for AI development is coming to an end. Investors are becoming choosy, and Oracle, despite its assurances of timeliness, is under pressure to prove that it can manage not only the technology but also the growing financial risks.

  • The end of ‘burning through’ AI budgets. 2026 will bring ROI verification and a new era of inference

    The end of ‘burning through’ AI budgets. 2026 will bring ROI verification and a new era of inference

    The hype around artificial intelligence continues unabated, but the uncomfortable question, ‘Where’s the money?’, is increasingly being asked in CFOs’ offices. Recent years in the AI industry have resembled a gold rush in which the mere fact of owning a pick counted, not what you managed to dig up with it. According to predictions from experts at Colt Technology Services, 2026 will be a turning point. The time of costly experiments is coming to an end and an era of verification is beginning, in which the technology must defend itself in Excel tables.

    Large language models and generative artificial intelligence have captured the imagination of business. However, this fascination is being followed by gigantic sums of money that do not always find their way back into company coffers. Research cited by Colt Technology Services shows the brutal truth: although one in five large business groups spends an average of $750,000 a year on AI, as many as 95% of participants in an MIT study say they have not seen a return on that investment.

    This is a statistic that will no longer be tolerated in 2026. It is time to sober up and move from admiration of the possibilities to a hard accounting of the effects.

    From ‘school’ to ‘work’, or time to apply

    Until now, the industry’s attention – and most computing resources – has been focused on training models. This has been an energy-intensive, expensive and lengthy process, akin to sending an employee to a very expensive university. In 2026, that employee will finally start working.

    Inference is the point at which the model stops learning and starts operating in a production environment – generating knowledge, predicting events and making decisions in real time.

    This is not just a technical change, but first and foremost a business change. Shifting the centre of gravity from training to inference means moving from the investment phase (CAPEX) to the operational phase, which is expected to generate revenue or savings. McKinsey estimates that by 2030, inference will account for the majority of AI workloads. For CIOs, this means redesigning IT architecture to support fast, contextual decisions in the here and now, rather than just big data processing in the background.

    Agentic AI: Automation that finally works

    How to close the ROI gap? The answer may lie in the evolution towards so-called ‘Agentic AI’. Until now, we have been dealing with systems that can write text or generate graphics. Now we are entering the era of agents that can do the job.

    Instead of a passive assistant, companies are gaining a digital executor. According to the IEEE analysis cited by Colt, ‘Agentic AI’ will automate and digitise everyday tasks – from managing consumer privacy and health, to the complex organisation of processes within companies.

    For business, this is a key difference. A chatbot answering questions is a convenience. An AI agent that autonomously schedules meetings, negotiates simple contracts or optimises the supply chain in real time – a real reduction in operational costs. In 2026, technology providers will need to offer tools to accurately measure the impact of these agents on a company’s bottom line. ROI models will become an integral part of the offering, not just an add-on to a sales presentation.

    Infrastructure must keep up with ambitions

    However, bringing AI into operational work raises a mundane but critical problem: how to transmit all this data? The forecasts are alarming. The share of AI workloads moving across transatlantic cables, for example, could increase from 8% today to as much as 30% by 2035.

    The traditional network is not ready for such a leap, especially if it is to be a cost-effective process. Therefore, 2026 will bring a redefinition of wide area networks towards AI WANs. We are talking about programmable networks specifically designed to manage traffic generated by artificial intelligence.

    Why is this important for the budget? Because in the world of real-time inference, latency means loss. AI WAN is supposed to provide performance and security at the application level itself. Moreover, the environmental and cost aspects come into play. Increasing capacity by ‘brute force’ (adding more links) is no longer worthwhile. Innovations in sustainable networks that increase performance without a linear increase in energy consumption will become a purchasing priority.

    The concept of NaaS 2.0 (Network as a Service) is also on the horizon. The traditional network-as-a-service model is evolving into an intelligent, automated platform. Colt’s research shows that almost 60% of CIOs are already increasing their use of NaaS in the face of pressure from AI. The new version of this service is expected to provide the flexibility needed to handle the unpredictable load spikes inherent in modern algorithms.

    Data sovereignty as an insurance policy

    The conversation about money in IT in 2026 cannot ignore risk. As technology matures, there is a growing awareness of the importance of data sovereignty (Sovereign AI). Countries and organisations increasingly want to build systems based on their own infrastructure and talent to become independent of global giants and align with local regulations.

    This is a trend that is forcing changes in cloud strategy. Multicloud and hybrid models are becoming standard not only for technical reasons, but as a strategy to avoid dependence on a single provider (vendor lock-in) and to mitigate legal risks. Edge computing is gaining prominence, allowing data to be processed close to the source, which promotes both inference efficiency and compliance with data protection regulations.

    Balance of the IT director

    The year 2026 in the IT industry promises to be a time of great testing. IT executives will still have to walk a fine line: on the one hand, pressure for complex AI-driven digital transformation programmes; on the other, the absolute necessity to reduce costs and adapt to a changing regulatory environment.

    The potential is huge and the infrastructure more powerful than ever. However, the winners in the coming year will not be those who spend the most on innovation. The winners will be those who move the fastest from the ‘wow’ phase to the ‘how much’ phase – effectively deploying AI where it brings measurable value, backing this up with a flexible, secure and cost-effective network.

    The gap between investment and return will begin to close. For many companies, however, it will be a painful process of reviewing whether their digital strategy was visionary or just fashionable.

  • The calm before the storm: what does the Cloudflare failure teach us about ‘latent errors’ and proactive monitoring?

    The calm before the storm: what does the Cloudflare failure teach us about ‘latent errors’ and proactive monitoring?

    It is commonly assumed that the worst failures are those caused by DDoS attacks or catastrophic errors in business logic. However, the events of 18 November 2025 at Cloudflare reminded us of a much more insidious enemy: routine changes that awaken dormant errors.

    Anyone who manages distributed systems is familiar with this scenario: everything works as planned, tests pass green and deployment seems a formality. And yet, moments later the dashboards are glowing red. This analysis of an incident that affected one of the key web services not only chronicles the events, but is above all a fascinating case study for SRE and DevOps engineers. It shifts the focus of the discussion from “how to fix” to the much more difficult question “how do you detect something that theoretically doesn’t exist?”.

    Latent Bug in the Code

    Experts analysing this case draw attention to the concept of a ‘latent bug’. This is a piece of code that is normally completely harmless. It sleeps, waiting for a specific, rare combination of events.

    In the case in question, the mechanism was almost textbook. On the one hand, we had a hard limit in the Rust code (a cap of 200 entries in the configuration), designed as a performance optimisation. On the other, a routine change to the ClickHouse database that unexpectedly returned duplicate metadata. The result? The configuration file swelled to twice its size, exceeding a limit the system had ‘forgotten’ existed because it had never been tested under boundary conditions before.

    The result was a system panic (the infamous `unwrap()` on an error) and a cascading failure. The lesson is brutal: a performance optimisation that is not protected by resilience logic becomes technical debt.
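
    The anti-pattern and its remedy can be shown in a few lines of Rust. The sketch below is deliberately simplified and illustrative only – it is not Cloudflare’s actual code: a generated configuration suddenly doubles in size, and what decides between a rejected update and a crashed process is whether the error is unwrapped or handled.

    ```rust
    // Illustrative sketch only - not the actual incident code.
    // A hard cap is a legitimate optimisation; how the code reacts when the cap is
    // finally exceeded decides between graceful degradation and a global outage.

    const MAX_ENTRIES: usize = 200;

    /// Parses a freshly generated configuration, refusing anything over the cap.
    fn parse_config(entries: Vec<String>) -> Result<Vec<String>, String> {
        if entries.len() > MAX_ENTRIES {
            return Err(format!(
                "config has {} entries, limit is {}",
                entries.len(),
                MAX_ENTRIES
            ));
        }
        Ok(entries)
    }

    fn main() {
        // A routine upstream change suddenly produces duplicates: 400 entries instead of 200.
        let swollen: Vec<String> = (0..400).map(|i| format!("feature_{i}")).collect();

        // Fragile pattern - the latent bug: an Err that "cannot happen" is unwrapped,
        // so the first time it does happen the whole process panics.
        // let active = parse_config(swollen.clone()).unwrap();

        // Resilient pattern: treat the oversized config as a bad update, keep the last
        // known-good version and alert, instead of taking the service down.
        let last_known_good: Vec<String> = vec!["feature_0".to_string()];
        let active = match parse_config(swollen) {
            Ok(cfg) => cfg,
            Err(reason) => {
                eprintln!("rejecting config update: {reason}");
                last_known_good
            }
        };
        println!("running with {} entries", active.len());
    }
    ```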

    Observability is not just about error logs

    The lessons learned from this incident are redefining the approach to monitoring. Traditional waiting for HTTP 500 codes is not enough. As reliability professionals rightly point out, proactivity based on saturation metrics is the key.

    Here is what engineers should implement ‘yesterday’ to avoid similar scenarios:

    Monitoring of ‘hard’ limits: If the system has a hard-coded limit (e.g. a buffer size or a maximum number of entries), monitoring must alert when usage approaches 80% of that limit, not only when the limit is exceeded. This is a classic application of one of the ‘Four Golden Signals’ (saturation); a short code sketch of this check follows the list.

    Correlation of deployments with anomalies: The failure was a direct result of a change. Modern observability systems need to automatically tie an application ‘panic’ to the most recent event in the CI/CD pipeline. This cuts the MTTI (Mean Time To Identify) from hours to minutes.

    Canary checks on data structures: Synthetic tests should not just check that the service comes up, but that the data it generates (e.g. configuration files) stays within safe bounds before it is propagated globally.
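
    As a sketch of the first of these recommendations, the check below raises a warning once utilisation of a hard limit crosses 80%, well before the limit itself is hit. It is framework-agnostic and illustrative: the threshold, the limit and the metric names are assumptions, not taken from any particular monitoring stack.

    ```rust
    // Illustrative saturation check: alert while approaching a hard limit,
    // not only after it has been exceeded. Thresholds and names are assumptions.

    const HARD_LIMIT: usize = 200; // e.g. maximum number of config entries
    const WARN_RATIO: f64 = 0.80;  // start alerting at 80% of the limit

    enum Saturation {
        Ok(f64),
        Warning(f64),
        Exceeded(f64),
    }

    fn check_saturation(current: usize) -> Saturation {
        let ratio = current as f64 / HARD_LIMIT as f64;
        if current > HARD_LIMIT {
            Saturation::Exceeded(ratio)
        } else if ratio >= WARN_RATIO {
            Saturation::Warning(ratio)
        } else {
            Saturation::Ok(ratio)
        }
    }

    fn main() {
        for current in [120, 170, 230] {
            match check_saturation(current) {
                Saturation::Ok(r) => println!("{current} entries: {:.0}% of limit, ok", r * 100.0),
                Saturation::Warning(r) => println!("{current} entries: {:.0}% of limit, page the on-call", r * 100.0),
                Saturation::Exceeded(r) => println!("{current} entries: {:.0}% of limit, limit breached", r * 100.0),
            }
        }
    }
    ```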

    Architecture of distrust

    Analysis of this case leads to another fundamental architectural conclusion: don’t trust your own configurations.

    We often treat user input as potentially dangerous (SQL injection, XSS), yet we consider configuration files generated by our own systems to be safe. This is a mistake. The input-hardening approach suggests that internal configurations should be validated with the same rigour as external data. If the system had checked the size of the file before attempting to process it, the result would have been a rejected update, not global paralysis.
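
    A minimal sketch of such a ‘distrust your own output’ gate is shown below. The byte and entry bounds are assumed for illustration; the point is only that an internally generated artefact is validated before it is allowed to propagate.

    ```rust
    // Illustrative input-hardening gate for an internally generated artefact:
    // validate it with the same suspicion as external input before propagation.

    const MAX_BYTES: usize = 64 * 1024; // assumed upper bound for a healthy config file
    const MAX_ENTRIES: usize = 200;     // assumed entry cap

    fn validate_generated_config(raw: &str) -> Result<(), String> {
        if raw.len() > MAX_BYTES {
            return Err(format!("config is {} bytes, expected at most {}", raw.len(), MAX_BYTES));
        }
        let entries = raw.lines().count();
        if entries > MAX_ENTRIES {
            return Err(format!("config has {} entries, expected at most {}", entries, MAX_ENTRIES));
        }
        Ok(())
    }

    fn main() {
        // Simulate an upstream change that duplicates every entry.
        let generated: String = (0..400).map(|i| format!("feature_{i}\n")).collect();

        match validate_generated_config(&generated) {
            Ok(()) => println!("config accepted, safe to propagate"),
            Err(reason) => eprintln!("config rejected before propagation: {reason}"),
        }
    }
    ```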

    It is also worth refreshing your knowledge of the Bulkhead Pattern. Isolating processes and thread pools ensures that the failure of one component (in this case the bot management module) does not sink the whole ship.

    The November 2025 incident is proof that, at scale, small mistakes do not exist – only bugs that have not yet found their trigger. It is a signal to the IT industry to stop relying solely on functional testing and start designing systems that are ready for the ‘impossible’. True resilience is not the absence of bugs, but the ability to survive their activation.

  • From soloist to manager. How the CPU gave up the crown to save performance

    From soloist to manager. How the CPU gave up the crown to save performance

    Until a decade ago, we identified computer performance – whether of a home PC or a corporate server – almost exclusively with the CPU model. The CPU was the star, the soloist that had to do everything, from running the operating system to complex rendering. Today, however, in the age of artificial intelligence and Big Data, this ‘one-man band’ model has become inefficient. The CPU has not gone away, but its position has changed. It has become the manager, directing the new workhorse of today’s IT: the GPU. Why is this demotion in the hierarchy actually an evolutionary success?

    The end of the “One Man Show” era

    For decades, the von Neumann architecture and the dominance of x86 processors defined how we viewed computing power. The rule was simple: you want more performance? You buy a CPU with a higher clock speed. The CPU was the heart and brain of every digital operation. In recent years, however, we have hit a wall. Moore’s Law slowed down, the physics of silicon started to push back, and our processing requirements – instead of growing linearly – shot up exponentially.

    Modern workloads have changed in nature. It is no longer just about rapidly executing instructions one after another. It is about processing an ocean of data at the same time. In this new landscape, the traditional processor began to choke. A changing of the guard was needed.

    Architectural “glass ceiling”

    To understand this change, it is necessary to look at what is happening under the ‘hood’ of integrated circuits. A CPU is the technological equivalent of a racing car. It has several, sometimes more than a dozen powerful cores. It is incredibly fast at transporting a small group of passengers (data) from point A to point B in record time. It is optimised for sequential tasks requiring complex logic and low latency.

    On the other hand, we have the GPU (Graphics Processing Unit). If the CPU is a Ferrari, then the GPU is a fleet of thousands of buses. Each GPU core is weaker and slower than a CPU core, but there is a whole army of them. This architecture was originally designed for one purpose: to handle graphics in video games and visual rendering.

    But it turned out that the mathematics behind displaying three-dimensional worlds – operations on matrices and vectors – is the twin of the mathematics needed to train artificial intelligence, run scientific simulations or analyse Big Data. What was meant for entertainment has become the foundation of modern science. The GPU’s parallel architecture allows thousands of simultaneous operations, making it ideal for tasks where throughput matters, not just the response time of a single thread.

    The new queen of computing

    This change is most evident in modern data centres. Server rooms used to be the kingdom of CPUs. Today, GPU accelerators are the most expensive, most sought-after and strategically most important part of the infrastructure.

    In areas such as Deep Learning, the advantage of a parallel architecture is crushing. Training a complex neural network on the CPU alone could take weeks. A GPU cluster can handle the same task in days, sometimes even hours. This difference in speed is not just about convenience – it is a ‘to be or not to be’ for innovation. Companies in the financial, medical or retail sectors that harness this power for real-time data analysis gain a competitive advantage unavailable to those sticking with the old architecture.

    It has come to the point where the GPU has become indispensable even at research institutions like CERN or NASA. From genome sequencing to climate change modelling, wherever terabytes of data need to be crunched, the GPU is the default choice.

    The CPU as manager – a new definition of the role

    Does this mean the death of the central processor? Absolutely not. To herald the end of the CPU era is a cognitive error. Its role has simply evolved from executor to manager.

    Imagine a corporation.

    The CPU is the CEO or project manager. It is intelligent and versatile, able to handle a wide variety of problems, make resource allocation decisions, run the operating system and ensure that applications run stably.

    The GPU is a specialised manufacturing department. It is a powerful factory that can process mountains of raw material, but is ‘blind’ without instructions.

    Without an efficient manager (CPU) to prepare the data, send it to the right place and receive the results, even the most powerful factory (GPU) will stand idle. In modern systems, the CPU delegates the heavy, repetitive computing work to the GPU, coordinating the entire system itself. It’s a perfect symbiosis. The CPU provides the logic and control, the GPU provides the brute computing power.

    The energy aspect is also worth noting. Although the top graphics cards consume huge amounts of power, in terms of work done (performance per watt) in parallel tasks they are much more efficient than CPUs. The CPU as manager therefore also ensures that this energy is not wasted.

    Ecosystem beyond silicon

    This hardware revolution would not have succeeded without software support. Platforms such as NVIDIA’s CUDA and AMD’s ecosystems have made the power of the GPU accessible to developers who do not need to be experts in hardware physics. Frameworks such as TensorFlow or PyTorch allow engineers to write code that automatically takes advantage of hardware acceleration.

    Moreover, cloud computing has democratised access to this power. Today, a startup does not need to invest millions in a server farm. With AWS, Google Cloud or Azure services, powerful GPU instances are available on demand. Small companies can use the same infrastructure as tech giants, paying only for the actual computing time. This makes the barrier to entry into the world of advanced AI drastically reduced.

    Symbiosis, not domination

    Looking ahead, we see a clear trend towards integration. The boundary between CPU and GPU is starting to blur, as seen in the hybrid architectures used in modern laptops or mobile devices. ICs are now combining functions in a single piece of silicon that used to require separate cards.

    The era of the CPU as ‘king’, single-handedly bearing the brunt of the entire digital world, is over. But its abdication was necessary for technology to move forward. In modern IT, the winner is not the one with the fastest CPU, but the one who can best organise the collaboration between the manager (CPU) and its powerful execution team (GPU). This is not a story about replacing one technology with another, but about their mature collaboration.

  • The end of ‘garage’ deployments. OCP standardises infrastructure for quantum computers

    The end of ‘garage’ deployments. OCP standardises infrastructure for quantum computers

    The Open Compute Project (OCP) is opening a new chapter in data centre design, attempting to reconcile two technological elements: classical large-scale computing (HPC) and highly sensitive quantum mechanics. The organisation has begun work on formulating precise guidelines to enable these systems to coexist within a single server room. Although the vision of hybrid computing promises a leap in performance, the engineering reality presents facility operators with challenges that standard procedures do not anticipate.

    The integration of quantum systems is primarily a struggle with mass and thermodynamics. Although quantum processors themselves may impress with their energy efficiency, their associated infrastructure is demanding. A key element here is the cryostat – a device weighing up to 750 kilograms – which forces designers to ensure that the floor load capacity is at least 1,000 kg/m².

    Managing the temperature of the cooling fluid is proving to be even more challenging. While modern HPC cabinets can run on water temperatures as high as 45°C, quantum systems require a fluid supply in the 15-25°C range. This necessitates maintaining two separate cooling loops or using advanced heat exchangers. Added to this is the rigorous control of humidity, which must oscillate between 25 and 60 per cent to avoid condensation on refrigeration components, which would be disastrous in a precision electronics environment.

    However, it is environmental factors, often ignored in classical IT, that can determine the success of a deployment. Quantum hardware exhibits extreme sensitivity to electromagnetic interference. Even items as mundane as fluorescent lighting must be kept at least two metres away from the computing unit. Magnetic fields must be strictly limited, and the location of the data centre itself requires a fresh urban planning analysis: the presence of a tram line, railway traction or mobile phone masts within 100 metres can generate noise that prevents stable operation of the qubits.

    OCP rightly points out that installing a quantum computer is no longer a standard ‘plug-and-play’ operation. It is an engineering process that takes a minimum of four weeks and requires the involvement of specialist electricians and refrigeration technicians, not just IT staff. The OCP initiative to create checklists and best practices is therefore not so much a facilitator as a necessity for hybrid HPC environments to move out of the experimental phase and become a market standard.

  • Europe in the infrastructural shadow of AI. Has the continent slept through its moment?

    Europe in the infrastructural shadow of AI. Has the continent slept through its moment?

    The global artificial intelligence market is experiencing an unprecedented boom that resembles more a violent gold rush than a steady technological evolution. The latest figures from IDC show the scale of this revolution: forecasts indicate that investment levels in AI infrastructure alone will reach a dizzying $758 billion by 2029. To understand this pace, one only needs to look at the second quarter of 2025, in which spending on AI hardware and storage rose 166% year-on-year to reach $82 billion.

    We are talking about a fundamental change here. This is not another software trend; it is a global arms race for raw computing power. However, in this race that will define the economic leaders for decades to come, the data reveals a worrying disparity. While America and Asia are building the foundations of the new economy, Europe seems to be only a silent observer.

    The architecture of global domination

    So who deals the cards in this high-stakes game? The data leave no illusions.

    The centre of the global market is the US, accounting for an overwhelming 76% of all AI infrastructure spending. It is where the hyperscalers, cloud providers and digital services giants reside, driving as much as 86.7% of all global investment, the data shows. They are the ones who buy the lion’s share of cutting-edge hardware, defining standards, pricing and availability.

    China is consolidating in second place. Although its current share (11.6%) is much smaller, the pace of the chase is key. IDC forecasts that China will grow the fastest in the world, with a compound annual growth rate (CAGR) of as much as 41.5% over the next five years.

    The market is not diversified. We are dealing with the technological duopoly of the US and China, which seems to leave the rest of the world, including Europe, in the role of customer.

    EMEA: Just 4.7% of the pie

    It is in this context that the position of Europe (EMEA region) looks alarming. During the same period when the US was investing billions, the EMEA region accounted for just 4.7% of global spending. This is less than the Asia-Pacific region (with Japan, but without China).

    Worse still, the forecasts do not point to a rapid catch-up. On the contrary, the gap may widen. The projected growth for EMEA (a CAGR of 17.3%) is less than half that of the US (40.5%) or China (41.5%). Not only are we starting from a low base, we are also running much more slowly.

    This raises fundamental questions. Is this disparity the result of a lack of European hyperscalers capable of competing with Google, Amazon or Alibaba? Do our regulations, while right, discourage investment in hard infrastructure before the market has had time to emerge in earnest? Or have European companies consciously adopted a ‘rent, don’t build’ strategy, accepting a ‘rentier’ role in a world defined by US cloud?

    The anatomy of dependency

    To understand the gravity of the situation, it is important to know where the gigantic sums are going. Today, 98% of AI infrastructure spending goes on servers.

    However, these are not just any machines. The king of the market, accounting for as much as 91.8% of server spending, is accelerated servers – machines equipped with powerful graphics processing units (GPUs) and other dedicated accelerators. These are the ‘golden shovels’ of this gold rush. Their sales are up an unimaginable 207.3% year-on-year.

    It is these components that are the bottleneck and the real driver of the AI revolution today. And it is these that Europe hardly produces and, as the data shows, does not buy on a mass scale to build its own infrastructure. By moving to the cloud, we become 100% dependent on the supply and pricing of a narrow group of (mainly American) companies.

    The strategic costs of European inaction

    Being merely a consumer, rather than a creator, in the age of AI carries three fundamental risks for European business.

    Firstly, we are losing digital sovereignty. There is a lot of talk about data protection and European values (as in the AI Act), but these discussions become academic when 84.1% of AI deployments are running in cloud and shared environments anyway, controlled by entities outside the continent.

    Secondly, we are giving away innovation and margin. The real money in this revolution today is being made by infrastructure providers (accelerator manufacturers) and hyperscalers (service providers). Europe, by focusing on being a ‘user’ of AI models, is giving up profits at the most fundamental and lucrative level.

    Thirdly, we are creating a barrier to competitiveness. If AI is the new electricity, then access to computing power is access to power plants. Companies in regions that do not have their own strong infrastructure will pay more and wait longer for the resources needed to train their own models and innovate.

    Sleeping through the moment to invest in the foundations of artificial intelligence is not a technical oversight. It is a strategic economic mistake that could define Europe’s position – as a dependent consumer rather than a co-creator of technology – for decades to come.

  • How the dollar and euro exchange rates are affecting the prices of servers, laptops and components

    How the dollar and euro exchange rates are affecting the prices of servers, laptops and components

    For every IT director and owner of a small or medium-sized business in Poland, planning a budget for technology equipment is like playing on two fronts. With one eye, they monitor technological advances and the needs of the company, and with the other – with growing anxiety – they follow the exchange rate charts. This is no coincidence. Fluctuations in the forex markets, especially the US dollar (USD/PLN) exchange rate, have a direct and often brutal impact on the final prices of servers, laptops and components.

    When the zloty was at a record low in autumn 2022 and the dollar exchange rate reached 5 zlotys, Polish consumers and companies were in for a shock. Apple’s introduction of new products was associated with price increases of up to 30%. This extreme example exposed a fundamental truth about the Polish IT market: we are an importer of technology and the global supply chain is priced in hard currency.

    However, reducing this relationship solely to a simple USD/PLN conversion rate is a mistake that can cost companies tens of thousands of zlotys. Analysis of the market in recent years shows that the invoice price is the product of at least four forces: the dollar exchange rate, the stabilising role of the euro, the global supply of semiconductors and price wars between technology giants.

    For Polish SMEs, understanding this complex mechanics and proactively managing risk is no longer an option but is becoming a strategic necessity.

    Anatomy of a price: why servers speak dollar and laptops speak euro

    To manage costs effectively, it is important to understand why different categories of equipment react differently to exchange rate fluctuations.

    Most of the global technology trade, from silicon wafers in Taiwan to finished microprocessors from Intel or AMD, is settled in US dollars (USD). A Polish distributor or integrator, when buying components or servers, pays for them in USD. This means that any increase in the USD/PLN exchange rate almost immediately raises the cost of the purchase. Distributors, wishing to protect their margins, must pass this cost on to the end customer.

    The server market is the most sensitive here. Configure-to-order (CTO) systems, ordered from manufacturers such as Dell or HPE, are often priced directly in USD, leaving the Polish company with almost 100 per cent of the exchange rate risk.

    The situation is different in the laptop segment. A significant proportion of laptops come to Poland via European distribution centres located in the euro zone (e.g. in Germany or the Netherlands). The Polish distributor settles accounts with its European supplier in euros (EUR). In this model, the EUR/PLN exchange rate becomes a ‘filter’ or ‘shock absorber’ for sudden jumps in the dollar. Laptop prices are thus more stable, although it should be remembered that the euro price already includes the USD/EUR exchange rate set by the European headquarters.

    There is also the phenomenon of price lag. Distributors hold stock they bought at the old, lower exchange rate, so changes do not always feed through to prices 1:1. This was perfectly demonstrated at the beginning of 2021: between December 2020 and March 2021, the USD/PLN exchange rate rose by more than 9%, but average smartphone and tablet prices rose by ‘only’ 4% over the same period. The market temporarily absorbed some of the hit, giving companies a brief window to buy before the new, more expensive supply arrived.

    Server market trap 2024/2025: a missed SME opportunity

    Analysis of the server market reveals a key and risky paradox into which many Polish companies have fallen. The year 2024, paradoxically, was theoretically the best time in years to upgrade infrastructure. Two key factors contributed to this:

    • Strong zloty: In 2024, a ‘weaker dollar’ was recorded, significantly reducing the cost of importing equipment priced in USD.
    • Global price war: At the same time, there was a brutal battle for market share between Intel and AMD. This led to gigantic price cuts on key server processors (Xeon and EPYC), reaching up to 35-50% below list prices in the US market.

    A strong currency and cheap underlying components – a textbook ‘buying window’. Despite this, market data show that the Polish IT equipment market declined in 2024 (its value in USD fell from 10.03 billion to 9.39 billion). Companies, probably because of the general macroeconomic situation and high interest rates, put investments on hold.

    Now these companies could fall into a trap. Companies that have waited out 2024 in the hope of further declines will face a much worse situation in 2025. Forecasts for the beginning of 2025 show an 18 per cent increase in average chip prices and a renewed extension of lead times to more than four months. Trying to ‘wait it out’ has proved to be a strategic mistake – these companies will be forced to buy equipment more expensively and with longer lead times.

    Noise in the data: when the exchange rate takes a back seat

    Analysis of IT prices solely through the prism of currencies is incomplete. There are factors that periodically become more important.

    The first is the availability of semiconductors. The 2021-2022 crisis has shown that price is becoming secondary to the ability to buy. What’s more, this crisis has generated a massive implicit currency risk. If the average waiting time for a server is more than four months, a Polish company placing an order in January (at an exchange rate of PLN 4.00) with a payment deadline in May, may have to pay 10% more if the exchange rate rises to PLN 4.40 in the meantime.

    The second factor is geopolitics. Tariff decisions, such as those imposed by the US on Chinese imports, force manufacturers (Dell, HP, Lenovo) into costly factory relocations, for example to Vietnam. The costs of this global reorganisation of the supply chain are built into the base price of the product, raising it for everyone, regardless of local exchange rates.

    How can SMEs protect themselves?

    For Polish companies, passivity towards currency risk is a gamble. Instead of trying to catch the perfect dip (which, as 2024 has shown, is almost impossible), companies need to implement conscious risk management strategies.

    1. Purchase planning based on cycles, not ‘timing’: Instead of guessing, IT and finance departments should monitor two key indicators: the local USD/PLN exchange rate and global component price trends (e.g. CPU price wars). The budget should be flexible enough to accelerate key purchases when both indicators are favourable.

    2. Active currency risk management (hedging): Hedging instruments, hitherto seen as the domain of large corporations, are now also available to SMEs.

    • Forward contracts: This is the simplest tool. If a company knows it will need to buy $50,000 worth of equipment in three months’ time, it can ‘freeze’ today’s rate in a contract with the bank. This eliminates the risk, although it also removes the benefit if the rate falls (a worked numerical sketch follows this list).
    • Currency options: They act as an ‘insurance policy’. The company pays a small premium for the right (but not the obligation) to buy the currency at a fixed rate. If the market rate is better – it benefits from the market. If worse – it exercises the option, protecting itself against loss.
    • Natural hedging: the simplest method for companies that have revenues in USD or EUR (e.g. from exporting IT services). It involves paying for imported equipment in the currency you have earned, thus bypassing currency conversion costs altogether.
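
    To make the arithmetic behind these instruments concrete, here is a small illustrative calculation; all rates, amounts and the option premium are hypothetical assumptions, not market quotes or a recommendation. An unhedged purchase is fully exposed to the spot rate at payment time, a forward locks in a rate agreed today, and an option caps the downside in exchange for a premium.

    ```rust
    // Hypothetical numbers only: comparing the PLN cost of a 50,000 USD hardware order
    // under three approaches to currency risk. Illustration, not financial advice.

    fn main() {
        let order_usd = 50_000.0;
        let rate_today = 4.00;            // USD/PLN when the order is placed
        let rate_at_payment = 4.40;       // USD/PLN three months later (assumed adverse move)
        let forward_rate = 4.05;          // assumed rate offered in a forward contract
        let option_strike = 4.05;         // assumed strike of a currency option
        let option_premium_pln = 3_000.0; // assumed one-off premium for the option

        // 1. No hedge: fully exposed to the move in the spot rate.
        let unhedged = order_usd * rate_at_payment;

        // 2. Forward: rate frozen today, regardless of where the market goes.
        let forward = order_usd * forward_rate;

        // 3. Option: pay the premium, then settle at the better of spot and strike.
        let option = order_usd * rate_at_payment.min(option_strike) + option_premium_pln;

        println!("unhedged: {unhedged:.0} PLN");
        println!("forward:  {forward:.0} PLN");
        println!("option:   {option:.0} PLN");
        println!("(at the rate on order day it would have been {:.0} PLN)", order_usd * rate_today);
    }
    ```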

    3. Building supply chain resilience: The risks for 2025 (more expensive chips, longer lead times) show that SMEs need to think not only about their own risks, but also those of their suppliers. It is worth actively talking to local IT integrators. The key question is: does the supplier have diversified sources of supply?

    The best strategy for SMEs may be to sign a framework agreement with a supplier for cyclical deliveries of equipment (e.g. 50 laptops per quarter) at a fixed PLN price for 12 months. In this way, it is the supplier – who is much better equipped for professional hedging – that assumes the currency risk (USD/PLN) and the component price risk (the projected 18% increase). Such an agreement provides invaluable predictability of operating costs.

    In a volatile economic environment, IT currency risk management is no longer the responsibility of the finance department. It is becoming a key element of a company’s technology strategy.