Tag: Servers

  • More expensive servers and smartphones? How the war in the Middle East is crippling production


    While Silicon Valley’s attention is focused on the architecture of the latest GPUs, the real threat to the pace of artificial intelligence development has emerged in the petrochemical sector. Recent disruptions in the Middle East, including the disruption at the Jubail complex in Saudi Arabia, have exposed how heavily global electronics depends on a narrow set of feedstock suppliers.

    A key flashpoint has been the stalled production of high-purity polyphenylene ether (PPE) resin. This material is essential for the laminates in modern printed circuit boards (PCBs), the backbone of everything from smartphones to powerful AI servers. Because SABIC accounts for around 70% of the world’s supply of this component, any outage at its Gulf facilities immediately reverberates through factories in South Korea and China.

    The effects are tangible and costly. In April alone, PCB prices rose by 40% compared to March, compounding the ongoing copper boom. Copper foil, which accounts for nearly 60% of raw material costs in laminate production, has become 30% more expensive this year. For manufacturers such as South Korea’s Daeduck Electronics, which supplies Samsung and AMD, the situation has forced a complete shift in management priorities. Instead of negotiating contracts with customers, operations directors now spend most of their time securing chemical supplies. Waiting times for epoxy resins have stretched dramatically – from three to as much as fifteen weeks.

    The AI infrastructure sector is feeling the most pressure. Multilayer circuit boards used in data centres are many times more expensive than standard models, and prices can exceed 13,000 yuan per square metre. Despite this, cloud providers seem ready to accept these increases. With talk of the PCB market growing to nearly $96 billion by 2026, key players are prioritising continuity of supply over margins.

  • Hyperscalers are taking over the data centre market. Is this the end of on-premise?


    For decades, the company server room was the technological equivalent of a family castle. It was tangible proof of sovereignty, a safe haven for data and the pride of IT departments that nurtured their own silicon with almost craftsmanlike precision. But the latest forecasts from Synergy Research Group sketch a scenario in which these digital fortresses become costly open-air museums. By 2031, hyperscalers such as Google, Microsoft and AWS will have seized 67% of global data centre capacity. What we are seeing is a rapid shift in the digital world’s centre of gravity, forced by the brute physics of artificial intelligence.

    The architecture of coercion

    In 2018, enterprises controlled more than half of the world’s computing infrastructure. The 2031 projection, in which this share shrinks to just 19%, looks at first glance like a statistical error. The reason for this collapse, however, is not an unwillingness to own, but an inability to meet the demands of the new era. Modern AI systems, based on GPUs and specialised chips such as TPUs, require power densities and cooling systems that exceed the design standards of traditional office buildings.

    Hyperscalers are building infrastructure today at fourteen times the scale of just eight years ago. This scale creates a barrier to entry that is impossible for a single organisation to break through. When Satya Nadella announces a doubling of Microsoft’s physical data centre footprint in just two years, he is not talking about building data warehouses, he is talking about creating large-scale innovation reactors. For the average enterprise, trying to catch up to this pace in-house would be akin to building a private power plant network just to power the office kettle.

    The currency of gigawatts and limits

    In the new economic order, capital is no longer the only determinant of development opportunities. The availability of computing power, treated as a scarce and limited resource, is coming to the fore. Strategic partnerships, such as those entered into by Anthropic with Google or OpenAI with AMD, are in fact reservations of energy and silicon for years ahead. In a world dominated by language models and advanced analytics, the ‘power shortage’ referred to by Microsoft’s Amy Hood is becoming a real operational risk for any technology-dependent business.

    This phenomenon is fundamentally changing the role of technology leaders in organisations. The CIO ceases to be a steward of fixed assets and becomes a digital commodity strategist. He or she must operate in a reality where computing power is rationed and its price can spike depending on local energy conditions. Projected energy price increases of up to 79% in technology hubs will force a new discipline on business: algorithmic frugality.

    Physical resistance of the cloud

    Although the term ‘cloud’ suggests something ethereal and intangible, its foundations are heavy, loud and increasingly contested by the public. The expansion of the technology giants is colliding with the barrier of local politics and ecology. Digital progress is no longer seen as an indisputable good.

    For business, this means a new form of localisation risk. Dependence on a single region or supplier that comes into conflict with a local community or energy system can become a bottleneck for AI-based product development. This is why more and more companies are looking to secure operational continuity in the face of growing resentment towards energy-intensive giants.

    Risks of gigantism and opportunities of localism

    The dominance of hyperscale providers brings with it risks that become market opportunities for on-premise proponents. Dependence on a narrow group of suppliers (vendor lock-in) and their vulnerability to local social conflicts or investment blockades – such as those in Wisconsin or Maine – make a diversified in-house infrastructure an insurance policy.

    Opportunities for in-house data centres lie in their ability to adapt where the giants are too sluggish. Local units can deploy innovative heat recovery systems or use niche, green energy sources more quickly, building better relationships with the environment than anonymous, energy-intensive megastructures. This is where ‘edge AI’ is born, processing data where it arises, without the need for costly and slow transfer to global centres.

    Balance as the new overarching strategy

    A sober look at 2031 suggests seeing it not as capitulation but as a new specialisation. The threat to business is not the power of Google or Microsoft, but the lack of an in-house, well-thought-out infrastructure strategy. Organisations that indiscriminately abandon their own resources may one day find that access to innovation is rationed by external suppliers.

    The right chess move today is to reinvest in ‘intelligent on-premise’. This is a smaller but denser infrastructure, optimised for a company’s specific, unique algorithms, while generic computing tasks are delegated to the cloud. This duality allows the company to benefit from the enormity of hyperscalers’ investments, while retaining the hard core that makes the company a sovereign player in the market.

  • Rowhammer attacks: is this the end of secure multi-tenancy? Why GPU-level isolation is now just an illusion


    The architecture of cloud computing resembles the structure of a modern glass office building. Companies rent spaces in it, trusting that robust door locks, monitoring systems and professional security guarantee complete privacy. In the IT world, these safeguards are encryption, virtualisation and logical process isolation. However, recent reports from the world of hardware security suggest that the foundations of this office building hide a structural flaw.

    Rowhammer-type attacks, carried over from conventional system memory (DRAM) to graphics processing units (GPUs), show that the walls between cloud users can become transparent under precisely targeted electrical disturbance.

    Graphics chips equipped with GDDR6 memory have become the foundation of the artificial intelligence revolution. It is their enormous bandwidth that allows language models to be trained or gigantic data sets to be analysed in real time. For years, there was a belief that GPUs were a safe enclave, isolated from the vulnerabilities plaguing traditional CPUs.

    Research conducted by scientists at UNC Chapel Hill and Georgia Tech brutally dispels this optimism. It turns out that the physical proximity of memory cells in NVIDIA’s state-of-the-art chips, such as the Ampere and Ada Lovelace architectures, is becoming their greatest weakness.

    The Rowhammer phenomenon is not a bug in the code that can be fixed with a simple software update. It is a defect arising from the very physics of silicon and the drive for extreme miniaturisation. When a system repeatedly and at high frequency activates a particular row of cells in DRAM, the resulting electrical interference begins to affect neighbouring cells. This ‘leakage’ of charge can lead to a spontaneous change in the state of a bit – zeros become ones and ones become zeros. On a micro scale this is a minor anomaly, but at system scale it is a tool for breaking down the door to the core of the operating system. By precisely manipulating these flips, an attacker can achieve privilege escalation, gaining full administrative access to the host.
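    To make the mechanism tangible, the toy simulation below models the disturbance effect in a few lines of Python. It is a conceptual sketch only – the row count, activation threshold and flip probability are invented for illustration and bear no relation to real DRAM or GDDR6 parameters, and the code performs no actual memory hammering.

    ```python
    import random

    # Toy model of the Rowhammer disturbance effect described above. All numbers
    # (row count, activation threshold, flip probability) are illustrative
    # assumptions, not measurements of any real DRAM or GDDR6 chip.
    ROWS, COLS = 8, 16
    memory = [[0] * COLS for _ in range(ROWS)]   # victim data: all zeros
    activations = [0] * ROWS                     # per-row activation counters
    FLIP_THRESHOLD = 50_000                      # activations before leakage starts to matter
    FLIP_PROBABILITY = 0.001                     # per-cell flip chance scaling factor

    def hammer(row: int, times: int) -> None:
        """Repeatedly 'activate' an aggressor row; excess activations disturb its neighbours."""
        activations[row] += times
        excess = activations[row] - FLIP_THRESHOLD
        if excess <= 0:
            return
        for neighbour in (row - 1, row + 1):     # physically adjacent rows
            if 0 <= neighbour < ROWS:
                for col in range(COLS):
                    if random.random() < FLIP_PROBABILITY * (excess / FLIP_THRESHOLD):
                        memory[neighbour][col] ^= 1   # spontaneous bit flip: 0 becomes 1

    # Double-sided hammering: rows 3 and 5 are the aggressors, row 4 is the victim.
    for _ in range(100):
        hammer(3, 2_000)
        hammer(5, 2_000)

    print("victim row 4:", memory[4])             # any non-zero bits are disturbed cells
    ```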

    For the business world, which is moving its most valuable resources en masse to the public cloud, this information is of strategic importance. The resource-sharing model, known as multi-tenancy, is based on the assumption that one client’s processes are completely separate from another client’s operations, even if they share the same physical GPU. The discovery of the GDDRHammer and GeForge vulnerabilities casts a shadow over this assumption. A theoretical, but evidence-based, possibility arises in which an entity with bad intentions rents a low-cost GPU instance on the same platform as a large financial institution or pharmaceutical company, and then uses the physical properties of the hardware to spy on its ‘neighbour’.

    The risks go beyond simple file theft. In the age of the AI arms race, a company’s most valuable assets are model weights and training data. An attacker who takes control of GPU memory can extract this information, de facto stealing a competitive advantage built up over years. Moreover, cloud providers operate under a shared responsibility model. While they guarantee the security of the logical and network layers, they are rarely able to fully protect against fundamental design flaws in the processors themselves, especially when hardware manufacturers such as NVIDIA recommend mitigations of limited effectiveness.

    Proposed methods of mitigating these attacks, such as enabling error correction codes (ECC) or IOMMU memory management units, are only a partial barrier. The key concern for IT decision-makers becomes the economic calculus. Enabling full protection mechanisms is almost always associated with a noticeable decrease in computing performance and available memory. In business realities, where model training time translates directly into costs of thousands of dollars, the choice between absolute security and operational efficiency becomes a difficult management dilemma.

    A key task for technical directors and security officers is now a fresh classification of resources. Not every process requires the highest degree of isolation, but projects critical to the future of the business may require a rethink of the public cloud approach. Bare-metal solutions, where the customer is given exclusive access to a physical server, or dedicated private clouds, are no longer the domain of the paranoid and are becoming a rational response to the physical limitations of modern silicon.

    The 2026 audit of cloud service providers should include not only ISO certifications, but also specific questions about physical isolation architecture at the GPU level. A mature business needs to understand that as technology approaches physical barriers, traditional software security methods are becoming insufficient. Rowhammer on the GPU signals that it is time for a new era of hardware hygiene, where awareness of the limitations of matter is as important as the quality of the code being written.

  • Patriotism or cold calculation? Why IT is going back to its roots (and local servers)


    Amid growing geopolitical uncertainty, the mantra of unconditionally moving resources to the global cloud is losing relevance, giving way to the urgent need to build digital independence. Infrastructure and operations (I&O) leaders need to prepare for a year in which physical data localisation and supplier diversification become not so much a technological option as a key component of business survival strategy.

    For the past decade, the IT strategy of many businesses has been based on a simple premise: a global hyperscaler will do it better, cheaper and more securely. Local data centres were treated as a relic of the past, and the notion of digital sovereignty was reduced to the need to meet GDPR (RODO) requirements. Today, this paradigm is rapidly eroding. A tough question is increasingly being asked in CIOs’ offices: what happens if global digital supply chains are disrupted?

    Geopatriation: A strategy for the age of “decoupling”

    The notion of geopatriation, which is beginning to dominate trend analyses for the coming quarters, is sometimes mistakenly equated in the IT community with simple local economic patriotism. This is a cognitive error that can cost companies their stability. In reality, geopatriation is a reaction to the global trend of ‘decoupling’, the separation of economic and technological blocs.

    Modern I&O cannot ignore the fact that the public cloud is not an ethereal entity, but a physical infrastructure under the jurisdiction of specific powers. Relocating workloads from global platforms to regional or national solutions ceases to be a matter of ideology and becomes part of systemic risk management.

    The key shift is from data sovereignty (where the files sit) to operational sovereignty. IT leaders need to ask themselves: in the event of sanctions, regulatory changes in the US or Asia, or physical disruption of cross-border links, will my business retain operational capability? Geopatriation is essentially a technical insurance policy. It reduces geopolitical risk and makes critical business processes independent of decisions made on other continents.

    Composability: How to escape the “Vendor Lock-in” trap

    Critics of the local approach rightly point out that abandoning the global cloud could mean being cut off from innovation. Regional providers rarely have the R&D budgets of the Silicon Valley giants. The solution to this dilemma is a new approach to hybrid computing.

    Hybridisation in 2025 is not about bundling an old server room with a cloud VPN. It is a philosophy of composable and extensible architecture. I&O managers must build systems from interchangeable building blocks. It’s about coordinating compute, storage and networking mechanisms in such a way that resources can be freely interchanged between providers.

    If a global provider becomes risky (politically or cost-wise), the company should be technically able to move processes to local infrastructure without rewriting applications. This approach forces I&O leaders to change their thinking about architecture – from monolithic deployments to flexible, containerised architectures that ‘float’ between different environments. This is where the real business value is born: in the ability to adapt quickly, rather than in simply owning the servers.

    Crisis of confidence and defence of identity

    The proliferation of infrastructure (Edge, local cloud, global cloud) brings with it a new threat: the erosion of trust. In an environment where data travels across multiple jurisdictions and systems, verifying what is true becomes an engineering challenge.

    Therefore, protection against disinformation is becoming an integral part of the new I&O strategy. This is not about PR image protection, but about hard technologies for digital identity verification. In the era of deepfakes and software supply chain attacks, companies need to implement mechanisms that guarantee that a piece of code, a command or a user is what it claims to be.

    For operations departments, this means implementing systems that validate the authenticity of communications at every stage. Protecting brand reputation starts deep at the infrastructure layer – from securing the identity of administrators to cryptographically signing application containers.
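    As a very small illustration of what ‘validating authenticity’ can mean in practice, the sketch below checks whether a build artefact (for example, a container image digest) carries a valid signature from a trusted key. It assumes the third-party Python cryptography package and an RSA key pair; real pipelines typically rely on dedicated tooling such as Sigstore/cosign or Notary rather than hand-rolled checks.

    ```python
    # Minimal sketch: verifying that a build artefact was signed by a trusted key.
    # Illustrative only; assumes the third-party 'cryptography' package is installed.
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.exceptions import InvalidSignature

    def artefact_is_trusted(artefact_bytes: bytes, signature: bytes, public_key_pem: bytes) -> bool:
        """Return True only if 'signature' was produced over 'artefact_bytes'
        by the holder of the private key matching 'public_key_pem'."""
        public_key = serialization.load_pem_public_key(public_key_pem)
        try:
            public_key.verify(signature, artefact_bytes, padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False
    ```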

    The economics of independence: Energy efficiency as a necessity

    Building a sovereign, hybrid infrastructure is more expensive than renting computing power on a pay-as-you-go model from a giant. This is a fact that CFOs often do not want to discuss. However, I&O managers have a new argument in hand: energy-efficient computing.

    New technologies and practices that reduce the carbon footprint are not just a nod to ESG. They are a way to fund independence. The use of neuromorphic systems, optical computing or simply radical energy optimisation of data centres reduces the operating costs of in-house and co-located infrastructure.

    In this way, ‘Green IT’ ceases to be a marketing add-on and becomes the foundation of the hybrid model’s profitability. I&O leaders who combine the geopatriation trend with an aggressive energy efficiency strategy will be able to prove to management what is most important: operational security while maintaining budgetary discipline.

    From administrator to strategist

    The infrastructure and operations areas are entering a phase of strategic maturity. The role of the head of I&O is evolving from a provider of resources (‘give me a server’) to an architect of state and business continuity.

    Understanding the impact of geopatriation and implementing a model where a company is not held hostage to one provider or one jurisdiction is the most pressing task for the coming months. Those who treat this trend as a trivial throwback to the past may wake up to the reality that they have no control over their own digital destiny.

  • Hardware casualties of artificial intelligence. Why will PC and server prices shoot up?


    The world of technology today is looking in one direction, mesmerised by the promises made to us by artificial intelligence. However, behind the scenes of this media and stock market spectacle, there is a brutal battle for resources that could have far-reaching consequences for the entire partner channel. Experts are warning ever louder: the colossal infrastructure needed to ‘feed’ insatiable AI models is beginning to cannibalise the traditional IT market. In the name of global dominance by technology giants, is the professional and consumer customer sector facing a supply crunch and a drastic price increase?

    The technology industry has entered what many observers describe as a ‘journey of no return’. The decision that the coming decades will be dominated by artificial intelligence has been made at the highest strategic levels in Silicon Valley, and there is no turning back from it. The problem is that at the current stage of development, the beneficiaries of this revolution are mainly the technology providers themselves and the marketing departments of the corporations that are building the narrative of success.

    Apart from the big players, few yet see the promised massive business benefits that would justify such a gigantic outlay. Nevertheless, the ‘snowball’ effect is working – the ball is rolling faster and faster and continues to feed on itself, consuming capital and resources at a rate the IT market has not seen in years.

    Infrastructure at its limits

    A key issue that rarely breaks through to the consciousness of the average business user is the physicality of AI. AI is not an ethereal entity in the cloud – it’s thousands of tonnes of silicon, steel and copper. It is hectares of data centres that consume as much energy as a medium-sized country.

    We are currently facing a situation where the consumer electronics market has a large-scale problem. The colossal infrastructure required to power language and generative models needs to be prioritised. As a result, there is an unbridled battle for global control of these technologies and the resources required to maintain them. Technology giants are reserving capacity and energy for years ahead.

    For the traditional IT ecosystem – from hosting providers to integrators to enterprise IT departments – this means risking being pushed to the margins. If the priority of factories and data centres becomes servicing hyperscale AI projects, infrastructure availability for the ‘rest of the world’ can become, and is slowly becoming, a luxury.

    Memory chips: Silicon gold and looming shortages

    The most tangible evidence that the market is losing its balance is the news coming from the computer memory sector (DRAM and NAND). This is where the greatest risk to the distribution channel is currently concentrated.

    Training and operating AI models requires specific, expensive and difficult-to-manufacture High Bandwidth Memory (HBM). Manufacturers, seeing the gigantic margins and insatiable demand from AI accelerator developers, are shifting their production lines to this segment. This, however, comes at the expense of standard DDR memory and the NAND flash dies used in laptops, workstations and typical servers.

    Market signals that emerged last week point to a worrying future. Due to a shortage of supply and rising manufacturing costs, the consumer and professional customer segment could face steep price increases and problems with product availability. The threat of a collapse of this segment is real – if the supply of memory is sucked up by AI servers, PC and consumer electronics manufacturers will have to either drastically increase prices or reduce production.

    Dot-com 2.0 bubble or the foundation of a new era?

    Observing this race, it is impossible to escape questions about its economic basis. The strategies of the biggest stock market players are now one-way: all resources are directed towards AI. Satisfying investors, who demand that companies put ‘more eggs in one basket’, has become the overriding objective, often obscuring common-sense diversification.

    Analysts are increasingly bold in their thesis that we are inside a speculative bubble reminiscent of the famous ‘dot-com boom’ at the turn of the century. The stock market valuations of technology companies are rising in isolation from their traditional performance, driven only by the promise of future AI dominance. Some even argue that this bubble will sooner or later burst as a ‘dot-com 2.0’. If the monetisation of AI does not come fast enough to cover the gigantic infrastructure costs (CAPEX), the correction could be painful for the entire industry – not just the leaders of the race.

    For resellers, distributors and system integrators, the current situation is a wake-up call. The market to which we have become accustomed – relatively stable prices and high component availability – is entering a phase of turbulence.

    The technology industry has put everything on the line. And while AI will undoubtedly change the world, the bill for this change – in the form of more expensive equipment and more difficult access to technology – will be paid by all of us before we even have time to feel the real benefits of this revolution.

  • How the dollar and euro exchange rates are affecting the prices of servers, laptops and components


    For every IT director and owner of a small or medium-sized business in Poland, planning a budget for technology equipment is like playing on two fronts. With one eye, they monitor technological advances and the needs of the company, and with the other – with growing anxiety – they follow the exchange rate charts. This is no coincidence. Fluctuations in the forex markets, especially the US dollar (USD/PLN) exchange rate, have a direct and often brutal impact on the final prices of servers, laptops and components.

    When the zloty was at a record low in autumn 2022 and the dollar exchange rate reached 5 zlotys, Polish consumers and companies were in for a shock. Apple’s introduction of new products was associated with price increases of up to 30%. This extreme example exposed a fundamental truth about the Polish IT market: we are an importer of technology and the global supply chain is priced in hard currency.

    However, reducing this relationship solely to a simple USD/PLN conversion rate is a mistake that can cost companies tens of thousands of zlotys. Analysis of the market in recent years shows that the invoice price is the product of at least four forces: the dollar exchange rate, the stabilising role of the euro, the global supply of semiconductors and price wars between technology giants.

    For Polish SMEs, understanding this complex mechanics and proactively managing risk is no longer an option but is becoming a strategic necessity.

    Anatomy of a price: why servers speak dollar and laptops speak euro

    To manage costs effectively, it is important to understand why different categories of equipment react differently to exchange rate fluctuations.

    Most of the global technology trade, from silicon wafers in Taiwan to finished microprocessors from Intel or AMD, is settled in US dollars (USD). A Polish distributor or integrator, when buying components or servers, pays for them in USD. This means that any increase in the USD/PLN exchange rate almost immediately raises the cost of the purchase. Distributors, wishing to protect their margins, must pass this cost on to the end customer.

    The server market is the most sensitive here. Configure-to-order (CTO) systems, ordered from manufacturers such as Dell or HPE, are often priced directly in USD, leaving the Polish company with almost 100 per cent of the exchange rate risk.

    The situation is different in the laptop segment. A significant proportion of them reach Poland via European distribution centres located in the euro zone (e.g. in Germany or the Netherlands). The Polish distributor settles accounts with its European supplier in euro (EUR). In this model, the EUR/PLN rate acts as a ‘filter’ or ‘shock absorber’ for sudden jumps in the dollar. Laptop prices are thus more stable, although the euro price already bakes in the USD/EUR exchange rate set by the European headquarters.

    There is also the phenomenon of ‘price lag’. Distributors hold on to stock bought at the old, lower exchange rate, so exchange-rate changes do not always pass through to prices 1:1. This was perfectly demonstrated at the beginning of 2021: between December 2020 and March 2021, the USD/PLN exchange rate rose by more than 9%, but the average prices of smartphones and tablets rose by “only” 4% during this period. The market temporarily absorbed some of the hit, giving companies a brief ‘window’ to buy before the new, more expensive supply arrived.

    Server market trap 2024/2025: a missed SME opportunity

    Analysis of the server market reveals a key and risky paradox into which many Polish companies have fallen. The year 2024 was, in theory, the best time in years to upgrade infrastructure. Two key factors contributed to this:

    • Strong zloty: In 2024, a ‘weaker dollar’ was recorded, significantly reducing the cost of importing equipment priced in USD.
    • Global price war: At the same time, there was a brutal battle for market share between Intel and AMD. This led to gigantic price cuts on key server processors (Xeon and EPYC), reaching up to 35-50% below list prices in the US market.

    A strong currency and cheap underlying components – a textbook ‘buying window’. Despite this, market data shows that the Polish IT equipment market declined in 2024 (its value fell from USD 10.03 billion to USD 9.39 billion). Companies, probably due to the general macroeconomic situation and high interest rates, put investments on hold.

    Now these companies may be falling into a trap. Companies that waited out 2024 in the hope of further declines face a much worse situation in 2025. Forecasts for the beginning of 2025 point to an 18 per cent increase in average chip prices and a renewed extension of lead times to more than four months. Trying to ‘wait it out’ is proving to be a strategic mistake – these companies will be forced to buy equipment at higher prices and with longer lead times.

    Noise in the data: when the exchange rate takes a back seat

    Analysis of IT prices solely through the prism of currencies is incomplete. There are factors that periodically become more important.

    The first is the availability of semiconductors. The 2021-2022 crisis showed that price can become secondary to the ability to buy at all. What’s more, this crisis generated a massive implicit currency risk. If the average waiting time for a server is more than four months, a Polish company placing an order in January (at an exchange rate of PLN 4.00) with a payment deadline in May may have to pay 10% more if the exchange rate rises to PLN 4.40 in the meantime.
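    Spelling the arithmetic out makes the exposure concrete. The short sketch below uses the exchange rates from the example above and a hypothetical USD 50,000 order value:

    ```python
    # Implicit currency risk from the example above: an order priced in USD,
    # paid months later at a worse USD/PLN rate. The order value is hypothetical.
    order_usd = 50_000          # invoice value in USD (illustrative)
    rate_at_order = 4.00        # USD/PLN when the order is placed (January)
    rate_at_payment = 4.40      # USD/PLN when the invoice is due (May)

    cost_planned = order_usd * rate_at_order     # 200,000 PLN budgeted
    cost_actual = order_usd * rate_at_payment    # 220,000 PLN actually paid
    extra = cost_actual - cost_planned

    print(f"planned: {cost_planned:,.0f} PLN, actual: {cost_actual:,.0f} PLN")
    print(f"unhedged FX loss: {extra:,.0f} PLN ({extra / cost_planned:.0%})")
    ```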

    The second factor is geopolitics. Customs decisions, such as those imposed by the US on Chinese imports, force manufacturers (Dell, HP, Lenovo) to costly relocate factories, for example to Vietnam. The costs of this global reorganisation of the supply chain are included in the base price of the product, raising it for everyone, regardless of local exchange rates.

    How can SMEs protect themselves?

    For Polish companies, passivity towards currency risk is a gamble. Instead of trying to catch the perfect dip (which, as 2024 has shown, is almost impossible), companies need to implement conscious risk management strategies.

    1. Purchase planning based on cycles, not ‘timing’: Instead of guessing, IT and finance departments should monitor two key indicators: the local USD/PLN exchange rate and global component price trends (e.g. CPU price wars). The budget should be flexible enough to accelerate key purchases when both indicators are favourable.

    2. Active currency risk management (hedging): Hedging instruments, hitherto seen as the domain of large corporations, are now also available to SMEs.

    • Forward contracts: This is the simplest tool. If a company knows that it needs to buy $50,000 worth of equipment in three months’ time, it can ‘freeze’ today’s rate in a contract with its bank. This eliminates the risk, although it also removes the benefit if the rate falls (a rough payoff comparison appears after this numbered list).
    • Currency options: These act as an ‘insurance policy’. The company pays a small premium for the right (but not the obligation) to buy the currency at a fixed rate. If the market rate turns out better, the company simply buys at the market rate; if worse, it exercises the option, protecting itself against the loss.
    • Natural hedging: The simplest method for companies that have revenues in USD or EUR (e.g. from exporting IT services). It involves paying for imported equipment in the currency the company has earned, bypassing currency conversion costs altogether.

    3. Building supply chain resilience: the risks for 2025 (more expensive chips, longer lead times) show that SMEs need to think not only about their own risks, but also those of their suppliers. It is worth actively talking to local IT integrators. The key question is: does the supplier have diversified sources?

    The best strategy for SMEs may be to sign a framework agreement with a supplier for the cyclical delivery of equipment (e.g. 50 laptops per quarter) at a fixed PLN price for 12 months. In this way, it is the supplier – who is much better equipped for professional hedging – who assumes the currency risk (USD/PLN) and the component price risk (a projected increase of 18%). Such an agreement provides invaluable predictability of operating costs.
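    As a rough illustration of how the instruments listed under point 2 behave, the sketch below compares the PLN cost of the same USD 50,000 purchase under three scenarios: unhedged, a forward contract and a currency option. The forward rate, option strike and premium are illustrative assumptions, not market quotes.

    ```python
    from typing import Optional

    # Payoff comparison for the hedging approaches described in point 2, using the
    # USD 50,000 purchase from the forward-contract example. The forward rate,
    # option strike and option premium are illustrative assumptions, not quotes.
    def cost_in_pln(amount_usd: float, spot_at_payment: float,
                    forward_rate: Optional[float] = None,
                    option_strike: Optional[float] = None,
                    option_premium_pln: float = 0.0) -> float:
        """PLN cost of buying 'amount_usd' at payment time under one strategy."""
        if forward_rate is not None:              # forward: the rate is locked, no choice left
            return amount_usd * forward_rate
        if option_strike is not None:             # option: pay a premium, then use the better rate
            return amount_usd * min(spot_at_payment, option_strike) + option_premium_pln
        return amount_usd * spot_at_payment       # unhedged: pure spot exposure

    amount = 50_000
    for spot in (3.80, 4.00, 4.40):               # possible USD/PLN rates on the payment date
        unhedged = cost_in_pln(amount, spot)
        forward = cost_in_pln(amount, spot, forward_rate=4.05)
        option = cost_in_pln(amount, spot, option_strike=4.05, option_premium_pln=3_000)
        print(f"spot {spot:.2f}: unhedged {unhedged:>9,.0f}  forward {forward:>9,.0f}  option {option:>9,.0f}")
    ```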

    In a volatile economic environment, IT currency risk management is no longer the responsibility of the finance department. It is becoming a key element of a company’s technology strategy.

  • OpenAI adds $100bn for backup servers in new spending plan


    In an unprecedented show of financial strength, OpenAI is signalling that it is prepared to spend almost any money to ensure its dominance in the field of artificial intelligence. The company plans to spend an additional $100 billion over the next five years on leasing back-up servers. This move dramatically increases its already gigantic cloud infrastructure commitments.

    The decision, which was first reported by The Information, citing management’s discussions with shareholders, comes on top of previously projected spending of $350 billion on server rentals by 2030.

    In total, including standby servers, OpenAI’s infrastructure spending is expected to average around $85 billion per year over the next five years. This is an amount that dwarfs the IT spending of most global corporations and some countries.

    This astronomical investment in computing power underlines a fundamental truth of the current AI era: progress in cutting-edge models is inextricably linked to access to powerful and, crucially, limited infrastructure.

    Technology companies are in a fierce battle for every available gigawatt of power, driving up prices and securing huge profits for cloud service providers and chipmakers such as Nvidia.

    Internally, OpenAI executives see these back-up servers not as a mere security cost, but as a ‘monetisable’ asset. The company hopes that the extra computing power can be used to fuel unexpected research breakthroughs or to handle surges in the popularity of its products, thereby generating revenue that is not yet included in official forecasts.

    However, this strategy comes with a huge financial risk. According to earlier reports, the ChatGPT developer expects to ‘burn’ around $115 billion in cash by 2029. The gigantic expenditure on servers is a bet that OpenAI’s future capabilities and products will not only dominate the market, but also generate revenue on a scale to justify such a colossal investment. For now, it is the most expensive bet in the history of the technology industry.

  • Microsegmentation 2.0 – how to effectively protect a network without agents


    Today’s IT environments, dominated by virtualisation, containers and cloud services, are characterised by dynamics that challenge classic security models.

    As infrastructure complexity increases, traditional perimeter security is proving insufficient to protect against advanced insider threats. Against this backdrop, the concept of micro-segmentation is gaining prominence, and its latest agentless incarnation is changing the rules of the game when it comes to network protection.

    Limitations of traditional security models

    Historically, network security was based on macrosegmentation. It consisted of dividing the infrastructure into large zones of trust, such as a production, development or office network. Such a model assumed a high level of trust in the resources inside a zone.

    Its main limitation, however, is the risk associated with lateral movement. Once an attacker has succeeded in compromising one device, they can move relatively freely within the entire zone, using standard administrative protocols to infect further systems.

    It is this mechanism that is often crucial to the success of large-scale ransomware attacks.

    The concept of microsegmentation and its initial implementation challenges

    Microsegmentation addressed the weaknesses of this approach. It aims to implement a Zero Trust model by creating granular security zones around individual applications or resources. Every communication, even inside a previously trusted zone, is subject to verification.

    However, the first generations of microsegmentation solutions faced significant deployment barriers that limited their widespread use. The reliance on software agents, installed on each protected system, generated an operational burden in terms of management, updates and potential performance or compatibility issues.

    Moreover, the configuration process was extremely labour-intensive. Manually mapping dependencies, tagging resources and creating thousands of rules in a dynamic environment was a daunting task.

    All of this, combined with significant licensing costs, made traditional microsegmentation a complex project, available mainly to the largest organisations.

    Modern approach: agentless microsegmentation

    Developments in technology have led to a new, more practical approach that removes many of the historical barriers. Modern microsegmentation is based on the use of native security mechanisms built into operating systems, such as the Windows Filtering Platform or Linux iptables.

    Such a solution is inherently agentless, which simplifies implementation and maintenance.

    Central to this architecture is the segmentation server, which acts as the analytical brain of the system. Its operation is methodical. In the first phase, the server learns the network topology, passively analysing traffic to understand the legitimate communication patterns between applications.

    It then automatically classifies and tags resources based on the data collected. In the final stage, based on this information, the system autonomously generates a precise set of firewall rules that only allows authorised traffic.
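    A minimal sketch of that ‘learn, then generate’ step is shown below: observed flows are deduplicated and turned into narrow allow rules followed by a default drop. The flow records and the nftables-style output are simplified illustrations and do not reproduce any particular vendor’s format.

    ```python
    # Minimal sketch of the 'learn, then generate' step described above: observed
    # flows become narrow allow rules with a default drop. The flow records and the
    # nftables-style output are simplified illustrations, not a product's format.
    from collections import namedtuple

    Flow = namedtuple("Flow", "src dst dport proto")

    # Flows the segmentation server observed during the passive learning phase (example data).
    observed = [
        Flow("10.0.1.10", "10.0.2.20", 5432, "tcp"),   # app server -> database
        Flow("10.0.1.10", "10.0.2.20", 5432, "tcp"),   # duplicate observation
        Flow("10.0.3.5", "10.0.1.10", 443, "tcp"),     # load balancer -> app server
    ]

    def generate_rules(flows):
        """Deduplicate observed flows and emit allow rules plus a default drop."""
        rules = []
        for f in sorted(set(flows)):
            rules.append(
                f"add rule inet seg forward ip saddr {f.src} ip daddr {f.dst} "
                f"{f.proto} dport {f.dport} accept"
            )
        rules.append("add rule inet seg forward drop   # default-deny everything unlearned")
        return rules

    for rule in generate_rules(observed):
        print(rule)
    ```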

    Administrative access management is also a practical aspect of this solution. Rather than keeping ports permanently open, these systems integrate with multi-factor authentication (MFA) platforms.

    The administrator, wishing to access the server, initiates a request which, after successful MFA verification, temporarily opens the required communication path for a predetermined period of time.
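    The sketch below illustrates this just-in-time access flow in the same spirit: a path is opened only after MFA succeeds, and only for a limited window. The MFA check and the firewall calls are stand-ins for integration with a real authentication platform and policy engine.

    ```python
    # Sketch of the just-in-time admin access flow described above. The MFA check
    # and the 'firewall' prints are stand-ins for calls to real platforms.
    import time

    def verify_mfa(user: str, otp_code: str) -> bool:
        """Placeholder for a call to the organisation's MFA provider."""
        return otp_code == "123456"               # illustrative stub only

    def open_admin_path(user: str, host: str, port: int, ttl_seconds: int, otp_code: str) -> None:
        if not verify_mfa(user, otp_code):
            raise PermissionError("MFA verification failed")
        print(f"ALLOW {user} -> {host}:{port}")   # stand-in for adding a firewall rule
        time.sleep(ttl_seconds)                   # in practice: a scheduled revocation job
        print(f"REVOKE {user} -> {host}:{port}")  # rule removed when the window expires

    open_admin_path("alice", "10.0.2.20", 22, ttl_seconds=5, otp_code="123456")
    ```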

    Operational and strategic benefits

    There are tangible benefits to moving to an agentless model. From a security perspective, it is a highly effective method of limiting the reach of attacks by blocking lateral movement.

    From an operational point of view, automating the mapping and rule creation processes significantly reduces administrators’ workload and minimises the risk of configuration errors. The use of existing system components lowers the total cost of ownership (TCO) and simplifies the security architecture. Finally, organisations gain detailed insight into the actual data flows in their infrastructure, which facilitates management and auditing.

    We are seeing an important evolution in network security today. Microsegmentation, which was once seen as a complex and costly project, is becoming an accessible and practical tool thanks to modern, agentless approaches. It enables organisations to implement granular control and Zero Trust policies, which are essential to effectively protect dynamic, virtualised and cloud-based IT infrastructures.

  • Technological relic or ticking bomb? A simple guide to the security of legacy systems


    In the nooks and crannies of many company server rooms, devices are still running whose very reboot raises legitimate concern. Office workstations run applications with interfaces that recall decades gone by.

    They are the quiet heroes of everyday work – systems that simply get the job done. The question remains, however, when is such a technological veteran a valuable, stable monument, and when does it become a ticking time bomb that can put the entire organisation at serious risk?

    Outdated systems, referred to in the industry as legacy, remain in place in companies for a number of reasons. Sometimes this is determined by budget constraints, and sometimes by their importance to critical processes, which makes replacement seem an extremely complex operation.

    The problem is that age in technology is not just a metric. It’s often a lack of vendor support, a failure to patch known security vulnerabilities and an architecture designed at a time when the cyber threat landscape looked very different.

    This article provides a practical guide to diagnose risks and implement mitigating actions in a few steps, without the need for an immediate and costly revolution.

    The key action that starts the whole process is a reliable inventory. It is impossible to effectively protect assets whose existence is not fully known. The first step in regaining control is therefore to create an inventory of these technological veterans.

    Such a register should include each system’s name, age, date of last update and role in the company. Simply being aware of your assets is half the battle and a solid foundation for further considered action.
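    A register like this does not need specialised tooling to get started. The sketch below shows one possible minimal structure in Python, with the fields suggested above plus two that feed the later risk-analysis step; the example entries are invented.

    ```python
    # A minimal sketch of the inventory register described above. The example
    # systems and dates are invented; the extra fields feed the risk-analysis step.
    from dataclasses import dataclass, asdict
    from datetime import date
    import csv

    @dataclass
    class LegacyAsset:
        name: str
        deployed: date            # how old the system is
        last_update: date         # date of the last patch or update
        role: str                 # what the system does for the business
        internet_exposed: bool    # used for the risk prioritisation step
        talks_to: str             # other systems it communicates with

    inventory = [
        LegacyAsset("warehouse-db", date(2009, 3, 1), date(2016, 7, 12),
                    "stock database", internet_exposed=False, talks_to="ERP"),
        LegacyAsset("hmi-line-2", date(2012, 5, 20), date(2014, 1, 3),
                    "production line control", internet_exposed=True, talks_to="SCADA, office LAN"),
    ]

    # Persist the register as a simple CSV so it can be reviewed outside IT as well.
    with open("legacy_inventory.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=asdict(inventory[0]).keys())
        writer.writeheader()
        for asset in inventory:
            writer.writerow(asdict(asset))
    ```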

    Next, it is worth asking fundamental questions about risk analysis. Not every old system poses the same risk. The key is its connection to the rest of the network and the internet. An old computer with a database running fully offline is a very different case from an unpatched production control system connected to the company network.

    It is important to assess whether the device is connected to the internet, whether it communicates with other systems and what data it processes. The answers to these questions allow proper prioritisation.

    Diagnosis should go even deeper, all the way to the software supply chain. Sometimes a threat is hidden in a seemingly modern solution that under the hood uses old, unsupported components.

    The so-called SBOM (Software Bill of Materials), a transparent ‘list of components’ of software, is becoming increasingly important in the industry. It is good practice to verify with suppliers which technologies their products are based on, as a new interface does not always guarantee modern and secure code.

    Once the picture is clear, countermeasures can be implemented. Often the quickest results come from taking care of basic digital hygiene. These are absolute foundations that are easily forgotten in the daily rush.

    Actions in this area include changing all default or weak passwords, systematically reviewing lists of users with access and disabling unused ports and services that may provide an unnecessary gateway for attackers.

    For systems that cannot be updated for fear of failure, isolation is an effective solution. One can use the analogy of a valuable but fragile exhibit in a museum that is placed behind armoured glass.

    In the IT world, a firewall or network segmentation mechanism is such a protective barrier. Isolating a critical but vulnerable system from the rest of the company’s infrastructure, especially the internet, drastically reduces potential attack vectors.

    The final piece of the puzzle is to implement continuous monitoring. Even the best-secured assets are worth keeping a close eye on. In practice, this comes down to the use of intrusion detection systems (IDS).

    To use another analogy, they act as a ‘smoke detector’ for infrastructure. They may not put out a fire, but they will immediately raise the alarm as soon as there is a threat, giving valuable time to react before an incident escalates into a major crisis.

    An old system does not have to be either a worthless antique or a ticking bomb. It should be a consciously managed part of the corporate ecosystem. The key is not to panic and replace everything that is more than a few years old, but to manage your technological heritage proactively and wisely.

    The starting point for these activities can be the aforementioned stocktaking – a process that, with little effort, provides a great deal of knowledge and lays the foundation for a more secure future for the organisation.

  • Open Compute Project – how open servers are changing IT infrastructure


    Data centres have been regarded as the heart of the digital economy for years, but today they are beating faster and louder – literally and figuratively. Rising energy bills, the need to scale and regulatory pressures mean that classic server architectures are beginning to choke under their own weight.

    It is increasingly difficult to ignore the question: does closed infrastructure still have a future?

    There is already an alternative on the horizon – the Open Compute Project (OCP). It is an initiative that focuses on openness, modularity and independence from a single manufacturer. For some it is an experiment, for others it is the foundation of future IT infrastructure.

    OCP – Silicon Valley reinvents the server

    OCP’s history dates back to 2011, when Facebook decided to build its own data centre in a radically different way from the existing standards. Instead of buying off-the-shelf solutions from vendors, engineers began designing open, modular hardware – from servers to racks to power systems. The result? Higher efficiency, lower costs and the ability to share specifications with others.

    Today, there are more than 200 members – from Microsoft, Google and Intel to banks and cloud operators. Importantly, OCP is not a club of hyperscalers. It is also joined by smaller institutions and solution providers who want to avoid dependence on proprietary ecosystems.

    What is the advantage? Standardisation paves the way for innovation. With common specifications, companies can implement new solutions faster, reduce operating costs and choose suppliers without fear of vendor lock-in.

    Why are companies betting on open servers?

    This is supported by several factors that are difficult to ignore today.

    1. Scalability on demand

    OCP is based on a modular design. In practice, this means that companies can expand the infrastructure step by step, without costly downtime or large upfront investments.

    2. Lower operating costs

    Open standards and central power supply reduce both CAPEX and OPEX. In an era of rising energy prices, the difference is noticeable.

    3. Energy efficiency

    Better airflow, 48V power supply and less redundancy are a simple way to improve PUE – a metric that has become to data centre operators what fuel consumption per 100 km is to car manufacturers (a quick illustration of the calculation follows this list).

    4. Flexibility in the choice of suppliers

    The vendor-independent architecture allows different components to be combined and matched to business workloads rather than a single vendor catalogue.

    5. Sustainable development

    Replacing modules instead of entire systems reduces e-waste and extends the life cycle of equipment – an increasingly important argument in the ESG era.

    6. Simplified management

    Open interfaces and unified monitoring tools simplify control and reduce the complexity of daily operations.
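    PUE (Power Usage Effectiveness) is simply total facility power divided by the power that reaches the IT equipment; the closer to 1.0, the less energy is lost to cooling and power conversion. The short sketch below illustrates the calculation with purely hypothetical figures.

    ```python
    # PUE (Power Usage Effectiveness) = total facility power / IT equipment power.
    # A value of 1.0 would mean every kilowatt goes to the servers; the figures
    # below are purely illustrative, not measurements of any particular facility.
    def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
        return total_facility_kw / it_equipment_kw

    legacy_room = pue(total_facility_kw=1_800, it_equipment_kw=1_000)   # ~1.8: much power lost to cooling
    ocp_design = pue(total_facility_kw=1_200, it_equipment_kw=1_000)    # ~1.2: better airflow, 48V distribution

    print(f"legacy design PUE: {legacy_room:.2f}")
    print(f"OCP-style design PUE: {ocp_design:.2f}")
    ```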

    In short, OCP is not just a technology. It’s a survival strategy for companies that have to balance digital ambitions with energy bills and the demands of regulators.

    Integration – technology is easy, planning is difficult

    While the advantages of OCP sound compelling, implementation in existing data centres is not trivial. Most of the current infrastructure was designed at a time when proprietary standards prevailed.

    Most common obstacles:

    • Power and cooling – OCP servers use 48V buses, while most data centres rely on 230/400V. This requires adaptation of the power infrastructure.
    • Rack dimensions – OCP racks differ from classic 19-inch enclosures, which may mean that some of the space has to be converted.
    • Network integration – open network topologies require upgrades to existing infrastructure, especially in terms of capacity and redundancy.
    • Monitoring and management – OCP uses open APIs and its own controllers, which need to be integrated with the tools used by IT teams.
    • Migration without downtime – replacing infrastructure components in critical environments requires detailed testing and redundancy plans.

    The technology is available; what slows implementations down are organisational issues and the lack of a coherent migration strategy.

    Companies that successfully transition to OCP tend to opt for an evolutionary rather than revolutionary approach.

    • Pilots and hybrid strategies – testing open architecture in selected clusters, e.g. cloud or HPC.
    • Modular conversions – phased introduction of OCP-compliant power and cooling systems, rather than a one-off conversion of the entire server room.
    • Working with independent partners – experienced integrators help avoid the mistakes that come with attempting a migration alone.
    • Building competence within the team – investing in knowledge of open hardware standards is the best way to become independent of external suppliers.

    This approach spreads costs, minimises risk and prepares the organisation for greater transformation in the future.

    Openness as a foundation for digital resilience

    The Open Compute Project shows that the data centre revolution does not have to be about the next ‘magic technology’, but about a simple question: should the infrastructure be open or closed?

    OCP servers offer real savings, greater flexibility and the chance for sustainability compliance. At the same time, implementation requires knowledge, patience and strategic planning.

    For companies that test the open approach today, the benefits are twofold. They gain a modern infrastructure and at the same time resilience to future crises – energy, regulatory or market.

  • Nuclear power or windmills? The data centre industry is looking for a plan B to power AI


    The rise of artificial intelligence in the technology market has triggered an exponential increase in the demand for computing power – and with it, also for electricity.

    Data centres, until recently treated as a backdrop to digital transformation, are now at the heart of it. The problem is that their continued growth depends not so much on the number of servers as on the availability of power.

    Renewable energy sources were supposed to be the answer. But this is no longer enough. The IT industry is starting to look more and more seriously at nuclear power – especially in the form of small modular reactors (SMRs). Is this a viable alternative or just a long-term vision?

    According to the latest Business Critical Solutions survey, up to 92% of data centre market experts expect demand for computing power to continue to grow through to 2025.

    This is mainly driven by artificial intelligence, whose models – from LLMs to generative AI – consume huge amounts of energy both in the training phase and in day-to-day operations.

    The problem is that the infrastructure cannot keep up. As many as 85% of existing data centres are not prepared for this type of workload.

    Scaling up computing power without providing adequate power is like building a skyscraper without foundations.

    The industry is declaring its willingness to switch to renewable energy sources – as many as 91% of survey participants believe that at least 90% of energy for data centres will come from RES in the future. Reality, however, is putting these declarations to a harsh test.

    RES are distributed, weather-dependent and require expensive investment in transmission infrastructure and storage.

    In many locations, it is impossible to guarantee the level of supply continuity required by modern data centres. And where wind and solar farms are available, the relevant transmission networks or environmental permits are often lacking.

    As a result, companies are looking for alternative scenarios – ones that combine low carbon with reliability. Nuclear power is increasingly on this map.

    In the BCS survey, as many as 75% of industry representatives do not rule out the use of nuclear power – mainly in the form of so-called small modular reactors (SMRs).

    These compact, scalable reactors can be located closer to end users and potentially serve as an energy source for large data campuses.

    Their advantages are obvious: independence from weather conditions, predictability of production, no CO₂ emissions and the possibility of installation in industrial locations. However, enthusiasm is tempered by practice.

    70% of respondents believe that SMR technology will not be commercially available sooner than a decade from now. 60% expect strong public opposition – mainly due to concerns about safety, waste disposal and stereotypes around nuclear power.

    What would have been considered an extravagance just a few years ago is today becoming the new normal: data centre operators are investing in their own power supplies.

    From PPAs (Power Purchase Agreements) with renewable energy farms, to building microgrids, to experimenting with hybrid energy models, data centres are ceasing to be just a consumer of energy and are beginning to manage it.

    In this logic, SMRs can become not only a source of power, but also a tool for building energy independence from external suppliers and markets. Such a model – ‘data centre as digital power plant’ – is gaining traction especially in regions with unstable supplies or high energy prices.

    Changes in the data centre energy mix will have a direct impact on the entire IT ecosystem – from hardware manufacturers to systems integrators and cloud service providers.

    The pressure for energy efficiency is growing: servers, cooling systems, software and architectures need to consume less energy under increasing loads.

    At the same time, there is a new demand for competences – energy engineers, ESG specialists, energy management experts.

    For channel companies, this is an opportunity for new business lines: energy consultancy, consumption optimisation services, integration of RES and power management solutions can become a natural extension of IT offerings.

    All these trends lead to one conclusion: the development of data centres in the coming years will not only depend on IT technology, but increasingly on energy infrastructure.


    Green transformation is the way forward, but without viable solutions for a stable energy supply – such as SMRs or energy storage – the development of AI could come to a standstill. The industry knows that the time for ambitious declarations is over. What matters now is implementation.

    Decisions taken today will determine whether the European data centre market can meet the demands of tomorrow.

    The winds are blowing, the sun is shining, but AI cannot wait for the weather. If small reactors prove to be a viable scenario, they will not just be an energy solution – they will be the foundation of the new digital economy.