Tag: Data Center

  • Data centre spending peaks. How is AI driving infrastructure construction?

    Market forecasts for the technology sector are rarely so clear-cut. According to the latest data from analyst firm Gartner, global IT spending will reach $6.31 trillion in 2026. This is evidence of a shift in the centre of gravity of global business. The 13.5 per cent year-on-year increase, significantly higher than previous estimates, is a direct result of the artificial intelligence infrastructure arms race.

    A foundation of concrete and silicon: The exploding data centre sector

    The most striking figure in the report is the growth of investment in data centres. Gartner predicts that spending in this segment will grow by 55.8% in 2026, surpassing the $788 billion barrier. To understand the scale of this phenomenon, it is worth looking at it through the lens of technological change: we are not dealing with a simple expansion of existing resources, but with a complete reconfiguration of computing architecture.

    Traditional data centres, optimised for data storage and standard business applications, are giving way to HPC facilities. These are designed for the specific requirements of graphics processing units (GPUs) and TPUs, which are at the heart of modern AI. The surge in investment extends not only to the servers themselves, but also to advanced liquid cooling systems, high-density power infrastructure and enabling technologies, without which scaling large language models (LLMs) would be impossible.

    In parallel, the IT services segment, infrastructure deployments and the IaaS model will generate a turnover of $1.87 trillion. This suggests that the market is ripe for consuming computing power in a hybrid model, where physical infrastructure goes hand in hand with specialised management.

    The dominance of hyperscalers: The computing oligopoly

    A phenomenon of a structural nature is the increasing concentration of computing power in the hands of a few players. By 2031, hyperscalers – mainly Microsoft, Google (Alphabet) and AWS (Amazon) – are forecast to control as much as 67% of global data centre capacity.

    This year alone, these three giants plan to spend more than $500 billion on capital expenditure related to AI infrastructure. Such gigantic outlays create a barrier to entry almost impossible for new players to overcome. For businesses, this means that they have to strategically choose a cloud provider that de facto becomes a partner in delivering a data-driven competitive advantage.

    We are also seeing a new geopolitical map of IT investment. Microsoft’s $25 billion investment in Australia or Meta’s construction of its 32nd data centre show that the availability of stable energy sources and space is becoming more important than proximity to traditional business clusters.

    Strategic alliances and supply chain

    Analysis of recent market deals sheds light on the direction in which the industry is heading. Anthropic’s agreements with Google and Broadcom to supply TPU (Tensor Processing Unit) power from 2027 onwards point to the growing importance of proprietary chips, designed to free the giants from dependence on dominant third-party processor suppliers.

    Even the biggest players need flexibility and specialised GPU cloud providers to cope with surges in computing power demand, as evidenced by Meta’s $21 billion partnership with CoreWeave. The biggest profits will be generated not by the AI developers themselves, but by the companies supplying the ‘components’ of this revolution – from accelerator manufacturers to power suppliers.

    Market insights for business

    In the context of the upcoming 2026 Investment Summit, business leaders should consider three key lessons:

    1. Infrastructure as a bottleneck: A 55.8% increase in spending on data centres suggests that access to computing power may become a scarce commodity. Companies planning large-scale AI deployments need to secure infrastructure resources in advance to avoid product development downtime.
    2. The need for cost optimisation: With IT spending reaching $6 trillion, efficiency becomes key. The shift from generic cloud solutions to AI-optimised infrastructure (such as IaaS supported by TPUs/GPUs) will determine the margins of digital projects.
    3. A new ecosystem of suppliers: Companies such as Broadcom and CoreWeave are worth watching. They represent a new category of technology partners who, through specialisation, are able to provide the components needed to scale AI faster and cheaper than traditional hardware suppliers.
  • Hyperscalers are taking over the data centre market. Is this the end of on-premise?

    For decades, the company server room was the technological equivalent of a family castle. It was tangible proof of sovereignty, a safe haven for data and the pride of IT departments that nurtured their own silicon with almost craftsmanlike precision. But the latest predictions from Synergy Research Group plot a scenario in which these digital fortresses become costly open-air museums. By 2031, hyperscalers such as Google, Microsoft and AWS will have seized 67% of global data centre capacity for themselves. What we are seeing is a rapid shift in the centre of gravity of the digital world, necessitated by the brute physics of artificial intelligence.

    The architecture of coercion

    In 2018, enterprises controlled more than half of the world’s computing infrastructure. The prospect of 2031, in which this share shrinks to just 19%, seems at first glance a statistical error. However, the reason for this dip is not an unwillingness to own, but an inability to meet the demands of the new era. Modern AI systems, based on GPUs and specialised chips such as TPUs, require power densities and cooling systems that exceed the design standards of traditional office buildings.

    Hyperscalers are building infrastructure today at fourteen times the scale of just eight years ago. This scale creates a barrier to entry that is impossible for a single organisation to break through. When Satya Nadella announces a doubling of Microsoft’s physical data centre footprint in just two years, he is not talking about building data warehouses; he is talking about creating large-scale innovation reactors. For the average enterprise, trying to catch up to this pace in-house would be akin to building a private power plant network just to power the office kettle.

    The currency of gigawatts and limits

    In the new economic order, capital is no longer the only determinant of development opportunities. The availability of computing power, treated as a scarce and limited resource, is coming to the fore. Strategic partnerships, such as those entered into by Anthropic with Google or OpenAI with AMD, are in fact reservations of energy and silicon for years ahead. In a world dominated by language models and advanced analytics, the ‘power shortage’ referred to by Microsoft’s Amy Hood is becoming a real operational risk for any technology-dependent business.

    This phenomenon is fundamentally changing the role of technology leaders in organisations. The CIO ceases to be a steward of fixed assets and becomes a digital commodity strategist. He or she must operate in a reality where computing power is rationed and its price can skyrocket depending on local energy conditions. Projected energy price spikes of up to 79% in technology hubs will force a new discipline on business: algorithmic frugality.

    Physical resistance of the cloud

    Although the term ‘cloud’ suggests something ethereal and intangible, its foundations are heavy, loud and increasingly the target of public opposition. The expansion of technology giants is colliding with the barrier of local politics and ecology. Digital progress is no longer seen as an indisputable good.

    For business, this means a new form of localisation risk. Dependence on one region or supplier coming into conflict with a local community or energy system can become a bottleneck for AI-based product development. This is why more and more companies are attempting to secure operational continuity in the face of growing resentment towards energy-intensive giants.

    Risks of gigantism and opportunities of localism

    The dominance of hyperscale providers brings with it risks that become market opportunities for on-premise proponents. Dependence on a narrow group of suppliers (vendor lock-in) and their vulnerability to local social conflicts or investment blockades – such as those in Wisconsin or Maine – make a diversified in-house infrastructure an insurance policy.

    Opportunities for in-house data centres lie in their ability to adapt where the giants are too sluggish. Local units can deploy innovative heat recovery systems or use niche, green energy sources more quickly, building better relationships with the environment than anonymous, energy-intensive megastructures. This is where ‘edge AI’ is born, processing data where it arises, without the need for costly and slow transfer to global centres.

    Balance as the new overarching strategy

    A sober look at 2031 suggests seeing it not as capitulation but as a new specialisation. The threat to business is not the power of Google or Microsoft, but the lack of an in-house, thoughtful infrastructure strategy. Organisations that indiscriminately abandon their own resources may one day find that access to innovation is rationed by external suppliers.

    The right chess move today is to reinvest in ‘intelligent on-premise’. This is a smaller but denser infrastructure, optimised for a company’s specific, unique algorithms, while generic computing tasks are delegated to the cloud. This duality allows the company to benefit from the enormity of hyperscalers’ investments, while retaining the hard core that makes the company a sovereign player in the market.

  • The war in Iran and cloud pricing – How geopolitics is hitting the IT sector

    The modern global economy resembles an intricate network of interconnected vessels, in which a tremor caused at one point on the globe resonates with unexpected force at the opposite end. While it might seem that the sterile, air-conditioned halls of Europe’s data centres are separated by an infinite distance from the dust and chaos of the Middle East, reality brutally disproves this belief.

    Today’s technology, despite its apparent ethereality, remains deeply rooted in the physicality of raw materials and the stability of trade routes. What is happening in the bottleneck of the Strait of Hormuz is not just a local armed incident, but a direct impetus adjusting the IT sector’s operating margins globally.

    This phenomenon can be described as a geopolitical risk premium. The market for digital services has ceased to respond solely to classic supply and demand mechanisms and has begun to price uncertainty. When the world’s key energy arteries are compromised, the price of technology rises not because the power socket has run out, but because the cost of maintaining the stability of this flow becomes dramatically higher.

    The foundation of any cloud infrastructure is energy. In Europe’s energy mix, natural gas still acts as the marginal price-setting fuel. Any disruption in the Middle East, which is the planet’s energy granary, immediately translates into higher electricity bills, which the operators of large server farms have to pay to keep their computing processes running.

    Often seen as an immaterial entity, the cloud actually ‘breathes’ electricity, and its breath becomes more expensive the more turbulent the regions of fossil fuel extraction.

    The situation is complicated by the fact that modern data centres are facilities designed for absolute reliability. Guaranteeing service availability of more than ninety-nine per cent relies on extensive emergency power systems. These generators, which are the last line of defence against a blackout, run on diesel.

    Rising oil prices therefore directly increase the cost of maintaining operational readiness. These accumulating energy costs cease to be just a spreadsheet item and become a barrier to entry for innovative projects, especially when AI, with its exponentially growing appetite for computing power, is developing rapidly.

    When analysing the supply chain, it is important to recognise that the impact of conflict goes far beyond energy alone. The logistics of IT equipment, including the transport of servers, disk arrays and advanced components, is extremely sensitive to fluctuations in transport fuel prices. However, even more acute, although less visible, is the increase in the cost of associated services.

    Geopolitical instability is forcing logistics and insurance companies to renegotiate rates. Risk premiums in maritime and air transport act as a hidden tax that ultimately burdens the end customer’s wallet.

    A particularly worrying aspect is the fate of critical raw materials such as helium supplied from Qatar. This gas is indispensable in the production of state-of-the-art semiconductors. A transport blockade in the region could paralyse factories in Taiwan, with a consequent return to the days of drastic component shortages.

    From a business perspective, this means having to abandon the ‘just in time’ delivery strategy in favour of building up costly strategic reserves.

    The current balance of power on the world map is forcing a redefinition of digital asset placement strategies. Technological security today is also a geographical analysis. Cloud regions located in countries with high political risk are losing their attractiveness, while countries offering a stable energy mix, based on nuclear or renewables, are becoming new bastions of operational sovereignty.

    A key task for executives therefore becomes optimising cloud costs through advanced FinOps practices. IT financial management is now part of a company’s defence strategy.

    Understanding that every inefficiency in application code or unused server instance is a waste of resources that are becoming scarcer and more expensive is fundamental to modern technology leadership.
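
    To make this concrete, below is a minimal, illustrative sketch of the kind of housekeeping a FinOps review involves: scanning a fleet for instances whose utilisation suggests they are idle, and estimating the spend that could be reclaimed. The instance names, prices and idle threshold are invented for the example; a real review would pull utilisation from the provider’s monitoring export rather than a hard-coded list.

        # Illustrative FinOps-style waste scan (hypothetical data, not any specific cloud API).
        # Flags instances whose average CPU utilisation suggests they are idle, and estimates
        # the monthly spend that could be reclaimed by rightsizing or shutting them down.
        from dataclasses import dataclass

        @dataclass
        class Instance:
            name: str
            monthly_cost_eur: float
            avg_cpu_utilisation: float  # 0.0 - 1.0, e.g. from the provider's monitoring export

        def find_waste(instances, idle_threshold=0.05):
            """Return (idle_instances, reclaimable_monthly_cost)."""
            idle = [i for i in instances if i.avg_cpu_utilisation < idle_threshold]
            return idle, sum(i.monthly_cost_eur for i in idle)

        fleet = [
            Instance("reporting-vm-01", 410.0, 0.02),   # forgotten test machine
            Instance("ai-inference-03", 2900.0, 0.63),  # genuinely busy
            Instance("legacy-batch-07", 780.0, 0.04),   # runs one job a month
        ]
        idle, saving = find_waste(fleet)
        for i in idle:
            print(f"Idle candidate: {i.name} ({i.monthly_cost_eur:.0f} EUR/month)")
        print(f"Potential monthly saving: {saving:.0f} EUR")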

    In conclusion, the conflict in the Strait of Hormuz region represents a test of sorts for the resilience of the global technology sector. It demonstrates emphatically that the digital world is not isolated from tectonic shocks in geopolitics.

    Business must accept the new reality that energy inflation and supply uncertainty are constants in the equation. Adapting to these conditions requires, first and foremost, a deep awareness that cloud stability begins where dependence on uncertain energy sources and threatened trade routes ends.

  • The value of IT M&A. Tech giants invest in AI foundations

    Artificial intelligence (AI) has dominated the technology discourse, making its way from a market curiosity to the most expensive ticket to the global business premier league. The year 2025 closed in the technology, media and telecommunications sector with an astronomical $903 billion spent on mergers and acquisitions. Behind the scenes of the fascination with new applications, however, another, much more brutal game is being played. It is a battle for physical infrastructure, computing power and chips. Those controlling the technological foundations will dictate the terms throughout the digital world in the coming decade.

    The figures from GlobalData’s report leave no illusions. The 76 per cent jump in the value of global TMT deals compared to the previous year is a clear signal that the market has moved into a completely new phase. Generative artificial intelligence has ceased to be regarded as a purely speculative technology. It has become a firm foundation on which key investment decisions of major corporations are now based. Although the attention of the mainstream media is still focused on innovative software and new end-user functionalities, the real battle for influence is taking place at the infrastructure layer.

    Anatomy of a hundred billion dollars

    When analysing the structure of spending, there is a clear shift in emphasis. Deals directly related to artificial intelligence alone took in $117 billion last year, an impressive 125 per cent year-on-year increase. Application software continues to generate a massive volume of capital, reaching a ceiling of $169 billion in almost two hundred deals, but it is the strategic moves on the technology back-end that will define the future balance of power.

    This landscape is being shaped by decisions of unprecedented scale. The record-breaking acquisition of Platform X by x.ai for $45 billion is a classic example of the consolidation of massive data sets needed to train sophisticated language models. Equally important are the powerful minority partnerships that allow the giants to build a back office without immediately causing antitrust authorities alarm. Microsoft and Nvidia’s $15 billion investment in Anthropic and Meta Platforms’ $14 billion acquisition of a 49 per cent stake in Scale AI are strategic moves on the chessboard to secure access to the most innovative algorithms and outstanding engineering talent.

    Bottleneck syndrome and new oil

    Understanding these phenomena requires looking at AI through the lens of physical constraints. Computing power has become the new oil, and leading AI chip companies and state-of-the-art data centres are now the most desirable investment targets. The demand for the resources required to support complex models is growing exponentially, exposing the industry-wide bottleneck syndrome.

    Building infrastructure from scratch is an extremely slow and capital-intensive process. Faced with a limited supply of equipment and an acute shortage of skilled professionals, mergers and acquisitions remain the fastest way to secure resources. The consequence of this race is an increasing oligopolisation of the market. The scale of the required financial outlay means that only the organisations with the deepest pockets remain on the battlefield. Smaller players are inevitably relegated to the role of customers forced to rely on external infrastructure, which in the long term exacerbates the risk of technological dependence on a single supplier for entire sectors of the economy.

    A year of operationalising and seeking returns

    Despite the record results, analysts are predicting sluggish transaction activity in the current year, 2026. This projected stagnation, however, does not mean a retreat from innovation. Rather, it is the natural reaction of corporations to the need to integrate giant acquisitions. The pace of further deals is also bound to be affected by unstable macroeconomic conditions and increasing pressure from regulators, who are looking ever more closely at consolidation in the technology sector.

    The observed decline in merger dynamics is a clear signal of structural change. The market is moving from a phase of aggressive resource aggregation to a phase of operationalisation. The winners of the coming months will not be those making yet another spectacular acquisition, but those organisations that most effectively implement the acquired technologies into their own bloodstream and demonstrate a real return on these astronomical investments.

    Strategic implications for decision-makers

    Access to cutting-edge tools based on artificial intelligence will soon take the form of a fully commercialised service, almost entirely dominated by a narrow range of providers. Understanding this is fundamental to planning long-term operational strategies. The arms race currently taking place at the foundations of infrastructure will ultimately define market standards, pricing models and digital security paradigms for the entire coming decade. Awareness of these processes allows for better risk management and more prudent strategic relationships in a world where physical access to computing power is becoming the most important market advantage.

  • Gigantic investment in Amberg, Germany. New AI data centre for hundreds of millions of euros

    In the heart of Bavaria, in the less than 40,000-strong city of Amberg, a new vision of European technological independence is beginning to crystallise. German start-up Polarise has announced plans to build a data centre dedicated to artificial intelligence, with a capacity of 30 megawatts in its first phase. While this figure may seem modest compared to the campuses of Google or AWS, the strategic importance of the investment goes far beyond dry technical parameters.

    Scheduled to be operational by mid-2027, the project hits a sensitive point in the European economy: the dramatic shortage of sovereign computing infrastructure. According to Bitkom Group, at the end of last year, the total capacity of AI data centres in Germany was around 530 MW. The problem is that the lion’s share of these resources is in the hands of players from outside the continent. In an era of rising geopolitical tensions, uncertainty over tariffs and divergent regulations on content moderation, relying solely on US clouds is becoming a strategic risk for European business.

    Polarise, which currently operates thirteen facilities, plans to eventually expand its Amberg centre to as much as 120 MW. This is a scale that would allow it to enter a league hitherto occupied almost exclusively by global hyperscalers. However, these ambitions require a huge amount of capital. The company suggests that the costs of the first phase will fall within the “triple-digit million euro range”. Significantly, Marc Gazivoda, Polarise’s marketing director, stresses that the project is developing without the support of state subsidies, relying on commercial demand from customers who either rent power or install their own equipment in the facility.

    Local players are beginning to see an opportunity in niches that previously seemed unconquerable. Building an in-house AI facility is not only a matter of prestige, but above all of data security and the stability of the digital service supply chain. If Polarise delivers the project on schedule, Amberg could become a key point on the map of European Industry 4.0, offering an alternative for companies for whom geographical and jurisdictional proximity of servers is critical. The success of this investment will show whether Europe can realistically fight for control of the foundations of its own digital future, or whether it will remain merely an ambitious consumer of other people’s technology.

  • The future of the data centre: Trends in AI infrastructure cooling

    The narrative of technological progress has accustomed the world to operating with metaphors of lightness. Words such as ‘cloud’, ‘data flow’ or ‘virtual intelligence’ suggest the existence of an almost ethereal realm, detached from the weight of matter and the brutal laws of physics. However, on the threshold of 2026, this digital illusion collides painfully with the reality of machine halls. For it turns out that the biggest barrier to the development of an algorithm-based civilisation is not the lack of ingenious code, but the inexorable need to dissipate heat. In an era of hyperscale data centres and processors with power densities beyond previous standards, thermodynamics is becoming a key element of financial strategy and a new currency in the global race for supremacy in the AI sector.

    The paradox of digital heat

    The critical infrastructure cooling market is currently undergoing a transformation, the scale of which is reflected in hard economic data. The projected increase in the value of this sector from $19.5 billion in 2025 to almost $23 billion this year, with a sustained compound annual growth rate of 17 per cent, is a clear signal to investors. This is a surge triggered by hardware evolution. The traditional forced-air-based methods that have underpinned precision air conditioning for decades are beginning to resemble trying to cool a jet engine with a paper fan.

    The reason for this is mundane, yet technologically fundamental. Modern graphics accelerators and neural processing units, which are the backbone of large language models, generate temperatures at which air is no longer an effective thermal energy transport medium. As a result, the industry is facing the need to redefine the IT facility architecture itself. This challenge is primarily concerned with the economic viability of operating equipment, the price of which often matches the value of luxury real estate.

    The new geography of cold: The European perspective and the ESG imperative

    In the European economic context, the issue of thermal management takes on an additional regulatory dimension. While in other regions of the world the priority remains pure computing power, Europe is building its advantage on efficiency and responsibility. The Energy Efficiency Directive (EED) and increasingly stringent ESG reporting requirements are making the power usage effectiveness (PUE) indicator scarcely less important to corporate boards than quarterly results. The year 2024, recorded as the warmest since observations began, has made decision-makers realise that data centre thermal resilience is integral to operational risk management.

    A fascinating shift is under way in how the data centre is perceived in the urban fabric. Instead of insulated, energy-intensive monoliths, facilities are emerging that act as ‘digital heat plants’. In the Nordic countries or France, thanks to innovations by market leaders, waste heat from servers is no longer treated as a nuisance by-product, but as a valuable commodity. Integrating IT infrastructure with district heating networks makes it possible to recover energy and power thousands of households. This industrial symbiosis not only improves a company’s environmental profile, but generates tangible economic benefits, changing the cost structure of cooling from a purely passive position to a potentially revenue-generating one.
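
    For readers less familiar with the metrics involved, the short sketch below shows how PUE and its heat-reuse counterpart, energy reuse effectiveness (ERE), are typically calculated. The energy figures are purely illustrative assumptions, not data from any particular facility.

        # Minimal sketch of the two efficiency metrics mentioned above.
        # PUE = total facility energy / IT equipment energy (1.0 is the theoretical ideal).
        # ERE additionally credits heat exported to e.g. a district heating network:
        # ERE = (total facility energy - reused energy) / IT equipment energy.
        # All figures below are illustrative assumptions.

        def pue(total_facility_kwh: float, it_kwh: float) -> float:
            return total_facility_kwh / it_kwh

        def ere(total_facility_kwh: float, it_kwh: float, reused_kwh: float) -> float:
            return (total_facility_kwh - reused_kwh) / it_kwh

        it_load = 10_000.0          # kWh consumed by the servers in a given period
        facility_total = 13_000.0   # kWh including cooling, UPS losses, lighting
        heat_reused = 4_000.0       # kWh of waste heat sold to a district heating network

        print(f"PUE: {pue(facility_total, it_load):.2f}")                # 1.30
        print(f"ERE: {ere(facility_total, it_load, heat_reused):.2f}")   # 0.90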

    Industrial convergence and the twilight of fans

    One of the most telling pieces of evidence of the maturity and strategic importance of the cooling market is the entry into the game of players traditionally associated with the mining and petrochemical sectors. The fact that ExxonMobil Corporation is launching advanced dielectric fluids for immersion cooling is evidence of a profound convergence of industries. As the energy giant begins to design coolants for processors, it becomes clear that liquid cooling has become the new corporate paradigm.

    Immersion cooling technology, which involves completely immersing electronics in a chemically inert fluid, offers benefits that no CFO can ignore. The ability to reduce the total cost of ownership by nearly 40 per cent is due to the radical simplification of the infrastructure. Doing away with huge chillers, complex air supply systems and costly air-cleanliness regimes allows for a drastic increase in server packing density. In this new reality, a smaller server room footprint can offer many times more computing power, which, in the face of rising land prices and limited power allocations in hubs such as Frankfurt and London, is the ‘to be or not to be’ of many investments.

    Market consolidation, manifested for example by Vertiv’s acquisition of CoolTera, confirms the trend towards comprehensiveness. Today’s business is looking for integrated thermal management systems that are able to adapt to the changing load generated by AI in real time. Intelligent thermal monitoring allows resources to be dynamically redeployed and failures to be prevented before safety systems register that critical parameters have been exceeded.

    Cool calculation: ROI hidden in the flows

    Analysing the return on investment in modern cooling systems requires going beyond the simple time horizon of one fiscal year. While the capital outlay for liquid technology may seem higher than traditional solutions, its impact on the life of IT equipment cannot be overstated. The absence of vibration generated by thousands of fans, the elimination of humidity-induced corrosion and temperature stability mean that expensive silicon chips can run longer and more efficiently. Extending the lifecycle of the infrastructure becomes an important strategic asset, especially when the availability of the latest chips is limited by supply chains.

    Attention must also be paid to the operational aspect of the computing density itself. Modern data centres designed for AI must be ready to handle server racks consuming up to 100 kW of power. With such parameters, traditional air cooling simply does not physically fit into the machine hall – it would require ventilation ducts with cross-sections that would not allow for efficient space management. Liquid cooling therefore allows projects to be realised that would have been technically unfeasible with the old model.

    The intellect needs peace and quiet

    It is safe to say that the future of artificial intelligence is being forged not only in software labs, but above all in the silence of liquid-immersed server rooms. The market, which is expected to be worth close to $43 billion by 2030, is no longer just a back-office for the IT industry, but an important accelerator of it.

    There is a lesson here about the need to re-evaluate infrastructure foundations from the perspective of business leaders. The most powerful digital intellect needs conditions to work that only advanced thermal engineering can provide. The cool serenity of the processors becomes the guarantor of business continuity and the key to profitability.

    The question therefore remains as to how well prepared current infrastructure strategies are for the coming years. Do they take into account the fact that, in a digital world, the highest form of sophistication is now the ability to stay cool in the hottest moments of a technological revolution? The answer to this question will define the balance of power in the economy of the coming decade.

  • 25% lower TCO: The new standard for data centre construction from Vertiv

    The traditional model of data centre construction – sequential, dependent on the vagaries of weather and local availability of specialists – is no longer keeping pace with the pace of investment by hyperscalers. Vertiv, the digital infrastructure giant, is challenging this status quo with the introduction of the Vertiv OneCore platform. This signals that the industry is abandoning the ‘real estate project’ paradigm in favour of an ‘integrated industrial system’.

    The key to this transformation lies in the transition from static BIM modelling to high-fidelity dynamic digital twins. Using SimReady resources and the OpenUSD format, Vertiv creates an ecosystem where the digital design and the physical structure are an inseparable whole. This allows collisions between mechanical and electrical systems to be simulated in the virtual world before a single excavator arrives on site.

    Vertiv’s collaboration with Hut 8 Corp. shows what this new coupling of power and infrastructure looks like in practice. Instead of building unique facilities, companies are implementing repetitive, modular functional blocks. This strategy is delivering tangible operational benefits:

    • Speed of monetisation: Prefabrication and factory testing reduce the time to commission a facility by up to half.
    • Space efficiency: Integrated cooling and power systems allow up to 30% of space to be recovered, which directly translates into higher revenue per square metre.
    • Cost optimisation: Moving work from the construction site to a controlled production environment reduces the total cost of ownership (TCO) by nearly a quarter.

    For operators or colocation providers, OneCore solves the problem of ‘industry silos’. Instead of fighting for space between installers of different systems, they get an interoperable product, ready for power densities of up to 600 kW per cabinet.

    As Giordano Albertazzi, CEO of Vertiv, points out, this is not a departure from engineering rigour, but its evolution towards convergence. Deployment predictability becomes the most valuable currency as sovereign infrastructure and AI factories become the backbone of the economy.

  • Market signal: CMR technology still wins economically in the 32 TB segment

    For months, the storage industry has been touting HAMR technology and the Mozaic 3+ platform as the route to revolutionising data density. Meanwhile, Seagate is taking a step that may seem conservative at first glance, but is in fact a pragmatic response to the current needs of data centres and business. The US giant has launched three new 32 TB hard drive models, based entirely on its proven Conventional Magnetic Recording (CMR). The data storage density in which manufacturers are now competing is hugely important from a business perspective. With this direction, data growth does not necessarily mean expanding data centre space, but merely upgrading the current infrastructure to one with higher density. Such a direction can significantly reduce data storage and processing costs, as the amount of data per square metre increases.
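
    A back-of-envelope calculation illustrates the density argument. The chassis and rack figures below are assumptions chosen only to show the mechanism, not Seagate specifications.

        # Rough sketch of the density argument: how much raw capacity fits in one rack when
        # drives are swapped for higher-capacity models. Drive counts, chassis size and rack
        # layout are assumptions for illustration, not vendor specifications.

        def rack_capacity_pb(drive_tb: float, drives_per_chassis: int = 60,
                             chassis_per_rack: int = 9) -> float:
            """Raw capacity of one rack in petabytes (1 PB = 1000 TB here)."""
            return drive_tb * drives_per_chassis * chassis_per_rack / 1000

        for capacity in (16, 20, 32):
            print(f"{capacity} TB drives -> {rack_capacity_pb(capacity):.2f} PB per rack")
        # 16 TB drives ->  8.64 PB per rack
        # 32 TB drives -> 17.28 PB per rack: double the data on the same floor space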

    The decision to use conventional technology, at a time when the company already has next-generation solutions capable of producing media in excess of 30 TB, is a signal to the market as a whole. It suggests that while HAMR is the undisputed future in the data-packing race, CMR technology still offers a better balance between production cost, reliability and volume availability. Seagate is thus proving that the older standard has not yet had its final say and still has the potential to scale.

    The new drives reinforce the manufacturer’s key product lines, precisely targeting various B2B sectors. The Exos model is designed for servers in data centres, the SkyHawk AI is optimised for advanced video surveillance, while the IronWolf Pro is dedicated to NAS systems. On the technical side, Seagate engineers have opted for unification: all three models are 3.5-inch, 7,200 rpm SATA drives with 512 MB of cache. The manufacturer rates them for a workload of 550 TB per year and a mean time between failures (MTBF) of 2.5 million hours, which is reflected in the five-year warranty.

    The pricing policy clearly differentiates the segments. The most affordable SkyHawk AI model is priced at USD 699.99. Enterprises will pay USD 729.99 for the Exos server variant, while the highest-positioned IronWolf Pro on this list involves an expenditure of USD 849.99. This launch demonstrates that the priority for business customers today remains capacity delivered in a proven architecture, and not necessarily a technology race at any price.

  • Data centre market against the wall. Lack of power hinders digital transformation

    As recently as two years ago, at the height of AI fever in 2024, there was only one question being asked in boardrooms: ‘Where do we get Nvidia processors?’ Chip availability was the bottleneck that dictated the pace of technological development. Today, in January 2026, the situation has changed dramatically. Hardware supply chains have cleared, and distributors’ warehouses are full of the latest Blackwell and Rubin chips. Yet new data centre investment is stalling.

    The question of 2026 is no longer “Do you have the equipment?”, but “Where will you connect it?”. Power availability has replaced silicon availability as the main operational risk factor. We are entering an era where the success of an AI project is determined by old, analogue power infrastructure rather than by digital code.

    A new bottleneck. The geopolitics of the socket

    The average waiting time for a new power connection of more than 10 MW in Europe’s key hubs has lengthened from 18 months in 2023 to a shocking 4-5 years today. This means that a decision to build a server room taken today will only materialise operationally around 2030-2031. For the technology industry, this is an eternity.

    The problem hits the so-called FLAP-D market (Frankfurt, London, Amsterdam, Paris, Dublin) hardest. These traditional data capitals are energy saturated. Grid operators in the Netherlands or Ireland are refusing to issue new connection conditions, citing the risk of destabilising the national energy systems.

    In this landscape, Warsaw – emerging in recent years as a key hub for Central and Eastern Europe – has become a victim of its own success. Investments by giants such as Google, Microsoft or local cloud operators have rapidly consumed the available power reserves in the Warsaw agglomeration. Polskie Sieci Elektroenergetyczne (PSE) is facing a physical challenge: the networks in the capital area are not able to accommodate further gigawatt loads without a thorough modernisation that will take years. The result? Investors are forced to look for alternative locations – in the north of Poland, where offshore wind power is within reach, or in southern Europe, where solar generation is abundant.

    AI physics: Why do old server rooms ‘melt cables’?

    The energy crisis also has a second, technical dimension. Even if a company has space in a server room built in 2020, it often cannot install modern AI infrastructure there. This is due to a drastic change in the so-called power density (rack density).

    In traditional IT, the standard was 5-8 kW of power consumption per server rack. Power and cooling systems were designed for these values. Today’s AI clusters, based on the Nvidia Blackwell architecture or successors, require between 50 and even 100 kW per rack.

    Trying to put such infrastructure into an ‘old’ Data Centre (from 5 years ago) ends in failure. The building cannot deliver that many amps in one place and, more importantly, it cannot dissipate the heat generated. Trying to cool a 100 kW cabinet with traditional air (precision air conditioning) is akin to trying to cool a racing engine with an office fan. It is physically impossible and uneconomic.
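
    The physics behind that comparison is simple: the heat a coolant can carry away equals mass flow times specific heat times temperature rise. The sketch below works through a 100 kW rack with an assumed 10 K coolant temperature rise, chosen purely for illustration.

        # Rough physics behind the "racing engine vs office fan" comparison: how much air or
        # water must flow through a 100 kW rack to carry the heat away (Q = m_dot * cp * dT).
        # The 10 K temperature rise is an assumption used only for illustration.

        RACK_POWER_W = 100_000        # 100 kW of heat to remove
        DELTA_T = 10                  # K, allowed coolant temperature rise

        CP_AIR = 1005                 # J/(kg*K)
        RHO_AIR = 1.2                 # kg/m^3
        CP_WATER = 4186               # J/(kg*K)

        air_mass_flow = RACK_POWER_W / (CP_AIR * DELTA_T)        # kg/s
        air_volume_flow = air_mass_flow / RHO_AIR                # m^3/s
        water_mass_flow = RACK_POWER_W / (CP_WATER * DELTA_T)    # kg/s, roughly litres per second

        print(f"Air needed:   {air_volume_flow * 3600:,.0f} m^3/h")    # ~29,900 m^3/h per rack
        print(f"Water needed: {water_mass_flow * 60:.0f} litres/min")  # ~143 litres/min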

    The cooling revolution: The end of the air era?

    Consequently, 2026 is the moment of the ultimate triumph of Liquid Cooling technology. What was until recently the domain of overclocking enthusiasts and cryptocurrency miners has become the corporate standard.

    Every new Hyperscale development commissioned this year is being designed to a hybrid or all-liquid standard. Two technologies dominate:

    • Direct-to-Chip (DLC): the cooling liquid is piped directly to water blocks on the CPUs and GPUs. This solution has become a warranty requirement for the latest servers.
    • Immersion Cooling: entire servers are submerged in tanks filled with a special dielectric (non-conductive) fluid.

    This change is driven not only by physics, but also by EU regulations (EED – Energy Efficiency Directive). Liquid cooling is much more energy efficient and, moreover, allows heat recovery. The fluid leaving the server has a temperature of 60-70°C, which allows the Data Centre to be plugged directly into the municipal district heating network. In 2026, server rooms become de facto digital combined heat and power (CHP) plants, heating office buildings and housing estates, which is key to obtaining environmental permits.
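
    A rough, illustrative estimate shows why municipalities take an interest. The IT load, heat-recovery fraction and per-household demand below are assumptions made for the sake of the example, not measurements from any facility.

        # Illustrative estimate of how much usable heat a liquid-cooled facility could feed
        # into a district heating network. IT load, recovery fraction and per-household demand
        # are assumptions for the example only.

        IT_LOAD_MW = 10            # average IT load of the facility
        HOURS_PER_YEAR = 8760
        RECOVERY_FRACTION = 0.7    # share of server heat actually captured at 60-70 C
        HOUSEHOLD_HEAT_MWH = 15    # assumed annual heat demand of one household

        recovered_mwh = IT_LOAD_MW * HOURS_PER_YEAR * RECOVERY_FRACTION
        print(f"Recoverable heat: {recovered_mwh:,.0f} MWh per year")          # ~61,000 MWh
        print(f"Roughly {recovered_mwh / HOUSEHOLD_HEAT_MWH:,.0f} households heated")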

    The economics of scarcity: Power Banking and the atom

    The shortage of capacity has triggered a sharp rise in prices. Rates for colocation (renting space for servers) in Warsaw and Frankfurt have risen by 30-40% year-on-year. Customers are no longer negotiating prices; they are bidding for who will be the first to sign a contract for ‘powered racks’.

    The strategy of developers has also changed. In the real estate market, the phenomenon of ‘Power Banking’ is making waves. Investment funds are buying up old, bankrupt factories, steelworks or industrial plants. They are not interested in the buildings (often destined for demolition), but in the active, high power allocations assigned to the plot. A ‘power right’ is bought to put up containers with AI servers on the site of a former foundry.

    At the top of the investment pyramid, we see a shift towards nuclear power. Following in the footsteps of Microsoft and Amazon (high-profile 2024/2025 deals), European players are also looking to power their campuses from small modular reactors (SMRs) or via long-term power purchase agreements (PPAs) with existing nuclear power plants. The IT industry has realised that renewables (wind and solar) are too unstable for AI, which has to ‘learn’ 24/7 with a constant load.

    A new indicator of success – Time-to-Power

    For Chief Information Officers (CIOs) planning strategies for 2026 and 2027, there is one key lesson: Hardware is easy, electricity is hard.

    The traditional model, in which servers are ordered first and then space is sought for them, is dead. Today, the process needs to be reversed. Booking Data Centre capacity 12-24 months in advance is a must. The Time-to-Market (time to deploy a product) indicator has been replaced by Time-to-Power (time to get power).

    The digital revolution today depends 100 per cent on analogue infrastructure. Without massive investment in transmission networks and new generation sources, artificial intelligence in Europe will hit a glass ceiling – not for lack of data or algorithms, but for the mundane lack of a socket to plug it into.

  • Record $61bn data centre market. AI is driving a historic wave of acquisitions

    The arms race in the area of artificial intelligence is no longer the domain of software alone. The latest figures show that the real battle is now being fought over concrete and silicon. November brought a historic record in the data centre M&A market, confirming that the appetite for computing power is still far from being satisfied.

    According to the latest analysis by S&P Global Market Intelligence, the value of deals in the data centre sector has reached $61 billion this year, beating last year’s record ($60.81 billion). Significantly, the barrier was broken even before the end of November, driven by more than 100 key deals in the global market. These numbers are not just statistics – they are a clear indication that the infrastructure market is undergoing a structural shift, with access to physical server space becoming as valuable a currency as the AI algorithms themselves.

    Why it matters

    Behind the surge are primarily technology giants and so-called hyperscalers, who are reserving billions of dollars to expand the infrastructure needed to train and operate AI models. It is companies related to artificial intelligence that have been responsible for the lion’s share of this year’s increases in US stock markets. Despite the enthusiasm, analysts are increasingly pointing to growing risks: high asset valuations and debt-financed investments raise questions about how quickly these investments will translate into real operating profits.

    Seller’s market

    The geographic distribution of capital leaves no illusions about the dominance of North America. Since 2019, the value of transactions in the US and Canada has totalled around $160 billion. By comparison, the Asia-Pacific region has attracted close to $40 billion and Europe $24.2 billion in that time.

    A key piece of this puzzle is private equity funds. Investors are tempted by the attractive risk-to-reward profile that data centres offer. This situation has led to a peculiar market impasse: funds are keen to buy, but reluctant to sell. This creates an environment of scarcity, in which the supply of high-quality assets is severely limited, which further inflates the valuations of the facilities available on the market.

    As a result, 2025 ends with a clear message to the industry: owning your own infrastructure or secured colocation contracts is becoming a key competitive advantage, and the barrier to entry in the data centre market has never been higher.

  • Data gives you an edge, but requires control. 8 predictions for the enterprise market

    Just a decade ago, the definition of a ‘secure business’ was simple: a robust firewall, up-to-date anti-virus and regular backup. Today, in the age of hybrid environments and ubiquitous artificial intelligence, this approach sounds like an archaism. Data has given businesses superpowers in the form of a competitive advantage, but it has also brought unprecedented operational complexity to IT departments. Looking at technology predictions for 2026, it is clear that we are entering an era where ‘digital sovereignty’ is becoming the new currency and speed is the only acceptable security parameter.

    Technology has ceased to be magic and has become critical logistics. If we look at what lies ahead over the next two years, the conclusions are clear: traditional cyber security is not enough. The arms race has moved to the infrastructure level, and it will be won by those who understand that the geographical boundaries of data matter, and that response times count more than the height of defence walls.

    Speed is the new benchmark

    For years, we have lived in a paradigm of perimeter protection – building a fortress to which no unauthorised person has access. The predictions for 2026 put this approach to a brutal test. Cyber threats have evolved. These are no longer isolated ransomware incidents involving ‘just’ disk encryption. We are dealing with complex operations in which data is not only locked, but above all quietly exfiltrated and then sold on the black market or used for blackmail.

    In such a reality, a company’s resilience is not measured by whether an attack can be avoided, but by how quickly the organisation is able to recover from an incident. Traditional data recovery from tapes or low-cost archive repositories becomes an unacceptable bottleneck.

    Speed is becoming the new standard. Anomaly detection must happen in real time and isolation of infected resources must happen automatically. Furthermore, the concept of ‘clean data recovery’ is becoming crucial. In the future, intelligent infrastructures will have to guarantee that the target state to which we return after a disaster is absolutely free of malicious code. This requires integrating security systems directly into the storage layer, rather than treating them as an external overlay.
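
    As a simple illustration of what detection ‘in the storage layer’ can mean, the sketch below flags a day whose volume of changed data is a statistical outlier against the recent baseline – the kind of footprint a mass-encryption event leaves behind. The figures and the three-sigma threshold are assumptions for the example, not a description of any vendor’s product.

        # Minimal sketch of storage-layer anomaly detection: flag a day whose changed-data
        # volume deviates sharply from recent history (mass encryption rewrites far more
        # blocks than normal daily churn). Purely illustrative thresholds and figures.
        from statistics import mean, stdev

        def is_anomalous(history_gb: list[float], today_gb: float, z_threshold: float = 3.0) -> bool:
            """Return True if today's changed-data volume is a statistical outlier."""
            mu, sigma = mean(history_gb), stdev(history_gb)
            if sigma == 0:
                return today_gb > mu
            return (today_gb - mu) / sigma > z_threshold

        daily_changed_gb = [120, 135, 110, 128, 140, 122, 131]   # a normal week of churn
        print(is_anomalous(daily_changed_gb, 133))    # False - ordinary day
        print(is_anomalous(daily_changed_gb, 2400))   # True  - candidate for automatic isolation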

    Geopolitics enters the server room

    Not so long ago, the cloud strategy of many companies was based on simple economic calculus and flexibility, often ignoring the physical location of bits and bytes. Those days are irrevocably passing. Governments around the world, concerned for national security and the privacy of citizens, are tightening regulations on where data can be stored and processed.

    Therefore, one of the key trends by 2026 will be data sovereignty. Companies and technology partners must respond by building environments that provide privacy without inhibiting innovation. Sovereign clouds and local hybrid environments are the market response. This is not about a complete retreat from global hyperscalers, but about managing risk wisely.

    Herein lies a huge opportunity for modern data platforms. They are designed to take the burden of bureaucracy off the shoulders of IT departments. Such platforms are supposed to automate encryption, access policy management and regulatory compliance. This allows engineers to focus on creating business value, rather than wasting time manually aligning systems with regulatory requirements. Sovereignty ceases to be an obstacle and becomes part of the architecture.

    The race against time and quantum

    Looking to the future, it is impossible to ignore threats that seem distant today but could become standard in 2026. We are talking about post-quantum cryptography (PQC). Although quantum computers capable of breaking current security measures are still years away, data that is stolen today could be decrypted in a few years (the so-called ‘harvest now, decrypt later’ attack).

    Therefore, the smart infrastructure of the future must integrate PQC standards now. Security cannot be a service tacked on at the end of the implementation process. It must be built into the DNA of data storage systems – from behavioural anomaly detection at the record level to advanced encryption. Only this approach will give companies peace of mind in the face of evolving threat models.

    Trust as a currency

    All of the above – speed, sovereignty, security – converge on one point: artificial intelligence. The year 2026 is when AI will cease to be just a content generator and will start to operate in the model of Agentic AI – autonomous systems that make decisions.

    However, for AI to be effective and secure, it must be trustworthy. Most AI initiatives fail not because of poor language models, but because of poor quality databases and lack of control over them. If a company is unsure who has accessed the training data, whether it has been manipulated and whether it complies with regulations, implementing AI becomes Russian roulette.

    Therefore, comprehensive data management (Data Governance) comes to the fore. Access control, data lifecycle tracking (data lineage) and integrity are foundations without which even the most advanced algorithm will be useless.

    The end of silos

    The path to 2026 is through understanding that artificial intelligence, cloud, cyber resilience and modern infrastructure are no longer separate areas. They are interconnected vessels.

    Cloud strategies are shifting towards workload-optimised platforms. Instead of managing separate consoles, companies will rely on unified platforms to decide where a given task will perform best – whether in the public cloud, a sovereign cloud or a local data centre.

    In the coming years, those who bet on an intelligent data infrastructure will win. One that ensures speed of recovery from attack, guarantees sovereignty in the face of regulation and provides the fuel for trustworthy artificial intelligence. It is time to stop treating infrastructure as a cost and start seeing it as the foundation of modern business.

  • A cold shower for the industry. Why won’t traditional data centres survive the AI boom?

    The rapid adoption of artificial intelligence has ceased to be just a software trend and has become a tough engineering and logistical challenge. According to the latest Data Centre Construction Cost Index 2025 report by Turner & Townsend, the data centre market is on the threshold of a structural change. By 2027, AI-optimised facilities are expected to already account for 28 per cent of the global market, forcing a radical overhaul of existing construction and energy standards.

    The scale of the transformation is evident in order books. As many as three-quarters of the companies surveyed are currently running AI infrastructure projects, and nearly half of the respondents predict that these workloads will dominate their operations in just two years. This paradigm shift entails a move away from traditional air cooling. Although conventional racks are still the norm, the majority of the industry, 53 per cent, indicate liquid cooling as the preferred standard of the future. This technology, although currently costing 7 to 10 per cent more, is becoming essential with the power density required by modern processors. Significantly, closed water systems are growing in popularity, with their low resource consumption making it easier to obtain environmental permits in regions with restrictive water policies.

    However, the industry is facing bottlenecks that could hamper this growth. Although supply chains for key components such as generators have temporarily regained stability, confidence in the timeliness of suppliers in the 2026 outlook remains worryingly low. The skills gap is proving even more challenging. Only 17 per cent of companies claim to have sufficient expertise in implementing advanced cooling systems, which, when set against construction costs per watt projected to rise by 5.5 per cent in 2025, creates a risky mix. The most expensive markets consistently remain Tokyo, Singapore and Zurich.

    However, access to energy is becoming the ultimate arbiter of where new developments are located. Power grid capacity is currently the biggest barrier for nearly half of developers. Data centres have to compete with industry and the residential sector for connection power, forcing operators to find creative solutions. Microgrids, on-site power generation and experiments with hydrogen are gaining prominence. As Chris Gorthy of DPR Construction points out, it is the availability of power today that dictates where and when the first shovel will be driven in, forcing the sector to balance the growing demand for data with the need to minimise environmental impact.

  • The myth of the cheap archive. Why are the hidden costs of Tiering draining IT budgets?

    For almost two decades, cloud architecture has been based on one seemingly inviolable dogma: data that is rarely used should be ‘frozen’. The Cloud Object Storage model, shaped in the mid-2000s by Amazon (S3), defined the standard for thinking about infrastructure costs. But in 2025, in the age of real-time analytics, AI and rigorous compliance, this logic is beginning to crack. What looks like a saving in Excel becomes an unpredictable cost trap in operational practice.

    Only a decade ago, dividing data into classes (Hot, Warm, Cold/Glacier) was not only logical, but necessary. Storage media were expensive and bandwidth was limited. Offloading rarely touched data to cheaper, slower storage tiers (Tiering) promised CFOs and CIOs clear savings. The principle was simple: you pay a lot for what you use now, and pennies for what ‘lies and gathers dust’.

    On paper, this approach still seems rational. However, the reality of modern IT puts the model to a brutal test. Infrastructure teams are increasingly struggling with complex lifecycle policies, operational delays and – most importantly – costs that cannot be budgeted for annually. So is the era of Tiering coming to an end?

    Logic of the 2000s versus digital reality

    Data tiering had a strong economic mandate at a time when data was static. The archive existed to be forgotten. Today, however, data has become fuel. The rise of machine learning, Big Data analytics and the need for real-time reporting has made the concept of ‘rarely used data’ fluid.

    A file that has not been opened for 180 days can become critical from one minute to the next – for a predictive algorithm, an audit process or an urgent GDPR request. In the classic S3 model, IT systems then hit a wall. The data has been ‘pushed out’ to a low-cost tier according to the Lifecycle Management policy, and immediate restoration is impossible or extremely expensive.

    The huge drop in the price of storage itself in recent years has meant that the difference in price per TB between hot and cold tiering is no longer the sole determinant of cost-effectiveness. In the new economic calculus, access costs, rather than resting costs, are becoming crucial.

    The maths that hurts – the hidden costs of ‘cold’ data

    Many IT managers fall into the trap of looking solely at the price of storage (storage at rest). However, this is only the tip of the TCO (Total Cost of Ownership) iceberg. Traditional Tiering is laden with a number of charges, which are written in the fine print in cloud providers’ price lists, and which hit companies when they least expect it.

    The main problem is a lack of transparency. Companies often omit from their calculations:

    • Retrieval Fees: The cost of ‘retrieving’ data from an archive can be many times the annual cost of storing it.
    • Minimum retention period: Many ‘low-cost’ storage classes enforce the retention of an object for, say, 90 or 180 days. Deleting or moving it earlier incurs a financial penalty.
    • Exit costs (Egress Fees): Data transfer outside the provider’s cloud.

    The scenario is repetitive: a company moves terabytes of legacy client data to a ‘cold’ storage class to save budget. Months later, the legal department orders an audit or historical review. The IT department has to ‘unfreeze’ these resources. Suddenly, the process generates an invoice that ‘eats up’ all the savings previously generated and further blocks the budget for new investments. Cost unpredictability becomes enemy number one for business stability.
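
    To see how quickly the fine print adds up, here is a minimal back-of-the-envelope calculation in Python. All rates are illustrative assumptions loosely modelled on publicly listed archive-class pricing, not any specific provider’s price list.

      # Illustrative TCO comparison: "cold" archive tier vs "always-hot" storage.
      # All rates below are assumptions made for this example, not real price-list values.
      GB_PER_TB = 1024
      data_tb = 50                      # size of the archived data set, in TB
      months = 12                       # budgeting horizon

      hot_gb_month = 0.020              # assumed hot-tier price, USD per GB-month
      cold_gb_month = 0.004             # assumed archive-tier price, USD per GB-month
      retrieval_per_gb = 0.030          # assumed archive retrieval fee, USD per GB
      egress_per_gb = 0.090             # assumed egress fee, USD per GB leaving the cloud

      gb = data_tb * GB_PER_TB
      hot_cost = gb * hot_gb_month * months
      cold_at_rest = gb * cold_gb_month * months

      # One unplanned event: a legal audit forces a full restore and a partial export.
      restore_cost = gb * retrieval_per_gb
      egress_cost = 0.2 * gb * egress_per_gb    # assume 20% of the data must leave the cloud
      cold_total = cold_at_rest + restore_cost + egress_cost

      print(f"Hot tier, 12 months:            ${hot_cost:,.0f}")
      print(f"Cold tier, at rest only:        ${cold_at_rest:,.0f}")
      print(f"Cold tier + one forced restore: ${cold_total:,.0f}")

    With these assumptions, a single forced restore consumes roughly a quarter of the year’s nominal saving; a few such events, or the use of expedited retrieval (typically priced several times higher), and the gap to the hot tier largely disappears – and none of it can be foreseen at budgeting time.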

    Time is money – operational paralysis

    The financial aspect is one thing, but Tiering also introduces operational risk. In the case of deep archives (Deep Archive type), the time to restore access to data is calculated in hours and sometimes days.

    For modern applications that expect millisecond responses, this is unacceptable. When an analytics tool or reporting system encounters archived data, workflows are interrupted. There are time-outs, error messages and business processes come to a standstill. In time-critical environments – such as banking, e-commerce or manufacturing – such a delay can mean real reputational and financial losses.

    In addition, data lifecycle management (Lifecycle Policies) is becoming increasingly complex. The ‘move to archive after 30 days without access’ rule sounds reasonable, but in practice it is a blunt tool. IT teams waste hundreds of hours configuring exceptions, monitoring rules and manually restoring data at the request of the business. Instead of dealing with innovation, administrators become custodians of the digital archive, fighting against a system that was supposed to make their job easier.
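
    For context, this is roughly what such a rule looks like in practice – a minimal sketch using boto3 against an S3-compatible API, in which the bucket name, prefix, storage classes and retention periods are chosen purely for illustration.

      import boto3

      # Sketch of a typical lifecycle rule: objects under a given prefix are pushed
      # to an archive class after 30 days and expired after roughly seven years.
      # Bucket name, prefix and storage classes are illustrative assumptions.
      s3 = boto3.client("s3")

      s3.put_bucket_lifecycle_configuration(
          Bucket="example-client-records",
          LifecycleConfiguration={
              "Rules": [
                  {
                      "ID": "archive-after-30-days",
                      "Filter": {"Prefix": "legacy/"},
                      "Status": "Enabled",
                      "Transitions": [
                          {"Days": 30, "StorageClass": "GLACIER"},
                          {"Days": 180, "StorageClass": "DEEP_ARCHIVE"},
                      ],
                      "Expiration": {"Days": 2555},
                  }
              ]
          },
      )

    Every exception the business asks for means another rule, another prefix convention or another tag to maintain and monitor – which is exactly where the hours described above disappear.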

    The “Always-Hot” trend – predictability instead of gambling

    In response to these challenges, a new trend is crystallising in the storage market: a move away from class-based logic towards Always-Hot architectures.

    More and more IT decision-makers are questioning the relevance of Tiering. Instead of juggling data between different tiers, companies are opting for models in which all objects – regardless of age or frequency of use – are maintained in instant access mode.

    The advantages of this approach go beyond simple convenience:

    1. Financial predictability: in the Always-Hot model, the variable costs of data recovery disappear. The company pays for capacity and transfer, but is not penalised for wanting to use its own information. Budgeting becomes simple and precise.

    2. Efficiency: the absence of ‘unfreezing’ processes means that every application, script or analyst has access to the full spectrum of data at the same time.

    3. Simplified architecture: eliminating complex retention and portability rules frees up human resources.

    Security and Compliance in a flat structure

    A data warehouse that makes everything available instantly, however, requires a different security philosophy. Classic S3 mechanisms, such as ACLs (Access Control Lists) or policies at the individual bucket level, become unmanageable and confusing at large scale.

    Modern Object Storage systems rely on IAM (Identity and Access Management). Since data is always available (“hot”), access control must be surgical. Rights are assigned to the identity of the user or application, rather than being “stuck” to folders. This allows precise identification of who can read, write or delete objects, which is crucial in multi-tenancy environments.
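
    As a rough sketch of the difference, the snippet below attaches an identity-scoped policy to an application user with boto3, assuming an AWS-style IAM API; the user, bucket and policy names are hypothetical.

      import json
      import boto3

      iam = boto3.client("iam")

      # Rights follow the identity, not the bucket: this application user may read and
      # list a single reporting prefix and nothing else. All names are illustrative.
      policy = {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Action": ["s3:GetObject", "s3:ListBucket"],
                  "Resource": [
                      "arn:aws:s3:::example-analytics-data",
                      "arn:aws:s3:::example-analytics-data/reports/*",
                  ],
              }
          ],
      }

      iam.put_user_policy(
          UserName="reporting-service",
          PolicyName="read-reports-only",
          PolicyDocument=json.dumps(policy),
      )

    Access can then be audited and revoked per identity, regardless of where the objects physically live, which is what keeps the model manageable in multi-tenant environments.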

    The legal aspect is equally important. Compliance with the GDPR, European data sovereignty and protection from extraterritorial regulations (such as the US CLOUD Act) are priorities today. Companies need to know where their data is and be confident that they can permanently delete or export it at the request of a regulator. In a tiered model, where data is spread across different archive classes, implementing the ‘right to be forgotten’ can be technically difficult and time-consuming. A flat architecture (with no layers) drastically simplifies auditability and compliance management.

    Resilience through accessibility

    Looking to the future, it is clear that data volumes will grow exponentially, but tolerance for access delays will decrease. Companies cannot afford to hold their digital assets hostage to complex pricing and slow archive drives.

    The Always-Hot approach fits into a broader strategy of business resilience. It is a model that prioritises business continuity and responsiveness over theoretical savings on storage media. The classic tiering model, although it deserves credit for the growth of the cloud, has reached its limits in many scenarios. Its complexity and hidden costs make it a relic of a previous IT era.

    For CIOs and system architects, the lesson is clear: choosing storage today is a strategic decision, not just a purchasing one. Those who opt for direct availability and cost transparency are building the foundation for IT that is ready for the unpredictable challenges of the future – from sudden audits to the AI revolution.

  • Price shock in AI market. Nvidia’s decision will drive up data centre costs

    Price shock in AI market. Nvidia’s decision will drive up data centre costs

    Nvidia’s decision to fundamentally change the memory architecture in its AI servers could cause an unprecedented price shock across the semiconductor supply chain. According to the latest analysis by Counterpoint Research, server memory prices are on course to double by the end of 2026. The source of the turmoil this time is not a shortage of raw materials, but a strategic reorientation by the AI market leader, which, in its search for energy efficiency, is reaching for solutions hitherto found in consumers’ pockets.

    The Santa Clara-based chip giant has begun the process of replacing the industry-standard enterprise DDR5 modules with LPDDR (Low-Power Double Data Rate) chips. This is a low-power technology hitherto the domain of smartphones and tablets. However, this move, prompted by the desire to reduce the gigantic power costs of artificial intelligence servers, creates a problem of scale. A single AI server requires many times more memory than mobile devices, making Nvidia suddenly a customer with a purchase volume comparable to the largest smartphone manufacturers. Counterpoint refers to this phenomenon as a ‘seismic shift’ for which the supply chain is not prepared.

    The situation puts the major memory manufacturers – Samsung Electronics, SK Hynix and Micron – with their backs against the wall. These companies are already operating at full capacity, diverting most of their output to high-bandwidth memory (HBM), which is needed to feed graphics accelerators. The sudden massive demand for LPDDR from the server sector threatens to cannibalise production lines and destabilise the market. Manufacturers that have recently reduced the supply of older memory types will not be able to easily absorb such a large volume of new orders without drastic price adjustments.

    The forecasts are unforgiving for end users. Analysts predict that overall memory chip prices will increase by 50 per cent from current levels as early as the second quarter of 2026. Higher component costs will hit cloud providers (hyperscalers) and AI developers directly, putting additional pressure on data centre CAPEX budgets, which are already historically stretched by record GPU spending and energy infrastructure upgrades.

  • The data centre gap is growing. Old systems can’t cope with AI, power and regulation

    The data centre gap is growing. Old systems can’t cope with AI, power and regulation

    Traditional data centres are failing to meet the demands of the new era. According to Lenovo’s ‘Data Centre of the Future’ study, almost half (46%) of IT managers in EMEA admit that their current infrastructure does not support power and CO2 reduction goals. This gap becomes critical in the face of increasing resource appetite from artificial intelligence and automation, clashing with the harsh realities of European energy regulations.

    The pressure on the industry is growing from three directions simultaneously. First, AI workloads are rapidly increasing the demand for power. Second, European regulatory requirements, aiming for climate neutrality by 2030, are forcing unprecedented efficiency. Third, sovereignty issues are redefining architecture. As many as 99% of IT decision-makers indicate that data sovereignty – i.e. full control over the location and processing of data – will define the design of future centres. At the same time, 94% of respondents identify low latency as a key business requirement, driven by real-time applications and edge computing.

    The existing model, based mainly on air cooling, is no longer sufficient. In response, Lenovo, in collaboration with engineering firm AKT II and architects from Mamou-Mani, is proposing a radical paradigm shift, betting on liquid cooling. This technology, which is significantly more energy efficient, is the foundation for conceptual designs for decades to come.

    These visions range from data centres suspended in the stratosphere and powered by solar energy (‘Floating Cloud’) to modular urban facilities (‘The People of Data’). The latter, located near watercourses, could return waste heat to local grids, heating schools or homes. Another idea is the adaptation of disused spaces, such as bunkers or tunnels, which minimises the impact on the surroundings and naturally increases safety.

    These concepts have one common denominator: a move away from adapting old solutions. Lenovo stresses that in order to meet the dual challenge of growing demand for power and stringent regulatory requirements, companies need to change their mindset. Sustainability must be an integral part of the project from the outset, not an expensive add-on.

  • AI boom drives data centres. Iron Mountain exceeds Wall Street forecasts

    AI boom drives data centres. Iron Mountain exceeds Wall Street forecasts

    The growing demand for computing power for artificial intelligence is clearly translating into financial performance for infrastructure providers. Iron Mountain, a company historically associated mainly with the secure storage of physical documents, showed strength in this segment. The company beat Wall Street’s estimates for its key earnings indicator for the third quarter, a direct result of the boom in AI applications such as ChatGPT and growing demand for data centre space rental.

    Iron Mountain reported adjusted funds from operations (AFFO) of US$1.32 per share for the July-September period. This is well above analysts’ consensus of US$1.25, according to data compiled by LSEG. Importantly, the trend seems to have continued – the company’s forecast for the fourth quarter (US$1.39 per share) was also slightly ahead of market expectations (US$1.38).

    However, the success of the data centre segment does not mean abandoning its roots. The company is successfully combining a new growth branch with stable cash flows from its core business of records management and storage. This traditional business, which serves a large and diverse customer base (such as Boeing, Akamai Technologies or Coca-Cola), continues to generate solid revenues.

    Total revenue for the quarter ended 30 September increased by approximately 13% year-on-year to $1.75 billion. This growth was driven by both robust 16% growth in the services segment (often linked to digital transformation) and steady 10% growth in storage rental. Iron Mountain’s results show how established companies are able to leverage the AI trend to drive a new wave of growth, while building on a profitable traditional business.

    For Polish business, the key takeaway from Iron Mountain’s results is that the AI boom is genuinely and rapidly increasing demand for data centre infrastructure, creating huge opportunities for the IT industry and real estate investors. At the same time, the case proves that stable, traditional revenue streams (such as archiving or logistics) should not be abandoned, but used as leverage for capital-intensive investments in new technologies. Success lies in smart diversification and finding synergies between physical assets and the growing digital services market.

  • SK Hynix CEO: AI developments are strangling the global semiconductor supply chain

    SK Hynix CEO: AI developments are strangling the global semiconductor supply chain

    The explosion of investment in AI-powered data centres is creating serious bottlenecks in global supply chains. The warning came from Chey Tae-won, chairman of SK Group – a South Korean conglomerate that includes SK Hynix, one of the leading suppliers of key memory chips.

    “I believe that these rapid changes … ultimately lead to bottlenecks around the world,” Chey said at a business event accompanying the APEC summit in Gyeongju. Demand for new infrastructure is growing so rapidly that component suppliers cannot keep up. “For everything [that goes into data centres], from chips to services, I think they are creating bottlenecks,” he added.

    The SK Group chairman’s comments carry particular weight. SK Hynix is a key manufacturer of advanced HBM (High Bandwidth Memory), essential for AI accelerators such as Nvidia’s GPUs. The growing demand for these specialised components already exceeds the industry’s production capacity, creating real constraints on the development of large language models and cloud services.

    The problem is exacerbated by intense global competition. Chey pointed out that the race for leadership in AI has become a matter of national importance, with powers such as the US and China launching national strategies to gain an advantage. This rivalry puts additional strain on supply chains and complicates resource allocation in the global semiconductor industry. This situation puts pressure on the entire technology industry, which has to balance the rapid development of AI with real production constraints.

  • Bezos knows how to cut data centre electricity bills. All it takes is a rocket and a few billion dollars

    Bezos knows how to cut data centre electricity bills. All it takes is a rocket and a few billion dollars

    Jeff Bezos, founder of Amazon, charts a vision of the future in which the gigawatt data centres powering the development of artificial intelligence leave Earth and move into orbit. In his view, this will happen in the next 10-20 years. The main argument is simple: constant and unlimited access to solar power in space will ultimately make such a solution more efficient than terrestrial infrastructure.

    The forecast, presented at the Italian Technology Week in Turin, addresses one of the technology industry’s biggest problems. Earth-based data centres, especially those used to train advanced AI models, consume huge amounts of electricity and water for cooling. Moving them to orbit, where solar energy is available 24/7 without weather disruption, seems a logical step in the evolution of infrastructure.

    Bezos sees this as a continuation of a trend that began with weather and communications satellites – using space to optimise life on Earth. The next step, after data centres, would be industrial manufacturing.

    However, this vision faces fundamental technological and physical barriers. The biggest challenge is latency. Even at the speed of light, transmitting data from Earth to orbit and back generates latency that is unacceptable for many applications that require immediate response. Another hurdle is hardware maintenance and upgrades. Replacing a broken server or updating components in space would be an extremely complex and expensive operation, if at all possible on a large scale.

    Add to the list of problems the high cost and risk of launching payloads, the threat from space junk and the need to develop effective heat dissipation systems in a vacuum.

    Bezos compared the current AI boom to the internet bubble of the early part of the century, suggesting that even if there is a market correction, the fundamental benefits of the technology will remain. The same may be true of his space vision – although it seems distant today, it solves a real, growing energy problem that the AI industry will have to face.

  • Megawatts to teraflops – how energy shapes AI hardware replacement cycles in the data centre

    Megawatts to teraflops – how energy shapes AI hardware replacement cycles in the data centre

    The development of artificial intelligence is not just about computational advances. Training language and generative models requires thousands of GPU/TPU accelerators that devour tens of megawatts of power. As a result, electricity consumption in data centres is rising – in Ireland, data centres accounted for as much as 22% of the country’s electricity consumption in 2024. Such a share is a challenge for energy suppliers and DC operators, who must accommodate growing demand despite rising energy prices while reducing CO₂ emissions.

    This article compares energy prices in three key European data hubs – Frankfurt, Dublin and Warsaw – with the energy efficiency of successive generations of AI accelerators. On this basis, we analyse how operational costs and technological advances shorten or lengthen the lifecycle of AI hardware.

    Energy prices in different hubs

    Frankfurt: high prices and environmental requirements

    Frankfurt is the second largest data centre market in Europe. Germany has some of the highest industrial energy prices; in 2024, companies paid an average of 16.77 ct/kWh, with the rate rising to 17.99 ct/kWh in January 2025. For companies benefiting from concessions, the cost was 10.47 ct/kWh. Taxes and levies account for 29% of these charges, and network fees for a further 27%.

    A strong focus on RES and heat recovery is obliging data centre operators to invest in sustainable solutions. High energy costs motivate the rapid deployment of more efficient systems to reduce consumption per teraflop.

    Dublin: the most expensive electricity in the EU and supply constraints

    In Ireland, energy prices for industrial consumers are among the highest in Europe – around €26 per 100 kWh in the first half of 2024. The SEAI report shows that in 2024 the weighted average price for business was 22.8 cents per kWh, with large consumers paying 16.3 c/kWh. The high rates are compounded by a shortage of power – Dublin’s data centres consume 22% of the country’s energy and EirGrid predicts this will rise to 30% by 2030. For this reason, new connections are only approved in exceptional cases, so operators must maximise the efficiency of existing infrastructure.

    Warsaw: lower prices but a growing market

    Poland stands out with lower prices – around €0.13 per kWh in 2024. According to GlobalPetrolPrices, in March 2025 businesses paid an average of PLN 1.023/kWh (US$0.28), which is still lower than in Germany or Ireland. While lower energy costs allow for a longer amortisation cycle, increasing competition and demand for cloud services are encouraging investment in new hardware to increase computing density.

    Generations of accelerators: performance per watt

    GPU – from Volta to Blackwell

    Nvidia’s V100 (Volta) introduced tensor core technology in 2017, but its 300 W TDP and low TFLOPS/W ratio mean it is no longer competitive. In 2020, the A100 (Ampere) came to market with a TDP of 400 W and doubled performance per watt, reaching up to 10 TFLOPS/W. The next breakthrough was the 2022 H100, based on the Hopper architecture: the 700 W chip delivers 20 TFLOPS/W and about three times the workload of the A100 per watt.

    In 2024, Nvidia announced the H200, a chip with a TDP of 700 W and featuring HBM3e memory with a bandwidth of 4.8 TB/s. This increased inference performance by 30-45% for the same power consumption. The DGX H200 system with eight such GPUs consumes 5.6 kW, but can do twice as much work per watt compared to its predecessor.

    The B200 (Blackwell), with a TDP of 1000 W and three times the computing power of the H100, is expected to debut in 2025. Although power consumption is increasing, the TFLOPS/W ratio continues to improve, pushing the frontier of computing density.
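
    Taking the figures quoted above at face value (they are approximations, not official specifications), a quick calculation shows why each generation shortens the economic life of the previous one: the energy needed to deliver a fixed amount of compute falls sharply.

      # Energy needed for a fixed amount of AI work, per accelerator generation.
      # The TFLOPS/W values are the approximations quoted above, not official specs.
      generations = {
          "A100 (Ampere, 400 W)": 10.0,   # TFLOPS per watt
          "H100 (Hopper, 700 W)": 20.0,   # TFLOPS per watt
      }

      workload_tflop_hours = 1_000_000    # arbitrary fixed workload used for comparison

      for name, tflops_per_watt in generations.items():
          # watts per TFLOPS = 1 / (TFLOPS/W); energy [Wh] = TFLOP-hours x W per TFLOPS
          kwh = workload_tflop_hours / tflops_per_watt / 1000
          print(f"{name}: ~{kwh:,.0f} kWh for the same workload")

    On these figures the same job needs roughly half the energy on the newer chip; how much that halving is worth depends on the local price of electricity, which is exactly the question examined for each hub below.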

    TPU – an alternative with improved energy efficiency

    Google is developing Tensor Processing Units, dedicated AI accelerators. TPU v4 offers 1.2-1.7 times better performance per watt than the A100, and in general TPUs are 2-3 times more power efficient than GPUs. Upcoming generations, such as v6 ‘Trillium’ and v7 ‘Ironwood’, focus on maximising compute density while reducing power consumption.

    Equipment life cycle – flexibility instead of rigid depreciation schedules

    In traditional data centres, hardware was replaced every five to seven years. However, decarbonisation research indicates that in AI environments, cycles of four years or longer are economically viable, although shortening the cycle can reduce emissions. When a new generation of GPUs provides several times the energy efficiency, early retirement of ageing chips is justified – the energy savings and emissions cost reductions outweigh the investment. Replacement every 4-5 years may become the norm in regions with high energy prices.
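
    To make the trade-off concrete, here is a simplified payback sketch in Python. The capital cost, power draw, PUE and utilisation figures are illustrative assumptions; only the electricity prices are taken, rounded, from the hub figures cited in this article.

      # Rough payback estimate for replacing an older accelerator node with a more
      # efficient one. All inputs are illustrative assumptions, not vendor figures.

      def payback_years(net_capex_eur, old_kw, new_kw, price_eur_per_kwh,
                        pue=1.5, utilisation=0.8):
          """Years until energy savings cover the net cost of the upgrade, assuming
          the new node does the same work while drawing less power."""
          hours_per_year = 8760 * utilisation
          effective_kw_saved = (old_kw - new_kw) * pue   # include cooling overhead
          yearly_saving = effective_kw_saved * hours_per_year * price_eur_per_kwh
          return net_capex_eur / yearly_saving

      # Hypothetical example: the same workload drops from 8 kW to 4 kW of draw after
      # an upgrade with a net cost of EUR 30,000 (after resale of the old hardware).
      for hub, price in [("Dublin", 0.23), ("Frankfurt", 0.18), ("Warsaw", 0.13)]:
          print(f"{hub}: payback in ~{payback_years(30_000, 8.0, 4.0, price):.1f} years")

    Under these assumptions the upgrade pays for itself in roughly three years in Dublin, four in Frankfurt and five and a half in Warsaw – precisely the pattern described for each market below.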

    How does the price of electricity affect decisions to upgrade?

    Dublin – need for computing density

    With prices of 22-26 cents per kWh and limited network capacity, Irish data centres are being forced to maximise efficiency. An investment in an H100 or H200 pays for itself faster with twice the performance per watt. Replacing old A100s with H100/H200s reduces the amortisation cycle to three to four years, as the energy savings and lower emissions costs outweigh the capital expenditure. The introduction of even more energy-efficient chips (B200, TPU v6) can further accelerate the upgrade.

    Frankfurt – a trade-off between cost and investment

    German energy prices (17-20 ct/kWh) are lower than in Ireland, but still motivate optimisation. Companies are keen to replace equipment every 4-5 years, especially when the gap between generations is large. At the same time, larger systems can benefit from discounts and long-term contracts, which reduces the pressure for immediate replacement. Regulations requiring the use of RES and heat recovery encourage the choice of energy-efficient platforms.

    Warsaw – a longer breath, but growing ambitions

    The lower cost of energy (around 13 ct/kWh) allows Polish operators to extend the life cycle of their equipment. Replacing the V100 with the A100 or H100 still brings savings, but they are not as spectacular as in Ireland. However, the growing demand for AI services, the development of R&D offices in Poland and competition from international players may shorten replacement cycles to 4-5 years, especially when B200s and energy-efficient TPUs appear on the market.

    Trends of the future: HBM3e memory, Blackwell architecture and TPU Trillium

    Accelerator performance is not only increasing with more cores. New chips, such as the H200, increase memory bandwidth to 4.8 TB/s via HBM3e. Another leap is the Blackwell B200 with a TDP of 1000W, which uses wider buses and improved Transformer Engine cores. Google, in turn, is developing the v6 ‘Trillium’ and v7 ‘Ironwood’ TPUs to improve power efficiency and compute density.

    Efficiency per watt is becoming the most important parameter as economic and regulatory pressures force operators to reduce emissions. High energy prices in Europe further exacerbate this trend.

    Differences in energy prices across Europe determine AI infrastructure modernisation strategies. Ireland and Germany, with the highest rates, are shortening equipment lifecycles to reduce operating costs. Poland, benefiting from lower prices, can afford to use existing systems for longer, although growing demand and competition will also accelerate change there.

    Technological advances – from the V100 GPU, to the A100 and H100, to the H200 and the upcoming B200 – mean that the TFLOPS/W ratio is growing exponentially. Alternative TPU accelerators are showing even greater energy efficiency, which could change GPU dominance in the future. Therefore, hardware replacement decisions cannot be rigid; they must take into account not only the cost of new hardware, but also energy prices, CO₂ emissions and customer requirements. Megawatts and teraflops will become increasingly intertwined in the strategies of data centre operators in the coming decade.

  • From server rooms to AI gigafactories: the decade that changed data centres

    From server rooms to AI gigafactories: the decade that changed data centres

    Until a decade ago, data centres remained in the shadows – treated as the technical back office of the business, a place to host applications, mail or store data. They were indispensable, but few outside the industry gave much thought to their role. Today, the situation is very different. In the age of artificial intelligence, data centres have become “gigafactories” of computing, without which the development of new technologies would be impossible.

    It was not an evolution, but a leap that completely changed the IT industry and the way we think about digital infrastructure.

    From the server room to the critical infrastructure

    Between 2010 and 2015, data centres were mainly associated with local server rooms that supported business applications and stored companies’ growing data resources. Their role was to ensure the stability and security of core processes – from ERP to email.

    The breakthrough came with the rapid growth of the public cloud. Amazon Web Services, Microsoft Azure and Google Cloud began to expand globally, investing in data centre networks that quickly ceased to resemble classic server rooms. Scale grew exponentially and the term ‘hyperscale’ began to dominate the industry.

    The era of the cloud and global scaling

    Hyperscale means hundreds of thousands of servers, deployed in facilities optimised for automation, performance and flexibility. It is thanks to them that the digital transformation of companies has accelerated – from simple hosting to advanced SaaS and IaaS services.

    For technology providers, it was a moment of consolidation. It was less and less about local server rooms and more about the ability of partners to integrate cloud services and global platforms. Data centre operators gained a new role – they became the backbone of the digital economy.

    AI as a turning point

    The real revolution came with the boom in artificial intelligence, particularly generative AI. AI models require massive computing power, specialised GPU and TPU chips and HPC-class infrastructure.

    Training one large generative model can involve tens of thousands of GPUs and take weeks. This has put data centres at the centre of the global technology race. Without them, AI development simply would not be possible.

    The scale of investment speaks for itself. According to market data, Microsoft, Alphabet, Amazon and Meta spent a combined $245 billion on capital expenditure in 2024. Forecasts for 2025 predict that this figure could exceed $360 billion – in large part precisely because of AI. These are figures that change the balance of power across the industry.

    Rising energy costs and the sustainability dilemma

    However, this growth comes at a price. According to the International Energy Agency, data centres will consume around 945 terawatt hours of energy in 2030 – more than double the amount in 2024. This is equivalent to the demand of a medium-sized industrialised country.

    The biggest challenges are not only the cost of energy, but also cooling and water consumption. Traditional air-conditioning systems are consuming increasing amounts of energy, and local communities are increasingly raising questions about the environmental impact of data centre facilities.

    In response, operators are accelerating investment in innovative solutions. Liquid cooling is playing an increasingly important role to manage temperatures more effectively in dense GPU installations. In parallel, programmes to use renewable energy are being developed – some new data centres are being built close to RES sources to minimise the carbon footprint.

    Gigafactories of computing – the future of data centres

    Today, data centres are increasingly being compared to factories. Just as in the 20th century refineries or power plants drove industrial development, in the 21st century gigafactories of computing are becoming the foundation of the digital economy.

    Their role is no longer limited to supporting business processes. They are a strategic resource in the global technology race, in which companies that can combine computing power with energy efficiency will gain an advantage.

    In the next few years, we can expect further automation of management, integration with local energy sources, as well as new climate regulations. At the same time, opportunities are growing for the IT market and sales channel – from cooling technology suppliers to system integrators to energy optimisation support companies.

    New decade, new challenges

    A decade of transformation has transformed data centres from an invisible IT back-office to a central part of the digital infrastructure. The next one will bring even greater challenges – not only related to AI, but also to reconciling the scale of computing with environmental realities and energy costs.

    If history is to repeat itself, data centres will gain a similar status in the future as power plants and refineries once had – they will become not just a tool, but a strategic asset on which the pace of the global digital economy will depend.

  • Atman opened WAW-3 – the largest data centre campus in Poland near Ozarow Mazowiecki

    Atman opened WAW-3 – the largest data centre campus in Poland near Ozarow Mazowiecki

    In Duchnice near Ozarow Mazowiecki, on a site that only two years ago was an empty plot of land, today stands the first of three buildings of the most modern data centre campus in Poland. Atman, the leader of the Polish data centre market, has officially opened the gates of WAW-3 – an investment with a target value of PLN 2.5 billion, which not only redefines the scale of the Polish IT industry, but also sends a clear signal to the whole of Europe: Poland is becoming the digital heart of the region. The newly opened facility already offers 14.4 MW of power for IT equipment. The opening ceremony, combined with a guided tour of the facility, was an opportunity to get an up-close look at this technological colossus and understand the vision behind its creation.

    The atmosphere at the event was a mix of pride in the completed work and excitement for the future. Guests included industry leaders, representatives from global investment funds, key technology partners and local government representatives.

    The launch of the WAW-3 campus is much more than a technological show of force. It is a business move that places Atman and Poland at the centre of the game for Europe’s digital future. The event was an opportunity to understand how the ambitions of global investors intertwine with local potential to create a project of fundamental importance.

    Photo: From left: Paweł Kanclerz, Mayor of Ożarów Mazowiecki Municipality and City; Sławomir Koszołko, President of Atman; Scott Peterson, representative of the Supervisory Board and global investors Goldman Sachs and Global Compute

    A global vision and a new role for Poland

    The perspective of the global investors – the Goldman Sachs and Global Compute funds – was presented by Scott Peterson, Chairman of Atman’s Supervisory Board. In his speech, he made it clear that the ambitions of the project went far beyond the borders of Poland from the very beginning: “We knew we were embarking on something much bigger than just a construction project. We were laying the foundations for a new digital hub, not just for Poland, but for the whole of Central and Eastern Europe.”

    Photo: Scott Peterson

    His words fit perfectly into a market context in which the traditional European data hubs, known as FLAP-D (Frankfurt, London, Amsterdam, Paris, Dublin), are struggling with limited capacity availability. This is what creates a huge opportunity for ‘second wave’ markets, with Warsaw at the forefront. As Peterson noted, Poland in this new reality is no longer just a follower, but a leader: ‘In many respects, Poland is no longer chasing the lead, but is beginning to set the pace’.

    Behind the numbers, strategy and global trends, however, there is a simple, fundamental truth, which Sławomir Koszołko, CEO of Atman, decided to bring out in his presentation. Departing from industry jargon, he explained why facilities such as WAW-3 are today an invisible but absolutely key element of our civilisation:

    “[…] Data centres are essential for each of us to function normally. Today, we cannot imagine a world without electronic payments, mobile phones, the internet, and even without functioning traffic lights or hospitals. If it were not for data centres, all this infrastructure would cease to function. […]

    Photo: Sławomir Koszołko, President of Atman

    Many people, even central decision-makers, understand the need for servers, graphics cards or cloud computing. However, when the question is asked where all this should physically be located, consternation often follows. The answer is simple: it is in data centres such as this. This is the foundation of digitalisation.”

    Local partnership and financial confirmation of ambitions

    This fundamental vision needs solid local foundations. Paweł Kanclerz, Mayor of the Municipality and Town of Ozarow Mazowiecki, proudly emphasised that his municipality was the home of this strategic investment. He also pointed out that the investment was made “with respect for nature, but also for people”.

    Photo: Paweł Kanclerz

    The scale of the project was reflected in the confidence of financial institutions. Atman has secured a loan of PLN 1.35 billion from a consortium of six entities, one of the largest financings of its kind in the region. Importantly, the loan agreement includes ambitious ESG (Environmental, Social, and Governance) compliant commitments, confirming that modern business is not only about profit, but also about responsibility.

    The technical dimension – the anatomy of a digital fortress

    After the speeches, it was time for what technology enthusiasts like best – a tour of the facility, which allowed us to understand what ‘state-of-the-art data centre’ means in practice. It is in the labyrinth of technical corridors, server rooms and a rooftop full of advanced equipment that the secret to the reliability of the WAW-3 campus lies. We had the pleasure of having as our guide Radosław Poter, board member and CTO at Atman, who outlined the scale of the facility’s innovation with passion and enormous commitment.

    Scale and architecture: foundations for a digital future

    Atman’s new campus is being built on an impressive 5.5 hectare site. The building that has just been commissioned is the first of three planned, and its parameters are already impressive and show the scale of the whole project:

    • IT capacity: 14.4 MW
    • IT area: 6 324 m²
    • Number of server rooms (Data Halls): 12

    When all phases are completed, the campus will offer a total of 43 MW of IT capacity in almost 19,000 m² of space, allowing for the installation of more than 50,000 servers. The architecture of the facility has been designed for maximum flexibility, allowing dedicated zones for the largest customers.

    Power supply – non-stop energy

    The backbone of a data centre is uninterrupted access to energy, and at WAW-3 the ‘non-stop-data’ philosophy has been implemented in an uncompromising manner.

    • Dual main connection: Two independent power lines, each with a capacity of 20 MW, are brought to each building. This is a powerful connection, capable of powering a medium-sized city.
    • Redundancy 2N: The power supply architecture is fully redundant. Two independent power supply paths (A and B, designated as white and black sockets in the server room) run to each customer’s server rack. This allows devices with dual power supplies to be connected and ensures continuity of operation even if an entire path fails.
    • Emergency power supply (N+1): In the event of a power failure from both external lines, the powerful generator units start up within seconds. They operate in an N+1 system, meaning that for every six units in operation, there is one backup unit fully ready to take over the load.
    • Full autonomy: The fuel reserves accumulated in the underground tanks on site guarantee 48 hours of uninterrupted operation of the entire facility at full load, in accordance with the stringent EN 50600 standard. In addition, Atman has agreements with fuel suppliers that guarantee delivery within 8 hours, which in practice provides effectively unlimited autonomy.

    Cooling – closed loop efficiency

    Keeping thousands of servers at optimum temperature is one of the biggest challenges and the biggest consumer of energy, next to IT itself. Here, WAW-3 is committed to being environmentally friendly and highly efficient.

    • Technology: at the heart of the system is closed-loop precision air conditioning. The cooling medium is so-called ‘chilled water’, a 40% glycol solution that circulates between powerful chillers on the roof and precision cooling units (CRACs) inside the server rooms.
    • Air management: A separation system for ‘cold’ and ‘warm’ aisles is used in the server rooms. Cold air at 24-27°C is forced under the raised floor and supplied through grilles directly into the ‘cold aisles’ in front of the racks. The servers draw it in and blow the hot air into the enclosed ‘warm aisles’, from where it is extracted and routed back to the cooling units. Such separation dramatically increases cooling efficiency.
    • Ecology and economy: the facility is powered by 100% renewable energy (based on a guarantee of origin). Thanks to the closed cooling circuit, water consumption is negligible and comparable to the annual consumption of 40 people. Residual heat is recovered and used to heat the offices.
    Photo: Cooling installations on the roof of the WAW-3 building

    Security – from the fence to cyberspace

    Data protection is not just a question of software, but also of a robust physical infrastructure.

    • Fire protection: depending on their requirements, customers can choose between two extinguishing systems for their space: a state-of-the-art gas system (Inergen) that displaces oxygen, suppressing fire without damaging electronics, or an advanced water mist system, favoured by major global players.
    • Cyber infrastructure security: Atman’s approach to security is uncompromising. The monitoring and management system (SCADA) operates in read-only mode. This means that all parameters can be observed from the monitoring centre, but nothing can be changed remotely. This is a physical barrier that protects the critical infrastructure from remote attacks.

    A new chapter for digital Poland

    The opening of the WAW-3 campus is more than just the launch of another data centre. It is proof of the maturity of the Polish market, the strategic wisdom of investors and the efficiency of the local administration, which is capable of creating a climate for innovation. Confidence in the project is confirmed by the granting of a PLN 1.35 billion loan to Atman by a consortium of six financial entities, as part of an agreement containing ambitious ESG-compliant commitments.

    “The growing demand for digital services and the increasing workloads associated with the use of artificial intelligence require a reliable, easily scalable infrastructure with high computing power. The WAW-3 campus is our answer to these needs – and our advantage. Anticipating market trends, we were the first in Poland to implement a project of this scale and technological sophistication,” adds Slawomir Koszolko.

    The event in Duchnice was not the end, however, but only the beginning. As CEO Slawomir Koszolko declared, the company’s appetite for growth is far from satisfied: “This is not the end. We, as Atman, are already looking for further locations and further investments”.

    Looking at the scale and technological sophistication of the building, it is hard not to agree with Scott Peterson’s words, which best sum up the significance of the day: “As we celebrate this first building today, let us also celebrate the future it represents – a future of connectivity, innovation and opportunity for us all.”