Tag: Energy

  • How to stabilise the grid in the city? Energy storage from Stoen and ZPUE


    Stoen Operator and ZPUE are implementing a project in Warsaw that pushes the boundaries of energy storage use in Polish electricity infrastructure. Instead of isolated test installations, ten battery-based units integrated directly into medium- and low-voltage (MV/LV) substations are appearing in the capital’s distribution network. This initiative is not just an experiment, but an operational response to the specific challenges of a large agglomeration: dense housing, surging power demand and the dynamic development of RES micro-installations. In this system, the storage units take on the role of active voltage stabilisers, becoming an integral part of the daily operation of the grid.

    This implementation sheds new light on the evolving role of distribution system operators (DSOs). The shift from passive energy transmission to active management of energy resources is now becoming a business necessity and not just a technological curiosity. The example of Warsaw shows that energy storage is no longer seen as a costly addition to the infrastructure and is starting to be treated as one of the foundations of modern distribution. A key lesson from the Warsaw project is that, in an urban setting, the success of an investment depends not on battery performance alone, but on deep system integration and the ability to operate in different load scenarios.

    It is worth noting several aspects that may determine the effectiveness of similar projects in the future. It seems sensible to move away from designing installations in isolation towards thinking about the full life cycle of an installation. Taking into account the costs of operation, servicing and the emergency behaviour of the system as early as the planning stage makes it possible to avoid costly adjustments later.

    It is also worth considering closer collaboration between technology providers and operators to develop standards that will facilitate the scaling of solutions in other regions of the country. Rather than waiting for a final regulatory settlement, the market has the most to gain from gathering and sharing operational experience. It is this practical data, gained from working in a living urban organism, that is today’s most valuable asset for energy companies planning long-term investments in network flexibility.

  • How much electricity does AI use? The US government is launching a big count. The consequences could reach Poland


    The US Department of Energy (DOE) is ending the guesswork. The launch of a pilot study into the real-world energy consumption of data centres signals that the era of the AI sector’s unmeasured, uncontrolled appetite for electricity is coming to an end. Although the study applies to Texas, Virginia and Washington, its echoes will in time reach the European market, including the rapidly growing technology hub in Poland and the CEE region.

    Until now, technology giants have operated largely in the realm of estimates. Now that the Energy Information Administration (EIA) is starting to ask about specific sources of emergency power and actual network load, Polish data centre operators and investors must prepare for a similar regulatory tightening from EU and national regulators.

    Pressure on efficiency in the CEE region

    As a key point on the map of digital expansion in Central Europe, Poland faces a unique challenge. Our energy mix, still heavily based on coal, means that the construction of more ‘server farms’ raises social and environmental tensions. A US study shows that the gigantic demand for AI can no longer be hidden under the guise of general declarations about green energy. Polish entrepreneurs should pay particular attention to three aspects: the stability of energy prices for individual consumers, the risk of overloading local grids and the need to invest in self-consumption and own RES sources.

    In the CEE region, where energy costs are a key competitive factor, transparency can prove to be a double-edged sword. On the one hand, accurate data will allow better planning of critical infrastructure. On the other, they may expose the weaknesses of energy systems that are not ready for the demand surges generated by language models.

    Tristan Abbey’s EIA initiative is a lesson in humility for the Big Tech sector. It demonstrates that technology does not develop in a vacuum and is underpinned by a physical energy infrastructure with limited resources.

    This is why echoes from Virginia or Texas are likely to be heard in Warsaw and Prague:

    1. Standardisation of reporting requirements

    When the US giants (Amazon, Google, Microsoft) are forced to report their energy consumption in detail in the US, over time they will implement the same monitoring systems in their European branches. For Polish business, this means that local subcontractors and co-location operators will have to adapt to the same rigorous transparency standards in order to maintain contracts with global players.

    2. Fight for scarce resources

    The problem of ‘no power’ for AI is global. If the US – a country with huge gas reserves and a developed grid – starts to officially measure the problem, it is a wake-up call for Europe, where the grid is older and more burdened by the energy transition. Poland, being in the process of moving away from coal, has even less margin for error. Investors are watching the DOE closely because they know that if the US ‘runs out of space’ in its sockets, the pressure to build in the CEE region will increase, driving up connection prices in Poland.

    3. “Export” of regulations

    Historically in tech it works like this: The US defines the technical problem and Europe (EU) gives it a legal framework. The data collected by the EIA in Houston will be carefully analysed by Brussels when designing the next iteration of the Energy Efficiency Directives (EEDs). Poland, as a country with high CO2 emissions per kWh in the region, is most vulnerable to the negative effects of such regulations if data centres are found to consume more than assumed.

    4. Chain reaction in the supply chain

    The questions about backup power (diesel generators vs. batteries) that Tristan Abbey asks are a direct hit to the infrastructure market. Polish power equipment companies need to keep an eye on these trends, as they will set the procurement standards for the next decade.

    In short: this is not a local dispute in Virginia. The cloud has become a measurable, heavy burden on the national economy. Any Polish CEO planning to migrate to the cloud in 2026 must take into account that its cost will be increasingly linked to the price of emission allowances and the network capacity that the US has just started to ask about.

  • Digitisation of energy: costs of cyber attacks and operational risks


    The vision of a modern power station, controlled by artificial intelligence algorithms and patrolled by autonomous drones, sounds like the promise of infinite efficiency. Yet the digital transformation in the energy sector, while gaining unprecedented momentum, increasingly resembles the construction of a luxury smart skyscraper in which, amid the rush, nobody remembered to install the door locks.

    Investments that are intended to optimise operating costs unexpectedly become the Achilles’ heel of the industry. They threaten not only the integrity of sensitive data, but, above all, production continuity and return-on-investment rates.

    From ambition to pressure: the digital sprint

    The current technology landscape of the energy industry looks extremely intriguing from a business strategy point of view. Only a small percentage of companies, estimated at less than five per cent, can today be considered fully digitised entities.

    The great promise of innovation, however, tempts with tangible benefits. The use of digital twins, advanced analytics and predictive maintenance appears to be a proven mechanism leading to drastic reductions in operating costs and improved delivery reliability.

    Faced with such attractive financial prospects, almost three-quarters of organisations plan to achieve full digital maturity in just twenty-four months. Such an ambitious, if not reckless, timetable imposes a punishing pace of change.

    This naturally generates the risk of critical vulnerabilities in the security architecture, as the pressure for rapid deployments often wins out over the need to painstakingly test the resilience of new systems.

    Profit and loss account after a hard landing

    Enthusiasm about the implementation of innovations regularly collides with the brutal financial reality. Analytical data from market research sheds a whole new light on the ultimate cost of this technological rush.

    It appears that around half of the energy companies operating in the market have already fallen victim to incidents with financial consequences exceeding the million-dollar threshold.

    Crucially from a risk management point of view, it is not the possible ransoms demanded by cybercriminals or the direct costs of advanced forensic investigations that place the greatest burden on corporate budgets.

    The real problem lies in hidden costs, specifically production failures and halted operations. The average downtime after a successful security breach is around nineteen hours.

    In a strategically important sector, where every minute of supply disruption means gigantic losses and the potential paralysis of the local economy, such a pause becomes absolutely critical.

    Cracks in the system architecture

    It is worth asking the question about the source of such drastic losses. The answer lies in the very structure of modern industrial networks. With every industrial internet of things sensor deployed, with every automated inspection drone integrated into the fleet and with every new connection between internal operational technology systems and the external cloud, the potential attack surface increases dramatically.

    Historically, critical infrastructure has been protected by physical and logical isolation from global networks. However, this illusion of complete encapsulation has become a thing of the past in the age of ubiquitous convergence of IT environments with operational technology.

    Modern control platforms continually exchange data with corporate networks. This creates a highly complex ecosystem in which the weakest, least secure link determines the stability of the entire energy company.

    People and processes in the shadow of technology

    The technology layer is just the tip of the iceberg, underneath which lie extremely complex organisational and human resource challenges. Rapid transformation requires not only massive investment in new software, but above all the right human skills.

    Nearly half of the market players identify a severe shortage of skilled cyber security professionals as the most serious barrier to digitalisation.

    An additional, often underestimated risk factor is the diluted responsibility within management structures. In most cases, the burden of creating security policies for industrial environments rests entirely on the shoulders of IT departments.

    Meanwhile, an in-depth understanding of the specifics of physical processes, production cycles and maintenance lies solely within the remit of operational engineers. This evident dissonance in decision-making creates a dangerous vacuum inside companies, one that sophisticated criminal groups exploit with great ease and precision.

  • Cyber attack on Poland’s only nuclear reactor Maria


    The Polish National Centre for Nuclear Research has reported the successful thwarting of a targeted cyber attack on its IT infrastructure. Early detection systems and internal security procedures allowed IT staff to quickly isolate the threat before the integrity of key operating systems was compromised.

    The Institute plays a strategic role in Poland’s nuclear power programme, providing technical and scientific support to national infrastructure projects. NCBJ Director Professor Jakub Kupecki confirmed that the incident had no impact on the operation of Poland’s only research nuclear reactor MARIA. The unit, used for scientific purposes and medical isotope production, continues to operate at full power in safe operational mode.

    Although no official attribution of the attack has been made by the NCBJ authorities, there are reports in the public space of a possible Iranian trail. Investigators, however, are wary of hasty conclusions, pointing to the high probability of a ‘false flag’ operation aimed at disinformation and misidentification of the perpetrators. The situation is part of a wider trend of increased cyber activity targeting Poland, as evidenced by data on last year’s attacks by the Russian group APT44 (Sandworm) on distributed and renewable energy systems.

    According to the latest analytical reports, Poland has become one of the main targets for state cyber actors in the region, recording more than 30 major incidents in the past few months. In response to the recent incident in Świerk, the country’s cyber security services have been placed on high alert. NCBJ continues to work closely with law enforcement agencies to fully explain the mechanism of the attack and strengthen the resilience of critical research assets.

  • AI needs energy. Will the lack of a nuclear power plant slow down Poland’s technological development?


    Driven by the unprecedented development of artificial intelligence, today’s digital economy seems to be operating in a kind of paradox. On the one hand, the world delights in the intangible nature of algorithms, the lightness of cloud computing and the finesse of generative models that redefine the concept of productivity. On the other hand, this digital superstructure rests on an extremely heavy physical foundation: energy infrastructure. Artificial intelligence, hailed as the new electricity of our time, paradoxically exhibits an insatiable hunger for the traditional kind of electricity that flows from the socket. In a public debate dominated by considerations of code ethics or data security, too little attention is paid to the fundamental question: where will the electricity necessary to power this revolution come from, and how can the process be kept stable, clean and strategically secure?

    Data from reports by the International Energy Agency leave no illusions about the scale of the challenge. It is estimated that the global energy consumption of data centres could double as early as 2030 as a direct consequence of the expansion of cloud computing and the training of increasingly complex language models. However, this is just the tip of the iceberg, underneath which lies the massive digitalisation of the entire industrial, transport and residential sectors. Projections indicate that by 2035, data centres alone will require an additional 1,000 terawatt hours, but the needs of the rest of the economy will increase by nearly six times this figure. Global energy demand, according to analysis by Rystad Energy, is expected to increase by almost a third in just a decade. In this context, the traditional approach to the energy transition, based solely on classic renewables, is showing its limitations.
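
    To see what these projections add up to, here is a minimal back-of-the-envelope sketch in Python. The 1,000 TWh figure and the ‘nearly six times’ multiplier are quoted above; the baseline is derived from those quoted figures rather than taken from an external source.

    ```python
    # The article's demand arithmetic made explicit. Both inputs are the
    # figures quoted in the text; everything else is derived from them.

    dc_extra_twh = 1_000                 # extra data-centre demand by 2035
    other_extra_twh = 6 * dc_extra_twh   # 'nearly six times this figure'
    total_extra_twh = dc_extra_twh + other_extra_twh
    print(f"implied additional demand by 2035: ~{total_extra_twh:,} TWh")

    # If that total is 'almost a third' of today's consumption (Rystad),
    # the implied current global baseline is on the order of:
    print(f"implied current baseline: ~{3 * total_extra_twh:,} TWh")
    ```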

    The business and technology sector is facing the need to redefine the concept of operational stability. Indeed, digital security is inextricably linked to the security of power supply, and this requires a source that is not only environmentally friendly, but above all controllable and independent of the vagaries of the weather or geopolitical turmoil.

    This is where fusion energy comes onto the scene, having undergone a fascinating transformation in recent years from the domain of science fiction literature to the realm of hard business strategy. Major players in the global technology market, such as Microsoft, Google and Amazon, have long since abandoned their role as passive observers and become active investors in fusion projects. Cumulative investment in private fusion companies had risen to €13 billion by 2025, an eightfold increase from the beginning of the decade. The involvement of IT leaders is not driven by philanthropic motives, but by a pragmatic risk assessment. Having a stake in a technology that generates almost unlimited, clean power is an insurance policy for further innovation.

    However, the current investment landscape reveals a worrying asymmetry for Europe. The US accounts for more than half of global fusion investment, treating the technology as an element of national security and competitive advantage. The US government’s change of stance at the end of 2025, making fusion a strategic priority, clearly defines the rules of the new game. Right behind America is China, pumping huge state resources into building its own energy ecosystem. Such a bipolar panorama should be an alarm bell for European decision-makers. The continent cannot afford to repeat the mistake made in the semiconductor sector or in artificial intelligence itself, where marginalisation has led to deep dependence on external suppliers and technologies.

    What about Europe?

    With the demand for computing power growing exponentially, building a sovereign and inexhaustible source of power is becoming an absolute requirement for maintaining the competitiveness of the modern economy. This paradigm shift, although visible in the silicon valleys of the world, took on particular political weight at the recent Nuclear Energy Summit in Paris. It was there that the President of the European Commission, Ursula von der Leyen, uttered words that, to many, sound like a belated but necessary mea culpa: Europe’s turn away from the atom was a strategic mistake, and the figures describing this regression speak for themselves.

    Acknowledging that the systematic winding down of the nuclear sector on the Old Continent – a decline in its share from a third in 1990 to just fifteen per cent today – was a geopolitical blunder directs attention to the challenges facing the technology sector. Today’s digital economy, fascinated by the lightness of artificial intelligence algorithms, is in violent collision with the physical reality of transmission networks. Often referred to as an immaterial revolution, artificial intelligence displays an insatiable hunger for stable, clean and cheap energy. In this context, Europe’s dependence on unstable fossil fuel imports is becoming not only an economic ballast but, above all, a development barrier that could relegate the continent to the role of a technological open-air museum.

    Poland’s situation in this new order appears particularly dramatic and requires immediate strategic reflection. While the leaders of the European Union are admitting past mistakes and drawing up plans for a return to nuclear power, the Polish energy landscape remains afflicted by the historic lack of even a single operational nuclear power plant. This structural shortage, at a time of expansion of generative models and data processing centres, ceases to be merely a matter of energy security and becomes a considerable problem for the Polish IT sector. Ambitions to build an innovation hub on the Vistula and develop indigenous artificial intelligence systems may be effectively stifled by the lack of a foundation of stable network base load.

    Investors planning to build large-scale data centres are guided by pragmatism, in which the availability of low-carbon and uninterruptible energy plays a key role. Poland, basing its energy mix on declining coal and rapidly growing but weather-dependent renewables, without a ‘nuclear stabiliser’ becomes a location with high operational risk. Not only does the absence of nuclear mean higher emission costs affecting the margins of technology companies, but above all the lack of a guarantee of power continuity, without which advanced training of AI models is simply impossible. As a result, the most valuable digital projects may bypass Polish soil, choosing countries that have been able to turn nuclear pragmatism into a competitive advantage.

    The clear change of course in Brussels, emphasising the role of small modular reactors and nuclear fusion, should be a signal for Polish business to mobilise. Since the European Union intends to allocate billions of euros to fusion research as part of the ITER project and to create guarantees for private investment in a new generation of nuclear technologies, Poland cannot afford to be a passive observer. It is necessary to create mechanisms that will allow Polish technology companies to actively participate in building a value chain for the nuclear sector. Fusion, although still seen as the horizon of the future, is today the only real answer to the energy blackmail facing the digital world.

    The geopolitical race for control of the earth’s ‘artificial sun’ is gathering pace. The United States, considering the development of fusion technology as a matter of national security, and China, heavily funding state nuclear projects, have created a bipolar power structure. Europe, if it does not want to become a mere client of these powers, must develop its own model of cooperation – a kind of ‘Eurofighter of Energy’. This comparison to a European fighter is not coincidental; building a modern fusion-based energy system requires an analogous scale of industrial, scientific and financial coordination. For Poland, participation in this endeavour is a chance to leapfrog several stages of technological backwardness and enter directly into the elite of tomorrow’s energy management countries.

    It is worth noting that nuclear fusion offers more than just electricity – it offers sovereignty. When defence systems, critical infrastructure and everyday communications are based on artificial intelligence, any interruption in energy supply becomes an attack vector. A stable, indigenous source of power, located close to decision-making and technology centres, is the best shield against external pressures. What this means for Polish business is that more pressure needs to be exerted to accelerate nuclear projects, not only in the traditional sense, but especially in the area of innovative SMR and fusion technologies that can be implemented closer to the industrial consumer.

    The diagnosis made by Ursula von der Leyen is painful but invigorating for the European debate. Europe, and Poland in particular, must reject preconceptions in favour of engineering realism. Artificial intelligence will not wait for energy systems to keep up with its needs; it will simply move to where energy is abundant, cheap and clean. Poland, facing the historic challenge of building its first reactor, must understand that this is not a building project, but the foundation of a future digital power. Without the atom, the dream of Polish artificial intelligence will remain just a beautiful code written on servers that will be impossible to run. It is time for an honest analysis of the numbers and strategic shortcomings to become the impetus for building the energy sovereignty that will allow innovation to fully flourish on our soil.

  • OT cyber security, a new pillar of profitability for energy companies


    The economic landscape dictates an operating model for utilities companies that can be described as hybrid. On the one hand, these organisations have to implement complex, multi-year investment plans related to decarbonisation and energy transition. On the other hand, they are required to be able to respond immediately, almost instinctively, to unpredictable events: from sudden changes in commodity flows to economic sanctions to precision hacking attacks targeting critical infrastructure. In this context, operational resilience has ceased to be the domain of engineers and technicians, becoming one of the most important topics on the agenda of boards of directors.

    Energy has returned to the heart of the international chessboard, acting not only as a commodity, but above all as a tool of political pressure. Decisions on the direction of investments or network upgrades are closely correlated with the reconfiguration of global influence and the need to become independent from unstable suppliers. For the modern energy company, this means having the tools to simulate complex geopolitical scenarios in real time.

    Advanced analytics and predictive modelling are no longer just about squeezing extra margins out of existing assets. They have become strategic weapons to predict the impact of distant conflicts or regulatory changes on local supply stability. In the world of 2026, survival is ensured by organisations that can turn data into instant, accurate operational decisions. Information without the ability to execute it becomes mere costly ballast in this scenario.

    The price of hyperconnectivity and the trap of the attack surface

    The paradox of modern modernisation is that every improvement aimed at increasing efficiency simultaneously opens new doors for potential adversaries. Digitalisation and ubiquitous connectivity, encompassing even the most remote industrial assets connected by satellite, have dramatically expanded the so-called attack surface. OT infrastructure, which for years enjoyed the security of physical isolation, is now fully integrated into the global network.

    Geopolitically motivated attacks, industrial sabotage or intellectual property theft are no longer theoretical threats from consultancy reports. They are a daily reality faced by transmission system operators and power generators. This is why the cyber security of industrial environments has evolved from a technical compliance requirement into a necessary condition for business continuity. The security of physical assets today is inextricably linked to the security of the code that controls them. Every turbine, every transformer station and every smart meter is now an element of a digital front line, and their protection is as essential to profitability as the selling price of a unit of energy.

    From data to the dictates of automation

    Today’s challenge is no longer simply capturing information from sensors and SCADA systems. The challenge is to create an architecture capable of autonomous or assisted decision-making on a micro and macro scale. Predictive maintenance, management of smart grids or optimisation of energy resources increasingly rest on the shoulders of algorithms capable of predicting failures before they actually occur.

    However, there is an important risk that is often forgotten in enthusiastic visions of digital transformation. Artificial intelligence, lacking the foundation of reliable, properly managed data, can become a catalyst for errors instead of an eliminator of them. Implementing advanced algorithms without first sorting out the data layer is a straightforward route to amplifying inefficiencies. The leaders in the energy sector in 2026 are those who have understood that the battle for competitive advantage is played out on the quality of the data architecture. The others will be left with costly, monolithic systems that, despite their modernity, do not generate real added value.

    The end of the traditional model

    In parallel with the data revolution, we are seeing the twilight of the traditional business software model. Dubbed the ‘SaaS-apocalypse’, the phenomenon reflects the retreat of organisations from heavy, closed software packages in favour of more flexible and composable architectures. IT departments of leading energy companies are moving away from the integration of rigid platforms to the orchestration of specialised microservices.


    This change is deeply strategic. It allows energy operations to be much more closely aligned with corporate strategy. Instead of adapting business processes to the limitations of their existing systems, companies are building their own technology ecosystems that are perfectly tailored to their specific needs and risk profile. This ‘composable’ architecture provides an agility that, in the face of sudden market shocks, is more valuable than the stability guaranteed by the major software vendors.

    Innovation as a stabiliser, not an experiment

    Innovation can no longer be regarded as an isolated experiment conducted on the margins of the main activity. It must be a stabilising element of the organisation. Indeed, the real challenge for management is not simply to adopt yet another technological innovation, but to precisely define where it will be applied, ensure data protection and guarantee operational security in an environment of permanent uncertainty.

    Innovation in 2026 is about building bridges between old, proven systems and new technologies in a way that does not compromise scalability and security. In this scenario, technology acts as a stabiliser, ensuring that processes and systems are harmonised with each other from day one of integration or merger.

  • In 2026, the lack of an ESG strategy is a real financial risk – Przemysław Brzywcy, Polenergia Fotowoltaika


    While energy stock market quotations may give an illusory sense of stability, the reality for entrepreneurs is shaped by expenditure on grid upgrades and the stringent requirements of Western contractors. In this new reality, photovoltaics and energy storage are becoming a critical tool for optimising the bottom line.

    We talk to Przemysław Brzywcy, CEO of Polenergia Fotowoltaika, about whether Polish companies are ready to ‘move energy over time’, why the lack of a low-carbon strategy may cut off access to capital and how to realistically secure a budget in 2026.

    Brandsit: In today’s market reality, is the transition to green energy still mainly a matter of image-building for the company, or is it already a hard cost optimisation that defends itself in the financial results?

    Przemysław Brzywcy: Just looking at the quotations of the Polish Power Exchange over the last two or three years, one might get the impression that energy prices have fallen and the topic is no longer pressing. However, this picture is deceptive.

    In parallel to the price of energy itself, distribution charges are rising steadily, and far faster than inflation. This is due to the need for huge investments in the modernisation and expansion of the grid, which are necessary in order for the system to accommodate increasing amounts of renewable sources. These outlays are then naturally passed on to the grid users.

    In addition, large-scale RES projects under ERO auctions and contracts for difference are coming into the system. When these installations come on stream, the costs of operating the system will also be spread across all consumers.

    As a result, companies do not just pay for the ‘price of energy from the exchange’, but for the whole system. Therefore, investments in photovoltaics and energy storage cease to be an image element and become a very rational tool for cost optimisation, which can be seen in the financial results in real terms.

    Brandsit: Combining energy storage with dynamic tariffs sounds promising, but requires a change in thinking about power consumption. Are Polish companies technologically ready to automatically control their energy consumption depending on the instantaneous price of energy, and how do storages help in this?

    P.B.: This does indeed require a change in approach, but technologically Polish companies are increasingly better prepared for this. Many plants already have energy monitoring and management systems in place, and their integration with energy storage and algorithms that react to market prices is not a technological barrier today, but rather a matter of a business decision.

    In my opinion, this is one of the most underestimated directions for optimising energy costs. Companies very often have an unstable demand for power, which generates additional costs, overrun charges and the risk of price fluctuations. Energy storage makes it possible to compensate for this instability by stabilising the consumption profile.

    “Integration with energy storage and market price-responsive algorithms is not a technological barrier today, but rather a business decision issue.”

    Tariffs linked to the energy market, on the other hand, offer the opportunity to buy electricity at times of very low or even negative energy prices and to reduce consumption when prices are highest. You could say that we ‘transfer’ energy over time.

    This allows the company to simultaneously take advantage of market opportunities and hedge against price spikes. In practice, it is a solution that, when properly designed, can significantly improve the economics of a company’s overall energy consumption and increase its cost predictability.
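
    As a rough illustration of this ‘transfer of energy over time’, the sketch below charges an assumed battery in the cheapest hour of a dynamic tariff and discharges it in the most expensive one. All prices, the capacity and the efficiency are invented for illustration; a real controller would optimise over a full forecast horizon.

    ```python
    # Minimal sketch of price-driven time-shifting with an energy store.
    # Hourly prices (PLN/kWh), battery capacity and round-trip efficiency
    # are illustrative assumptions, not market data.

    hourly_prices = [0.30, 0.22, -0.05, 0.10, 0.18, 0.45,
                     0.80, 0.95, 0.70, 0.40, 0.35, 0.50]
    capacity_kwh = 100    # usable battery capacity (assumed)
    efficiency = 0.9      # round-trip efficiency (assumed)

    buy_hour = min(range(len(hourly_prices)), key=hourly_prices.__getitem__)
    sell_hour = max(range(len(hourly_prices)), key=hourly_prices.__getitem__)

    cost = hourly_prices[buy_hour] * capacity_kwh
    revenue = hourly_prices[sell_hour] * capacity_kwh * efficiency
    print(f"charge in hour {buy_hour} at {hourly_prices[buy_hour]:.2f} PLN/kWh, "
          f"discharge in hour {sell_hour} at {hourly_prices[sell_hour]:.2f} PLN/kWh")
    print(f"gross spread per cycle: {revenue - cost:.2f} PLN")
    ```

    Note that the cheapest hour in this toy profile has a negative price: charging then is itself revenue, which is exactly the market opportunity described above.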

    Brandsit: Are you observing a trend where Polish companies are having to switch to green energy not by their own choice, but under pressure from Western contractors who require suppliers to report a zero carbon footprint?

    P.B.: Yes, this is a very clear and widespread trend that we are seeing with our customers, especially those with export operations in Western European markets. You can see it strongly in the food industry, but also in the whole supply chain linked to the automotive sector.

    Increasingly, Polish companies involved in these supply chains have to report their carbon footprint and demonstrate the share of energy from renewable sources. In many industries, this is no longer an advantage, but a condition for keeping the contract.

    At the same time, entrepreneurs are seeing more and more clearly that this is not just a response to ESG requirements or contractor expectations. Properly designed RES-based solutions simply pay off. Green energy is ceasing to be an image element and is becoming a source of real cost and competitive advantage.

    That is why companies today combine two aspects. On the one hand, they are building their credibility with their foreign partners, and on the other, they are making an investment that is defended by a very concrete business case.

    Brandsit: To what extent does the lack of an implemented ESG strategy and the use of conventional energy sources make it difficult for companies today to obtain cheap investment credit or attract investors?

    P.B.: To a very large extent. Financial institutions and investment funds are increasingly evaluating companies not only through the prism of financial performance, but also through how they manage environmental and energy risks. The lack of an ESG strategy, including the lack of energy transition measures, is having a real impact on financing conditions today.

    “The lack of an ESG strategy, including the lack of energy transition activities, is having a real impact on financing conditions today.”

    It is also of paramount importance for exports. Foreign contractors pay attention to how goods are produced and what energy is used in the manufacturing process. This ceases to be an element of image and becomes an element of assessing business credibility.

    Brandsit: What one key step would you advise company managements who want to not only meet the new regulatory requirements in 2026, but above all to realistically protect their budgets against rising energy costs?

    P.B.: First of all, ask for a solid business case. Boards should talk to technology providers in a very simple way – ‘how will this investment pay off in my organisation’. Whether we are talking about solar PV, energy storage or a combination of both.

    It is worth clearly defining acceptable criteria for the rate of return and evaluating these solutions from this angle. Photovoltaics or energy storage are not gadgets or fashion items today. They are real, quantifiable tools that allow a company to stabilise its energy costs and improve its bottom line.

  • Sisyphean work in Silicon Valley. Physics teaches humility about AI


    Cloud computing has for years effectively hidden the physical dimension of the technology, creating the illusion of infinite and seamlessly scalable resources. Generative AI is brutally tearing down this curtain. With the increasing complexity of models and the popularity of artificial intelligence, software development inevitably collides with the hard laws of physics and thermodynamics. Why do hardware engineers today resemble the mythical Sisyphus, and what does the looming technological token explosion mean for the operational strategies and cloud budgets of today’s enterprises?

    An end to the illusion of limitless computing space

    The early popularisation phase of generative artificial intelligence shaped an image in the market mindset of a technology that was lightweight, ubiquitous and almost free. However, consumer chatbots, fluently churning out text or polishing email correspondence, were merely an impressive shop window. As analysis shows, the real business revolution, and the only way to generate a return on trillions of dollars of investment, lies in an entirely different area. The world of technology is moving inexorably towards a reality in which agent-based artificial intelligence becomes the operational foundation of businesses.

    The shift from simple text assistants to autonomous agents is a fundamental paradigm shift. It marks an evolution from single user queries to continuous multi-step inference and the execution of complex workflows in the background. Enterprises will soon be making tens of thousands of system calls to large language models every day. This phenomenon is no longer just a fascinating scientific experiment, but is becoming a process of scale and gravity typical of heavy industry, where process optimisation plays a central role.

    The brutal mathematics of floating point operations

    Understanding the challenges ahead requires looking under the hood of powerful language models. Each word generated, or more precisely each token, carries a measurable physical computational cost. The architecture of today’s systems typically requires two floating-point operations for each model parameter in the response generation process. The scale of this is striking when you consider that the most advanced market models operate on one to two trillion parameters. This means that even with highly sophisticated optimisation techniques, generating a single token forces the real-time conversion of between one hundred and two hundred billion variables.

    What’s more, the industry is dynamically shifting towards models based on deep reasoning, in which the context window is dramatically expanded. Agent-based artificial intelligence analyses problems along multiple threads, searching for optimal solution paths before formulating a final answer and executing an action. As a result, the number of tokens per query increases exponentially, often by a factor of ten or more. Referring to this phenomenon as a token explosion is not a literary exaggeration, but a chilling description of the digital reality to come.
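
    The arithmetic above is easy to verify with a few lines of Python. The two-operations-per-parameter rule and the trillion-parameter range come from the text; the token counts for a short answer versus a deep-reasoning agent run are illustrative assumptions.

    ```python
    # Back-of-the-envelope compute cost of token generation. Rule of thumb
    # from the text: ~2 floating-point operations per model parameter per
    # generated token. Token counts below are illustrative assumptions.

    FLOPS_PER_PARAM_PER_TOKEN = 2

    def generation_flops(n_params: float, n_tokens: int) -> float:
        """Total floating-point operations to generate n_tokens."""
        return FLOPS_PER_PARAM_PER_TOKEN * n_params * n_tokens

    params = 1.5e12  # a frontier-scale model, within the article's range

    for label, tokens in [("short answer", 500), ("agent run", 5_000)]:
        print(f"{label:>12}: {generation_flops(params, tokens):.2e} FLOPs")
    # The tenfold jump in tokens translates one-to-one into compute: this
    # is the 'token explosion' in numbers.
    ```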

    Energy consumption as a new unit of account in business

    The consequence of the aforementioned data growth is a return to the fundamentals of economics, where energy intensity becomes the main barrier. According to market analysts, energy consumption, measured in watts per single query, directly determines the profitability of the entire technological sector. The generative business model of artificial intelligence is unique in this respect; the target net margin here depends as much on ingenious code as it does on the cooling costs of the server room and a stable power supply.

    Currently, these costs are largely absorbed by model developers, leading to a situation where it is not uncommon for technology giants to subsidise query processing, relying on capital from investors. This model is not likely to stand the test of time in a mature market. The real beneficiaries of the ongoing investment boom today are not the developers of intelligent algorithms, but infrastructure providers, advanced chip manufacturers and data centre builders. The owners of language models do not have a profit machine, but a powerful mechanism in which capital burns in anticipation of the moment when massive use at the corporate level will offset the astronomical cost of maintaining servers.

    The myth of Sisyphus in the modern server room

    The market situation is forcing an unprecedented effort on the part of hardware manufacturers. The semiconductor industry is operating in a state of constant mobilisation, striving to increase the cost efficiency of graphics processing units, developing ever higher bandwidth memories and optimising the network architecture of cluster systems. Despite these colossal efforts, engineers working on hardware development today resemble the mythical Sisyphus.

    This phenomenon can be likened to a kind of Jevons paradox transposed to the digital world. Whenever a technological boulder is successfully rolled to the top of a mountain by creating a new, faster and more energy-efficient generation of processors, software developers immediately increase the complexity of their models. The boulder falls with a bang at the foot and the work begins again. As artificial intelligence continues to expand its analytical and operational capabilities, the quest for full cost optimisation seems a horizon that is constantly receding. Computational requirements are growing faster than the ability to handle them cheaply, representing an uncompromising clash between unlimited ambition and the limits imposed by semiconductor physics.

    Survival architecture, or cost engineering as an operational priority

    Awareness of the technological and physical considerations described is crucial for planning long-term business strategy. The end of the era of free experimentation means that target implementations of artificial intelligence systems in the corporate environment will have to be subject to rigorous financial and architectural evaluation. The implementation of agent-based systems will bring organisations leaps in productivity by automating complex workflows, but these benefits will be wiped out in a fraction of a second if the toll on computing resources gets out of hand.

    Modern IT infrastructure management will be inextricably linked to the implementation of advanced cloud cost engineering. Instead of routing every trivial task to the most resource-intensive models with trillions of parameters, organisations will be forced to design agile hybrid architectures. Intelligent process routing will involve delegating simple operations to much smaller, highly specialised and energy-efficient models. The costly computing power of the largest market systems will in turn be precisely reserved exclusively for tasks requiring the highest level of abstract inference.
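
    A minimal sketch of what such routing could look like is shown below. The model names, relative costs and the keyword heuristic are invented for illustration; a production system would typically use a learned classifier rather than string matching.

    ```python
    # Toy 'intelligent process routing': cheap, simple requests go to a
    # small specialised model; tasks that look like multi-step reasoning
    # are reserved for the expensive frontier model. All names, costs and
    # heuristics are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class Model:
        name: str
        cost_per_1k_tokens: float  # relative cost units (assumed)

    SMALL = Model("small-specialist", 0.1)
    LARGE = Model("frontier-reasoner", 5.0)

    REASONING_HINTS = ("why", "plan", "prove", "analyse", "multi-step")

    def route(prompt: str) -> Model:
        """Crude heuristic: long prompts or reasoning keywords go large."""
        needs_reasoning = len(prompt) > 500 or any(
            hint in prompt.lower() for hint in REASONING_HINTS
        )
        return LARGE if needs_reasoning else SMALL

    for p in ("Summarise this invoice line.",
              "Plan a multi-step migration of our billing system."):
        print(f"{route(p).name:>17} <- {p!r}")
    ```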

    Understanding the physical, energy and economic limits of technology is becoming the new foundation for market advantage. Only those organisations that can harmoniously combine a bold vision of advanced automation with a cool, rigorous calculation of every watt consumed and token generated in the background will succeed in the target phase of artificial intelligence development.

  • AI infrastructure crisis: Lack of electricians and engineers a major brake on the digital revolution


    In the common perception of executives, artificial intelligence appears as an ethereal, almost metaphysical entity. We see it through the prism of algorithmic elegance and the infinite scalability of the cloud, forgetting that every query sent to a language model initiates a cascade of events in the most material world possible. The latest market data forces us to brutally revise this digital idealism. For it turns out that the biggest brake on the modern economy is not a shortage of creative programmers, but hard infrastructure constraints: a lack of copper, a shortage of power in transmission networks and, most acutely, a dramatic shortage of manpower in professions that have so far rarely been on the agenda of technology company board meetings.

    The scale of this challenge is illustrated by the dynamics of energy forecasts. When, in just seven months, BloombergNEF analysts revise projected energy demand for data centres upwards by more than a third, it becomes clear that strategic planning in the IT sector has entered a terrain of high uncertainty. The projected 106 gigawatts of power consumption in the US infrastructure alone by 2035 is not just an engineering challenge, it heralds a new era in which computing power will become a scarce good, rationed by the physical capacity of transformers and the availability of technical staff.

    We are entering a period where the ‘fluidity’ of digital innovation is colliding with the ‘stickiness’ of real-world investment processes. Although the construction of AI data centres is progressing at an unprecedented pace, developers are encountering a glass ceiling that cannot be broken through with code optimisation. This problem is analysed by IEEE Spectrum, among others, pointing to a dangerous skills gap. While the labour market has been saturated with abstraction-layer specialists for years, the real technology base – server rooms, cooling systems and high-voltage networks – has begun to suffer from a chronic shortage of qualified structural, mechanical and electrical engineers.

    This paradigm shift is redefining the concept of ‘IT talent’. The traditional battle for developers is giving way to a much tougher battle for multi-tasking infrastructure operators. Data from the AFCOM report suggests that, for more than half of data centre managers, it is operations staff and physical security specialists who are the bottleneck to growth today. We need experts who can manage critical high-density liquid cooling systems with the same agility as their software colleagues manage databases. Unfortunately, the need for these competencies is growing at a time when the global electricity grid is undergoing its most serious upgrade in decades, leaving the AI sector to compete with the renewable energy and industrial construction industries for the same engineers.

    In response to these deficits, technology hegemons such as Microsoft, Google and Amazon are beginning to take on roles traditionally assigned to state education systems. The creation of their own academies and partnership programmes with technical schools is not a sign of philanthropy, but a pragmatic attempt to secure the competence supply chain. There is a lesson here for medium-sized market players about the need for a deep review of business resilience strategies. The success of AI deployment will increasingly depend on the ability to secure the physical resources and technical competencies that guarantee the continuity of systems in a world with rising energy and water costs.

    Ultimately, the issue of sustainability ceases to be the domain of PR departments and becomes the foundation of risk analysis. The increasing consumption of water to cool servers and the drastic differences in the carbon footprint of different geographical regions make the choice of infrastructure partner an ethical and financial decision. A lack of awareness regarding where the energy powering our AI models is coming from and who is looking after their physical performance can become a costly oversight. The future of business belongs to those leaders who can look beyond the monitor screen and see that their digital ambitions are inextricably intertwined with the fate of the engineer working on high-voltage systems.

    For years, we have lived in a paradigm where software has ‘eaten the world’, suggesting that hardware is merely a cheap and replaceable base. The AI revolution is reversing this vector. Today, it is the availability of physical infrastructure that dictates the pace of digital innovation. For business leaders, this means going back to the roots of operational planning: securing scarce resources, investing in people with specific physical skills and taking responsibility for the entire technology lifecycle – from water intake in cold storage to the energy mix of the local grid. It is a lesson in humility towards the physical world that will ultimately determine who emerges victorious from the race for supremacy in the age of algorithms.

  • Data centre market against the wall. Lack of power hinders digital transformation


    As recently as two years ago, at the height of AI fever in 2024, there was only one question being asked in boardrooms: ‘Where to get Nvidia processors?’ Chip availability was the bottleneck that dictated the pace of technological development. Today, in January 2026, the situation has changed dramatically. Hardware supply chains have cleared, and distributors’ warehouses are full of the latest Blackwell and Rubin chips. Yet new data centre investment is stalling.

    The question of 2026 is no longer “Do you have the equipment?”, but “Where will you connect it?”. Power availability has replaced silicon availability as the main operational risk factor. We are entering an era where the success of an AI project is determined by the old analogue power infrastructure rather than digital code.

    A new bottleneck. The geopolitics of the socket

    The average waiting time for a new power connection of more than 10 MW in Europe’s key hubs has lengthened from 18 months in 2023 to a shocking 4-5 years today. This means that a decision to build a server room taken today will only materialise operationally around 2030-2031. For the technology industry, this is an eternity.

    The problem hits the so-called FLAP-D market (Frankfurt, London, Amsterdam, Paris, Dublin) hardest. These traditional data capitals are energy saturated. Grid operators in the Netherlands or Ireland are refusing to issue new connection conditions, citing the risk of destabilising the national energy systems.

    In this landscape, Warsaw – emerging in recent years as a key hub for Central and Eastern Europe – has become a victim of its own success. Investments by giants such as Google, Microsoft or local cloud operators have rapidly consumed the available power reserves in the Warsaw agglomeration. Polskie Sieci Elektroenergetyczne (PSE) is facing a physical challenge: the networks in the capital area are not able to accommodate further gigawatt loads without a thorough modernisation that will take years. The result? Investors are forced to look for alternative locations: to the north of Poland, where offshore wind power is close at hand, or to southern Europe, with its abundant solar resources.

    AI physics: Why do old server rooms ‘melt cables’?

    The energy crisis also has a second, technical layer. Even if a company has space in a server room built in 2020, it often cannot install modern AI infrastructure there. This is due to a drastic change in the so-called power density (Rack Density).

    In traditional IT, the standard was 5-8 kW of power consumption per server rack. Power and cooling systems were designed for these values. Today’s AI clusters, based on the Nvidia Blackwell architecture or successors, require between 50 and even 100 kW per rack.

    Trying to put such infrastructure into an ‘old’ Data Centre (from 5 years ago) ends in failure. The building cannot deliver that many amps in one place and, more importantly, it cannot dissipate the heat generated. Trying to cool a 100 kW cabinet with traditional air (precision air conditioning) is akin to trying to cool a racing engine with an office fan. It is physically impossible and uneconomic.
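
    The office-fan comparison can be checked with the standard heat-transport relation Q = ṁ·c·ΔT. The rack load comes from the figures above; the 15 K air temperature rise is an assumed operating condition.

    ```python
    # Airflow needed to remove 100 kW from a single rack with air alone.
    # Physical constants are standard; the temperature rise is assumed.

    q_watts = 100_000   # heat load of one AI rack (from the article)
    cp_air = 1005       # specific heat of air, J/(kg*K)
    delta_t = 15        # assumed inlet-to-outlet temperature rise, K
    rho_air = 1.2       # air density, kg/m^3

    mass_flow = q_watts / (cp_air * delta_t)   # kg/s
    volume_flow = mass_flow / rho_air          # m^3/s

    print(f"required airflow: {mass_flow:.1f} kg/s "
          f"= {volume_flow * 3600:,.0f} m^3/h")
    # ~6.6 kg/s, i.e. roughly 20,000 cubic metres of air per hour through
    # one cabinet. Water carries heat ~3,500x more densely per unit
    # volume, which is why the industry is moving to liquid.
    ```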

    The cooling revolution: The end of the air era?

    Consequently, 2026 is the moment of the ultimate triumph of Liquid Cooling technology. What was until recently the domain of overclocking enthusiasts and cryptocurrency miners has become the corporate standard.

    Every new Hyperscale development commissioned this year is being designed to a hybrid or all-liquid standard. Two technologies dominate:

    • Direct-to-Chip (DLC): Where the cooling liquid is piped directly to the water blocks on the CPUs and GPUs. This solution has become a warranty requirement for the latest servers.
    • Immersion Cooling: Where entire servers are submerged in tanks filled with a special dielectric (non-conductive) fluid.

    This change is driven not only by physics, but also by EU regulations (EED – Energy Efficiency Directive). Liquid cooling is much more energy efficient and, moreover, allows heat recovery. The fluid leaving the server has a temperature of 60-70°C, which allows the Data Centre to be plugged directly into the municipal district heating network. In 2026, server rooms become de facto digital combined heat and power (CHP) plants, heating office buildings and housing estates, which is key to obtaining environmental permits.
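
    The district-heating claim can be sized roughly as well. In the sketch below, the campus IT load, the recoverable fraction and the per-household heat demand are all illustrative assumptions; essentially all electrical input to the servers leaves as heat.

    ```python
    # Rough sizing of heat recovery from a liquid-cooled data centre
    # feeding a district heating network. All inputs are assumptions.

    it_load_mw = 10.0           # assumed campus IT load
    recoverable_fraction = 0.8  # assumed share capturable at 60-70 C
    avg_home_demand_kw = 2.0    # assumed average household heat demand

    heat_mw = it_load_mw * recoverable_fraction
    homes = heat_mw * 1_000 / avg_home_demand_kw
    print(f"{heat_mw:.1f} MW of recoverable heat ≈ {homes:,.0f} households")
    # A single mid-sized campus as a de facto digital CHP plant.
    ```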

    The economics of scarcity: Power Banking and the atom

    The shortage of capacity has triggered a sharp rise in prices. Rates for colocation (renting space for servers) in Warsaw and Frankfurt have risen by 30-40% year-on-year. Customers are no longer negotiating prices; they are bidding for who will be the first to sign a contract for ‘powered racks’.

    The strategy of developers has also changed. In the real estate market, the phenomenon of ‘Power Banking’ is making waves. Investment funds are buying up old, bankrupt factories, steelworks or industrial plants. They are not interested in the buildings (often destined for demolition), but in the active, high power allocations assigned to the plot. A ‘power right’ is bought to put up containers with AI servers on the site of a former foundry.

    At the top of the investment pyramid, we see a shift towards nuclear power. Following in the footsteps of Microsoft and Amazon (high-profile 2024/2025 deals), European players are also looking to power their campuses from small modular reactors (SMRs) or via direct power purchase agreements (PPAs) with existing nuclear power plants. The IT industry has realised that RES (wind and solar) are too unstable for AI, which has to ‘learn’ 24/7 under a constant load.

    A new indicator of success – Time-to-Power

    For Chief Information Officers (CIOs) planning strategies for 2026 and 2027, there is one key lesson: Hardware is easy, electricity is hard.

    The traditional model, in which servers are ordered first and then space is sought for them, is dead. Today, the process needs to be reversed. Booking Data Centre capacity 12-24 months in advance is a must. The Time-to-Market (time to deploy a product) indicator has been replaced by Time-to-Power (time to get power).

    The digital revolution today depends 100 per cent on analogue infrastructure. Without massive investment in transmission networks and new generation sources, artificial intelligence in Europe will hit a glass ceiling – not for lack of data or algorithms, but for the mundane lack of a socket to plug it into.

  • Billions of euros are slipping through their fingers. European SMEs are sleeping through the energy revolution


    Although the small and medium-sized enterprise (SME) sector is the backbone of the European economy, generating more than half of the EU’s GDP, its role in the green revolution remains surprisingly marginal. The latest report published by the Solar Impulse Foundation and Schneider Electric, entitled Unlocking SME Competitiveness in Europe, sheds light on an important paradox. Although these companies account for 99 per cent of all players in the market, only 11 per cent of them are making significant investments in sustainability. This sluggishness could cost the European economy billions of euros in potential value lost by 2030.

    The authors of the study point out that the key to unlocking this potential is the synergy of electrification and digitalisation. Estimates are promising: the implementation of appropriate technologies would reduce the sector’s energy consumption by 20 to 30 per cent and, in some industries, reduce CO2 emissions by up to 40 per cent. However, the problem lies in the approach to strategy. The report makes a clear distinction between ad hoc measures and long-term plans. While as many as 93 per cent of companies are taking single efficiency measures such as replacing equipment, only a quarter have developed a comprehensive decarbonisation strategy.

    The market needs systemic solutions, not just one-off point deployments. Digital integration is now a matter of hard competitiveness and cost control, not just image. Indeed, smaller players, due to their limited bargaining power, are extremely vulnerable to energy price shocks, making them ideal candidates for the adoption of models such as Energy-as-a-Service.

    However, this transformation faces infrastructural barriers. Europe, wanting to increase the electrification rate to 32 per cent by the end of the decade, has to face the fact that 40 per cent of electricity grids are over 40 years old. This modernisation requires an investment of €584 billion. Against this backdrop, the report calls on policymakers to take urgent action, including simplifying permitting procedures and revising tax directives, which could lower investment risks for the SME sector and accelerate the adoption of modern energy technologies.

  • AI is devouring power. The real winner of the boom will be energy storage

    AI is devouring power. The real winner of the boom will be energy storage

    The AI boom is generating a massive demand for computing power, but its biggest bottleneck is proving to be access to electricity. Analysts at UBS Securities point out that this hunger for power is likely to set off a ‘boom cycle’ for the energy storage sector over the next five years.

    The problem is that the only segment of power generation that is expected to grow significantly in the US is renewables. As wind and solar farms generate intermittently, the grid urgently needs more batteries to store surpluses and smooth out fluctuations in supply.

    According to UBS forecasts, global demand for energy storage could increase by up to 40% year-on-year as early as 2026. The US market is key for global players, especially Chinese manufacturers, who already control a 20% share there. The US remains one of the highest-margin markets.

    However, it is not the US that will grow fastest. UBS predicts that emerging markets – the Middle East, Latin America, Africa and Southeast Asia – could record growth rates of between 30% and even 50% per year.

    For Chinese exports, US policy remains the biggest risk. Analysts point to regulations, such as those contained in President Trump’s ‘One Big Beautiful Bill’, which limit Chinese participation in the US energy sector.

    Meanwhile, in China itself, the storage market is being driven by the increasing liberalisation of energy prices. Independent storage projects are becoming profitable through arbitrage – buying energy when it is cheap and selling it at peak demand. A price differential of 0.4 yuan (about US$0.06) per kilowatt hour is already enough to make these projects profitable. UBS also expects Chinese provinces to introduce additional incentives such as capacity payments, rewarding battery operators for the mere availability of their resources to the grid.
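    The arbitrage logic is easy to make concrete. A minimal sketch, assuming a hypothetical 100 MWh standalone unit, roughly one cycle per day and an 88% round-trip efficiency – only the 0.4 yuan/kWh spread comes from the article:

    ```python
    # Illustrative arbitrage economics for a standalone storage project.
    # Only the 0.4 yuan/kWh spread is from the article; the rest is assumed.

    capacity_kwh = 100_000      # hypothetical 100 MWh unit
    price_low = 0.3             # assumed off-peak purchase price, yuan/kWh
    price_high = 0.7            # assumed peak selling price (0.4 yuan spread)
    round_trip_eff = 0.88       # assumed battery round-trip efficiency
    cycles_per_year = 330       # assumed roughly one cycle per day

    cost = capacity_kwh * price_low                       # energy bought per cycle
    revenue = capacity_kwh * round_trip_eff * price_high  # energy actually delivered
    margin = (revenue - cost) * cycles_per_year
    print(f"Gross annual margin: {margin / 1e6:.1f}M yuan")
    ```

    Even after round-trip losses, the assumed spread leaves a healthy gross margin per cycle – the same mechanism that capacity payments would further reward.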

  • Apple invests in RES in Poland – solar and wind farms by 2030

    Apple invests in RES in Poland – solar and wind farms by 2030

    Apple is accelerating European investment in renewable energy sources, with Poland at the centre of this strategy. The company has announced that it wants to fully offset the energy consumed by users of its devices in Europe by 2030. This is an ambitious goal – especially in countries such as Poland, where the energy industry is still based on coal.

    The company is launching projects with a total capacity of 650 MW – with wind and solar farms in Poland, Romania, Greece, Italy, Latvia and a new farm already in operation in Spain. Total production is expected to reach more than one million megawatt hours per year, to offset the charging of iPhones, Macs and iPads by European users. The budget? More than $600 million.
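    A quick plausibility check ties the two headline numbers together: 650 MW running flat out would generate about 5.7 TWh a year, so ‘more than one million MWh’ implies an average capacity factor of roughly 18% – typical for a portfolio mixing photovoltaics and onshore wind:

    ```python
    # Plausibility check: 650 MW of wind and solar vs. "more than one
    # million MWh per year" of output.

    installed_mw = 650
    max_mwh = installed_mw * 8760            # ~5.7 million MWh if always at full power
    capacity_factor = 1_000_000 / max_mwh
    print(f"Implied average capacity factor: {capacity_factor:.0%}")   # ~18%
    ```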

    Poland: Test of the credibility of Apple’s transformation

    The most interesting case in this puzzle is Poland – a country with one of the most carbon-intensive power grids in Europe. Here, Apple is supporting the construction of the 40 MW Ecoenergy photovoltaic farm, which is due to start operations later this year. This is not just about symbolism, however. The choice of Poland is a signal that Apple wants to invest where the real impact on reducing emissions is greatest, and not just where renewable energy is available ‘off the shelf’.

    It’s also a gesture to local regulators – pressure for corporate decarbonisation is growing and projects like this can become an argument in the discussion about technology investment in the region.

    Central Europe as a Big Tech energy laboratory

    Apple is simultaneously investing in a 99 MW wind farm in Romania (in partnership with Nala Renewables and OX2) and has signed its first corporate PPA in Latvia – 110 MW of solar from European Energy. This is important, as the PPA market in Central and Eastern Europe has so far lagged behind Western Europe.

    A 131 MW full-scale farm, purpose-built for Apple, is already in operation in Segovia, Spain. The company is not revealing whether the power will go directly to its infrastructure or be used for market balancing, but one thing is clear: Apple is starting to think like an energy operator.

    Why does it matter?

    Product use – device charging – accounted for 29% of Apple’s total emissions in 2024. This is the area over which Big Tech has the least control, because it depends on the energy mix of individual countries. That’s why Apple doesn’t wait for governments – it builds its own sources.

    Looking ahead to 2030, Apple and its suppliers have already procured 19 GW of RES power for factories and data centres. Now the company is transferring this model to end users. It’s a precedent that could have a knock-on effect – Google and Microsoft also declare climate neutrality, but neither yet conducts such extensive consumer-side energy balancing.

    For Poland, Apple’s projects are more than PR. It is a signal that the country – despite its coal heritage – can become an arena for corporate investment in renewable energy. If today Apple finances photovoltaic farms, tomorrow it may look to Poland for data centres or service hubs – provided the energy really does go green.

  • IoT in manufacturing. How 5 key technologies are changing the face of industry

    IoT in manufacturing. How 5 key technologies are changing the face of industry

    Modern manufacturing, despite advanced automation, still faces fundamental challenges: unplanned downtime, wasted raw materials and increasing pressure for flexibility. The answer to these problems is not the next faster machine, but the intelligent use of data. This is where the Industrial Internet of Things (IIoT) becomes the bloodstream of the fourth industrial revolution, transforming isolated ‘islands’ of automation into a single, cohesive and self-regulating organism.

    The IIoT market, valued at $119.4bn in 2024, is expected to grow to $286.3bn by 2029, demonstrating the scale of this transformation. This is not a niche trend, but a fundamental shift that is redefining the concept of efficiency. Here are five key IIoT solutions that are already offering the most measurable return on investment and building competitive advantage on the shop floor.

    1. Predictive maintenance (PdM): no more unplanned downtime

    Traditional maintenance, based on reactive repairs or rigid schedules, is inefficient and costly. Predictive maintenance (PdM) changes this paradigm by using IoT sensors to continuously monitor the condition of machines – their vibration, temperature or energy consumption. The collected data is analysed by artificial intelligence algorithms, which predict upcoming failures with high precision. This ensures that service interventions are only made when they are actually needed, with spectacular results. PdM implementations can reduce unplanned failures by 70%, reduce downtime by 35% and cut maintenance costs by 25%. Some automotive companies have achieved a full return on investment (ROI) in less than three months.
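    What such a system does can be illustrated with a toy anomaly detector that flags any vibration reading drifting far from the recent baseline. Real PdM platforms use trained models over multiple sensor channels; the rolling z-score below is a minimal, hedged stand-in with simulated data:

    ```python
    import numpy as np

    def vibration_alerts(readings, window=100, z_threshold=4.0):
        """Flag readings that deviate strongly from the recent baseline -
        a toy stand-in for the trained models real PdM platforms use."""
        alerts = []
        for i in range(window, len(readings)):
            baseline = readings[i - window:i]
            mu, sigma = baseline.mean(), baseline.std()
            if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
                alerts.append(i)
        return alerts

    # Simulated stream: a stable bearing whose vibration amplitude creeps up.
    rng = np.random.default_rng(42)
    healthy = rng.normal(1.0, 0.05, 1000)
    degrading = rng.normal(1.6, 0.2, 50)
    print(vibration_alerts(np.concatenate([healthy, degrading]))[:5])
    ```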

    2. The digital twin: a virtual factory in the service of optimisation

    The Digital Twin is a dynamic, virtual replica of a machine, production line or even an entire factory that is constantly updated with real-time data from IoT sensors. Unlike a static simulation that asks “what if?”, the Digital Twin answers the question “what is happening now?” and “what will happen in the future?”. This allows the testing of any scenario – a change in the layout of the shop floor, the acceleration of a robot or a new production schedule – in a virtual environment, without the risks and costs associated with experiments on a physical line. The potential savings are enormous; Unilever generated $2.8 million in savings in one pilot implementation using this approach.
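    The distinction between ‘what is happening now’ and ‘what will happen’ can be sketched as a small object that mirrors live telemetry and exposes a projection method. The thermal model and field names below are illustrative assumptions, not any vendor’s actual API:

    ```python
    from dataclasses import dataclass

    @dataclass
    class MotorTwin:
        """Toy digital twin: mirrors live telemetry and projects it forward."""
        temperature_c: float = 40.0
        rpm: float = 0.0

        def sync(self, payload: dict) -> None:
            # "What is happening now": overwrite state with live sensor data.
            self.temperature_c = payload["temperature_c"]
            self.rpm = payload["rpm"]

        def project_temperature(self, minutes: int, load_factor: float) -> float:
            # "What will happen": a crude linear thermal model stands in
            # for the physics or learned models a real twin would embed.
            heating_rate = 0.05 * load_factor   # assumed °C per minute
            return self.temperature_c + heating_rate * minutes

    twin = MotorTwin()
    twin.sync({"temperature_c": 68.0, "rpm": 1450})
    # Test a "what if we raise the load?" scenario without touching the line.
    print(twin.project_temperature(minutes=30, load_factor=1.2))   # 69.8
    ```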

    3. Intelligent quality control: machine vision and AI

    Manual quality control is prone to error, fatigue and subjectivity. Machine vision systems, using industrial cameras and deep learning algorithms, bring control to a level of precision and repeatability unattainable by humans. They can identify the tiniest defects in a fraction of a second, verify assembly completeness or read serial codes, automatically rejecting defective products from the line. ROI from such deployments is often achieved in less than a year, and the reduction in defects can exceed 90%. However, the real value lies in the so-called ‘hidden ROI’: the data from the vision system becomes a source of information for optimising the entire process, allowing the causes of defects to be eliminated, not just their effects.

    4. Integrated energy management

    In an era of rising energy prices and pressure for sustainability, efficient management of energy consumption is becoming a priority. IoT-based systems allow the granular monitoring of power consumption by individual machines and lines in real time. This makes it possible to identify the most energy-intensive processes, detect anomalies (e.g. equipment drawing power during downtime) and optimise production schedules to avoid costly peaks in power consumption. Implementing such a system can reduce energy costs by up to 30% and significantly facilitates ISO 50001 certification.
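    The ‘equipment drawing power during downtime’ check is straightforward to express in code. A minimal sketch, with an assumed idle-power threshold and made-up machine names:

    ```python
    # Toy version of the anomaly check described above: flag machines
    # drawing significant power while the schedule says they are idle.
    # Threshold, machine names and readings are illustrative assumptions.

    IDLE_THRESHOLD_KW = 2.0
    schedule = {"press_01": "idle", "cnc_07": "running", "oven_03": "idle"}
    live_power_kw = {"press_01": 14.5, "cnc_07": 41.0, "oven_03": 0.4}

    for machine, state in schedule.items():
        if state == "idle" and live_power_kw[machine] > IDLE_THRESHOLD_KW:
            print(f"ANOMALY: {machine} draws {live_power_kw[machine]} kW while idle")
    ```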

    5. Comprehensive IIoT platforms: a single source of truth

    Data in a typical factory is scattered across many non-communicating systems (SCADA, MES, ERP), making it impossible to get the big picture. The IIoT platform acts as a central operating system that integrates data from the operational technology (OT) and information technology (IT) worlds, creating a ‘single source of truth’ for the entire organisation. This allows real-time automatic calculation of the key indicator of Overall Equipment Effectiveness (OEE), which combines availability, productivity and quality. Implementing such a platform is a strategic investment in agility, enabling the entire organisation to react faster to market changes and make decisions based on complete, reliable data.
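    OEE itself is simply the product of its three factors (what the article calls productivity is usually labelled performance). A worked example with illustrative shift figures:

    ```python
    def oee(availability: float, performance: float, quality: float) -> float:
        """Overall Equipment Effectiveness is the product of its three factors."""
        return availability * performance * quality

    # Illustrative shift: 7% downtime, line at 90% of nominal speed, 2% scrap.
    print(f"OEE = {oee(0.93, 0.90, 0.98):.0%}")   # ~82%
    ```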

    From technology to strategy

    The fourth industrial revolution is not a futuristic vision, but a set of concrete tools that are already generating real benefits today. The key to success, however, is a strategic approach. Rather than deploying technology for technology’s sake, companies should start by defining a key business problem and then selecting a solution that addresses it most effectively. Starting with a small-scale pilot project allows you to prove the value of implementation, calculate ROI and gain invaluable experience. Companies that invest in building a solid, data-driven foundation today will lead their industries tomorrow, redefining the notion of efficiency and innovation in manufacturing.

  • Megawatts to teraflops – how energy shapes AI hardware replacement cycles in the data centre

    Megawatts to teraflops – how energy shapes AI hardware replacement cycles in the data centre

    The development of artificial intelligence is not just about computational advances. Training linguistic and generative models requires thousands of GPU/TPU accelerators that devour tens of megawatts of power. As a result, electricity consumption in data centres is rising – in Ireland, data centres consumed as much as 22% of the country’s total electricity in 2024. Such a share is a challenge for energy suppliers and DC operators, who must reconcile growing demand with ever higher prices while reducing CO₂ emissions.

    This article compares energy prices in three key European data hubs – Frankfurt, Dublin and Warsaw – with the energy efficiency of successive generations of AI accelerators. On this basis, we analyse how operational costs and technological advances shorten or lengthen the lifecycle of AI hardware.

    Energy prices in different hubs

    Frankfurt: high prices and environmental requirements

    Frankfurt is the second largest data centre market in Europe. Germany has some of the highest industrial energy prices; in 2024, companies paid an average of 16.77 ct/kWh, with the rate rising to 17.99 ct/kWh in January 2025. For companies with concessions (fixed consumption), the cost was 10.47 ct/kWh. Taxes and levies account for 29% of these charges, and network fees for another 27%.

    A strong focus on RES and heat recovery obliges data centre operators to invest in sustainable solutions. High energy costs motivate the rapid deployment of more efficient systems to reduce consumption per teraflop.

    Dublin: the most expensive electricity in the EU and supply constraints

    In Ireland, energy prices for industrial consumers are among the highest in Europe – around €26 per 100 kWh in the first half of 2024. The SEAI report shows that in 2024 the weighted average price for business was 22.8 cents per kWh, with large consumers paying 16.3 c/kWh. The high rates are compounded by a shortage of power – Dublin’s data centres consume 22% of the country’s electricity and EirGrid predicts this will rise to 30% by 2030. For this reason, new connections are only approved in exceptional cases, so operators must maximise the efficiency of existing infrastructure.

    Warsaw: lower prices but a growing market

    Poland stands out with lower prices – around €0.13 per kWh in 2024. According to GlobalPetrolPrices, as of March 2025 businesses were paying an average of PLN 1.023/kWh (US$0.28), which is still lower than in Germany or Ireland. While lower energy costs allow for a longer amortisation cycle, increasing competition and demand for cloud services are encouraging investment in new hardware to increase computing density.
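    To see what these tariff gaps mean in practice, the sketch below compares the annual electricity bill of one always-on 10 kW rack across the three hubs, using rounded versions of the prices above (the rack size and 24/7 profile are assumptions):

    ```python
    # Annual electricity bill of one always-on 10 kW rack at rounded
    # versions of the industrial tariffs cited above.

    tariffs_eur_per_kwh = {"Frankfurt": 0.18, "Dublin": 0.23, "Warsaw": 0.13}
    rack_kw, hours = 10, 8760

    for hub, price in tariffs_eur_per_kwh.items():
        print(f"{hub:9}: {rack_kw * hours * price:,.0f} EUR/year")
    ```

    The same rack costs roughly €11,000 a year to power in Warsaw and over €20,000 in Dublin, before cooling overheads are even counted.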

    Generations of accelerators: performance per watt

    GPU – from Volta to Blackwell

    Nvidia’s V100 (Volta) introduced tensor-core technology in 2017, but its 300 W TDP and low TFLOPS/W ratio mean it is no longer viable. In 2020, the A100 (Ampere) came to market with a TDP of 400 W and doubled performance per watt, reaching up to 10 TFLOPS/W. Another breakthrough was the 2022 H100, built on the Hopper architecture: a 700 W chip delivering 20 TFLOPS/W and roughly three times the A100’s work per watt.

    In 2024, Nvidia announced the H200, a chip with a TDP of 700 W and featuring HBM3e memory with a bandwidth of 4.8 TB/s. This increased inference performance by 30-45% for the same power consumption. The DGX H200 system with eight such GPUs consumes 5.6 kW, but can do twice as much work per watt compared to its predecessor.

    The B200 (Blackwell), with a TDP of 1000 W and three times the computing power of the H100, is expected to debut in 2025. Although power consumption is increasing, the TFLOPS/W ratio continues to improve, pushing the frontier of computing density.

    TPU – an alternative with improved energy efficiency

    Google is developing Tensor Processing Units, dedicated AI accelerators. TPU v4 offers 1.2-1.7 times better performance per watt than the A100, and in general TPUs are 2-3 times more power efficient than GPUs. Upcoming generations, such as v6 ‘Trillium’ and v7 ‘Ironwood’, focus on maximising compute density while reducing power consumption.

    Equipment life cycle – flexibility instead of rigid depreciation

    In traditional data centres, hardware was replaced every five to seven years. However, decarbonisation research indicates that in AI environments, cycles of four years or longer are economically viable, although shortening the cycle can reduce emissions. When a new generation of GPUs provides several times the energy efficiency, early retirement of ageing chips is justified – the energy savings and emissions cost reductions outweigh the investment. Replacement every 4-5 years may become the norm in regions with high energy prices.
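    The replacement calculus can be sketched as energy cost per unit of compute. The figures below are loose A100-class and H100-class assumptions combined with a Dublin-level tariff and an assumed facility overhead (PUE), purely for illustration:

    ```python
    # Energy cost per unit of AI work for two accelerator generations.
    # All inputs are loose assumptions for illustration.

    PRICE_EUR_PER_KWH = 0.23   # Dublin-level industrial tariff
    PUE = 1.3                  # assumed facility overhead (cooling, losses)

    def eur_per_pflop_hour(tdp_w: float, pflops: float) -> float:
        """Electricity cost of one petaflop-hour of sustained compute."""
        kwh_per_hour = tdp_w / 1000 * PUE
        return kwh_per_hour * PRICE_EUR_PER_KWH / pflops

    old = eur_per_pflop_hour(tdp_w=400, pflops=0.3)   # roughly A100-class
    new = eur_per_pflop_hour(tdp_w=700, pflops=1.0)   # roughly H100-class
    print(f"old: {old:.2f} EUR/PFLOP-h, new: {new:.2f} EUR/PFLOP-h")
    ```

    On these assumptions, the newer chip roughly halves the electricity cost of the same work – exactly the lever that shortens replacement cycles where power is expensive.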

    How does the price of electricity affect decisions to upgrade?

    Dublin – need for computing density

    With prices of 22-26 cents per kWh and limited network capacity, Irish data centres are being forced to maximise efficiency. An investment in an H100 or H200 pays for itself faster with twice the performance per watt. Replacing old A100s with H100/H200s reduces the amortisation cycle to three to four years, as the energy savings and lower emissions costs outweigh the capital expenditure. The introduction of even more energy-efficient chips (B200, TPU v6) can further accelerate the upgrade.

    Frankfurt – a trade-off between cost and investment

    German energy prices (17-20 ct/kWh) are lower than in Ireland, but still motivate optimisation. Companies are keen to replace equipment every 4-5 years, especially when the gap between generations is large. At the same time, larger systems can benefit from discounts and long-term contracts, which reduces the pressure for immediate replacement. Regulations requiring the use of RES and heat recovery encourage the choice of energy-efficient platforms.

    Warsaw – a longer breath, but growing ambitions

    The lower cost of energy (around 13 ct/kWh) allows Polish operators to extend the life cycle of their equipment. Replacing the V100 with the A100 or H100 still brings savings, but they are not as spectacular as in Ireland. However, the growing demand for AI services, the development of R&D offices in Poland and competition from international players may shorten replacement cycles to 4-5 years, especially when B200s and energy-efficient TPUs appear on the market.

    Trends of the future: HBM3e memory, Blackwell architecture and TPU Trillium

    Accelerator performance is not only increasing with more cores. New chips, such as the H200, increase memory bandwidth to 4.8 TB/s via HBM3e. Another leap is the Blackwell B200 with a TDP of 1000 W, which uses wider buses and improved Transformer Engine cores. Google, in turn, is developing the v6 ‘Trillium’ and v7 ‘Ironwood’ TPUs to improve power efficiency and compute density.

    Efficiency per watt is becoming the most important parameter as economic and regulatory pressures force operators to reduce emissions. High energy prices in Europe further exacerbate this trend.

    Differences in energy prices across Europe determine AI infrastructure modernisation strategies. Ireland and Germany, with the highest rates, are shortening equipment lifecycles to reduce operating costs. Poland, benefiting from lower prices, can afford to use existing systems for longer, although growing demand and competition will also accelerate change there.

    Technological advances – from the V100 GPU, to the A100 and H100, to the H200 and the upcoming B200 – mean that the TFLOPS/W ratio is growing exponentially. Alternative TPU accelerators are showing even greater energy efficiency, which could change GPU dominance in the future. Therefore, hardware replacement decisions cannot be rigid; they must take into account not only the cost of new hardware, but also energy prices, CO₂ emissions and customer requirements. Megawatts and teraflops will become increasingly intertwined in the strategies of data centre operators in the coming decade.

  • The handbrake of the AI revolution. It’s not a lack of chips, but a lack of power that will halt development

    The handbrake of the AI revolution. It’s not a lack of chips, but a lack of power that will halt development

    The global technology debate has been revolving around one topic for several years: the availability of GPUs. The race for Nvidia’s chips, reminiscent of a modern gold rush, is being portrayed as the key front of the AI revolution.

    Companies and countries are bidding for supplies of technological ‘picks’, believing that whoever has more of them will dig up the digital wealth of the future. But what if the real limitation is not the availability of tools, but the lack of land on which to mine?

    Recent market analyses, including those from Gartner, shed entirely new light on the matter. They suggest that the tightest bottleneck in the development of artificial intelligence is shifting from silicon factories towards a much more mundane infrastructure – power grids.

    The problem is no longer a lack of chips; it is becoming a lack of power and space to plug them all in. The battle for the future of AI is no longer just about teraflops, but increasingly about megawatts.

    Hungry like AI – the scale of the energy appetite

    To understand the scale of the challenge, you need to look into the engine room of this revolution. The process of training a single large-scale language model (LLM), such as those powering advanced chatbots, requires thousands of specialised processors to work for weeks, even months, without a moment’s pause. These are operations with an unimaginable appetite for energy.

    The scale of the phenomenon is best illustrated by comparisons. It is estimated that one complex query to generative AI can consume up to ten times more energy than a simple Google search.

    Going further, a modern data centre built specifically for AI workloads needs as much electricity as a city of several thousand inhabitants. And more and more such centres are being built. Market data leaves no illusions – investment in AI servers is almost doubling the value of this segment, and overall data centre investment is growing at a rate of more than 75% year-on-year.

    This surge in demand for computing power is translating into an explosion in energy demand that global grids are simply not ready for.

    Consequences here and now: Where to plug in all this intelligence?

    This energy dilemma already has very real global consequences. Access to cheap, stable and, increasingly, green energy is becoming a key strategic asset. ‘AI energy hubs’ are emerging on the new technological world map.

    Scandinavia, with its abundance of hydropower, or the Middle East, investing billions in solar farms, are becoming magnets for major technology players.

    At the same time, rising energy costs are becoming one of the main components of the final price of AI-based services. Rising electricity prices will inevitably translate into more expensive access to advanced models, which could deepen the digital and economic divide in the future.

    However, the biggest challenge is the physical infrastructure. As analysts aptly point out, the problem is becoming a lack of ‘space to connect servers’. Building new high-voltage lines or transformer substations is a process that takes years.

    This pace is out of step with the exponential growth of the technology industry, which has become accustomed to cycles measured in months. The digital revolution is colliding with the brutal realities of civil engineering and energy.

    A race against time – How is the industry trying to ‘cool’ AI’s appetite?

    However, the technology world is aware of the growing problem and is working intensively on solutions. This race against time is taking place on several fronts. The first is optimisation. Engineers are switching from traditional air cooling of servers to much more efficient liquid cooling. This allows for denser hardware packing and better heat management, although it does not solve the root problem.

    The second front is efficiency. A race is underway to design more energy-efficient chips and accelerators, as exemplified by proprietary units being designed by Google or AWS. In parallel, work is being done to optimise the AI models themselves to achieve similar results with less computing power.

    Most futuristic, but taken deadly seriously, is the energy front. Giants such as Microsoft are openly investing in fusion energy research and have declared their intention to use small modular reactors (SMRs) to power their future data centre campuses.

    This best demonstrates that for industry leaders, providing gigawatts of power has become a priority equal to creating intelligent algorithms.

    The AI revolution is entering a crucial phase in which its pace will be dictated not only by Moore’s Law, but also by the laws of physics and the limitations of the material world. The race for supremacy will be won not only by those with the sharpest algorithms, but by those who can back them with a stable, abundant power supply.

    So before we once again ask what else artificial intelligence can do for us, we need to answer a much more down-to-earth question: where do we find an outlet for it?

  • Will AI resuscitate the atom? Data centres in desperate search of clean and stable energy

    Will AI resuscitate the atom? Data centres in desperate search of clean and stable energy

    With a single command, we turn a few words into a photorealistic image. We have complex conversations with a chatbot that writes code for us in seconds or analyses voluminous documents. This digital ‘magic’, which only a few years ago seemed the domain of science fiction, is now becoming part of our everyday lives. However, it has a very real, physical price that cannot be seen on the screen – gigantic electricity consumption.

    The technology industry has spent years building its image as a driver of sustainability, investing billions in green initiatives. Today, it faces a fundamental paradox. Artificial intelligence’s insatiable appetite for computing power is putting these aspirations to the test, pitting that image against the inexorable laws of physics. This raises a key question: in the search for a stable and emission-free power source for its ‘AI factories’, will the industry be forced to turn to one of the most controversial technologies in history – nuclear power?

    AI’s insatiable appetite: why is the cloud so hungry?

    To understand the scale of the problem, it is necessary to look under the bonnet of artificial intelligence. Its huge energy requirements are driven by two main processes. The first is model training. Training a large neural network, such as those driving ChatGPT, requires thousands of GPUs running continuously for weeks or months to process unimaginable amounts of data. The energy consumption of one such cycle can be compared to the annual demand of a small city.

    The second, often underestimated power devourer is inference, the daily use of an already trained model. Every query we make to a chatbot, every request to generate an image, sets off a complex chain of calculations in the data centre. These individual small power consumptions, multiplied by millions of users worldwide, add up to astronomical values. As the latest market research confirms, AI is the main driver of the explosive growth in computing power demand, pushing existing infrastructure to its limits.

    Green energy under pressure – dreams versus reality

    It is fair to say that the data centre industry is not ignoring the problem. For years, technology giants have been outdoing one another in investments in renewable energy sources (RES), building huge wind and solar farms to power their server rooms. In line with industry sentiment, as many as 91% of professionals believe that the future of energy for data centres lies in wind, solar and water.

    However, this is where the fundamental challenge arises – stability. Data centres are critical infrastructure that must operate with a guaranteed availability of 99.999%. What happens when the sun goes down and the wind stops blowing? While for a household a temporary drop in power from photovoltaic panels is not a catastrophe, for a server room serving global financial or medical systems it is an unacceptable scenario. Renewable energy sources are great at covering part of the demand, but they struggle to provide the so-called base load – a constant, reliable supply of energy 24 hours a day.
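    That 99.999% figure is less abstract once converted into allowed downtime – five nines leave barely five minutes per year:

    ```python
    # Converting availability "nines" into allowed downtime per year.
    MINUTES_PER_YEAR = 365 * 24 * 60

    for availability in (0.99, 0.999, 0.9999, 0.99999):
        downtime = MINUTES_PER_YEAR * (1 - availability)
        print(f"{availability:.3%} -> {downtime:,.1f} minutes of downtime/year")
    ```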

    Back to the atom? A new opening in the form of the SMR

    It is in this gap that a concept is emerging that until recently lay on the margins of the technological debate: small modular reactors (SMRs). This is a new generation of nuclear power that has little in common with the giant plants of the Cold War era. SMRs, as the name suggests, are much smaller and their key feature is modularity. They are to be mass-produced in factories, potentially lowering costs and dramatically reducing construction time. Most importantly, they use modern passive safety systems to minimise the risk of failure.

    For the data centre industry, such a solution seems almost ideal. Imagine a server campus powered by its own small reactor, delivering stable, emission-free energy directly to the site, without dependence on an external grid. It’s a vision that is no longer a fantasy – as many as 75% of industry experts are now open to considering nuclear power as part of the energy mix.

    Cold shower – barriers on the road to the atom

    However, before declaring the renaissance of the atom, it is necessary to come down to earth. The road to implementing SMRs is long and bumpy, as the experts themselves acknowledge.

    Firstly, time. This is not a technology that can be implemented next year. 70% of experts believe that SMRs will not be ready for commercial use until a decade from now at the earliest.

    Secondly, the public. Nuclear power still arouses strong emotions and fears. As many as 60% of professionals expect significant public opposition, which could effectively block or delay projects.

    Thirdly, regulation and costs. Any nuclear project, even on a ‘small’ scale, is subject to extremely complex and lengthy licensing processes, and the upfront costs remain enormous.

    Energy crossroads

    The technology industry is at an energy crossroads. On one side stands the unbridled appetite of artificial intelligence, on the other the ideal of a green, sustainable future. In between is the pragmatic, if distant and controversial, promise of the atom.

    There is no simple answer to this dilemma today. One thing is certain: the discussion about AI-driven nuclear reactors has returned from academic corridors to the boardrooms of the world’s largest technology companies. The future and pace of development of AI may depend on whether we can find a suitably powerful and clean power source for it. The atom has not yet been ‘resuscitated’, but its pulse, after decades of dormancy, is becoming palpable again.

  • Nuclear power or windmills? The data centre industry is looking for a plan B to power AI

    Nuclear power or windmills? The data centre industry is looking for a plan B to power AI

    The rise of artificial intelligence in the technology market has triggered an exponential increase in the demand for computing power – and with it, also for electricity.

    Data centres, until recently treated as a backdrop to digital transformation, are now at the heart of it. The problem is that their continued growth depends not so much on the number of servers as on the availability of power.

    Renewable energy sources were supposed to be the answer. But this is no longer enough. The IT industry is starting to look more and more seriously at nuclear power – especially in the form of small modular reactors (SMRs). Is this a viable alternative or just a long-term vision?

    According to the latest Business Critical Solutions survey, up to 92% of data centre market experts expect demand for computing power to continue to grow through to 2025.

    This is mainly driven by artificial intelligence, whose models – from LLM to generative AI – consume huge amounts of energy both in the training phase and in day-to-day operations.

    The problem is that the infrastructure cannot keep up. As many as 85% of existing data centres are not prepared for this type of workload.

    Scaling up computing power without providing adequate power is like building a skyscraper without foundations.

    The industry is declaring its willingness to switch to renewable energy sources – as many as 91% of survey participants believe that at least 90% of energy for data centres will come from RES in the future. Reality, however, is putting these declarations to a hard test.

    RES are distributed, weather-dependent and require expensive investment in transmission infrastructure and storage.

    In many locations, it is impossible to guarantee the level of supply continuity required by modern data centres. And where wind and solar farms are available, the relevant transmission networks or environmental permits are often lacking.

    As a result, companies are looking for alternative scenarios – ones that combine low carbon with reliability. Nuclear power is increasingly on this map.

    In the BCS survey, as many as 75% of industry representatives do not rule out the use of nuclear power – mainly in the form of so-called small modular reactors (SMRs).

    These compact, scalable reactors can be located closer to end users and potentially serve as an energy source for large data campuses.

    Their advantages are obvious: independence from weather conditions, predictability of production, no CO₂ emissions and the possibility of installation in industrial locations. However, enthusiasm is tempered by practice.

    70% of respondents believe that SMR technology will not be commercially available sooner than a decade from now. 60% expect strong public opposition – mainly due to concerns about safety, waste disposal and stereotypes around nuclear power.

    What would have been considered an extravagance just a few years ago is today becoming the new normal: data centre operators are investing in their own power supplies.

    From PPAs (Power Purchase Agreements) with renewable energy farms, to building microgrids, to experimenting with hybrid energy models, data centres are ceasing to be just a consumer of energy and are beginning to manage it.

    In this logic, SMRs can become not only a source of power, but also a tool for building energy independence from external suppliers and markets. Such a model – ‘data centre as digital power plant’ – is gaining traction especially in regions with unstable supplies or high energy prices.

    Changes in the data centre energy mix will have a direct impact on the entire IT ecosystem – from hardware manufacturers to systems integrators and cloud service providers.

    The pressure for energy efficiency is growing: servers, cooling systems, software and architectures need to consume less energy under increasing loads.

    At the same time, there is a new demand for competences – energy engineers, ESG specialists, energy management experts.

    For channel companies, this is an opportunity for new business lines: energy consultancy, consumption optimisation services, integration of RES and power management solutions can become a natural extension of IT offerings.

    All these trends lead to one conclusion: the development of data centres in the coming years will not only depend on IT technology, but increasingly on energy infrastructure.


    Green transformation is the way forward, but without viable solutions for a stable energy supply – such as SMRs or energy storage – the development of AI could come to a standstill. The industry knows that the time for ambitious declarations is over. What matters now is implementation.

    Decisions taken today will determine whether the European data centre market can meet the demands of tomorrow.

    The winds are blowing, the sun is shining, but AI cannot wait for the weather. If small reactors prove to be a viable scenario, they will not just be an energy solution – they will be the foundation of the new digital economy.

  • Energy efficiency in the data centre – from cost to profit source

    Energy efficiency in the data centre – from cost to profit source

    Until a few years ago, green transformation in the data centre was treated as an expensive add-on – welcome in CSR brochures, but rarely written into key business indicators.

    Today, with global data centre energy consumption expected to reach 2% of total electricity demand (around 536 TWh) as early as 2025, and projected to double within five years, the situation has changed dramatically.
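    ‘Doubling within five years’ corresponds to roughly 15% growth per year, as a quick calculation shows:

    ```python
    # "Doubling within five years" expressed as an annual growth rate.
    cagr = 2 ** (1 / 5) - 1
    print(f"{cagr:.1%} per year, i.e. 536 TWh -> {536 * 2} TWh")
    ```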

    Rising energy prices, regulatory pressures and increasingly stringent customer requirements mean that investments in energy efficiency are beginning to act as an accelerator of growth – rather than a brake on the bottom line.

    Demand for computing power is exploding with the popularisation of generative artificial intelligence, real-time analytics and cloud services.

    For data centre operators, this means one thing – the scale of operations is growing faster than the capacity of traditional infrastructure. And more scale means a bigger energy bill and a larger carbon footprint.

    More and more companies understand that sustainability is no longer a matter of image, but a permanent element of strategy.

    In this new set-up, companies that are able to reduce energy intensity with increasing load gain both a cost and market advantage.

    Optimising cooling, replacing ageing hardware with energy-efficient models and implementing intelligent energy management can translate into double-digit savings in operating costs.

    In addition, ‘green’ data centres are increasingly likely to win tenders – particularly where environmental criteria are as important as price and technical parameters.

    In the enterprise segment and public contracts, compliance with ESG standards is simply becoming an entry requirement.

    Customers are keen to ensure that their own carbon footprint does not grow as a result of using IT services. Companies that can document their activities in this area gain a negotiating advantage.

    Cooling is the biggest consumer of energy in a data centre after the IT hardware itself. That is why liquid cooling systems, which can be several tens of percent more efficient than traditional air-based solutions, are growing in popularity.

    Hybrid systems combining both approaches allow the operating mode to be adapted to the current load, minimising energy consumption during periods of lower demand.

    The second pillar is renewable energy sources – increasingly integrated with local energy storage, which increases independence from market price fluctuations and reduces emissions.

    The third element is AI-based automation, which allows for predictive load management, intelligent control of cooling systems and optimal use of renewable energy.

    Paradoxically, the technology responsible for the increase in energy consumption – artificial intelligence – can also help to reduce it.

    Even the most energy-efficient data centre will not be truly ‘green’ if its suppliers do not meet environmental standards.

    Indirect emissions (scope 3) can account for a significant proportion of the total carbon footprint. Therefore, reporting and emission reduction requirements, as well as certifications of sustainable practices, are increasingly appearing in contracts with partners.

    For data centre operators, this means vetting the entire ecosystem of suppliers – from equipment manufacturers to energy providers.

    For the IT channel, this is an opportunity to differentiate itself by providing solutions that help customers meet growing ESG requirements.

    The EU CSRD (Corporate Sustainability Reporting Directive) extends the ESG reporting obligation to further groups of companies.

    This means that in the coming years, transparent reporting of energy consumption and emissions will become not an option, but a legal requirement.

    At the same time, green financing is growing in importance – from concessional loans to investments by funds focused on companies with a low carbon footprint.

    Companies that meet environmental standards have greater access to cheaper capital, allowing them to accelerate their technological transformation.

    Sustainability in data centres is no longer a cost embedded in CSR policies, but is becoming a factor that can increase margins, make it easier to win new customers and improve access to capital.

  • UN calls on Big Tech for 100% RES in data centres by 2030

    UN calls on Big Tech for 100% RES in data centres by 2030

    The UN is calling on major technology companies to switch completely to renewable energy in data centres by 2030. Secretary-General António Guterres didn’t beat around the bush – his call strikes at the heart of the IT industry’s energy dilemmas. And there is no shortage of these, as the development of artificial intelligence is already consuming huge amounts of energy that are barely covered by current renewable sources.

    Meanwhile, many companies – in defiance of climate trends – are returning to coal, gas and nuclear power. According to data from the International Energy Agency (IEA), energy consumption by data centres could double by the end of the decade, and AI and blockchain are only accelerating this curve. But Guterres is in no doubt: the digital future must go hand in hand with decarbonisation, even if today this means costly investments and technical compromises.

    At the other extreme is the United States. President Donald Trump, while announcing a national AI plan, simultaneously declared a state of energy emergency. New regulations are expected to make it easier to build fossil fuel power plants and to weaken environmental rules. What’s more, the US administration is slashing subsidies for wind and solar power – exactly the kind of capacity that dominates the queue of projects waiting for grid connection.

    This is not just an ideological dispute, but a real battle to shape the infrastructure underpinning the future of AI. Amazon, Google and Microsoft already declare climate neutrality, but increasingly need to use ‘dirty’ energy to meet growing demand. China, on the other hand, is building new coal-powered data centres, making no secret of the fact that cost and speed of development, not carbon footprint, are key.

    The UN is counting on September’s climate summit to bring new commitments from countries on zero-carbon power. For the IT sector, this means combining two – often contradictory – worlds: unlimited computing power and limited energy resources.

  • The myth of the energy AI boom: report exposes weaknesses in forecasts

    The myth of the energy AI boom: report exposes weaknesses in forecasts

    The investment fever around artificial intelligence is fuelling ambitious forecasts for the growth of data centre power consumption. In the US, discussion of the need to build dozens of new gigawatts of power is becoming a political and technological priority. But new analysis suggests that these estimates may simply be unrealistic – mainly due to constraints in global semiconductor production.

    These conclusions are relevant not only for the US, but also for Europe, which is investing ever more boldly in infrastructure for generative AI. And they may be particularly valuable in Poland – a country with a tight energy balance, a transforming transmission network and ambitions to build local computing centres.

    AI is heating up the energy market

    In a wave of enthusiasm for AI, the US Department of Energy estimates that the country will need an additional 100 GW of peak power by 2030 – half of which is expected to be dedicated to powering data centres. This is a huge number, given that 100 GW is the equivalent of all the current installed capacity in Poland.

    These predictions have triggered a wave of activity. Grid operators, utilities, local authorities and investors are rushing to design and build infrastructure to secure the energy needs of future AI clusters. Some are already warning that the growing number of grid connection requests could overload systems and lead to delays or force costly expansion.

    A cool shower from London

    In this context, a report prepared by London Economics International (LEI) on behalf of the Southern Environmental Law Center acts as a bucket of cold water. The main conclusion? US energy forecasts for AI are significantly overstated – because they do not take into account global limitations in the availability of semiconductor chips, without which data centres simply do not work.

    LEI analysts contrasted projected increases in energy demand with the reality of AI chip supply. A simple equation emerged: in order to realise the projected 57 GW increase in data centre energy consumption in the US by 2030, the country would need to ‘take over’ more than 90% of the global supply of new chips. And this is an extremely unlikely scenario – today, the US share of global chip demand is less than 50%, and other regions – China, the EU, India – are also planning dynamic growth in this area.
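    The LEI-style sanity check can be reconstructed in a few lines. Only the 57 GW figure comes from the report as cited; the power per deployed accelerator (including server and cooling overhead) and the global annual accelerator output are assumptions made for illustration:

    ```python
    # Rough reconstruction of the LEI-style sanity check.

    new_load_gw = 57
    kw_per_accelerator = 1.5        # assumed: ~700 W chip plus server/cooling overhead
    needed = new_load_gw * 1e6 / kw_per_accelerator   # GW -> kW -> number of units

    annual_supply = 6e6             # assumed global AI-accelerator output per year
    horizon_years = 5               # roughly 2025-2030

    share = needed / (annual_supply * horizon_years)
    print(f"~{needed / 1e6:.0f}M accelerators needed, "
          f"{share:.0%} of the assumed global supply over {horizon_years} years")
    ```

    Even with generous supply assumptions, the requirement lands near or above the whole world’s output – which is precisely the report’s point.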

    The trap of false assumptions

    The main problem the report points to is the lack of ‘common sense adjustment’ in the forecasts. The analyses lack real consideration of manufacturing and logistical constraints – from chip factories to cooling, space, skilled staff and network capacity. The result? Investors and regulators may be making decisions based on data that is not grounded in technological realities.

    There is huge money at stake. The construction of new transmission lines, power plants or connection infrastructure involves investments that are amortised over decades. If it turns out that the data centres are not built, or do not operate at the assumed load, the costs will be passed on to energy consumers and taxpayers.

    And this is already happening. An example? In some areas of the US, network operators report 5-10 times more connection requests than the actual number of projects underway. Such discrepancies can lead to overinvestment and infrastructural deadlocks.

    Polish lesson: let’s not copy the enthusiasm

    What does this mean for Europe and Poland? Although the pace of AI infrastructure development on the Old Continent is slower than in the US, there is no shortage of announcements to build data centres supporting language models, industrial analytics or sovereign cloud. Hyperscalers are already operating in Poland, the number of server farms is growing and the development of the domestic AI market is to be supported by funds from KPO and strategic programmes.

    But the Polish energy system does not have the same flexibility as the US one. There is a shortage of reserve capacity, and the energy transition requires billions in investment in grids and renewables. Any decision to build an ‘energy-intensive’ data centre must be based on a sound rationale, not just a trend. Otherwise, Poland may repeat the mistakes of the US – but with much greater systemic risk.

    It is also worth looking at this from the perspective of the IT channel. Technology providers, integrators, resellers and cloud operators should learn to distinguish between real customer needs and enthusiastic declarations about the ‘need for AI on an industrial scale’. For it may turn out that some of the plans are more of a PR strategy than an actual investment path.

    Common sense as an advantage

    The idea is not to slow down the development of artificial intelligence – but not to build an infrastructure for it that will have no audience. The London Economics report is a reminder that technology does not develop in a vacuum – and the most advanced AI models need not only power, but physical chips that have limited production.

    For policymakers and investors, this is the moment to consider whether the numbers they see in reports are realistic. And for technology companies, it is a time to make cool-headed data analysis a competitive advantage over the fad of quick declarations.