Tag: Cloud

  • Challenges and priorities in the managed services market: Evolving from ‘handyman’ to business partner

    Imagine two scenarios. In the first, it’s 2003, and the owner of a small manufacturing company looks anxiously at a silent server that has paralysed the ordering system.

    In a panic, he calls his ‘IT man’, hoping that he will find time to come and diagnose the problem. Every minute of downtime is a measurable loss.

    In the second scenario, it is today. The CEO of a technology company receives a notification on his smartphone. It’s an automated report from his Managed Service Provider (MSP), informing him that a potential vulnerability in the company’s cloud security was discovered and patched overnight, before cybercriminals had time to exploit it.

    The company’s operations were not disrupted even for a second.

    This contrast perfectly illustrates the fundamental transformation that has taken place in the world of IT services. The evolution of managed service providers is not just a story of adaptation to new technologies.

    It is a story of a complete redefinition of the business model, driven by escalating cyber threats, the increasing complexity of cloud environments and the need for automation.

    The modern MSP has ceased to be just an external IT department called in to put out fires. It has become a key partner in risk management, an engine of digital transformation and a guardian of business continuity.

    Foundations of the past: the era of the “Break-Fix” model

    Before IT service providers became proactive partners, the dominant operating model was the so-called ‘break-fix’. Its logic was simple: when something breaks, a specialist is called in to fix it.

    The process was purely transactional: the customer experienced a breakdown, the technician arrived, repaired it and invoiced for his time and parts.

    The biggest drawback of this model was its fundamental economic structure, which created an inevitable conflict of interest. The IT service provider only made money when there were problems at the client.

    The more failures, the higher the provider’s profits. The customer sought maximum stability, while the provider’s business model depended on instability.

    This structural flaw prevented the building of relationships based on trust and had to give way as soon as companies understood that their survival depended on reliable technology.

    Proactive breakthrough: the birth of the modern MSP

    The twilight of the ‘break-fix’ era has been accelerated by technologies that have enabled fundamental change. Remote monitoring and management (RMM) and professional services automation (PSA) platforms have catalysed the revolution.

    RMM tools allowed suppliers to continuously monitor the health of customer systems in an automated manner, enabling issues to be identified and resolved before they led to downtime.
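
    To make the idea concrete, here is a minimal sketch of the kind of check an RMM agent might run on a monitored host; the metrics, thresholds and ticket-raising behaviour are illustrative assumptions, not the workings of any particular RMM product.

    ```python
    """Minimal sketch of a proactive RMM-style health check (illustrative only).

    A real RMM platform collects far more telemetry; here we only check disk
    usage and system load and warn before they become an outage. Thresholds
    are hypothetical examples, not vendor defaults.
    """
    import os
    import shutil

    # Illustrative thresholds: alert well before the resource is exhausted.
    DISK_USAGE_WARN_PCT = 85.0
    LOAD_PER_CPU_WARN = 1.5


    def check_disk(path: str = "/") -> list[str]:
        usage = shutil.disk_usage(path)
        used_pct = usage.used / usage.total * 100
        if used_pct >= DISK_USAGE_WARN_PCT:
            return [f"disk {path} at {used_pct:.1f}% (threshold {DISK_USAGE_WARN_PCT}%)"]
        return []


    def check_load() -> list[str]:
        # getloadavg is only available on Unix-like systems; skip the check elsewhere.
        if not hasattr(os, "getloadavg"):
            return []
        load_1m, _, _ = os.getloadavg()
        per_cpu = load_1m / (os.cpu_count() or 1)
        if per_cpu >= LOAD_PER_CPU_WARN:
            return [f"load {load_1m:.2f} ({per_cpu:.2f} per CPU, threshold {LOAD_PER_CPU_WARN})"]
        return []


    if __name__ == "__main__":
        warnings = check_disk() + check_load()
        for w in warnings:
            print("WARN:", w)  # a real agent would open a ticket in the PSA system here
        if not warnings:
            print("OK: no thresholds exceeded")
    ```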

    The most important innovation, however, was a change in the business model. MSPs moved away from hourly rates to a fixed monthly subscription fee (Monthly Recurring Revenue, MRR).

    For the customer, this meant cost predictability and for the MSP, a stable revenue stream. The introduction of service level agreements (SLAs) gave customers contractual guarantees on response times or system availability.
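
    To see what such an availability guarantee means in practice, here is a short worked calculation of the downtime a given SLA tier allows per month; the tiers shown are common contract values used purely as an illustration.

    ```python
    # Allowed downtime implied by a monthly availability guarantee (worked example).
    # The availability levels below are common SLA tiers, used only as an illustration.
    MINUTES_PER_MONTH = 30 * 24 * 60  # assuming a 30-day month

    for availability in (99.0, 99.9, 99.99):
        allowed = MINUTES_PER_MONTH * (1 - availability / 100)
        print(f"{availability:>6.2f}% uptime -> at most {allowed:6.1f} minutes of downtime per month")
    ```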

    Most importantly, this model has united the interests of both parties. The MSP’s profitability became directly proportional to the stability of the client’s IT environment. Each failure was now a cost to the provider, rather than an opportunity to make money, motivating the provider to ensure maximum efficiency.

    The cyber security imperative: from administrator to defender

    If proactivity was the spark that started the revolution, the explosion of cyber threats has become the fuel that drives further evolution. Small and medium-sized enterprises (SMEs) have become a prime target for cybercriminals, and the fear of attack has become one of the top business priorities.

    Research from 2024 revealed that as many as 78% of SME companies fear that a major cyber-attack could bankrupt them.

    In response, cyber security has ceased to be an add-on and has become central to the MSP’s offering and a key driver of revenue growth.

    Market analysis shows that 97% of the highest revenue MSPs offer a wide range of managed security services. Clients are no longer just looking for tools; 64% expect strategic guidance from their MSP.

    This has forced providers to evolve towards a managed security service provider (MSSP) model, offering advanced solutions such as managed detection and response (MDR), security information and event management (SIEM) and security awareness training.

    By taking responsibility for cyber security, the MSP has fundamentally changed its role – it no longer just manages the technology, but the customer’s business risk.

    The cloud revolution: managing hybrid complexity

    Contrary to early predictions, the growth of public clouds has not made MSPs redundant. On the contrary, the mass adoption of hybrid and multi-cloud strategies has created a whole new level of complexity that companies have been unable to cope with on their own.

    This has opened up a huge opportunity for mature MSPs. They have transformed themselves into cloud strategists and integrators, helping clients develop strategies, implement complex migrations and, crucially, optimise cloud costs (FinOps).

    In an era of increasing data privacy regulation, MSPs have also started to act as a ‘data sovereignty broker’, advising on where data can and should be stored to comply with regulations.

    The ability to design and manage a fully customised hybrid environment, combining on-premises resources with private and public cloud, has strengthened the MSP’s position as a central coordinator of the client’s entire IT ecosystem.

    Innovation horizon: AIOps and hyperautomation

    The most mature MSPs today stand on the threshold of the next evolutionary leap, whose horizon is marked by AIOps (AI for IT Operations) and hyper-automation. AIOps uses big data and machine learning to automate and streamline IT operations, moving management from proactive to predictive.

    Instead of reacting to known potential problems, AIOps predicts and prevents them before any symptoms become apparent.

    Practical applications include intelligent correlation of thousands of alerts into a single usable incident, predictive analytics that forecast future resource requirements and automated remediation that resolves repetitive problems without human intervention.
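
    As a rough illustration of the first of those applications, the sketch below folds a stream of raw alerts into incidents by grouping them per affected resource within a short time window; the data model and the five-minute window are assumptions made for the example, not a description of any specific AIOps platform.

    ```python
    """Illustrative sketch of alert correlation, one building block of AIOps.

    Alerts that concern the same resource and arrive close together are folded
    into a single incident, so an operator sees one actionable item instead of
    thousands of raw notifications.
    """
    from dataclasses import dataclass, field
    from datetime import datetime, timedelta


    @dataclass
    class Alert:
        resource: str   # e.g. host or service name
        message: str
        timestamp: datetime


    @dataclass
    class Incident:
        resource: str
        alerts: list[Alert] = field(default_factory=list)


    def correlate(alerts: list[Alert], window: timedelta = timedelta(minutes=5)) -> list[Incident]:
        incidents: list[Incident] = []
        for alert in sorted(alerts, key=lambda a: a.timestamp):
            # Attach to an open incident for the same resource if its last alert is recent.
            for inc in incidents:
                if inc.resource == alert.resource and alert.timestamp - inc.alerts[-1].timestamp <= window:
                    inc.alerts.append(alert)
                    break
            else:
                incidents.append(Incident(resource=alert.resource, alerts=[alert]))
        return incidents


    if __name__ == "__main__":
        now = datetime.now()
        raw = [
            Alert("db-01", "high latency", now),
            Alert("db-01", "replication lag", now + timedelta(seconds=40)),
            Alert("web-03", "5xx rate elevated", now + timedelta(minutes=1)),
            Alert("db-01", "connection pool exhausted", now + timedelta(minutes=2)),
        ]
        for inc in correlate(raw):
            print(f"incident on {inc.resource} covering {len(inc.alerts)} alert(s)")
    ```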

    Combined with hyper-automation, which streamlines entire business processes (e.g. onboarding of new customers), these technologies become a key competitive advantage.

    AIOps is becoming a prerequisite for managing modern, complex IT environments, and vendors who successfully implement these technologies will be able to serve more demanding customers with greater efficiency.

    An essential engine for digital transformation

    The evolution of managed service providers is a story of remarkable adaptation and continuous climb up the value chain. From a reactive technician whose success was measured by the speed of repair, to a predictive, strategic partner whose value is defined by its contribution to the innovation, resilience and profitability of the client’s business.

    The MSP of the future is not a technology vendor, but a consultancy with deep technical expertise. It thrives in an environment of complexity, actively manages risk and uses intelligent automation to deliver measurable results.

  • Azure tax? UK court clears the way for billion-pound lawsuit against Microsoft

    The London Competition Appeal Tribunal (CAT) has made a decision that could fundamentally change the European cloud infrastructure market. Microsoft, after months of trying to dismiss the claims, must brace itself for a massive lawsuit. At stake is £2.1 billion in damages and the future of a licensing strategy that has been controversial for years among finance and technology executives around the world.

    The case, led by Maria Luisa Stasi on behalf of nearly 60,000 UK businesses, strikes at the heart of Microsoft’s business model. The crux of the dispute is not about the quality of cloud services per se, but about the way the Redmond giant prices Windows Server software licences. According to the plaintiffs, Microsoft has a discriminatory pricing policy: companies choosing to run Windows Server on competitors’ platforms, such as Amazon Web Services, Google Cloud or Alibaba, pay much higher wholesale rates than users choosing the native Azure environment.

    From a business perspective, this means that Azure does not just win by technological prowess, but by an artificially generated cost advantage. For many organisations that have historically based their infrastructure on Microsoft solutions, moving to a competing cloud involves a hidden ‘tax’ that is ultimately charged to their margins or passed on to end customers.

    Microsoft has consistently defended its strategy, arguing that an integrated business model fosters innovation and allows it to offer better solutions within its own ecosystem. Company representatives have announced an appeal, challenging the methodology for calculating the alleged losses and pointing to the dynamic nature of the cloud market.

    However, the London tribunal’s decision coincides with increasing regulatory pressure. The UK Competition and Markets Authority (CMA) and authorities in the EU and US are looking increasingly closely at practices that restrict software interoperability.

    The market is no longer willing to accept technology lock-in with impunity. If Microsoft loses or is forced to settle, we will see not only gigantic compensation payments, but above all a levelling of the price playing field in the cloud. This could pave the way for a new wave of data migration, where performance rather than convoluted and expensive licensing provisions will determine the choice of provider.

  • European Commission becomes independent of Big Tech. Four suppliers have been selected for the €180 million contract

    The European Commission has stopped merely theorising about ‘digital sovereignty’ and has started paying for it. By awarding a €180 million tender for cloud services, Brussels is sending out a clear signal: reliance on Silicon Valley technology has its limits, especially when it comes to data critical to the functioning of EU institutions. The selection of four European players is not only an administrative move, but above all a strategic test of the maturity of the continental tech ecosystem.

    The beneficiaries of the six-year contract form an interesting business mosaic. On the one hand, we have strictly technological players such as the French Scaleway (part of the Iliad Group) or the consortium around OVHcloud led by Post Telecom. On the other, retail powerhouses like Germany’s STACKIT, owned by the Schwarz Group (owner of Lidl), which shows that cloud infrastructure is becoming a key asset even for retail giants. The line-up is rounded off by Belgium’s Proximus, which is working with Google Cloud as part of the S3NS partnership, proving that European sovereignty does not have to mean total isolation, but rather skilful management of ‘bridges’ with US technology.

    Key to understanding this contract is the new SEAL certification system. It has moved away from vague declarations to eight measurable criteria, assessing, among other things, resistance to foreign jurisdictions and supply chain control. Most of the selected suppliers have reached SEAL-3 level, which in practice means that their services are designed to prevent interference from non-EU actors. This is an attempt to create a standard that could become a benchmark for the banking or energy sector across Europe.

    From a business perspective, the €180 million spread over six years is modest compared to the R&D budgets of giants such as AWS or Azure. However, the importance of this contract goes beyond pure profit. For selected companies, it is the ultimate ‘stamp’ of credibility that will make it easier for them to fight for corporate customers who fear so-called vendor lock-in, i.e. dependence on a single supplier.

  • CFO: 30% of cloud spend is wasteful. How do you get your AI budget back?

    For the past decade, migration to the cloud has been synonymous with modernity and inevitability for management boards. The promise was simple: flexibility, scalability and – ultimately – cost savings. Today, however, as the enthusiasm for digital transformation clashes with the hard reality of bills from providers such as AWS and Azure, the tone of conversation in finance departments is changing radically.

    A picture of growing frustration is emerging from Azul’s latest report, with chief financial officers (CFOs) beginning to see the cloud not as an unlimited resource, but as a strategic financial risk that requires top-level intervention.

    The scale of the problem is difficult to ignore. As many as 69% of CFOs admit that between 10% and 30% of their spending on cloud infrastructure is pure waste. This means billions leaking through their fingers due to inefficient architecture, unused instances or errors in demand forecasting.

    This is no longer an operational issue that can be delegated to the DevOps department. It’s a structural problem that directly hits the margins and profitability of businesses.
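
    As a concrete illustration of where such waste hides, the sketch below flags instances whose average CPU utilisation stays below a threshold; the sample records and the 5% cut-off are invented for the example, and in practice the figures would come from the provider’s billing export and monitoring API.

    ```python
    # Sketch: flag likely-idle instances from average CPU utilisation.
    # The records and the 5% threshold are illustrative placeholders.
    IDLE_CPU_THRESHOLD_PCT = 5.0

    instances = [
        {"name": "analytics-dev-01", "avg_cpu_pct": 1.2, "monthly_cost_eur": 310.0},
        {"name": "web-prod-04",      "avg_cpu_pct": 47.8, "monthly_cost_eur": 540.0},
        {"name": "batch-old-02",     "avg_cpu_pct": 0.4, "monthly_cost_eur": 880.0},
    ]

    idle = [i for i in instances if i["avg_cpu_pct"] < IDLE_CPU_THRESHOLD_PCT]
    potential_savings = sum(i["monthly_cost_eur"] for i in idle)

    for i in idle:
        print(f"candidate for shutdown: {i['name']} ({i['avg_cpu_pct']}% CPU, {i['monthly_cost_eur']:.0f} EUR/month)")
    print(f"potential monthly savings: {potential_savings:.0f} EUR")
    ```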

    The timing of this sobering development is no coincidence. The surge in interest in artificial intelligence has dramatically increased demand for computing power, which in turn has pushed up cloud invoices to levels that were not anticipated by last year’s forecasts.

    Nearly 90 per cent of the finance leaders surveyed indicate that infrastructure costs in their organisations are steadily increasing, and for two-thirds of them, oversight of these expenses has become a standing item on the board’s agenda.

    In this new landscape, cloud cost optimisation is no longer seen as ‘belt-tightening’. Instead, it is becoming a strategic lever. Executives such as Azul’s Scott Sellers note that recouping wasted resources is the fastest way to fund AI innovation.

    In a period of high market volatility, where capital is more expensive than it was a few years ago, companies cannot count on unlimited increases in budgets. They have to look for money within their own structures. For 45% of finance managers, the overriding goal of optimisation is precisely to increase budget flexibility to allow digital projects to be implemented without jeopardising the financial stability of the company.

    The main obstacle, however, remains a lack of transparency. Modern cloud environments are so complex that pinpointing who is spending money in real time, and on what, borders on the impossible. This ‘technological fog’ makes demand forecasting a guessing game.

    But for finance leaders, whose performance is increasingly linked to operational efficiency, the status quo is unacceptable. 42% of respondents explicitly indicate that margin improvement today depends directly on how efficiently an organisation manages its resources in the cloud.
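
    A common first step towards that transparency is tag-based cost attribution: grouping spend by an owner tag and surfacing the untagged remainder that nobody can account for. The sketch below uses invented billing rows purely to illustrate the mechanics.

    ```python
    # Sketch: attribute cloud spend to teams via an "owner" tag and expose the
    # untagged remainder. The billing rows below are invented for illustration;
    # real input would come from the provider's billing export.
    from collections import defaultdict

    billing_rows = [
        {"resource": "vm-frontend-1", "cost_eur": 420.0, "tags": {"owner": "web-team"}},
        {"resource": "bucket-logs",   "cost_eur": 95.0,  "tags": {"owner": "platform"}},
        {"resource": "vm-legacy-7",   "cost_eur": 610.0, "tags": {}},  # nobody claims this one
    ]

    spend_by_owner = defaultdict(float)
    for row in billing_rows:
        owner = row["tags"].get("owner", "UNTAGGED")
        spend_by_owner[owner] += row["cost_eur"]

    for owner, cost in sorted(spend_by_owner.items(), key=lambda kv: -kv[1]):
        print(f"{owner:>10}: {cost:8.2f} EUR")
    ```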

    The message coming from the market is clear: the period of carefree scaling at any cost is over. We are entering an era of cloud maturity in which those companies that can combine technological ambition with ruthless financial discipline will win.

    The cloud, once seen as an escape from fixed costs, has itself become a burden that, if not properly managed, could slow down the next wave of innovation.

  • The great reallocation in IT: analysis of a $5.7 trillion market

    The global IT market is on the verge of an unprecedented boom. Leading analyst firms such as Gartner forecast that global IT spending will reach an astronomical $5.7 trillion in 2025, an impressive increase of more than 9% from 2024.

    Other forecasts, although differing in detail, agree on one thing: we are witnessing a historic influx of capital into the technology sector. However, to stop at this headline figure would be a mistake. The amount itself, while impressive, is merely a facade for a much deeper and more fundamental transformation.

    The story that this money tells is not about simple growth, but about a strategic and rapid reorientation of global business.

    The real story lies in the asymmetry of this growth. While the overall market is growing by around 9%, some segments are exploding. Spending on data centre systems is set to grow by a staggering 23.2% and on software by 14.2%.

    Communication services, on the other hand, will see a much more modest increase of just 3.8%. This disproportion is no accident. It is evidence of a conscious, strategic business decision that can be called the ‘Great Reallocation’ of capital.

    Companies are not just spending more; they are actively shifting resources from one area to another, de-prioritising maintenance of the status quo in favour of aggressive investment in intelligence and services.

    IT budgets in 2025 are not just bigger – they are smarter, more focused and ruthlessly geared towards a future where software and artificial intelligence are no longer support tools, but the very heart of value creation.

    The AI gold rush: from grand experimentation to pragmatic integration

    The undisputed driver of spending in 2025 is generative artificial intelligence (GenAI). It is the epicentre of the ‘Great Reallocation’, attracting capital at a scale that is redefining investment priorities around the world.

    The physical manifestation of this gold rush is a monumental expansion of infrastructure. Spending on AI-optimised servers is forecast to reach $202 billion by 2025, doubling spending on traditional servers.

    The entire data centre systems segment is expected to grow by the aforementioned 23.2 per cent as a direct result of the demand for computing power required to train and deploy advanced AI models.

    At the forefront of this boom are the hyperscalers – cloud giants such as Amazon Web Services, Microsoft Azure and Google Cloud. These companies, along with IT service providers, will account for more than 70% of all IT spending in 2025. Their role is evolving.

    They are no longer just infrastructure-as-a-service (IaaS) providers; they are becoming the foundation of a new, oligopolistic market for AI models.

    At the same time, the market is maturing at an extremely fast pace. The phase of unrestricted, often chaotic experiments with AI inside companies is coming to an end. Many companies have bumped into a wall: the capital and operational costs of creating their own models have turned out to be much higher than expected, the skills gaps in the teams have been too large, and the return on investment (ROI) from pilot programmes has been disappointing.

    As a result, a key change in strategy is taking place: a shift from an expensive ‘build’ model to a pragmatic ‘buy’ model. IT directors are no longer creating GenAI tools from scratch; instead, they are buying off-the-shelf functionality that software providers build into existing platforms.

    The market is entering a phase that Gartner refers to as the ‘trough of disillusionment’. Paradoxically, this does not mean a decline in spending, only a decline in unrealistic expectations.

    Companies are moving away from chasing revolutionary breakthroughs to practical applications of AI that increase employee productivity, automate processes and give real competitive advantage.

    Software-defined economics: how your car explains the future of business

    The spectacular growth in spending on software (+14.2%) and IT services (+9%) is the strongest signal yet that we are witnessing the birth of a new economic paradigm. You don’t have to look far to understand its essence – just look at the transformation taking place in the automotive industry.

    The Software-Defined Vehicle (SDV) model is an excellent, tangible case study that illustrates how physical products are transformed into platforms for delivering high-margin, cyclical digital services.

    The SDV revolution is the fundamental separation of the hardware layer from the software layer in the vehicle. This allows carmakers to deploy new features and enhancements continuously, via Over-The-Air (OTA) wireless updates, without having to physically interfere with the car.

    This completely changes the nature of the product. The car ceases to be an asset whose value diminishes over time and becomes a dynamic platform capable of generating revenue throughout its life cycle.

    Manufacturers are already experimenting with new business models: BMW is testing subscriptions for heated seats and Volkswagen plans to offer autonomous driving features in a pay-as-you-go model.

    However, this trend is not limited to automotive. It is a leading indicator of the universal transformation of business models. The entire software market is moving towards subscription and Software-as-a-Service (SaaS) models.

    Software is the fastest growing technology sector and is predicted to account for 60% of global technology spending growth by 2029. This confirms that the SDV model heralds a broader shift in which the boundaries between product and service are blurring.

    In this new economy, the IT department, traditionally seen as a cost centre, is being promoted to the role of central value creator.

    The chief information officer (CIO) and chief technology officer (CTO) become key figures in the product strategy, and their expertise is essential to the creation of the company’s core product.

    The professional of 2025: shaping a modern IT skill set

    Technological and business transformation is having a profound impact on the labour market, reshaping the demand for skills. To succeed in this dynamic environment, IT professionals need to develop a hybrid skill set, combining deep technical knowledge with durable ‘soft’ skills.

    The analysis of the labour market for 2025 leaves no doubt: the most sought-after professions are almost entirely technology-related. At the top of the lists are AI and machine learning specialists, data analysts and cyber security analysts.

    Demand for cyber security professionals alone is forecast to increase by 33% between 2023 and 2033. Among the key technical skills employers are looking for, artificial intelligence, data analytics, cloud computing and programming dominate, with a particular focus on the Python language.

    However, technical proficiency alone is no longer sufficient. As AI takes on more and more analytical tasks, the value of skills that machines cannot easily replicate increases.

    Employers are increasingly prioritising abilities such as analytical and creative thinking, complex problem solving, emotional intelligence and adaptability.

    Artificial intelligence will certainly cause displacement in the labour market. It is estimated that AI could automate up to a quarter of job tasks in the US and Europe, especially routine tasks such as basic programming or customer service.

    However, the dominant expert narrative does not focus on mass unemployment, but on the transformation of work. AI is not so much eliminating occupations as redefining them, creating new, often more strategic roles. In this new occupational landscape, the ‘half-life of technological skills’ is now less than five years.

    This means that continuous learning agility is becoming the most important meta-skill. The future of work is not about competition between humans and AI, but about their symbiosis.

    The most effective professionals are those who master the art of using AI as a collaborative partner to enhance their own creativity and productivity.

    Navigating the next wave of IT transformation

    Analysis of global IT spending trends for 2025 clearly shows that we are witnessing profound, structural changes. We are seeing a shift from spending more to spending smarter, and the AI market is maturing, moving from building to integrating off-the-shelf solutions.

    At the same time, business models are evolving from selling products to selling services, forcing a transformation in the labour market – from static roles to dynamic skills.

  • How to use the cloud wisely? The balance between profit and vendor lock-in

    The technology landscape is somewhat reminiscent of the architecture of major metropolises – it is impressive, functional and offers almost unlimited growth opportunities, but at the same time rests on a foundation of deep dependencies that are often invisible at first glance. The adoption of native services offered by global cloud giants has become almost an unconditional reflex for modern businesses.

    Indeed, under pressure to deliver innovation quickly, it is hard to make a more rational decision. Tools integrated into a single ecosystem promise an immediate leap in productivity, removing barriers from developers’ way that a decade ago required months of infrastructure investment. However, within this idyllic picture lies a fundamental question about the price of convenience, which over time can turn into technological bondage.

    Dependence on a single supplier, commonly known as vendor lock-in, is not a phenomenon that occurs suddenly as a result of a gross planning error. Rather, it is an incremental process, the result of hundreds of small, fully justified technical choices. When an engineering team decides to use a specific NoSQL database because of its unique latency parameters or implements advanced process orchestration features only available from one vendor, it builds real business value.

    At the same time, however, each such decision adds another brick to the wall of dependency. The problem arises when the organisation loses sight of the aggregate cost of these micro-decisions, waking up to the reality that it becomes financially and operationally unfeasible to change strategic direction.

    When analysing the nature of cloud lock-in, it is important to go beyond the simplistic framework of subscription costs. The real risk is multidimensional. The economic layer is the most tangible – the lack of a viable alternative deprives the company of a key asset in commercial negotiations. The supplier, aware of the migration costs on the customer’s side, can set prices freely, knowing that the exit barrier is almost impenetrable.

    Equally important, although less often discussed, is the competence aspect. The specialisation of teams in specific, proprietary technologies means that engineering expertise is no longer universal. The engineer becomes not so much an expert on cloud solutions as an expert on a specific product, which in the long term limits the company’s staffing flexibility.

    The most serious threat, however, is the loss of strategic sovereignty. When a company’s product development plan becomes hostage to the cloud provider’s roadmap, the organisation loses its ability to respond autonomously to market changes.

    If a key function of an application relies on a specific AI service that is withdrawn or drastically changed by the supplier, the company is faced with a fait accompli, with no say over its own technological foundations.

    However, the answer to these challenges is not technological fundamentalism. Trying to build systems so that they are fully portable between different clouds in a matter of days is most often a pipe dream that generates gigantic costs and deprives the company of the benefits of modern solutions. The key to success is to adopt an informed choice architecture strategy. It requires a precise categorisation of services into those that are treated as commodities and those that represent a competitive advantage.

    Commodity-like services, such as standard computing capacity or data storage, should be implemented using layers of abstraction. Containerisation and infrastructure-as-code management tools allow a high degree of mobility where the uniqueness of the solution does not bring direct business gain.

    In contrast, in areas that constitute product uniqueness – for example, in advanced analytics or specific server services – deep integration into the vendor ecosystem can be fully justified. The market advantage resulting from faster innovation often outweighs the risk of future lock-in, provided the decision is documented and informed.

    Introducing defensive mechanisms into the software architecture allows control to be maintained without sacrificing performance. Using proven design patterns that separate business logic from vendor-specific programming interfaces is an investment in future flexibility. Thanks to such measures, replacing one component with another does not necessarily mean having to rewrite the entire system from scratch.
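
    One minimal sketch of such a pattern is a thin storage ‘port’ with interchangeable adapters, so that business logic never imports a vendor SDK directly; the interface and class names below are illustrative, and the boto3 import is deferred so the example still runs where that SDK is not installed.

    ```python
    """Sketch of the ports-and-adapters idea applied to object storage.

    Business logic depends only on the DocumentStore protocol; swapping the
    cloud provider means writing a new adapter, not rewriting the application.
    Class and method names are illustrative, not taken from any framework.
    """
    from pathlib import Path
    from typing import Protocol


    class DocumentStore(Protocol):
        def save(self, key: str, data: bytes) -> None: ...
        def load(self, key: str) -> bytes: ...


    class LocalDiskStore:
        """Adapter used in tests or on-premises deployments."""
        def __init__(self, root: str) -> None:
            self.root = Path(root)
            self.root.mkdir(parents=True, exist_ok=True)

        def save(self, key: str, data: bytes) -> None:
            target = self.root / key
            target.parent.mkdir(parents=True, exist_ok=True)
            target.write_bytes(data)

        def load(self, key: str) -> bytes:
            return (self.root / key).read_bytes()


    class S3Store:
        """Adapter for an S3-compatible service; boto3 is imported lazily so the
        module still loads where that SDK is absent."""
        def __init__(self, bucket: str) -> None:
            import boto3  # assumption: boto3 is installed when this adapter is chosen
            self.client = boto3.client("s3")
            self.bucket = bucket

        def save(self, key: str, data: bytes) -> None:
            self.client.put_object(Bucket=self.bucket, Key=key, Body=data)

        def load(self, key: str) -> bytes:
            return self.client.get_object(Bucket=self.bucket, Key=key)["Body"].read()


    def archive_invoice(store: DocumentStore, invoice_id: str, pdf: bytes) -> None:
        """Business logic: knows nothing about which vendor sits behind the store."""
        store.save(f"invoices/{invoice_id}.pdf", pdf)


    if __name__ == "__main__":
        archive_invoice(LocalDiskStore("/tmp/docstore"), "2026-001", b"%PDF- example")
    ```

    Swapping the provider then means writing one new adapter and leaving archive_invoice untouched; the same idea applies to queues, secrets or AI services.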

    However, it is important to avoid the trap of over-engineering. Building complex layers of isolation for each, even the simplest service, can prove more expensive than the eventual migration itself in the future.

    Technology sovereignty management is essentially a process of continuous risk management. The organisations demonstrating the greatest maturity are those that regularly audit their infrastructure against their exit strategy.

    It is not a matter of constantly planning the migration, but of being aware of what steps would be necessary in a critical scenario. This approach changes the position of the IT department within the company structure – it ceases to be merely a cost centre and becomes the guardian of strategic security.

    In the final analysis, supplier lock-in is not an unambiguously negative phenomenon. It is a risk vector that, skilfully exploited, can become a catalyst for growth. Technical sovereignty in 2026 is not about total independence, which is virtually impossible in a globalised world, but about having full knowledge of the price you pay for whatever path you choose.

    Deciding how much freedom to give up in exchange for speed and innovation remains one of the most difficult and important competencies of today’s technology leader. It is this ability to balance convenience and control that will define the winners of the coming decade in digital business.

  • The war in Iran and cloud pricing – How geopolitics is hitting the IT sector

    The modern global economy resembles an intricate network of interconnected vessels, in which a tremor caused at one point on the globe resonates with unexpected force at the opposite end. While it might seem that the sterile, air-conditioned halls of Europe’s data centres are separated by an infinite distance from the dust and chaos of the Middle East, reality brutally contradicts this belief.

    Today’s technology, despite its apparent ethereality, remains deeply rooted in the physicality of raw materials and the stability of trade routes. What is happening in the bottleneck of the Strait of Hormuz is not just a local armed incident, but a direct impetus adjusting the IT sector’s operating margins globally.

    This phenomenon can be described as a geopolitical risk premium. The market for digital services has ceased to respond solely to classic supply and demand mechanisms and has begun to price uncertainty. When the world’s key energy arteries are compromised, the price of technology rises not because power has stopped flowing from the socket, but because the cost of maintaining the stability of that flow becomes dramatically higher.

    The foundation of any cloud infrastructure is energy. In Europe’s energy mix, natural gas still acts as the marginal price-setting fuel. Any disruption in the Middle East, which is the planet’s energy granary, immediately translates into higher electricity bills, which the operators of large server farms have to pay to keep their computing processes running.

    Often seen as an immaterial entity, the cloud actually ‘breathes’ electricity, and its breath becomes more expensive the more turbulent the regions of fossil fuel extraction.

    The situation is complicated by the fact that modern data centres are facilities designed for absolute reliability. Guaranteeing service availability of more than ninety-nine per cent relies on extensive emergency power systems. These generators, which are the last line of defence against a blackout, run on diesel.

    Rising oil prices therefore directly increase the cost of maintaining operational readiness. These accumulating energy costs cease to be just a spreadsheet item and become a barrier to entry for innovative projects, especially when AI, with its exponentially growing appetite for computing power, is developing rapidly.

    When analysing the supply chain, it is important to recognise that the impact of conflict goes far beyond energy alone. The logistics of IT equipment, including the transport of servers, disk arrays and advanced components, is extremely sensitive to fluctuations in transport fuel prices. However, even more acute, although less visible, is the increase in the cost of associated services.

    Geopolitical instability is forcing logistics and insurance companies to renegotiate rates. Risk premiums in maritime and air transport act as a hidden tax that ultimately burdens the end customer’s wallet.

    A particularly worrying aspect is the fate of critical raw materials such as helium supplied from Qatar. This gas is indispensable in the production of state-of-the-art semiconductors. A transport blockade in the region could paralyse factories in Taiwan, with a consequent return to the days of drastic component shortages.

    From a business perspective, this means having to abandon the ‘just in time’ delivery strategy in favour of building up costly strategic reserves.

    The current balance of power on the world map is forcing a redefinition of digital asset placement strategies. Technological security today is also a geographical analysis. Cloud regions located in countries with high political risk are losing their attractiveness, while countries offering a stable energy mix, based on nuclear or renewables, are becoming new bastions of operational sovereignty.

    A key task for executives therefore becomes optimising cloud costs through advanced FinOps practices. IT financial management is now part of a company’s defence strategy.

    Understanding that every inefficiency in application code or unused server instance is a waste of resources that are becoming scarcer and more expensive is fundamental to modern technology leadership.

    In conclusion, the conflict in the Strait of Hormuz region represents a test of sorts for the resilience of the global technology sector. It demonstrates emphatically that the digital world is not isolated from tectonic shocks in geopolitics.

    Business must accept the new reality that energy inflation and supply uncertainty are constants in the equation. Adapting to these conditions requires, first and foremost, a deep awareness that cloud stability begins where dependence on uncertain energy sources and threatened trade routes ends.

  • Microsoft halts cloud and sales hiring

    Microsoft has ordered managers of key units, including its strategic cloud division and North American sales groups, to halt the recruitment of new employees, reports The Information. The decision, while not corporate-wide, signals a deeper shift in resource management at the threshold of the end of the fiscal year.

    Microsoft’s move is a classic example of margin optimisation in the face of gigantic capital expenditure. The company, which employs more than 220,000 people globally, is under increasing pressure from Wall Street. Investors, accustomed to the steady growth of the Azure business, are watching anxiously as record sums are spent on the data centres and processors needed to support language models. The hiring freeze in sales and cloud infrastructure is a signal that the company is looking for savings where growth rates have stabilised in order to fund the areas with the highest potential for breakthrough.

    Importantly, the recruitment freeze is selective. The teams responsible for developing Microsoft’s Copilot tool and key AI projects still have the green light to recruit talent. This is a clear indication that, for CEO Satya Nadella, ‘artificial intelligence’ is no longer just an add-on to the portfolio, but a new business core to which the cost structure of the entire organisation is subordinated.

    Microsoft’s actions are part of a wider ‘year of efficiency’ trend in Silicon Valley. While Meta is cutting a fifth of its workforce and Amazon is correcting pandemic-era over-expansion, Microsoft is taking the route of surgical precision. Instead of mass layoffs on the scale of its market rivals, the company is relying on budgetary discipline in traditional verticals.

    Technology companies are not only building AI for their customers, but are themselves going through a painful process of reorganisation in which human capital has to give way to investment in computing power.

  • How much electricity does AI use? The US government is launching a big count. The consequences could reach Poland

    The US Department of Energy (DOE) is ending the guesswork. The launch of a pilot study into the real-world energy consumption of data centres signals that the AI sector’s uncontrolled appetite for electricity is coming to an end. Although the study applies to Texas, Virginia and Washington, its echoes will in time hit the European market, including the rapidly growing technology hub in Poland and the CEE region.

    Until now, technology giants have operated largely in the realm of estimates. Now that the Energy Information Administration (EIA) is starting to ask for specific sources of emergency power and actual network load, Polish data centre operators and investors must prepare for a similar tightening of policy from EU and national regulators.

    Pressure on efficiency in the CEE region

    As a key point on the map of digital expansion in Central Europe, Poland faces a unique challenge. Our energy mix, still heavily based on coal, means that the construction of more ‘server farms’ raises social and environmental tensions. A US study shows that the gigantic demand for AI can no longer be hidden under the guise of general declarations about green energy. Polish entrepreneurs should pay particular attention to three aspects: the stability of energy prices for individual consumers, the risk of overloading local grids and the need to invest in self-consumption and their own renewable energy sources (RES).

    In the CEE region, where energy costs are a key competitive factor, transparency can prove to be a double-edged sword. On the one hand, accurate data will allow better planning of critical infrastructure. On the other, they may expose the weaknesses of energy systems that are not ready for the demand surges generated by language models.

    Tristan Abbey’s EIA initiative is a lesson in humility for the Big Tech sector. It demonstrates that technology does not develop in a vacuum and is underpinned by a physical energy infrastructure with limited resources.

    This is why echoes from Virginia or Texas are likely to be heard in Warsaw and Prague:

    1. Standardisation of reporting requirements

    When the US giants (Amazon, Google, Microsoft) are forced to report their energy consumption in detail in the US, over time they will implement the same monitoring systems in their European branches. For Polish business, this means that local subcontractors and co-location operators will have to adapt to the same rigorous transparency standards in order to maintain contracts with global players.

    2. Fight for scarce resources

    The problem of ‘no power’ for AI is global. If the US – a country with huge gas reserves and a developed grid – starts to officially measure the problem, it is a wake-up call for Europe, where the grid is older and more burdened by the energy transition. Poland, being in the process of moving away from coal, has even less margin for error. Investors are watching the DOE closely because they know that if the US ‘runs out of space’ in its sockets, the pressure to build in the CEE region will increase, driving up connection prices in our country.

    3. ‘Export’ of regulations

    Historically in tech it works like this: The US defines the technical problem and Europe (EU) gives it a legal framework. The data collected by the EIA in Houston will be carefully analysed by Brussels when designing the next iteration of the Energy Efficiency Directives (EEDs). Poland, as a country with high CO2 emissions per kWh in the region, is most vulnerable to the negative effects of such regulations if data centres are found to consume more than assumed.

    4. Chain reaction in the supply chain

    The questions about backup power (diesel generators vs. batteries) that Tristan Abbey asks are a direct hit to the infrastructure market. Polish power equipment companies need to keep an eye on these trends, as they will set the procurement standards for the next decade.

    In short: this is not a local dispute in Virginia. The cloud has become a measurable, heavy burden on the national economy. Any Polish CEO planning to migrate to the cloud in 2026 must take into account that its cost will be increasingly linked to the price of emission allowances and network capacity, which the US has just started to ask about.

  • The technology gap is widening: SMEs vs corporates in the race for AI

    Small and medium-sized enterprises (SMEs) are the backbone of the European economy. They account for 99.8 per cent of all companies, generate more than half of the added value and employ nearly two-thirds of the private sector workforce. In an era of global competition and rising customer expectations, digitalisation is no longer an option for them – it has become a condition for survival. However, the latest data from across the European Union paints a worrying picture: while large corporations are departing on the digital express, the SME sector is largely still waiting on the platform.

    An analysis of the adoption of the three pillars of modern business – cloud computing, artificial intelligence (AI) and cyber security – reveals a deep and widening gap. This ‘digital maturity gap’ threatens not only the competitiveness of individual companies, but also the achievement of the EU’s ambitious strategic goals, known as the ‘Road to the Digital Decade’.

    Two-speed Europe: who is the digital leader and who is being left behind?

    To understand the real level of digitalisation, it is not enough to see if a company has access to the internet. The key is how deeply technology is integrated into its business processes. This is measured by the EU’s Digital Intensity Index (DII), which assesses the use of 12 key technologies.

    Only 58% of SMEs in the EU have reached a ‘basic level’ of digitalisation, which means using at least four of these technologies. This is a far cry from the EU’s target that more than 90% of companies in the sector should reach this threshold by 2030.

    The map of Europe shows a clear division. The Nordic countries are at the head of the peloton, with as many as 86% of SMEs meeting the criteria for basic digitisation in Finland and 80% in Sweden. At the other extreme are Romania (27%) and Bulgaria (28%). Poland, with a score of 43% (data for 2022), is well below the EU average, which signals systemic barriers inhibiting the potential of our companies.

    The problem is the difference between ‘being online’ and ‘being digital’. Almost all companies in the EU have broadband, but they often use it passively – for email or social media profiles. The real transformation begins when technology becomes an integral part of the operating model, not just a facade.

    Cloud computing: a foundation that shows cracks

    Cloud computing is today the cornerstone of flexibility and scalability. In 2023, 45.2% of businesses in the EU were using it, reflecting steady but slow growth. However, the devil is in the detail.

    The biggest challenge is the ‘cloud gap’ between companies of different sizes. While 77.6% of large corporations are actively using the cloud, the figure for small businesses drops to just 41.7%. This is a gap of more than 35 percentage points, showing that SMEs still face barriers to accessing this fundamental technology.

    Moreover, companies that are already in the cloud mainly use it for basic tasks: email handling (82.7%), file storage (68%) or office software (66.3%). They are much less likely to use advanced services such as developer platforms (PaaS) or computing power (IaaS), which are essential for building innovation.

    The conclusion for managers is simple: the cloud is not just a storage facility for data, but first and foremost a launch platform for AI. Companies that do not invest in a mature cloud infrastructure today will have a double barrier to overcome tomorrow to enter the world of artificial intelligence.

    Artificial intelligence: the technology that divides most

    If the cloud shows the cracks, artificial intelligence reveals the real divide. Despite the huge interest, AI adoption in European companies remains alarmingly low, at just 13.48% in 2024. This is a result that is dramatically far from the EU target of 75% for 2030.

    The AI implementation gap is gigantic. Artificial intelligence is used by as many as 41.17% of large corporations, but only 11.21% of small companies. This means that large companies implement AI almost four times as often. Poland, with a score of 5.9%, is near the bottom of the European ranking, ahead of only Romania (3.07%).

    Why is the gap so deep? Cloud deployment is often a decision to optimise costs. AI implementation is a strategic investment with an uncertain return, requiring not only capital, but above all competence and a mature data management strategy – resources that SMEs often lack.

    If this trend continues, AI, rather than levelling the playing field, will become the ‘great divider’. This could lead to a ‘winner-take-all’ scenario, in which large, data-rich corporations, thanks to AI, will become even more powerful, marginalising smaller players.

    Cyber security: the paradox of risk in the SME sector

    On paper, the situation looks good: 92.76% of companies in the EU use at least one ICT security measure. However, these are mainly basics, such as strong passwords or data backup. The real picture of digital resilience emerges when we look at proactive measures.

    A regular ICT risk assessment – the cornerstone of any mature security strategy – is carried out by only 34.1% of companies in the EU. The difference between large (75.62%) and small (29.35%) companies here is colossal. This means that most SMEs are operating ‘blindly’ without fully understanding their attack surface.

    This leads to the ‘SME digital risk paradox’. On the one hand, small businesses are increasingly being targeted, seen as ‘easier prey’ and a gateway to the supply chains of larger partners. On the other hand, they invest the least in strategic defence, mistakenly believing that they are too small to attract the attention of cybercriminals. In the connected economy, SME security becomes a security issue for the entire ecosystem.

    How to bridge the digital divide?

    Passivity is no longer an option. To survive and compete in the digital decade, SME leaders must take decisive action.

    It makes sense to start with strategy, not technology. Before you invest in any tool, define the key business problem you want to solve. Is it increasing sales, reducing costs or perhaps improving customer service? Only then select the right solution.

    Use the cloud as a foundation. Migrate core systems (email, files, accounting) to the cloud. This will not only free up resources and increase security, but most importantly create a centralised database – a prerequisite for future AI implementations.

    Invest in people, not just platforms. The best technology is useless without a competent team. Take advantage of available EU and national programmes (e.g. Digital Skills and Jobs Coalition, SME4DD) to upskill staff in data analytics, digital marketing and cyber security.

    Think security from the outset. Treat cyber security as an integral part of any digital project, not an expensive add-on. A proactive approach is always cheaper and more effective than reacting to a crisis.

  • OpenAI on the AWS platform? Microsoft fights for cloud exclusivity

    The Financial Times reports that Microsoft is considering legal action against OpenAI and Amazon. The bone of contention is a $50 billion deal that could end the Redmond giant’s previous dominance as the exclusive cloud infrastructure provider for ChatGPT developers.

    ‘Frontier’, OpenAI’s new commercial product, has become a flashpoint. The key question is whether making it available via Amazon Web Services violates the exclusivity provisions of the Azure platform.

    For business leaders, this signals that the era of monolithic partnerships in AI is coming to an end. OpenAI, seeking to diversify its revenue and reach, is beginning to test the limits of loyalty to its largest investor.

    From a market perspective, the potential litigation could redefine the standards of cooperation between model providers and infrastructure giants. If Amazon manages to break Microsoft’s monopoly, a new wave of competitiveness awaits, forcing enterprises to be more flexible in their multicloud strategies.

  • Hidden cloud costs in AI projects: How to avoid them in 2026?

    Implementing artificial intelligence in many organisations was supposed to be like turning on a light – a process that is quick, seamless and instantly brightens the business horizon. The reality, however, is proving to be much more challenging, more closely resembling the building of an entire power plant from scratch. The success of advanced algorithms today does not depend solely on choosing the right model, but more importantly on keeping infrastructure costs in check before the technology starts to earn its keep.

    According to Wasabi Technologies’ latest Cloud Storage Index report, investment in artificial intelligence is growing at an exponential rate. What is surprising, however, is that as much as 65 per cent of these budgets do not feed into the accounts of innovative software developers at all, but instead flow broadly towards the foundations: storage, data storage systems and pure computing power.

    Valley of disappointment vs. incubation phase

    The clash between hype announcements and hard financial data is sometimes painful. Currently, only 29 per cent of surveyed companies in the German market report a positive return on investment in AI-based projects. On the surface, this result could be cause for concern, but a deeper analysis reveals a completely different picture. As many as 62 per cent of organisations assume that these investments will start generating real returns in the next twelve months. This phenomenon can be referred to as deferred ROI.

    Businesses are coming to the realisation that implementing artificial intelligence is not a sprint, but an extremely demanding marathon. Analytical models require time, vast amounts of precise information and advanced training. Before the expected productivity gains and new business models can emerge, organisations must endure a long incubation period in which capital is allocated intensively without immediate, measurable financial results.

    The cloud bill of horrors and hidden costs

    During this transitional period, unforeseen infrastructure costs become the biggest threat to the liquidity of innovation projects. The referenced report exposes an inconvenient truth, indicating that almost 48 per cent of companies exceeded their budgets for cloud services in the past year. The reason for this is rarely a mere physical lack of disk space.

    Far more often, budgets melt away in the clash with hidden fees. Half of the expenditure on cloud storage often consists of additional costs related to data transfer, API queries or complex access management. The aggregation and processing of the terabytes of data required to power artificial intelligence models generates gigantic network traffic, for which cloud providers bill heavily.
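
    A simplified worked example shows how quickly those add-on fees can rival the headline storage price; every unit price below is an illustrative placeholder, not any provider’s actual rate card.

    ```python
    # Worked example: "hidden" fees can rival the headline storage price.
    # All unit prices and volumes are illustrative placeholders.
    stored_tb = 200                    # data kept in object storage
    egress_tb = 60                     # data pulled out each month to feed training jobs
    api_requests_millions = 500        # GET/PUT calls from data pipelines

    price_storage_per_tb = 21.0        # EUR per TB-month (assumed)
    price_egress_per_tb = 80.0         # EUR per TB transferred out (assumed)
    price_per_million_requests = 0.40  # EUR (assumed)

    storage_cost = stored_tb * price_storage_per_tb
    egress_cost = egress_tb * price_egress_per_tb
    request_cost = api_requests_millions * price_per_million_requests
    total = storage_cost + egress_cost + request_cost

    print(f"visible storage line item:  {storage_cost:8.2f} EUR")
    print(f"egress + API 'hidden' fees: {egress_cost + request_cost:8.2f} EUR")
    print(f"total bill:                 {total:8.2f} EUR "
          f"({(egress_cost + request_cost) / total:.0%} of it outside the storage line)")
    ```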

    An additional burden is the poor quality of the data itself. Storing unstructured, duplicated or erroneous information costs twice. First, it generates unnecessary storage costs, and then it leads to useless, error-laden algorithm results, which ultimately nullifies the entire investment effort.

    Escape to hybrid architecture

    The answer to rising costs and system complexity is the growing popularity of hybrid environments. More than 64 per cent of businesses are choosing to combine a local server infrastructure with a public cloud. This division of roles appears to be the optimal compromise in times of market uncertainty. The public cloud takes on the heaviest tasks of aggregating huge data sets and long-term archiving, representing the beginning and end of the analytics pipeline.

    Local servers, on the other hand, are used to securely process the company’s most sensitive, strategic assets. However, it is important to remember that this hybrid solution, while extremely flexible, drastically increases the complexity of managing the entire IT ecosystem. Successful orchestration of such an environment requires outstanding architectural expertise to ensure that the cost of transferring data between different zones does not eat up the gains generated by optimisation alone.

    A question of trust in the shadow of cyber attacks

    Even the best-optimised infrastructure loses its relevance in the face of security breaches. The problem is extremely acute, given that almost half of the companies surveyed have experienced a cyber attack that affected access to their data stored in the public cloud. This situation creates a deep crisis of trust. A significant proportion of users of such solutions do not have complete confidence that their digital assets remain intact after a security incident.

    The business consequences could be catastrophic in this case. If AI-based systems start making strategic financial or operational decisions based on data that has been inadvertently modified by an intruder, the entire organisation will be on the brink of the precipice. Therefore, the stability and absolute security of the data storage architecture is an absolute prerequisite before any implementation of advanced analytics.

    The foundations of true innovation

    True technological transformation rarely begins with flashy visions spun in boardrooms. Its foundations are poured in carefully designed, secure data centres. Before an organisation decides to purchase expensive artificial intelligence-based software licences, it is essential to conduct a rigorous audit of its existing architecture.

    Particular attention must be paid to cost transparency, elimination of hidden fees, rigorous hygiene of collected information and unwavering security of the entire ecosystem. Advanced algorithms are not forgiving of digital clutter. The sooner companies get their technological foundation in order, the more efficiently they will join the elite group of those organisations that can already turn the potential of artificial intelligence into measurable business returns.

  • Control Architecture: How NIS2 and Data Act regulations have redefined cloud maturity in 2026

    Control Architecture: How NIS2 and Data Act regulations have redefined cloud maturity in 2026

    The fascination with cloud computing technology itself has given way to an era of mature risk management. Until a few years ago, debates in IT directors’ offices oscillated around the dichotomy between on-premises and public infrastructure, treating migration as an end in itself. The year 2026, however, brought a sobering and profound redefinition of priorities. Today, the cloud has ceased to be merely infrastructure to be migrated to and has become a strategic ecosystem in which control is the key currency. Indeed, the real challenge is no longer a question of where a container or virtual machine physically resides, but who is actually in control of cost, operational continuity, legal compliance and the ability to change course when market dynamics demand it.

    The business landscape has been shaped by two powerful regulatory pillars: the NIS2 Directive and the EU Data Act, which took full effect on 12 September 2025. Although initially treated with some reserve, typical of new bureaucratic burdens, in retrospect they appear as catalysts for positive change. They have transformed the European digital services market from a space dominated by the arbitrary rules of global providers to an environment where transparency and interoperability have become a standard rather than a privilege.

    Fundamental to this change is the shift from declarative security to operational resilience. For years, many organisations have relied on so-called catalogue security, trusting that the certifications of the big players automatically solve the problem of protecting assets. The implementation of NIS2 has brutally verified this approach, imposing a common framework that requires real risk management measures and precise incident reporting mechanisms. In 2026, security is seen as a continuous process of monitoring, detecting and actively learning from mistakes. The difference between having control and being protected has become clear: the former requires the ability to demonstrate at any time what happened, what steps were taken and how the failure was mitigated.

    In parallel, the Data Act has introduced a new dynamic in the relationship between the customer and the processing provider. A key element of this regulation is the facilitation of migration between providers, effectively hitting the phenomenon of dependence on a single technical partner. Minimum requirements for cloud contracts and imposed interoperability standards have meant that the concept of exit readiness is no longer just a theoretical provision in business continuity plans. In practice, this means that organisations can today plan their architecture in a modular manner, without fear of economic or technological barriers to a possible change of provider. The ability to seamlessly transfer data and functionality without losing its integrity has become the insurance policy of the modern business.

    Nowadays, there is a clear trend for medium and large companies to seek more customised models. Increasingly, the choice is falling on hybrid environments or private models hosted within established cloud providers. This structure preserves the benefits of consuming resources as a service, while offering a higher level of isolation, traceability and, most importantly, operational proximity. In this context, labels matter less and less. It becomes irrelevant whether the model is called public or private, as long as it measurably addresses the fundamental needs of the business.

    Three questions are key here, which in 2026 represent a kind of litmus test for any cloud strategy. The first relates to operational peace of mind: does the architecture allow for stable operations without worrying about sudden regulatory or technological changes? The second relates to auditability: is the compliance verification process frictionless, evidence-based and naturally collaborative with the provider, rather than tediously mining data from opaque systems? The third, and perhaps most important, relates to freedom: does the organisation have a viable and feasible exit route if the partnership ceases to meet expectations?

    True business resilience is no longer equated with a simple high availability parameter written into a contract. Mature organisations understand that business continuity does not come from a blanket provision of guaranteed uptime, but from sound design, application-level replication and regularly tested disaster recovery plans. With this approach, businesses stop improvising with each new project, relying instead on repeatable mechanisms and clear recovery objectives. This shift from reactive firefighting to predictable crisis management is one of the biggest successes forced by the new framework.
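
    One way to turn such recovery objectives into something testable rather than contractual prose is sketched below; the thresholds, class names and drill results are purely illustrative assumptions:

    ```python
    # A minimal sketch of making recovery objectives testable rather than declarative.
    # Objectives and test results are illustrative assumptions.
    from dataclasses import dataclass
    from datetime import timedelta

    @dataclass
    class RecoveryObjective:
        rpo: timedelta  # maximum tolerable data loss
        rto: timedelta  # maximum tolerable downtime

    @dataclass
    class DrTestResult:
        data_loss: timedelta        # age of the newest restorable record in the drill
        time_to_restore: timedelta  # how long the rehearsed restore actually took

    def evaluate(objective: RecoveryObjective, result: DrTestResult) -> dict:
        """Compare a rehearsed disaster-recovery drill against the declared objectives."""
        return {
            "rpo_met": result.data_loss <= objective.rpo,
            "rto_met": result.time_to_restore <= objective.rto,
        }

    crm_objective = RecoveryObjective(rpo=timedelta(minutes=15), rto=timedelta(hours=2))
    last_drill = DrTestResult(data_loss=timedelta(minutes=10), time_to_restore=timedelta(hours=3))
    print(evaluate(crm_objective, last_drill))  # {'rpo_met': True, 'rto_met': False}
    ```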

    The human factor is also not insignificant. The most valuable attribute of a cloud provider turns out to be a stable team that understands the specifics of a particular business, its critical moments and periods of peak demand. The best cloud is not the one that offers the most elaborate management console, but the one that realistically takes the operational burden off the customer’s shoulders. Team continuity on the part of the technology partner is often the only difference between a chaotic response to an incident and a controlled process of system evolution.

    The issue of upgrading applications is also worth noting. The cloud loses its economic efficiency when it is treated merely as expensive hosting for outdated solutions. Excessive resource consumption and the need to manually handle legacy workloads generate layers of exceptions that, over time, become a brake on innovation. True productivity is born out of a step-by-step upgrade towards cloud-native patterns, where automation, scalability and observability are built into the very design of the system. A hybrid model, skilfully designed, allows you to draw the best of both worlds: to benefit from the advanced analytics services or artificial intelligence of global players, while maintaining the core of your business in a secure, sovereign and fully controlled environment.

    The migration process is no longer seen as simply copying machines. It requires precise planning, coordination with the business and the redesign of security policies from day one. When the supplier takes full responsibility for the process, operational risk drops dramatically and deployment timelines become predictable. This is a key element in building a competitive advantage, especially in industries subject to strong regulatory rigour.

    The year 2026 is when cloud maturity is measured not by the number of services available, but by the quality of control over them. European regulations such as NIS2 and the Data Act, while demanding, have laid a solid foundation for a system where security, sovereignty and portability are immanent features of digital services. Businesses that have understood this lesson no longer see the cloud as an expense, but as a platform for growth, providing traceability, proven continuity and, above all, the peace of mind necessary to make bold decisions in a global marketplace. In this new dispensation, the winners are those for whom technology is a servant of strategy, not a constraint on it.

  • Google and AWS want to go local. IT giants battle it out for the European cloud market

    Google and AWS want to go local. IT giants battle it out for the European cloud market

    For the past decade, the technology world has fed us a vision of digital cosmopolitanism. Cloud computing was supposed to be a transnational entity, an ethereal layer of innovation that, like Roman aqueducts, provides life-giving resources regardless of latitude. We believed in ‘Cloud Anywhere’, in stateless clusters and an architecture for which national borders were merely an annoying artefact of the analogue past.

    However, 2026 brings a painful wake-up call. According to Gartner‘s latest forecasts, global spending on sovereign clouds will increase by 35.6%, reaching a not inconsiderable $80 billion. This is no mere market correction. This is the moment when digital globalism collides with the hard wall of geopolitics, and the Seattle and Mountain View giants – hitherto the priests of universalism – must hastily learn their local dialects.

    The anatomy of a concession

    Rarely in the history of IT have the major players voluntarily abandoned economies of scale. The foundation of the power of AWS or Microsoft Azure was unification: one technology stack, one operating model, one global management system. But today’s landscape, dominated by European states’ fear of losing their ‘digital autonomy’, is forcing the giants into a process that could be called controlled fragmentation.

    The launch of AWS European Sovereign Cloud or the Sovereign Core platform from IBM are acts of capitulation to the hard law of sovereignty. They are an attempt to answer a fundamental question: who has the last word when the cloud operating system needs to be rebooted and the encryption keys are of interest to a foreign jurisdiction?

    Survival strategy

    The most interesting phenomenon, however, is how deftly the technology giants are adapting to the role of ‘local suppliers’. We are seeing a fascinating market spectacle: companies that epitomise American technological dominance are entering into alliances with national telecoms champions in Europe or Asia. Partnerships with T-Systems in Germany or Orange in France are nothing short of a ‘white-labelling’ of trust.

    For the business customer, this is a paradoxical situation. On the one hand, they receive the promise of Silicon Valley-grade innovation; on the other, a guarantee that data will not leave their backyard. But has anything really changed beneath this mask? Critics point to the US CLOUD Act, which in theory allows US agencies to access data managed by US-based companies, regardless of the location of the server. Hyperscalers are bending over backwards to prove that technical barriers render this law toothless. It is a technological arms race in which credibility is at stake.

    80 billion reasons to play locally

    Why do the giants opt for the engineering nightmare of maintaining separate sovereign regions? The answer is: because they have no choice. Gartner predicts that by the end of 2026, organisations will move 20% of their existing workloads from global public clouds to local providers. This is a gigantic capital outflow.

    Spending growth of 35.6% is being driven by critical sectors: governments, banking, energy. These are industries that have stopped believing in the ‘goodwill’ of global corporations. As trust erodes to the point where government organisations begin to consider whether geopolitical tensions could lead to sudden service cuts, sovereignty has become the new KPI for boards.

    Gartner’s Rene Buest rightly points out that the aim is to ‘keep wealth generation within its own borders’. Data has become the new oil, and the sovereign cloud is the local refinery. Countries have realised that by allowing data to flow freely to global centres, they are losing not only control, but also the potential to build their own AI models and innovations.

    Sovereignty tax

    However, this new reality carries a hidden cost. We need to talk openly about a ‘sovereignty tax’. Localised solutions, cut off from global networks, will inherently be more expensive to maintain. Moreover, they may suffer from so-called ‘technology lag’. The latest AI services, the most advanced language models or analytics functions tend to debut in major cloud regions. Sovereign enclaves may receive them with a delay of several months or even a year.

    Business is therefore faced with a dilemma: maximum innovation or absolute control?

    Will the mask become a face?

    The year 2026 will go down as the moment when cloud computing finally lost its innocence. The hyperscalers, donning the masks of local providers, made a masterstroke – instead of fighting regulation, they decided to capitalise on it.

    However, it is important to remember that data sovereignty is not just a question of where the server stands, but who has the authority to operate it and who controls the platform’s source code.

  • IT partner in 2026: Why is the partner channel generating $2 trillion in SME sector spend?

    IT partner in 2026: Why is the partner channel generating $2 trillion in SME sector spend?

    Paradoxically, it is not algorithms but trusted human capital that is becoming the most valuable asset of smaller businesses. The SME sector is increasingly shifting its budgets towards specialised partners, looking to them not only as suppliers, but above all as architects of survival. In 2026, as much as 79% of IT spending in the SME sector flows through the hands of commercial partners. In regions such as EMEA or Latin America, this figure exceeds 80%.

    This is no coincidence, but proof that relationship and local trust are becoming the hardest currency in business.

    Partner as ‘external brain’ of the operation

    For a small or medium-sized company, technology is rarely an end in itself – it is a tool for survival and growth. At the current rate of innovation, managing the technology stack alone is becoming an insurmountable barrier for SMEs. The difference between the market average (66.7% spend by partners) and the SME sector (79%) shows that the smaller the scale of the business, the greater the need for a trusted guide.

    The IT partner in 2026 has become the de facto ‘external technology director’. Companies with between 100 and 499 employees, which account for as much as 42% of spend in their segment, are not looking for products on the shelves of digital giants. They are looking for someone who will take responsibility for consultancy, implementation and, most importantly, ongoing operational support.

    The end of the dictatorship of “boxed” solutions

    The SME market in 2026 has developed a defence mechanism against technological chaos. Although this sector’s spending is growing more slowly than the broader enterprise market, its structure is becoming increasingly consolidated around external advisors. While the largest corporations are pumping billions directly into hyperscale data centres, smaller companies have almost completely handed over the reins to local partners.

    This change is not a coincidence, but a pragmatic calculation. The medium-sized company is not looking for access to raw computing power, but for ready-made continuity of processes. In EMEA, where partners control as much as 82% of spending, technology has become a service whose stability must be vouched for by a specific individual, not the anonymous rules and regulations of a global provider.

    Managed services: A new standard for security and peace of mind

    Omdia’s data analysis sheds light on a fascinating trend: the dynamic growth of managed services, which is expected to reach $251 billion at 9.7% growth. This signals a profound mental shift in business. Entrepreneurs have realised that a one-off implementation is only the beginning.

    Technology in the hands of smaller companies has become a test of character and trust. While the market giants are tempted by direct access to powerful infrastructure, the SME sector in 2026 is massively opting for the intermediation of local partners, seeing them not only as suppliers, but above all as guarantors of operational peace of mind. The eighty per cent dominance of the partner model is clear evidence that it is the personal relationship that is becoming the most effective fuse for business growth.

    Cloud and connectivity – foundations built by intermediaries

    Although cloud computing is associated with giants such as AWS, Microsoft and Google, partners are the ones ‘bringing it under the roof’ of medium-sized companies. The predicted 22.3% growth in cloud infrastructure services is largely due to integrators who can carry out a secure migration without paralysing the customer’s current operations.

    A similar mechanism is observed in the area of Unified Communications (UC). Since 9 out of 10 UC platforms are purchased through partners, this means that the key for the business is not the chat application itself, but its integration with sales processes, customer service and ERP systems. The partner is the architect here, making the individual building blocks from different suppliers start to form a coherent whole.


    Regional dependency

    Geographical data confirms that reliance on the partner channel is a global trend and resilient to cultural differences. From Asia (81%) to Latin America (86%), the SME sector needs local support. Even in North America, where direct sales models are historically the strongest, up to 73% of budgets go through partners.

    The battle for the SME market is not taking place in the data centres of the hyperscalers, but in the relationships built by thousands of local IT companies. They are the ‘last mile’ of digitalisation, without which the global technology revolution would be bogged down by configuration problems and lack of technical support.

    Pragmatism instead of fascination

    An interesting phenomenon is the evolution of approaches to artificial intelligence. Although half of SME companies are already using AI tools, the time for hobbyist testing of chatbots is over. In 2026, AI has become an invisible component of analytics and compliance, and its implementation depends almost entirely on the competence of the IT partner. It is up to them to decide whether the technology will save money or merely increase the client’s technology debt.

    The real strength of the partner channel lies in its flexibility. While global providers standardise their offering to the limit, the IT partner adapts it to local legal and operational realities. It is this ‘last mile’ of implementation that generates the lion’s share of the $2 trillion that the SME sector puts on the table.

    The renaissance of relationships

    The addressable market for partners serving SMEs is expected to be worth as much as $1.87 trillion in 2026. This is evidence of a renaissance in professional advisory services, making it clear that the role of the partner as a trusted advisor is becoming more important than ever. Channel partners have won this battle because they are the only ones offering something that cannot be bought in a subscription model: personal accountability for the business outcome. For the SME sector, which cannot afford downtime and failed experiments, the professional IT partner has become the most important fuse for growth.

    Finally, it is worth adding that, according to Omdia data, the SME sector, with a budget of $2.38 trillion, will capture nearly 40% of the IT pie in 2026, creating a powerful space for partners to build business value.

  • The Greenland effect in IT: How unpredictable US policy is driving the European cloud

    The Greenland effect in IT: How unpredictable US policy is driving the European cloud

    Until a few years ago, the term ‘technological sovereignty’ was the domain of academic debates and niche reports prepared by EU officials in Brussels. For a CEO or CTO in Europe, US Big Tech was like gravity – fixed, inevitable and, despite some privacy controversies, guaranteeing stability. However, recent months have brought a brutal verification of this optimism. Events on the Washington-Brussels line, including Donald Trump’s staggering territorial ambitions for Greenland, have catalysed changes that could redraw the map of digital business in Europe forever.

    The end of digital optimism

    Why has the ‘Greenland Effect’ become a symbol of change in IT? While the US administration’s attempted annexation of the island may have seemed like a media anecdote, for European business leaders it was a clear warning: we live in a time where existing rules of the game and alliances can be challenged in a single tweet or unpredictable political decision.

    Risk is no longer theoretical. Today, European business has to ask itself a question that until recently sounded like a sci-fi movie script: what will happen to my company if access to SaaS services, cloud computing or data centres from the US is blocked as a result of a diplomatic dispute? The answer to this question today is building a new strategy of ‘limited trust technology’.

    The statistics of addiction: Landscape after the battle

    To understand the scale of the challenge, it is important to look at hard data. In 2024, European customers spent nearly $25 billion on cloud infrastructure provided by the five largest US players. According to IDC data, US companies control as much as 83% of the European cloud market.

    This contrast is striking when we recall Europe two decades ago. In the age of mobile telephony, it was our continent that dictated the terms thanks to the power of Nokia and Ericsson. Today, in the age of the data economy, Europe finds itself in the deep shadow of the United States and China. Attempts to build local search engines or social networks have failed, crushed by American scale, high-risk culture and almost unlimited access to capital.

    EU business leaders point to three main inhibitors: excessive bureaucracy, market fragmentation into 27 national systems and a fear of risk that paralyses innovation at an early stage.

    Fortress Europe: A new defence strategy

    Faced with rising tensions, Germany and France – the two largest economies in the Union – have stopped waiting for a pan-European consensus and have gone on the offensive. The strategy is clear: if we cannot (yet) create our own Google, we must secure the foundations.

    The German Federal Ministry of Digitalisation has just implemented openDesk, an open source alternative to Microsoft tools. This signals that open source software is ceasing to be the domain of enthusiasts and is becoming an ‘insurance policy’ for state institutions and strategic enterprises. France, on the other hand, is promoting Visio, a local videoconferencing solution, eliminating dependence on US platforms in public administration.

    President Emmanuel Macron is going one step further, offering cheap nuclear power to companies building data centres in the region and actively supporting Mistral AI – the European answer to software from OpenAI. This is no longer just politics; it is the construction of a new business ecosystem in which the ‘origin of technology’ becomes a key parameter of choice.

    Giants’ response: Camouflage or adaptation?

    US tech giants are not going to stand idly by and watch their loss of influence in a region that generates hundreds of billions of dollars in revenue for them. Big Tech’s adaptive strategy is fascinating: they are building ‘European clouds’ to look and act like local companies.

    Microsoft is stepping up its collaboration with Delos Cloud (a subsidiary of SAP) and Google is setting up independent entities based in Germany and staffed exclusively in the EU. The aim is clear: to circumvent concerns about the US Cloud Act, which in theory allows US services to see data stored abroad.

    However, for the informed CTO, this is still a half-hearted solution. The question of whether the US giant’s ‘local company’ will realistically resist pressure from its own government in a crisis situation remains open.

    Change management: People, not just bits

    As Frank Karlitschek, CEO of NextCloud, points out, technology is only half the battle. The biggest challenge for the European business is change management. Migrating from comfortable, familiar US systems that have been around for years to European or open-source alternatives is an operationally painful process.

    It requires excellent communication and preparation of employees to change their habits. However, in the new geopolitical paradigm, this effort is seen not as a cost, but as an investment in Business Continuity.

    Technology as a diplomatic currency

    “The Greenland effect” has made Europe realise that in the 21st century sovereignty does not end at land borders – it begins at servers. Europe does not seek complete isolation from American technology, because that would be economic suicide. It does, however, seek to create a ‘fuse’.

  • IT declares death, business counts profits. Why does the mainframe still rule the world?

    IT declares death, business counts profits. Why does the mainframe still rule the world?

    Every morning, millions of people around the world perform the same, almost mechanical action: bringing their payment card close to a terminal, checking their balance on a mobile app or booking a train ticket to the other end of the country. All this is done in the aesthetically pleasing, responsive interfaces that we associate with modernity. Few people realise, however, that underneath this shiny layer of ‘front-end’ beats the heart of a technology that was already labelled an open-air museum in the 1990s.

    The mainframe and the COBOL language – so readily written off – are the cornerstones of the global economy. Although there is a cult of novelty in the IT world, business reality keeps contradicting the ‘death of the mainframe’ narrative. Today, we must ask ourselves: are these systems really the ballast of the past, or are they the most solid insurance policy available to modern business?

    The foundation of stability: Why don’t the giants go away?

    In the technology sector, myths die a slow death. One of the most persistent is the belief that modern distributed architecture (microservices, cloud) can seamlessly replace the mainframe monolith. Meanwhile, banks, insurance companies, public administration systems and logistics giants still base their critical processes on COBOL. Why?

    The answer is transactional performance, which cannot easily be replicated. The mainframe was designed for one purpose – to handle a gigantic number of real-time input/output operations while maintaining almost 100 per cent availability. In a cloud architecture, latency resulting from communication between distributed servers can become an insurmountable barrier when processing thousands of transactions per second. The mainframe is a ‘money machine’ in the literal sense – it is the one that settles pensions, taxes and interbank transfers, with a stability that many modern platforms can only dream of.

    The economics of code: When the cloud becomes a trap

    Many business leaders look at the mainframe through the prism of the cost of maintaining their own infrastructure and licences (CapEx). Moving to a cloud model (OpEx) seems an enticing promise of savings and flexibility. However, the reality can be brutal on the wallet.

    In a mainframe environment, every instruction has a measurable price. CPU consumption, database operations, working time – all of this translates into monthly invoices. This is why traditional COBOL programmers were (and are) masters of optimisation. Every millisecond saved is profit for the company.

    By moving the same, often suboptimal processes to the cloud in a pay-as-you-go model, companies fall into a trap. Without deep code optimisation, the dynamic scaling of the cloud makes bills grow exponentially. Often, escaping the ‘IBM monopoly’ ends in an even more expensive dependency on cloud providers, where the cost of data transfer and computing power at massive transaction scale exceeds the budget for maintaining an in-house mainframe. Unsurprisingly, some organisations, after costly migration trials, are retreating from the cloud and quietly returning to proven on-premise solutions.
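
    A deliberately simplified cost model illustrates the mechanism; the prices and volumes below are invented placeholders, not real cloud or mainframe rates, but they show how per-use billing amplifies inefficient code:

    ```python
    # Illustrative only: why code that was "cheap enough" under a flat-rate model can
    # become expensive under per-use billing. Prices are invented placeholders.

    PRICE_PER_CPU_SECOND = 0.00005   # assumed on-demand compute rate
    PRICE_PER_DB_CALL = 0.0000004    # assumed per-request database rate

    def monthly_bill(transactions, cpu_seconds_per_txn, db_calls_per_txn):
        """Total monthly cost when every instruction and database call is metered."""
        compute = transactions * cpu_seconds_per_txn * PRICE_PER_CPU_SECOND
        database = transactions * db_calls_per_txn * PRICE_PER_DB_CALL
        return compute + database

    TXN_PER_MONTH = 500_000_000  # hypothetical card and transfer volume

    lifted_and_shifted = monthly_bill(TXN_PER_MONTH, cpu_seconds_per_txn=0.20, db_calls_per_txn=12)
    optimised = monthly_bill(TXN_PER_MONTH, cpu_seconds_per_txn=0.02, db_calls_per_txn=3)
    print(f"unoptimised: ${lifted_and_shifted:,.0f}/month, optimised: ${optimised:,.0f}/month")
    # The same workload costs several times more when nobody optimises the code paths.
    ```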

    Risk management: The skills gap as a real threat

    The real threat to business is not mainframe technology itself, but what sociologists call the ‘silver tsunami’. The experts who have been building and maintaining these systems for the last 30-40 years are retiring.

    For decades COBOL has been cut from university curricula as an ‘unattractive’ language. Young programmers prefer JavaScript or Python frameworks, which offer instant visual gratification, code completion and modern development environments. Working on a mainframe, where the tooling is spartan and the compiler points out errors with unforgiving precision, is not ‘sexy’.

    For business, this is a critical situation. Unless there is a generational change, the systems that drive the economy will be left unattended. This is an operational risk greater than any hacking attack. The lack of specialists capable of optimising code and understanding the architecture of legacy systems could paralyse financial institutions within the next decade. Knowing how the ‘heart’ of a system works is now becoming a rarer and more valuable commodity than knowing the latest mobile app development framework.

    A strategy for tomorrow: Modernisation instead of revolution

    Instead of a radical and risky migration, more and more organisations are choosing the middle way – the hybrid model. This involves keeping a stable, optimised core in COBOL and encapsulating it with modern middleware layers. This allows the ‘old’ mainframe to communicate securely with new mobile applications or AI systems via APIs.
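
    As a rough illustration of what such encapsulation means in practice, the hypothetical sketch below shows a thin middleware layer translating a JSON request into the kind of fixed-width record a COBOL program expects, and back again; the field layout and the transport call are invented for the example, not a real interface:

    ```python
    # A minimal sketch of the "encapsulation" idea: middleware maps modern JSON onto a
    # fixed-width record and back. Field layout and transport are hypothetical.
    import json

    def to_copybook_record(payload: dict) -> str:
        """Serialise a JSON payload into a fixed-width record (hypothetical layout)."""
        return (
            f"{payload['account_id']:<10}"     # PIC X(10)
            f"{payload['amount_cents']:010d}"  # PIC 9(10)
            f"{payload['currency']:<3}"        # PIC X(3)
        )

    def from_copybook_record(record: str) -> dict:
        """Parse the fixed-width response record back into a dictionary."""
        return {
            "account_id": record[0:10].strip(),
            "balance_cents": int(record[10:20]),
            "currency": record[20:23].strip(),
        }

    def call_mainframe(record: str) -> str:
        # Placeholder for the real transport (e.g. a message queue or transaction gateway).
        return record  # echo back so the example runs end to end

    request = {"account_id": "PL0001", "amount_cents": 125000, "currency": "EUR"}
    response = from_copybook_record(call_mainframe(to_copybook_record(request)))
    print(json.dumps(response))
    ```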

    Modernisation does not necessarily mean demolishing foundations. It can mean strengthening them. Investing in training for existing IT teams, valuing mature talent (mentoring) and opening up to cross-functional collaboration on critical systems is the only way to maintain business continuity.

    A heart that must beat

    The mainframe does not need our pity or nostalgia. It is a technology that defends itself – with performance, stability and scale. But as business leaders, we need to stop treating it as an ’embarrassing secret’ hidden in the server room.

    Recognising the value of these systems is the first step to securing the future. The mainframe is not a technology debt that needs to be repaid as soon as possible. It is a powerful, undervalued insurance policy. But in order for it to continue to protect our transactions and data, we need to nurture a new generation of ‘digital mechanics’ who will not be afraid to get their hands dirty in COBOL code. Because when the heart stops beating, even the most beautiful organism – which is the modern corporation – simply ceases to exist.

  • Patriotism or cold calculation? Why IT is going back to its roots (and local servers)

    Patriotism or cold calculation? Why IT is going back to its roots (and local servers)

    In growing geopolitical uncertainty, the mantra of unconditionally moving resources to the global cloud is losing relevance, giving way to the urgent need to build digital independence. Infrastructure leaders (I&O) need to prepare for a year in which physical data localisation and supplier diversification will become not so much a technological option as a key component of business survival strategies.

    For the past decade, the IT strategy of many businesses has been based on a simple premise: a global hyperscaler will do it better, cheaper and more securely. Local data centres were treated as a relic of the past and the notion of digital sovereignty was reduced to the need to meet GDPR requirements. Today, this paradigm is being rapidly eroded. The tough question is increasingly being asked in CIOs’ offices: what happens if global digital supply chains are disrupted?

    Geopatriation: A strategy for the times of “decoupling”

    The notion of geopatriation, which is beginning to dominate trend analyses for the coming quarters, is sometimes mistakenly equated in the IT community with simple local economic patriotism. This is a cognitive error that can cost companies their stability. In reality, geopatriation is a reaction to the global trend of ‘decoupling’ – the separation of economic and technological blocs.

    Modern I&O cannot ignore the fact that the public cloud is not an ethereal entity, but a physical infrastructure under the jurisdiction of specific powers. Relocating workloads from global platforms to regional or national solutions ceases to be a matter of ideology and becomes part of systemic risk management.

    The key shift is from data sovereignty (where the files lie) to operational sovereignty. IT leaders need to ask themselves: in the event of sanctions, regulatory changes in the US or Asia, or physical disruption of cross-border links, will my business retain operational capability? Geopatriation is, in essence, the building of a technical insurance policy. It reduces geopolitical risk and makes critical business processes independent of decisions made on other continents.

    Composability: How to escape the “Vendor Lock-in” trap

    Critics of the local approach rightly point out that abandoning the global cloud could mean being cut off from innovation. Regional providers rarely have the R&D budgets of the Silicon Valley giants. The solution to this dilemma is a new approach to hybrid computing.

    Hybridisation in 2025 is not about bundling an old server room with a cloud VPN. It is a philosophy of composable and extensible architecture. I&O managers must build systems from interchangeable building blocks. It’s about coordinating compute, storage and networking mechanisms in such a way that resources can be freely interchanged between providers.

    If a global provider becomes risky (politically or cost-wise), the company should be technically able to move processes to local infrastructure without rewriting applications. This approach forces I&O leaders to change their thinking about architecture – from monolithic deployments to flexible, containerised architectures that ‘float’ between different environments. This is where the real business value is born: in the ability to adapt quickly, rather than in simply owning the servers.
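
    The sketch below illustrates the underlying design idea – business logic written against a neutral interface, with the concrete provider chosen at deployment time; the class and method names are illustrative, not a real SDK:

    ```python
    # A minimal sketch of "interchangeable building blocks": application code talks to a
    # neutral interface, and the concrete provider is an implementation detail.
    from abc import ABC, abstractmethod

    class ObjectStore(ABC):
        @abstractmethod
        def put(self, key: str, data: bytes) -> None: ...
        @abstractmethod
        def get(self, key: str) -> bytes: ...

    class GlobalCloudStore(ObjectStore):
        def put(self, key, data): print(f"[hyperscaler] stored {key}")
        def get(self, key): return b"..."

    class LocalDatacentreStore(ObjectStore):
        def put(self, key, data): print(f"[on-prem] stored {key}")
        def get(self, key): return b"..."

    def archive_invoice(store: ObjectStore, invoice_id: str, pdf: bytes) -> None:
        """Business logic depends only on the interface, not on any vendor's API."""
        store.put(f"invoices/{invoice_id}.pdf", pdf)

    # Switching providers is a configuration change, not an application rewrite.
    archive_invoice(GlobalCloudStore(), "2026-0001", b"%PDF-...")
    archive_invoice(LocalDatacentreStore(), "2026-0001", b"%PDF-...")
    ```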

    Crisis of confidence and defence of identity

    The proliferation of infrastructure (Edge, local cloud, global cloud) brings with it a new threat: the erosion of trust. In an environment where data travels across multiple jurisdictions and systems, verifying what is true becomes an engineering challenge.

    Therefore, security against disinformation is becoming an integral part of the new I&O strategy. We are not talking about PR image protection, but hard technologies for digital identity verification. In the era of Deepfakes and software supply chain attacks, companies need to implement mechanisms that guarantee that the code, command or user is who they say they are.

    For operations departments, this means implementing systems that validate the authenticity of communications at every stage. Protecting brand reputation starts deep at the infrastructure layer – from securing the identity of administrators to cryptographically signing application containers.
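
    In practice this is the territory of dedicated software supply-chain tooling; the minimal sketch below shows only the underlying principle of signing and verifying an artifact digest, using an in-memory Ed25519 key pair from the widely used cryptography package (the artifact bytes are placeholders, and key management and distribution are deliberately omitted):

    ```python
    # Conceptual sketch of artifact signing: sign a digest in the build pipeline,
    # verify it before deployment. Key handling is intentionally out of scope.
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    signing_key = Ed25519PrivateKey.generate()
    verify_key = signing_key.public_key()

    image_bytes = b"...container image layers..."       # placeholder artifact
    digest = hashlib.sha256(image_bytes).digest()
    signature = signing_key.sign(digest)                 # done in the build pipeline

    def admit_to_cluster(artifact: bytes, signature: bytes) -> bool:
        """Deployment-time check: only signed, untampered artifacts are admitted."""
        try:
            verify_key.verify(signature, hashlib.sha256(artifact).digest())
            return True
        except InvalidSignature:
            return False

    print(admit_to_cluster(image_bytes, signature))                 # True
    print(admit_to_cluster(image_bytes + b"tampered", signature))   # False
    ```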

    The economics of independence: Energy efficiency as a necessity

    Building a sovereign, hybrid infrastructure is more expensive than renting computing power on a pay-as-you-go model from a giant. This is a fact that CFOs often do not want to discuss. However, I&O managers have a new argument in hand: energy-efficient computing.

    New technologies and practices that reduce the carbon footprint are not just a nod to ESG; they are a way to fund independence. The use of neuromorphic systems, optical computing or simply radical energy optimisation of data centres reduces the operating costs of in-house and co-located infrastructure.

    In this way, ‘Green IT’ ceases to be a marketing add-on and becomes the foundation of the hybrid model’s profitability. I&O leaders who combine the geopatriation trend with an aggressive energy efficiency strategy will be able to prove to management what is most important: operational security while maintaining budgetary discipline.

    From administrator to strategist

    The infrastructure and operations areas are entering a phase of strategic maturity. The role of the head of I&O is evolving from a provider of resources (‘give me a server’) to an architect of state and business continuity.

    Understanding the impact of geopatriation and implementing a model where a company is not held hostage to one provider or one jurisdiction is the most pressing task for the coming months. Those who treat this trend as a trivial throwback to the past may wake up to the reality that they have no control over their own digital destiny.

  • Public cloud in the European Union – between innovation and data responsibility

    Public cloud in the European Union – between innovation and data responsibility

    The development of cloud services in the EU is taking place in parallel with the debate about data sovereignty, ethical computing and the need to build solutions in line with European values. According to the European Commission, investment in computing infrastructure and AI will be one of the most important drivers of growth, but only if businesses and institutions trust that the cloud is a secure, predictable and compliant environment.

    European cloud in practice: from scalability to strategic independence

    The increasing load on systems, the digitalisation of public services and the development of AI models are making the public cloud not just a convenient tool for European organisations, but a key component of business infrastructure. It allows them to rapidly increase computing power, implement new functions and move processes that previously required their own data centres. At the same time, the EU is increasingly emphasising the need to build solutions that provide control over data flows and reduce reliance on non-European jurisdictions.

    – The European model assumes that IT architecture must support auditability, data control and interoperability. This is not a regulatory cost, but an investment in the European economy, which does not limit its development in the long term, but ensures that we maintain our identity as a European economy, comments Artur Kmiecik, Head of Cloud and Infrastructure at Capgemini Poland.

    Standards and certification: EUCS as the new security map for cloud computing

    In order to structure the requirements for cloud providers, ENISA is preparing the EUCS, a European certification scheme to unify the rules for assessing the security and compliance of services. For organisations, this means clearer criteria for selecting a provider, and for public administrations, the ability to use services with a predictable level of protection. The EUCS also simplifies the documentation and integration of systems that have to meet stringent industry standards. In practice, this is a strategic step towards a more transparent and standardised cloud market across the Union.

    Data under protection: how GDPR and EDPB set the framework for responsible processing

    Data protection regulation remains one of the strongest pillars of the European cloud approach. The GDPR and European Data Protection Board guidelines specify how to design processing and how to ensure compliance in an environment that is dynamically changing. This enforces practices based on privacy-by-design, regular risk assessment, access control and documentation of activities. At the same time, organisations need to be fully aware of where their data is and who can process it. The result is a model that reinforces transparency and predictability – including for services operating across national borders.

    AI in the cloud – innovation under regulatory scrutiny

    AI naturally thrives in cloud environments, which provide scale, computing power and the ability to update quickly. At the same time, the AI Act creates a legal framework to guarantee user security and transparency of models. Organisations that want to use more advanced systems need to prepare for documentation obligations, compliance testing and risk assessments, especially in high-responsibility sectors. This ensures that the development of AI does not come at the expense of data quality or user rights. Regulation does not slow down innovation – it puts it in order and gives it clear rules to work by.

    Trust as the currency of the digital economy: transparency and control over data

    The complexity of cloud environments means that organisations increasingly expect not only security, but also full auditability of operations. The ability to track activity, view logs, analyse permissions and verify processes is becoming one of the key criteria for vendor selection. Companies and institutions want to make sure they know who is processing their data and how – and transparency is becoming just as important as technical safeguards.

    – The IT architecture in our region must take into account not only scale and computing power, but also the requirements of the European Union. In practice, trust in the cloud is becoming the currency of the digital economy – organisations that can gain it through control of data flows and responsible use of AI will gain a real competitive advantage. The future of the European cloud is not only interoperability, but also ethical innovation that protects users and strengthens the data economy, adds Artur Kmiecik, Head of Cloud and Infrastructure at Capgemini Poland.

    The future of the European cloud: interoperability, ethics and responsible innovation

    Initiatives such as GAIA-X or European data spaces show that the future of the cloud in the EU is the development of systems that can work together independently of the provider. Interoperability is expected to facilitate cross-sector projects, process automation and data exchange in a way that complies with the highest ethical standards. At the same time, responsible innovation principles are growing in importance to protect users and strengthen the data economy. It is a direction that will allow Europe to develop modern technologies without abandoning the values that define its approach to digitalisation.

    Source: Capgemini

  • Data gives you an edge, but requires control. 8 predictions for the enterprise market

    Data gives you an edge, but requires control. 8 predictions for the enterprise market

    Just a decade ago, the definition of a ‘secure business’ was simple: a robust firewall, up-to-date anti-virus and regular backup. Today, in the age of hybrid environments and ubiquitous artificial intelligence, this approach sounds like an archaism. Data has given businesses superpowers in the form of a competitive advantage, but it has also brought unprecedented operational complexity to IT departments. Looking at technology predictions for 2026, it is clear that we are entering an era where ‘digital sovereignty’ is becoming the new currency and speed is the only acceptable security parameter.

    Technology has ceased to be magic and has become critical logistics. If we look at what lies ahead over the next two years, the conclusions are clear: traditional cyber security is not enough. The arms race has moved to the infrastructure level, and it will be won by those who understand that the geographical boundaries of data matter, and that response times count more than the height of defence walls.

    Speed is the new benchmark

    For years, we have lived in a paradigm of perimeter protection – building a fortress where no unauthorised person has access. Predictions for 2026 brutally verify this approach. Cyber threats have evolved. These are no longer isolated incidents of ransomware, involving ‘just’ disk encryption. We are dealing with complex operations in which data is not only locked, but above all quietly exfiltrated and then sold on the black market or used for blackmail.

    In such a reality, a company’s resilience is not measured by whether an attack can be avoided, but by how quickly the organisation is able to recover from an incident. Traditional data recovery from tapes or archive repositories becomes an unacceptable bottleneck.

    Speed is becoming the new standard. Anomaly detection must happen in real time and isolation of infected resources must happen automatically. Furthermore, the concept of ‘clean data recovery’ is becoming crucial. In the future, intelligent infrastructures will have to guarantee that the target state to which we return after a disaster is absolutely free of malicious code. This requires integrating security systems directly into the storage layer, rather than treating them as an external overlay.
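
    A toy example of what storage-level anomaly detection can look like: flagging a snapshot whose volume of changed data deviates sharply from recent history, a pattern often associated with mass encryption by ransomware. The data and threshold below are illustrative assumptions:

    ```python
    # A minimal sketch of anomaly detection at the storage layer: compare today's change
    # volume with recent history and flag statistical outliers for automatic isolation.
    from statistics import mean, stdev

    def is_anomalous(history_gb_changed: list[float], today_gb_changed: float,
                     z_threshold: float = 3.0) -> bool:
        """Return True if today's change volume is an outlier versus recent days."""
        mu, sigma = mean(history_gb_changed), stdev(history_gb_changed)
        if sigma == 0:
            return today_gb_changed != mu
        return (today_gb_changed - mu) / sigma > z_threshold

    daily_changes = [12.1, 11.8, 13.0, 12.4, 11.9, 12.7, 12.3]  # GB changed per nightly snapshot
    print(is_anomalous(daily_changes, 12.9))   # False – normal drift
    print(is_anomalous(daily_changes, 240.0))  # True – candidate for automatic isolation
    ```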

    Geopolitics enters the server room

    Not so long ago, the cloud strategy of many companies was based on simple economic calculus and flexibility, often ignoring the physical location of bits and bytes. Those days are irrevocably passing. Governments around the world, concerned for national security and the privacy of citizens, are tightening regulations on where data can be stored and processed.

    Therefore, one of the key trends by 2026 will be data sovereignty. Companies and technology partners must respond by building environments that provide privacy without inhibiting innovation. Sovereign clouds and local hybrid environments are the market response. This is not about a complete retreat from global hyperscalers, but about managing risk wisely.

    Herein lies a huge opportunity for modern data platforms. They are designed to take the burden of bureaucracy off the shoulders of IT departments. Such platforms are expected to automate encryption, access-policy management and regulatory compliance. This allows engineers to focus on creating business value, rather than wasting time manually aligning systems with regulatory requirements. Sovereignty ceases to be an obstacle and becomes part of the architecture.

    The race against time and quantum

    Looking to the future, it is impossible to ignore threats that seem distant today but could become standard in 2026. We are talking about post-quantum cryptography (PQC). Although quantum computers capable of breaking current security measures are still some years away, data that is stolen today could be decrypted in a few years’ time (the so-called ‘harvest now, decrypt later’ attack).

    Therefore, the smart infrastructure of the future must integrate PQC standards now. Security cannot be a service tacked on at the end of the implementation process. It must be built into the DNA of data storage systems – from behavioural anomaly detection at the record level to advanced encryption. Only this approach will give companies peace of mind in the face of evolving threat models.

    Trust as a currency

    All of the above – speed, sovereignty, security – converge on one point: artificial intelligence. The year 2026 is when AI will cease to be just a content generator and will start to operate in the model of Agentic AI – autonomous systems that make decisions.

    However, for AI to be effective and secure, it must be trustworthy. Most AI initiatives fail not because of poor language models, but because of poor quality databases and lack of control over them. If a company is unsure who has accessed the training data, whether it has been manipulated and whether it complies with regulations, implementing AI becomes Russian roulette.

    Therefore, comprehensive data management (Data Governance) comes to the fore. Access control, data lifecycle tracking (data lineage) and integrity are the foundations without which even the most advanced algorithm will be useless.
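
    A minimal sketch of what such lineage bookkeeping might record for a training data set is shown below; the field names are illustrative rather than any specific governance product's schema:

    ```python
    # A minimal sketch of data-lineage bookkeeping: every dataset version records where it
    # came from, what transformation produced it and who touched it.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class LineageRecord:
        dataset: str
        version: str
        derived_from: list[str]          # upstream dataset versions
        transformation: str              # e.g. "deduplicate + PII masking v4"
        approved_for: list[str]          # permitted purposes under policy
        accessed_by: list[tuple[str, datetime]] = field(default_factory=list)

        def log_access(self, principal: str) -> None:
            self.accessed_by.append((principal, datetime.now(timezone.utc)))

    training_set = LineageRecord(
        dataset="customer_transactions",
        version="2026-01-31",
        derived_from=["raw_ledger@2026-01-30"],
        transformation="deduplicate + PII masking v4",
        approved_for=["fraud-model-training"],
    )
    training_set.log_access("ml-pipeline@prod")
    print(training_set.accessed_by)
    ```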

    The end of silos

    The path to 2026 runs through the understanding that artificial intelligence, cloud, cyber resilience and modern infrastructure are no longer separate areas. They are tightly interconnected.

    Cloud strategies are shifting towards workload-optimised platforms. Instead of managing separate consoles, companies will rely on unified platforms that decide where a given task will perform best – whether in the public cloud, a sovereign cloud or a local data centre.

    In the coming years, those who bet on an intelligent data infrastructure will win. One that ensures speed of recovery from attack, guarantees sovereignty in the face of regulation and provides the fuel for trustworthy artificial intelligence. It is time to stop treating infrastructure as a cost and start seeing it as the foundation of modern business.

  • A $650m lesson. Why are AWS and Google finally working together?

    A $650m lesson. Why are AWS and Google finally working together?

    Amazon Web Services and Google Cloud, the two giants fiercely competing for cloud market dominance, have decided to make a rare gesture of collaboration. The companies have officially launched a joint multi-cloud network service that is set to fundamentally change the way businesses manage their infrastructure. The new initiative allows private, high-speed connections to be established between the two providers’ platforms in just minutes, a drastic acceleration from the previous standard where the process took weeks.

    The decision to bring the ecosystems together comes at a sensitive time for the industry. The market is still analysing the impact of the 20 October AWS outage, which paralysed thousands of websites, including apps such as Snapchat and Reddit. According to analysts at Parametrix, the incident cost US businesses between $500 million and $650 million. The technical combination of offerings – AWS Interconnect-multicloud and Google Cloud Cross-Cloud Interconnect – is intended to address these challenges by offering customers real redundancy and interoperability.

    Salesforce is the first big beneficiary of this solution, signalling that the enterprise market was expecting such flexibility. While AWS remains the undisputed leader with revenues of $33 billion in the third quarter – a result more than double the $15.16 billion achieved by Google Cloud – this alliance shows a shift in strategy. Faced with gigantic investments in infrastructure under artificial intelligence and growing IT complexity, market leaders are moving away from a ‘walled garden’ policy to the pragmatic collaboration required to support critical workloads.