Tag: Microsoft

  • Windows K2, Microsoft’s new strategy for dealing with the problems of Windows 11

    In the history of Microsoft’s operating systems, it has rarely been the case that a product still has to prove its worth almost five years after its release. Windows 11, while statistically dominating the market, is at a critical turning point. The project, internally dubbed ‘Windows K2’, is not just a package of technical fixes – it is an admission of flaws in user experience (UX) design and an attempt to regain the trust of the business sector at a time when support for Windows 10 has finally expired.

    Forced statistics: The market reality of 2026

    From an analytical perspective, the current market position of Windows 11 is the result not so much of user enthusiasm as of the inevitability of the software lifecycle. Although the system now controls around two-thirds of the market, a third of the PC fleet still operates on Windows 10 or older versions. In the enterprise sector, this resistance has been particularly pronounced.

    For business, the transition to Windows 11 presented two main barriers: stringent hardware requirements (TPM 2.0 module, newer generations of processors) and operational costs due to the need to train employees and adapt infrastructure. Microsoft, realising the risk of mass migration to alternative ecosystems or extending the life of old hardware, launched the ESU (Extended Security Updates) programme. However, paid support for Windows 10 is only a temporary solution – an expensive ‘stability tax’ that companies pay to avoid a still immature system. The K2 project is supposed to be an argument for investing this money in migration rather than persisting with the past.

    Performance architecture: Tackling “resource intensity”

    One of the most serious criticisms of Windows 11 is its inefficiency in resource management compared to its predecessor. Benchmark tests on identical hardware indicated that Windows 11 has a greater appetite for RAM without offering a commensurate increase in performance in return. For IT departments managing thousands of workstations, this system bloat means a shorter hardware lifecycle and a higher TCO.

    A key element of the K2 operation is the full integration of the WinUI 3 framework. Microsoft is aiming to unify the interface, which is expected to eliminate the legacy code paths that slow down File Explorer and the Start Menu. From a business point of view, the smoothness of the interface is not a question of aesthetics but of ergonomics. Every second of delay in rendering menus or searching for files, at corporate scale, translates into measurable efficiency losses.

    An end to ideology in favour of pragmatism

    Over the past few years, Microsoft has tried to impose its vision of the system as a service platform on users, manifested in, among other things:

    • A rigid, limited taskbar.
    • Intrusive suggestions and ads in the Start Menu.
    • Aggressive promotion of Edge, Bing and OneDrive services.

    From a systems administrator’s perspective, this approach is problematic. An operating system in a professional environment should be a transparent tool, not a marketing channel. Pavan Davuluri’s announcements about restoring full functionality to the taskbar (including the ability to position it freely) and reducing unwanted content in the Start Menu demonstrate a return to pragmatism.

    Removing the ‘advertorial’ and intrusiveness of MSN services from the widgets is a step towards regaining the professional nature of the system. Business does not need the weather forecast interspersed with tabloid gossip inside a work tool. The K2 project seems to understand that control of the desktop must return to the user and administrator.

    Copilot: From euphoria to manageable assistance

    Artificial intelligence has become a cornerstone of Microsoft’s strategy, but the way it has been implemented in Windows 11 has been controversial. The integration of Copilot into applications such as Notepad and Paint was seen by many professional users as an unnecessary burden on the system and a potential risk to data confidentiality.

    There is a significant redefinition of the role of AI within the K2 project. Microsoft is moving away from the concept of ‘AI everywhere’ to ‘AI where it makes sense’. For the business sector, the most significant change is the ability to fully manage and disable Copilot functions on computers managed by central policies (GPO/Intune). This is critical for companies in regulated industries (finance, medical, legal) where uncontrolled data flow to the cloud is unacceptable. Copilot is intended to become an optional assistant rather than an integral, non-removable part of the system kernel.

    Repairing the feedback loop

    The Windows 11 release cycle was plagued by unstable updates that could cripple entire departments. Criticism focused on prioritising new features over code quality. As part of Operation K2, Microsoft announced a ‘resuscitation’ of the Windows Insider programme.

    For business, this signals that the patch testing process will become more rigorous. The promise that Insider feedback will genuinely influence the final shape of updates is key to avoiding a repeat of botched Patch Tuesday releases. Additionally, greater flexibility in deferring updates and a streamlined configuration process for new devices (OOBE) are expected to reduce technical downtime, a direct gain for the operational agility of businesses.

  • Data centre spending peaks. How is AI driving infrastructure construction?

    Market forecasts for the technology sector are rarely so clear-cut. According to the latest data from analyst firm Gartner, global IT spending will reach $6.31 trillion in 2026. This is evidence of a shift in the centre of gravity of global business. The 13.5 per cent year-on-year increase, significantly higher than previous estimates, is a direct result of the artificial intelligence infrastructure arms race.

    A foundation of concrete and silicon: The exploding data centre sector

    The most striking figure in the report is the dynamics of investment in data centres. Gartner predicts that spending in this segment will grow by 55.8% in 2026, surpassing the $788 billion barrier. To understand the scale of this phenomenon, it is important to look at it through the lens of technological change: we are not dealing with a simple expansion of existing resources, but with a complete reconfiguration of computing architecture.
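    A quick back-of-the-envelope check shows what these growth rates imply about the prior-year baselines. The sketch below uses only the figures cited in this article ($6.31 trillion total at +13.5% year on year, $788 billion for data centres at +55.8%); the derived 2025 values are simple back-calculations, not figures from the report.

```python
# Back-calculate the implied prior-year baselines from the cited Gartner figures.
# Inputs: 2026 total IT spend $6.31T (+13.5% YoY),
#         2026 data centre spend $788B (+55.8% YoY).

def implied_baseline(total: float, growth_pct: float) -> float:
    """Return the prior-year value implied by a total and a YoY growth rate."""
    return total / (1 + growth_pct / 100)

it_2025 = implied_baseline(6310, 13.5)   # billions of dollars
dc_2025 = implied_baseline(788, 55.8)

print(f"Implied 2025 total IT spend: ${it_2025:,.0f}B")     # ~ $5,559B
print(f"Implied 2025 data centre spend: ${dc_2025:,.0f}B")  # ~ $506B
```

    In other words, data centre spending would jump by roughly $280 billion in a single year, which is consistent with the article's framing of an infrastructure arms race rather than incremental expansion.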

    Traditional data centres, optimised for data storage and standard business applications, are giving way to HPC facilities. These are designed for the specific requirements of graphics processing units (GPUs) and TPUs, which are at the heart of modern AI. The surge in investment extends not only to the servers themselves, but also to advanced liquid cooling systems, high-density power infrastructure and enabling technologies, without which scaling large-scale language models (LLMs) would be impossible.

    In parallel, the IT services segment, infrastructure deployments and the IaaS model will generate a turnover of $1.87 trillion. This suggests that the market is ripe for consuming computing power in a hybrid model, where physical infrastructure goes hand in hand with specialised management.

    The dominance of hyperscalers: The computing oligopoly

    A phenomenon of a structural nature is the increasing concentration of computing power in the hands of a few players. By 2031, hyperscalers – mainly Microsoft, Google (Alphabet) and AWS (Amazon) – are forecast to control as much as 67% of global data centre capacity.

    This year alone, these three giants plan to spend more than $500 billion on capital expenditure related to AI infrastructure. Such gigantic outlays create a barrier to entry almost impossible for new players to overcome. For businesses, this means that they have to strategically choose a cloud provider that de facto becomes a partner in delivering a data-driven competitive advantage.

    We are also seeing a new geopolitical map of IT investment. Microsoft’s $25 billion investment in Australia or Meta’s construction of the world’s 32nd data centre show that the availability of stable energy sources and space is becoming more important than proximity to traditional business clusters.

    Strategic alliances and supply chain

    Analysis of recent market deals sheds light on the direction in which the industry is heading. Anthropic’s agreements with Google and Broadcom to supply TPU (Tensor Processing Unit) power from 2027 onwards point to the growing importance of proprietary chips, designed to free the giants from dependence on third-party processor suppliers.

    Even the biggest players need flexibility and specialised GPU cloud providers to cope with surges in computing power demand, as evidenced by Meta’s $21 billion partnership with CoreWeave. The biggest profits will be generated not by the AI developers themselves, but by the companies supplying the ‘components’ of this revolution – from accelerator manufacturers to power suppliers.

    Market insights for business

    In the context of the upcoming 2026 Investment Summit, business leaders should consider three key lessons:

    1. Infrastructure as a bottleneck: A 55.8% increase in spending on data centres suggests that access to computing power may become a scarce commodity. Companies planning large-scale AI deployments need to secure infrastructure resources in advance to avoid product development downtime.
    2. The need for cost optimisation: With IT spending reaching $6 trillion, efficiency becomes key. The shift from generic cloud solutions to AI-optimised infrastructure (such as IaaS supported by TPUs/GPUs) will determine the margins of digital projects.
    3. A new ecosystem of suppliers: Companies such as Broadcom and CoreWeave are worth watching. They represent a new category of technology partners who, through specialisation, are able to provide the components needed to scale AI faster and cheaper than traditional hardware suppliers.
  • The end of Microsoft’s monopoly on OpenAI. What does the new agreement mean for the market?

    The most influential partnership in the history of artificial intelligence has just undergone a fundamental transformation. Microsoft and OpenAI have announced a renegotiation of the terms of their partnership, ending Azure’s previous exclusivity to offer ChatGPT creator models. The new agreement paves the way for the startup to have a direct presence in the ecosystems of Microsoft’s biggest competitors, including Amazon Web Services and Google Cloud. While the original deal, backed by a $13 billion investment, defined the current AI landscape, both parties recognised that the existing formula had become too cramped for their growing ambitions.

    Strategic foundations for change

    Under the new arrangement, Microsoft will remain OpenAI’s primary cloud partner until 2032, and the startup has committed to spend at least $250 billion on Azure services. The Redmond giant retains priority rights to deploy new products, but loses its sales monopoly. In return, Microsoft has secured a 20 per cent share of OpenAI’s revenue until 2030, crucially even in the event that the startup achieves so-called artificial general intelligence (AGI). Previous provisions would have allowed OpenAI to stop paying Microsoft once it made the technological leap to AGI, which was a significant risk for the investor. At the same time, Microsoft stops sharing with OpenAI the profits from offering its models within Azure, simplifying the giant’s financial structure.

    The loosening of ties is a move dictated by the maturity of the market. OpenAI, as it prepares to go public, needs to demonstrate its ability to scale its enterprise business beyond a single vendor’s infrastructure, especially in a clash with the rising Anthropic. From Microsoft’s perspective, giving up some control of OpenAI’s model distribution is the price of taking off the burden of funding the giant infrastructure needed by the startup and, perhaps most importantly, easing pressure from antitrust authorities in the US and Europe. Satya Nadella’s strategy is evolving towards diversification; Microsoft is increasingly promoting its own models and third-party solutions within Copilot, reducing the critical dependence on a single technology provider.

    It is also worth noting the increasing freedom to build multi-cloud strategies. It would be prudent to review current contracts with cloud providers ahead of upcoming AWS Bedrock or Google Vertex AI deployments, which can optimise costs and reduce latency. It is also worth monitoring the progress of Microsoft’s in-house models, as their growing role in Copilot 365 may soon offer better value for money than standard external models.

  • Benchmarks won over loyalty: Microsoft bets on Anthropic. A blow for OpenAI

    Microsoft’s choice of the Claude Mythos model as the foundation for its new software security architecture sets a significant precedent in the Redmond-based technology giant’s strategy. This decision, while at first glance it may appear to be a mere operational adjustment, in reality reveals deeper market shifts in the generative AI sector and changing priorities in digital risk management. Analysing the facts of the Anthropic model’s integration, a clear pattern can be discerned: Microsoft is moving from a phase of fascination with general AI capabilities to a phase of rigorous, benchmark-driven selection of specialised tools.

    A key reference point for this decision is the CTI-REALM benchmark, co-developed by Microsoft engineers. The fact that Claude Mythos scored highest in it, outdistancing the GPT-5.4-Cyber model, is a market signal that cannot be ignored. Microsoft, as OpenAI’s largest partner and investor, has shown that in critical areas such as cyber security, pragmatism and hard data win out over corporate loyalty. This strategic approach to model vendor diversification avoids vendor lock-in and ensures access to the most effective solutions in specific niches.

    From a business perspective, integrating Mythos directly into the software development cycle is a classic implementation of the ‘Shift-Left’ strategy. The cost of fixing a vulnerability discovered at the production stage is many times higher than eliminating the bug at the code writing stage. The cited data about the detection of a vulnerability that has existed for 27 years and the success of Mozilla, which identified 271 vulnerabilities thanks to Claude Mythos, are not just technological curiosities. They are concrete indicators of return on investment (ROI). For companies operating on huge collections of legacy code, automating security audits using such high-precision models means saving thousands of hours of high-level professionals and drastically reducing the legal and reputational risks associated with potential data leaks.

    The market reaction to Mythos’ capabilities, manifested, for example, by concern in the banking and insurance sectors and interest from the NSA, suggests that a new kind of regulatory risk is involved. Claude Mythos is seen as a dual-use technology. The model’s ability to instantaneously map vulnerabilities makes it a defensive tool of unprecedented power, but also a potential offensive instrument. The embargo under consideration by US agencies and the restrictive access under Project Glasswing suggest that in the near future, access to the most advanced cyber security models may be rationed in a similar way to armaments or high-end cryptographic technologies. Companies must therefore take into account in their strategies the fact that technological advantage in the area of AI may be limited by state interventions.

    It is also worth noting a painful market lesson for OpenAI. The fact that the release of GPT-5.4-Cyber failed to draw attention away from the Anthropic solution is indicative of the change in expectations of corporate customers. The market has become saturated with promises of versatility; solutions with proven effectiveness in specific usage scenarios are now sought after. Microsoft, by implementing Claude into its 365 applications and its internal processes, de facto legitimises Anthropic as an equal, and in some respects superior, technology partner. This suggests that OpenAI’s dominance may be more fragile than stock market valuations would indicate.

    For Microsoft itself, the move is an attempt to get ahead of mounting criticism over historical security lapses. Redmond has understood that, at the current scale and complexity of the Windows and Azure ecosystem, traditional methods of manual code review are inefficient. Using Claude Mythos as an intelligent filter to verify developers’ work is an attempt to systemically address the problem of technical debt. If Microsoft manages to significantly reduce the number of critical vulnerabilities in its products with this solution, it will set a new market standard to which all SaaS and Cloud players will have to adapt.

  • Layoffs at Big Tech 2026 – why Meta and Microsoft are cutting jobs

    Silicon Valley is going through a painful but precise tissue-replacement operation. While investors are reacting enthusiastically to new stock market records, thousands of Meta and Microsoft employees are finding out that their roles are becoming redundant in the new algorithm-oriented world order. What we are seeing is no longer just an echo of the post-Covid correction, but a fundamental shift in strategic priorities.

    Meta has just announced a 10 per cent reduction in its workforce, which, combined with the elimination of unfilled vacancies, means the removal of nearly 14,000 jobs from the labour market. However, a deeper financial analysis of Mark Zuckerberg’s company reveals a second layer to this decision. The company plans to increase capital expenditure to as much as $135 billion in 2027, focusing on building data centres and developing Superintelligence Labs.

    This is a classic example of aggressive reallocation of resources: billions saved on the ‘traditional’ workforce are funding an artificial intelligence arms race. Behind the scenes, however, there is talk of the phenomenon of “AI-washing” – conveniently attributing redundancies to technological advances to cover up the 2020-2022 recruitment mistakes.

    Microsoft in Redmond, on the other hand, is employing a more subtle but equally telling tactic. For the first time in its history, the giant has opted for a voluntary departure programme targeting around 7% of its US workforce. The ‘sum 70’ criterion (combining age and seniority) suggests that the company wants to slim down the structure of costly, experienced managers whose competencies may not be suited to the era of generative models. At the same time, Microsoft is simplifying the reward and bonus system, giving executives more leeway to reward the talent that realistically drives new business divisions.

    This trend is not isolated – Amazon, Intel and Cisco are following a similar path. There is a clear lesson for the business world: operational efficiency in 2026 is no longer about having the largest teams, but about building the most scalable systems. The technology labour market is no longer a safe haven; it is becoming a testing ground for a new definition of corporate productivity.

  • Azure tax? UK court clears the way for billion-pound lawsuit against Microsoft

    The London Competition Appeal Tribunal (CAT) has made a decision that could fundamentally change the European cloud infrastructure market. Microsoft, after months of trying to dismiss the claims, must brace itself for a massive lawsuit. At stake is £2.1 billion in damages and the future of a licensing strategy that has been controversial for years among finance and technology executives around the world.

    The case, led by Maria Luisa Stasi on behalf of nearly 60,000 UK businesses, strikes at the heart of Microsoft’s business model. The crux of the dispute is not about the quality of cloud services per se, but about the way the Redmond giant prices Windows Server software licences. According to the plaintiffs, Microsoft has a discriminatory pricing policy: companies choosing to run Windows Server on competitors’ platforms, such as Amazon Web Services, Google Cloud or Alibaba, pay much higher wholesale rates than users choosing the native Azure environment.

    From a business perspective, this means that Azure does not just win by technological prowess, but by an artificially generated cost advantage. For many organisations that have historically based their infrastructure on Microsoft solutions, moving to a competing cloud involves a hidden ‘tax’ that is ultimately charged to their margins or passed on to end customers.

    Microsoft has consistently defended its strategy, arguing that an integrated business model fosters innovation and allows it to offer better solutions within its own ecosystem. Company representatives have announced an appeal, challenging the methodology for calculating the alleged losses and pointing to the dynamic nature of the cloud market.

    However, the London tribunal’s decision coincides with increasing regulatory pressure. The UK Competition and Markets Authority (CMA) and authorities in the EU and US are looking increasingly closely at practices that restrict software interoperability.

    The market is no longer willing to accept technology lock-in with impunity. If Microsoft loses or is forced to settle, we will see not only gigantic compensation payments, but above all a levelling of the price playing field in the cloud. This could pave the way for a new wave of data migration, where performance rather than convoluted and expensive licensing provisions will determine the choice of provider.

  • France is replacing Windows with Linux. Great migration of administration in 2026

    The French strategy to move away from proprietary solutions is an industrial-scale operation, the centrepiece of which is the systemic dismantling of the vendor lock-in mechanism. Two key institutions are responsible for the architecture of this process: the Interministerial Directorate for Digitalisation (DINUM) and the National Agency for the Security of Information Systems (ANSSI). The manifesto they have developed is not limited to replacing Windows; it is a deep reconstruction of the entire state technology stack, based on full control of the source code.

    At the core of this change are eight technical pillars to be implemented across all ministries by autumn 2026. The most prominent element is the operating systems layer. Although the directive does not impose one particular Linux distribution, it forces ministries to move away from the Microsoft monoculture. This deliberate flexibility allows the environment to be adapted to specific sectoral requirements – such as in defence or health – while maintaining the common denominator of an open standard.

    A key implementation tool is La Suite Numérique, a state-developed ecosystem of productivity tools that has already covered more than 600,000 officials in its test phase. It includes:

    • Tchap: an end-to-end encrypted messenger based on the Matrix protocol.
    • Visio: a videoconferencing system built on the open-source Jitsi.
    • Euro-Office: a sovereign office suite developed in European cooperation.

    The choice of open source solutions here is not dictated solely by optimising licence costs. In the DINUM documentation, security is defined by ‘auditability’. Full code transparency allows French services to independently verify systems for the presence of vulnerabilities and undocumented functions (backdoors), which is impossible with closed source systems.

    An important element of the French manifesto is the infrastructure layer. The state rejects the model of hosting data in the public clouds of US hyperscalers in favour of national solutions such as Outscale (Dassault Systèmes). With SecNumCloud certification, this infrastructure is legally and technically isolated from the influence of non-European jurisdictions, eliminating the risks associated with, for example, the US CLOUD Act.

    To summarise the first stage of the transformation: France is redefining the concept of modern administration. Instead of a passive subscription to off-the-shelf products, the state is opting for a model of investing in the development of its own competences and the maintenance of systems over which it has full control. This approach puts an end to a situation in which the evolution of administrative tools, their price and operational risks are dictated by the business strategy of an external, foreign provider.

    The geopolitical trigger, or Trump and the “Data Cold War”

    France’s decision on systemic migration was not made in a technological vacuum, but is a direct reaction to the abrupt change in the balance of power on the Washington-Brussels axis between 2024 and 2025. The return of Donald Trump’s administration brought not only trade protectionism and tariffs, but above all regulatory unpredictability, which became an unacceptable risk for European public institutions. In this context, software has ceased to be regarded as an office tool and has become a strategic resource subject to a ‘digital tariff war’.

    A key argument for abandoning US ecosystems has become the jurisdictional issue. Existing US laws, including the CLOUD Act, give US services theoretical and practical insight into data processed by American corporations, regardless of the physical location of the servers. For France, which has been promoting a data sovereignty strategy since late 2024, keeping critical infrastructure on Microsoft or Azure solutions has become a structural weakness.

    The ‘spirit of Munich’ is often invoked in the public debate – the Bavarian capital’s high-profile but ultimately unsuccessful attempt to switch to Linux almost two decades ago. However, analysts point out that 2026 is a very different technological reality. Why is the project likely to succeed this time?

    1. Maturity of the ecosystem: Today’s desktop Linux is stable and ready for the mass user, as evidenced by successful deployments in smaller government agencies across the EU.

    2. Cloud architecture and SaaS: Most critical administrative applications have moved to the browser. The operating system has become merely an ‘access layer’, dramatically reducing the compatibility issues that buried the Munich project.

    3. Containerisation and OpenStack: Modern virtualisation standards allow specific software to be isolated and run independently of the host, which solves the problem of so-called legacy software.

    The effects of this shift can already be seen in the market data. The dominance of US cloud providers (85% of the EU market) is starting to be eroded by local players. Companies such as Scaleway have seen record increases in the number of institutional customers in 2025. This is no coincidence – European players are actively seeking shelter from US jurisdiction.

    It is estimated that spending on sovereign cloud infrastructure in Europe will more than triple, reaching a ceiling of €23 billion in 2027. France, by initiating this move, is positioning itself as the leader of a new digital order in which control over the algorithm and data is as important as control over physical borders. This appears to be the beginning of a ‘digital chill’ in which Europe chooses technological autarky as the only way to maintain political agility.
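    The ‘more than triple’ claim can be sanity-checked with the same back-of-the-envelope approach. Only the €23 billion target comes from the text; the baseline below is a back-calculation under that assumption.

```python
# If European sovereign cloud spending "more than triples" to €23B by 2027,
# today's spending must sit below one third of that target.
target_2027 = 23.0                     # billions of euros, as cited in the text
implied_max_baseline = target_2027 / 3
print(f"Implied current spend: under €{implied_max_baseline:.1f}B")  # under €7.7B
```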

    The Polish perspective: Between a national ‘Office’ and the trauma of major deployments

    In February 2026, the Polish debate on digital sovereignty ceased to be theoretical. The Central Informatics Centre (COI) announced a plan to build a national office suite to break the dictates of Microsoft. This is in response to data from a report by the Instrat Foundation at the end of 2025, which exposed the scale of the vendor lock-in phenomenon in Poland: as much as 99% of public procurement of office software favours Redmond products. This phenomenon has ceased to be a technical problem and has become an economic barrier – when licence costs rise, the administration, trapped in closed standards, has no real escape route.

    The Polish effort echoes French radicalism, but in Poland political optimism is colliding with historical business scepticism. The main inhibitor is trauma, of which the ZUS Comprehensive Information System (KSI ZUS) remains the symbol. Given that the maintenance of a single system managed to consume PLN 2.8 billion over a six-year cycle, the IT industry is rightly asking whether the state is ready to build, from scratch, an ecosystem capable of competing with Microsoft 365. There is a real risk that the ‘national alternative’, rather than saving money, will generate a new multi-billion ‘bottomless pit’.

    The key argument of the COI is that the structure of spending has changed. In the subscription model, capital goes directly to the US. Moving to open source solutions (LibreOffice, Nextcloud, OnlyOffice) does not mean that the system will be ‘free’, but it radically changes the Total Cost of Ownership (TCO):

    • Instead of licences – services: Funds are redirected to local integrators for implementation, support and training.
    • Investment in staff: the need to build strong in-country support teams instead of relying on the manufacturer’s external SLA.
    • Hardware longevity: Open source software allows older workstations to last longer, which fits in with the sustainability policy.
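    The shift in spending structure described above can be illustrated with a toy TCO model. All figures below are hypothetical placeholders for illustration only; they are not taken from the COI or Instrat analyses, and the model simplistically treats services and hardware refresh as locally procured.

```python
# Toy comparison of where money flows in a subscription model vs an
# open-source-plus-services model. All figures are hypothetical.

def tco_breakdown(licences: float, local_services: float, hardware: float) -> dict:
    """Split total cost of ownership into foreign vs local spend shares."""
    total = licences + local_services + hardware
    return {
        "total": total,
        # Licence/subscription fees leave the local market entirely.
        "foreign_share": licences / total,
        # Integration, support, training and hardware refresh assumed local.
        "local_share": (local_services + hardware) / total,
    }

# Hypothetical 5-year costs per 10,000 seats (millions of EUR).
subscription = tco_breakdown(licences=50, local_services=5, hardware=20)
open_source = tco_breakdown(licences=0, local_services=30, hardware=15)

print(f"Subscription: {subscription['foreign_share']:.0%} of spend leaves the market")
print(f"Open source:  {open_source['foreign_share']:.0%} of spend leaves the market")
```

    The point of the sketch is not the absolute totals, which may even be similar, but the redirection of the licence line into local services and staff, exactly as the bullet points above describe.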

    The impetus that ended discussions about the ‘convenience’ of the interface was an incident in May 2025, when the Chief Prosecutor of the International Criminal Court in The Hague temporarily lost access to Microsoft mail. It became clear to Warsaw that the SaaS (Software as a Service) model entails not only convenience, but also exposure to US extraterritorial law (the CLOUD Act).

    However, Poland does not have to reinvent the wheel. An opportunity for the COI lies in its role as an integrator of European standards, such as the German openDesk or the French La Suite Numérique. If the national suite is built on the basis of transparent public-private partnerships, rather than locked inside a single institution, it could become a flywheel for the domestic IT sector. After all, true sovereignty is not about using software because it is ‘national’, but about the ability to freely choose technology that does not make the state hostage to a single supplier.

    Global impact and redefinition of the relationship with Big Tech

    The French retreat from the Windows ecosystem is forcing a fundamental shift in the business strategy of global technology corporations. For decades, a model based on closed software and a monopolistic position on desktops has been the foundation of Microsoft’s stable public sector revenues. The Paris decision, backed by specific timeframes and budgets, ends the unconditional acceptance of the terms dictated by Redmond vendors.

    The main impact for Microsoft and other giants (Google, AWS) is the loss not only of financial revenue – one country’s administration is a fraction of global revenue – but of its status as the ‘default standard’. If France proves that the modern state can operate on the basis of its own technology stack (La Suite Numérique) and open systems, the ‘Vendor Lock-in’ model may begin to crumble in other regions of the world, from European Union countries to the BRICS countries also seeking technological emancipation.

    Projected changes in Big Tech strategies:

    • Forced transparency: Microsoft, in order to save its position in Europe, will be forced to offer so-called sovereign clouds with a much higher degree of transparency regarding source code and data location. We are already seeing attempts to create partnerships (e.g. the Bleu project in France) that are supposed to be an ‘acceptable compromise’ between US technology and European security requirements.
    • Shifting front on AI: As the operating system becomes a commodity, artificial intelligence models are the new battleground for dominance. The success of the French Mistral AI shows that Europe is taking the ‘open weights’ route, which stands in contrast to the closed ecosystems of OpenAI or Google.
    • Consolidation of the European IT market: The French push is stimulating demand for the services of local infrastructure providers (OVHcloud, Scaleway, Dassault Systèmes). The estimated increase in sovereign cloud spending to €23 billion in 2027 represents capital that will be invested in European innovation instead of funding US R&D.

    States will increasingly require the full auditability of software as a condition for admission to public procurement. As a result, technology giants will face a choice: either comply with sovereignty requirements and open up their systems to external scrutiny, or be pushed out of the most sensitive sectors of government.

    The replacement of the dashboard layer in France is only the visible apex of change. Underneath is a deep reconfiguration of financial and decision-making flows. If this model proves effective, Microsoft could lose its role as the ‘operating system of states’, becoming just one of many providers in a pluralistic, open ecosystem.

    Sovereignty as an investment, not a political cost

    While the upfront costs of migrating 2.5 million civil servants to open source solutions are significant, they should be seen as an investment in the national technology ecosystem rather than as a one-off operational expense. Unlike the subscription model, in which capital irretrievably leaves the local market, funds invested in the development of solutions such as La Suite Numérique build sustainable developer and infrastructure competencies within the state.

    A key factor for success in 2026 is to move away from thinking of the operating system as an isolated island. France is proving that sovereignty is built in layers: from open code on desktops, through certified cloud (SecNumCloud), to independent AI models (Mistral). This approach resolves the dilemma that doomed earlier projects – swapping Windows for Linux while remaining in the Microsoft or Google cloud would be only a cosmetic change. Only control of the entire technology stack provides real resilience to geopolitical pressures and to changes in the pricing policies of global corporations.

    Lessons for Poland and decision-makers in the CEE region:

    1 Audit of dependencies: The Polish administration needs to move beyond the role of a passive consumer of licences. A sound TCO (Total Cost of Ownership) analysis over a decade is required, taking into account the risk of sudden increases in subscription prices and the potential benefits of developing in-house tools.

    2 Building an alternative pathway: The French example shows that migration does not have to be abrupt, but must be planned. Poland needs its own ‘plan B’ – an open source toolkit that can be implemented in parallel to proprietary systems, reducing the degree of dependency (so-called vendor lock-in).

    3 Regional cooperation: Digital sovereignty on the scale of one medium-sized country is difficult to achieve. An opportunity for Poland is active participation in projects such as the Euro-Office or a common European cloud, which will allow R&D costs to be spread over many member states.
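    The decade-long TCO comparison called for in point 1 can be sketched in a few lines of code. The figures below (per-seat licence price, annual price increase, migration and support costs) are purely hypothetical placeholders, not real pricing data:

```python
# Hypothetical 10-year TCO sketch: subscription licences vs. a one-off
# open source migration. All figures are illustrative assumptions.

def subscription_tco(seats: int, price_per_seat: float, years: int,
                     annual_increase: float) -> float:
    """Cumulative licence cost with compounding annual price rises."""
    total, price = 0.0, price_per_seat
    for _ in range(years):
        total += seats * price
        price *= 1 + annual_increase
    return total

def open_source_tco(seats: int, migration_per_seat: float,
                    support_per_seat: float, years: int) -> float:
    """One-off migration outlay plus flat annual support/maintenance."""
    return seats * (migration_per_seat + support_per_seat * years)

SEATS = 100_000  # hypothetical size of an administration
sub = subscription_tco(SEATS, price_per_seat=120, years=10, annual_increase=0.08)
oss = open_source_tco(SEATS, migration_per_seat=400, support_per_seat=40, years=10)
print(f"Subscription: {sub / 1e6:.0f}M, open source: {oss / 1e6:.0f}M")
```

    The absolute numbers matter less than the sensitivity: even a few percentage points of annual subscription increase compound dramatically over a decade, which is exactly the pricing risk such an audit is meant to expose.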

    Remaining in Microsoft’s ‘licence well’ is a short-sighted strategy. And although full digital sovereignty is a utopia, moves that give the state more choice and greater independence from a single key supplier are seen as necessary, especially in a tense geopolitical situation.

  • Hyperscalers are taking over the data centre market. Is this the end of on-premise?

    Hyperscalers are taking over the data centre market. Is this the end of on-premise?

    For decades, the company server room was the technological equivalent of a family castle. It was tangible proof of sovereignty, a safe haven for data and the pride of IT departments that nurtured their own silicon with almost craftsmanlike precision. But the latest predictions from Synergy Research Group plot a scenario in which these digital fortresses become costly open-air museums. By 2031, hyperscalers such as Google, Microsoft and AWS will have seized 67% of global data centre capacity for themselves. What we are seeing is a rapid shift in the centre of gravity of the digital world, necessitated by the brute physics of artificial intelligence.

    The architecture of coercion

    In 2018, enterprises controlled more than half of the world’s computing infrastructure. The prospect of 2031, in which this share shrinks to just 19%, seems at first glance a statistical error. However, the reason for this dip is not an unwillingness to own, but an inability to meet the demands of the new era. Modern AI systems, based on GPUs and specialised chips such as TPUs, require power densities and cooling systems that exceed the design standards of traditional office buildings.

    Hyperscalers are building infrastructure today at fourteen times the scale of just eight years ago. This scale creates a barrier to entry that is impossible for a single organisation to break through. When Satya Nadella announces a doubling of Microsoft’s physical data centre footprint in just two years, he is not talking about building data warehouses, he is talking about creating large-scale innovation reactors. For the average enterprise, trying to catch up to this pace in-house would be akin to building a private power plant network just to power the office kettle.

    The currency of gigawatts and limits

    In the new economic order, capital is no longer the only determinant of development opportunities. The availability of computing power, treated as a scarce and limited resource, is coming to the fore. Strategic partnerships, such as those entered into by Anthropic with Google or OpenAI with AMD, are in fact reservations of energy and silicon for years ahead. In a world dominated by language models and advanced analytics, the ‘power shortage’ referred to by Microsoft’s Amy Hood is becoming a real operational risk for any technology-dependent business.

    This phenomenon is fundamentally changing the role of technology leaders in organisations. The CIO ceases to be a steward of fixed assets and becomes a digital commodity strategist. He or she must operate in a reality where computing power is rationed and its price can skyrocket depending on local energy conditions. Projected energy price spikes of up to 79% in technology hubs will force a new discipline on business: algorithmic frugality.

    Physical resistance of the cloud

    Although the term ‘cloud’ suggests something ethereal and intangible, its foundations are heavy, loud and increasingly contested by the public. The expansion of the technology giants is colliding with the barriers of local politics and ecology. Digital progress is no longer seen as an indisputable good.

    For business, this means a new form of localisation risk. Dependence on one region or supplier coming into conflict with a local community or energy system can become a bottleneck for AI-based product development. This is why more and more companies are attempting to secure operational continuity in the face of growing resentment towards energy-intensive giants.

    Risks of gigantism and opportunities of localism

    The dominance of hyperscale providers brings with it risks that become market opportunities for on-premise proponents. Dependence on a narrow group of suppliers (vendor lock-in) and their vulnerability to local social conflicts or investment blockades – such as those in Wisconsin or Maine – make a diversified in-house infrastructure an insurance policy.

    Opportunities for in-house data centres lie in their ability to adapt where the giants are too sluggish. Local units can deploy innovative heat recovery systems or use niche, green energy sources more quickly, building better relationships with the environment than anonymous, energy-intensive megastructures. This is where ‘edge AI’ is born, processing data where it arises, without the need for costly and slow transfer to global centres.

    Balance as the new overarching strategy

    A comprehensive look at 2031 dictates that we see it not as capitulation but as a new specialisation. The threat to business is not the power of Google or Microsoft, but the lack of a thoughtful in-house infrastructure strategy. Organisations that indiscriminately abandon their own resources may one day find that access to innovation is rationed by external suppliers.

    The right chess move today is to reinvest in ‘intelligent on-premise’. This is a smaller but denser infrastructure, optimised for a company’s specific, unique algorithms, while generic computing tasks are delegated to the cloud. This duality allows the company to benefit from the enormity of hyperscalers’ investments, while retaining the hard core that makes the company a sovereign player in the market.

  • Microsoft halts cloud and sales hiring

    Microsoft halts cloud and sales hiring

    Microsoft has ordered managers of key units, including its strategic cloud division and North American sales groups, to halt the recruitment of new employees, reports The Information. The decision, while not corporate-wide, signals a deeper shift in resource management at the threshold of the end of the fiscal year.

    Microsoft’s move is a classic example of margin optimisation in the face of gigantic capital expenditure. The company, which employs more than 220,000 people globally, is under increasing pressure from Wall Street. Investors, accustomed to the steady growth of the Azure sector, are watching with growing unease the record spending on the data centres and processors needed to support language models. The hiring freeze in sales and cloud infrastructure is a signal that the company is looking for savings where growth rates have stabilised in order to fund the areas with the highest potential for breakthrough.

    Importantly, the hiring freeze is selective. The teams responsible for developing Microsoft’s Copilot tool and key AI projects still have the green light to recruit talent. This is a clear indication that, for CEO Satya Nadella, ‘artificial intelligence’ is no longer just an add-on to the portfolio, but a new business core to which the cost structure of the entire organisation is subordinated.

    Microsoft’s actions are part of a wider ‘year of efficiency’ trend in Silicon Valley. While Meta is cutting a fifth of its workforce and Amazon is correcting pandemic-era over-expansion, Microsoft is taking the route of surgical precision. Instead of mass layoffs on the scale of its market rivals, the company is relying on budgetary discipline in traditional verticals.

    Technology companies are not only building AI for their customers, but are themselves going through a painful process of reorganisation in which human capital has to give way to investment in computing power.

  • Microsoft stops bombarding users with Copilot and withdraws AI from Windows 11

    Microsoft stops bombarding users with Copilot and withdraws AI from Windows 11

    For the past two years, Microsoft’s strategy has been clear: artificial intelligence is to be everywhere, whether the user wants it or not. However, the Redmond giant’s latest decisions suggest that it is time for a substantive rethink. Pavan Davuluri, head of Windows and devices, has just announced a significant change of course.

    Copilot, hitherto shoved into almost every nook and cranny of the operating system, is beginning to disappear from places where its presence was forced.

    This is not just a minor technical correction, but a strategic signal for the technology market. Apps such as Photos, Notepad and the Snipping Tool are losing direct, aggressive links to the AI assistant. Microsoft thus acknowledges that the ubiquity of algorithms has become a burden rather than a convenience for many business users.

    The ‘noise’ generated by uninvited AI features bred a resistance that Redmond eventually had to acknowledge.

    Analysing these moves in a broader context, it is clear that Microsoft has hit an enthusiasm ceiling. The problems with the Recall feature – which, due to privacy and data security concerns, has become an image burden – showed that the pace of innovation has overtaken infrastructure readiness and customer confidence. The slowing down of Copilot integration in File Explorer or System Settings is an attempt to salvage the foundations of Windows: transparency and predictability.

    The added value from AI comes not from its ubiquity, but from its fine-tuning to processes. Rather than ‘pushing’ the assistant everywhere, Microsoft is now promising to get back to basics, such as optimising system speed and greater taskbar personalisation.

    It is a rare moment when the world’s largest software company acknowledges that a product must follow real user needs, not just stock trends. For the Windows ecosystem, this marks a shift from the ‘AI at all costs’ phase to one of mature implementation.

  • OpenAI on the AWS platform? Microsoft fights for cloud exclusivity

    OpenAI on the AWS platform? Microsoft fights for cloud exclusivity

    The Financial Times reports that Microsoft is considering legal action against OpenAI and Amazon. The bone of contention is a $50 billion deal that could end the Redmond giant’s previous dominance as the exclusive cloud infrastructure provider for ChatGPT developers.

    ‘Frontier’, OpenAI’s new commercial product, has become a flashpoint. The key question is whether making it available via Amazon Web Services violates the exclusivity provisions of the Azure platform.

    For business leaders, this signals that the era of monolithic partnerships in AI is coming to an end. OpenAI, seeking to diversify its revenue and reach, is beginning to test the limits of loyalty to its largest investor.

    From a market perspective, the potential litigation could redefine the standards of cooperation between model providers and infrastructure giants. If Amazon manages to break Microsoft’s monopoly, a new wave of competitiveness awaits, forcing enterprises to be more flexible in their multicloud strategies.

  • Why is Microsoft standing up for Anthropic?

    Why is Microsoft standing up for Anthropic?

    In Silicon Valley, the competition for dominance in the artificial intelligence sector usually resembles an arms race. However, in the face of bureaucratic pressure from the Department of Defence (DOD), the major players have decided to close ranks. Microsoft has officially backed Anthropic’s request to block the Pentagon’s decision to declare the developer of the Claude models a ‘supply chain risk’.

    Pragmatism instead of solidarity

    For Microsoft, intervening in federal court in San Francisco is not just a gesture of goodwill towards a competitor. It is a cold business calculation. The Redmond giant has integrated Anthropic’s technology into solutions provided to the US military. Suddenly cutting off access to these models would call into question the continuity of defence contracts and force engineers to make costly, hasty rebuilds of systems.

    Microsoft argues that, in giving it six months to withdraw from Anthropic’s technology, the Pentagon forgot an analogous transition period for third-party contractors. Without a temporary restraining order (TRO) suspending the decision, technology companies will be saddled with new and unpredictable operational risks that could destabilise their business planning for years.

    A new front in Big Tech’s relationship with government

    The case sheds light on a broader issue: the tension between the pace of innovation and the rigours of national security. The DOD’s decision to blacklist Anthropic is all the more surprising given that the startup promotes itself as a leader in AI safety and ethics. Microsoft’s voice, backed by engineers from OpenAI and Google, suggests that the industry resents arbitrary decisions by officials that could block the military’s access to cutting-edge tools.

    The dispute shows that in the AI sector, technical success is only half the battle; the other is navigating the increasingly complex maze of government regulation. If the court does not grant Anthropic and Microsoft’s request, the precedent could hit any software vendor working with the public sector.

  • The end of cheap power for AI? Trump hits back at tech giants

    The end of cheap power for AI? Trump hits back at tech giants

    The Trump administration has thrown down the gauntlet to technology giants, confronting them with a dilemma that could redefine the economics of data centres in the US. During his State of the Union address, the president announced that major technology companies would be forced to build their own power supplies to relieve the strain on the national electricity grid. While the rhetoric focuses on protecting consumers’ wallets from rising bills, for the AI and cloud computing sector this marks a shift from a pure consumption model to a role as critical infrastructure developers.

    The move is a direct response to tensions in states such as Virginia and Ohio, where the rapid growth of AI clusters has led to an overloading of local grids. PJM Interconnection, a key operator on the East Coast, has previously suggested that new large-scale energy customers need to bring their own capacity into the system. Now these suggestions are becoming the foundation of hard federal policy.

    For leaders such as Microsoft, Google and Amazon, the announced March meeting at the White House will not just be a courtesy visit. These companies have been investing in renewables for years, but Trump’s new doctrine suggests something much more demanding: physical separation from the public grid or direct financing of new energy generation (so-called ‘behind-the-meter’ capacity). This forces capital-intensive investment, probably towards small modular reactors (SMRs) or advanced energy storage systems, which are currently in the early stages of deployment.

    From a business perspective, Trump is trying to kill two birds with one stone. On the one hand, the administration wants to maintain its lead over China in the AI arms race, which requires gigantic computing power. On the other hand, rising energy prices have become a political burden ahead of the upcoming mid-term elections. By shifting the cost of infrastructure development to Big Tech, the White House is taking this burden off the shoulders of voters, while forcing companies to accelerate innovation in the energy sector.

  • AI investments under question: Why is the stock market losing billions?

    AI investments under question: Why is the stock market losing billions?

    In just a few weeks, an astronomical sum of $1.3 trillion evaporated from the valuations of major tech giants. This phenomenon, although rapid, was not the work of an unfortunate coincidence or a temporary panic of trading algorithms. Rather, it was a harsh market verdict delivered on a business model based largely on narrative rather than on the foundation of cash flow generation. Unconditional faith in the promises of artificial intelligence has come to an end, giving way to an era of rigorous viability verification.

    For the past few years, the technology sector has been fed visions of an almost eschatological nature, in which artificial intelligence was to become the panacea for all efficiency ills. January 2026, however, brought a radical change of perspective. Investors, hitherto inclined to indulge far-reaching goals, turned their attention to current financial transparency. The gap between the enthusiastic declarations made at international economic forums and hard operational reality became too clear to be ignored any longer. Microsoft became the symbol of this tumble: its capitalisation shrank by $613 billion, a fall which, set against daily declines reaching 12%, showed the scale of the fragility of modern valuations.

    The reason for this lies in the deep disconnect between the perceived usefulness of the technology by its creators and its real-world adoption at the consumer and corporate level. While leaders in Redmond or Seattle talk about changing the world, the average enterprise is still searching for answers on how generative models are going to realistically translate into operating margins. This gap in understanding led to a speculative bubble that burst when the market began to demand evidence of returns on gigantic capital expenditures.

    In doing so, it is worth noting an interesting paradox that sheds new light on the structure of the current crisis. While developers of software and language models are losing value, the beneficiaries of the situation remain those operating in the realm of physical infrastructure. Companies such as Taiwan Semiconductor Manufacturing Co or Samsung Electronics are experiencing growth, suggesting that capital is not fleeing the technology sector altogether, but is making a strategic rotation. Investors have begun to favour component suppliers whose returns are tangible and immediate, at the expense of visionaries whose success depends on the future, still uncertain monetisation of services. In this context, the success of a traditional giant like Walmart, which has significantly increased its market value through the targeted deployment of solutions in logistics, becomes a signpost for modern business strategy. Success is not achieved by whoever has the most powerful technology, but by whoever can harness it most effectively to generate savings in the real value chain.

    One of the most worrying aspects of the current situation is the phenomenon of the so-called capital expenditure trap. Projects of almost cyclopean scale, such as the OpenAI Stargate initiative valued at $500 billion, have turned into mechanisms for consuming capital on an industrial scale. A fundamental strategic dilemma is emerging here for technology and finance executives. Expenditure on AI infrastructure is growing exponentially, while the lifecycle of purchased hardware is shortening dramatically. There is a real risk that state-of-the-art accelerators and memory systems will become technically obsolete faster than they can generate a profit covering the cost of their purchase. The situation is exacerbated by a component availability crisis, with delayed deliveries arriving in data centres at a time when the next, more efficient generation of silicon is already on the horizon.
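    The ‘capital expenditure trap’ described above is, at bottom, a race between payback and obsolescence. A toy calculation with invented numbers (accelerator price, monthly revenue and running costs, useful life) illustrates the dilemma:

```python
# Toy payback model: does an AI accelerator earn back its purchase price
# before the next silicon generation makes it obsolete?
# All numbers are invented for illustration only.

def months_to_break_even(capex: float, monthly_revenue: float,
                         monthly_opex: float):
    """Months needed for net revenue to cover the purchase price."""
    net = monthly_revenue - monthly_opex
    if net <= 0:
        return None  # the asset never pays for itself
    return capex / net

capex = 40_000        # hypothetical cost of one accelerator
useful_life = 24      # months until a new generation erodes its value
breakeven = months_to_break_even(capex, monthly_revenue=2_500, monthly_opex=900)
print(f"Break-even after {breakeven:.0f} months; useful life {useful_life} months")
```

    If the break-even point falls beyond the useful life, as in this invented case, the hardware is written off before it ever turns a profit: that is the trap in a single inequality.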

    The current market correction is a lesson in humility towards the laws of economics. It demonstrates that, in the long term, technology cannot escape the need to demonstrate its cost-effectiveness. The strategic tension IT decision-makers now find themselves in requires a move away from aggressive, often unreflective pursuit of novelty towards sustainability. Rather than building monumental, untested systems, it makes sense to focus on financial transparency and precise definition of operational goals. The market no longer rewards mere presence in the AI arms race; it now rewards the ability to win individual performance battles.

    A key lesson from recent developments is the understanding that artificial intelligence is undergoing a process of normalisation. It is ceasing to be treated as a magical tool with infinite potential, and is beginning to be seen as a costly asset that must be managed with the same discipline as a machine park or a vehicle fleet. This is not the end of the revolution, but the moment of its transition into a mature phase, where the advantage is determined not by the amount of capital invested, but by the precision of its allocation.

    Big Tech’s billion-dollar loss in 2026 does not necessarily signify the twilight of innovation, but is rather a necessary course correction. For the business world, it signals that the time of speculative euphoria is over and that the future belongs to those entities that can combine technological savvy with iron business logic. The biggest challenge of the coming months will therefore not only be the fight for access to the fastest processors, but above all the fight to regain the confidence of investors by showing real, tangible results from the transformations underway. In a world where trillions of dollars can disappear in a few trading sessions, the most valuable currency becomes credibility and the ability to generate profit here and now.

  • $50 billion from Microsoft. The giant is betting on emerging markets

    $50 billion from Microsoft. The giant is betting on emerging markets

    At the AI Summit in New Delhi, Microsoft outlined a new map of its global dominance. The Redmond-based giant has pledged investments of $50 billion by the end of the decade, targeting countries in the Global South directly. It is a business move aimed at securing a leadership position in markets that will define the growth dynamics of the technology sector in the coming years.

    Satya Nadella’s decision to shift the capital burden towards emerging markets reflects the saturation of Western digital ecosystems. After last year’s $17.5 billion injection in India alone, Microsoft is clearly signalling that the battle for cloud infrastructure and software talent is shifting to the southern hemisphere. This means that future revenues from Azure and AI services will be generated where demographics favour rapid technology adoption.

    However, this strategy brings with it specific operational challenges. Investing in lower-income regions requires Microsoft not only to build data centres, but also to work closely with local governments to create an appropriate regulatory framework. In New Delhi, surrounded by world leaders, the giant’s managers have had to balance the promise of democratising access to AI with the need to protect intellectual property.

    The innovation hub is no longer the exclusive domain of Silicon Valley. Companies that want to remain relevant in the AI supply chain need to start looking at the Global South as a key testing ground for new business models.

  • Trusted Tech Alliance: Giants, led by Microsoft and Ericsson, build a wall against digital isolationism

    Trusted Tech Alliance: Giants, led by Microsoft and Ericsson, build a wall against digital isolationism

    The formation of the Trusted Tech Alliance, jointly announced by Microsoft, Ericsson and thirteen other industry leaders, is a clear signal: business is no longer a passive observer of the political fragmentation of the world. Faced with the escalating rhetoric of the new US administration and Europe’s push for ‘digital sovereignty’, corporations are trying to seize the narrative before politicians do it for them.

    Business versus geopolitics

    The initiative is born at a critical moment. Instead of waiting for inconsistent local regulations, companies such as Google, SAP and Amazon Web Services (AWS) want to impose their own supranational definition of ‘trust’. The foundation of the alliance consists of five pillars, intended to act as a universal security certificate:

    • Secure by design.
    • Ethical conduct and corporate governance.
    • Rigorous standards in supply chains.
    • Adherence to global safety standards.
    • Support for an open digital environment.

    For Microsoft’s Brad Smith and Ericsson’s Börje Ekholm, the goal is clear: to save the scalability of services. Ekholm points out that total technological sovereignty in today’s economy is an illusion and a straight path to trade barriers that will hit innovation.

    Escape to the front?

    Analysts point out, however, that behind the guise of concern for standards lies a classic defence mechanism. The Trusted Tech Alliance can be read as an attempt to get ahead of hard regulation (like the European AI Act) through soft law. Companies set rules that are convenient for themselves before governments impose costly ones.

    Moreover, the criticism of ‘digital sovereignty’ coming from Silicon Valley is sometimes perceived as hypocritical. For many countries, data localisation is the only defence mechanism against digital colonialism. There is also a risk that the alliance’s exorbitant standards – requiring costly independent audits – will become a protective moat for giants, effectively cutting off smaller startups and competitors from emerging markets.

  • Sharp Europe’s strategic turnaround: Full AI and Security accreditation from Microsoft

    Sharp Europe’s strategic turnaround: Full AI and Security accreditation from Microsoft

    It is rare for a company historically associated with office equipment to make such a radical and effective turn towards highly specialised digital consultancy. Sharp Europe, through its Sharp DX unit, has just sent a clear signal to the market by winning a set of six Microsoft Solutions Partner certifications. This achievement places the company in a small group of global players capable of comprehensively supporting the Redmond-based giant’s ecosystem.

    From a business perspective, this is not just a marketing gesture. In an era of increasing cloud complexity and stringent data regulation, enterprise customers are moving away from working with a range of niche providers to end-to-end partners. With accreditations in areas ranging from security and modern working to Azure infrastructure and data and AI, Sharp is positioning itself as a strategic integrator. For CIOs, this means less implementation risk and technology consistency, which in highly regulated industries is a more valuable currency than ever.

    This success is the result of a consistent consolidation of the company’s European IT structures. Under the Sharp DX banner, regional competences from markets such as the UK, France and Switzerland have been integrated, supported by a strong technology centre in Warsaw. This structure makes it possible to combine a local presence with a powerful operational scale, which, in the Microsoft AI Cloud Partner Program model, is crucial to support large-scale transformation processes.

    Roland Singer, Vice President of Sharp DX Europe, rightly points out that these certifications are the foundation for the next phase: the AI era. With the upcoming Microsoft AI Tour in 2026, Sharp is moving from being seen as a device supplier to becoming an architect of the digital workplace. The company’s strategy shows that survival in modern business depends on the ability to manage data and security in the cloud, rather than just providing hardware.

  • Microsoft Poland bets on veterans. Strategic castling in the shadow of the race for AI

    Microsoft Poland bets on veterans. Strategic castling in the shadow of the race for AI

    Microsoft’s Polish division is making a move that suggests a need for stability and deep knowledge of the local ecosystem. The appointments of Ilona Tomaszewska and Tomasz Wilecki, announced at the beginning of 2026, are not so much a personnel revolution as a precise regrouping of forces in key areas for the Redmond giant: partner network and public administration.

    The decision to give Ilona Tomaszewska the role of Partner Channel Director (EPS Lead) is a signal to the market that Microsoft intends to monetise its investment in cloud infrastructure through an army of intermediaries. Tomaszewska, with 16 years of experience at the company, is taking the helm of a network of 7,500 partners at a critical time. In the AI era, the technology itself is becoming a commodity; the real margin lies in implementation and consultancy. Her experience in the public sector can prove invaluable in aligning partner offerings with the stringent regulatory requirements that currently define the pace of innovation adoption in Poland.

    Tomasz Wilecki’s return to the public sector after eight years in other divisions, including most recently as sales leader for AI solutions, is indicative of Microsoft’s priorities in its relationship with the state. The Polish administration faces the challenge of implementing artificial intelligence in a secure way, and Wilecki is to be the guarantor of this process. His appointment combines technological competence with the ability to navigate complex public structures. For Microsoft, this sector is a strategic bastion – not only because of the scale of procurement, but also as a testing ground for sovereign cloud solutions.

    Both appointments share a common denominator: loyalty and internal promotion. In a time of geopolitical and regulatory uncertainty (AI Act), Microsoft is betting on proven leaders who can translate global corporate strategy into the specific realities of the Polish market. This is a safe, but also very pragmatic move. Instead of looking for ‘fresh blood’ externally, the company is opting for the relational capital that Tomaszewska and Wilecki have built up over the years. In the competition with Google Cloud and AWS for dominance in the Polish cloud, it is these relationships that could prove to be a decisive asset.

  • Sovereignty or a bottomless pit? Government, Microsoft and billions for licences

    Sovereignty or a bottomless pit? Government, Microsoft and billions for licences

    In February 2026, the Polish debate on digitalisation entered a new, hot phase. The Central Informatics Centre (COI) announced an ambitious plan: to build a national office suite for public administration to replace the Redmond giant’s solutions. This move is part of the broader European trend of ‘digital sovereignty’, but the Polish version raises as much hope as justified scepticism. For business and IT leaders, it is a signal that the hitherto “licence first” model is no longer the only path to state development. But is Poland ready for its own ‘Office’, or are we in for another billion-dollar project of dubious quality?

    The dictate of one supplier: Diagnosing the monopoly

    The starting point for the COI initiative was the Instrat Foundation report ‘Seemingly open procurement’, published at the end of 2025. The figures are damning: as much as 99% of the analysed public procurements for office software in Poland directly or indirectly favour Microsoft products. In one in five tenders, competitors are excluded outright; in the rest, they are shut out by technical requirements that only one ecosystem can meet.

    The vendor lock-in phenomenon has ceased to be a theoretical academic problem and has become a real threat to government budgets. As licence costs rise, the administration has nowhere to run, because the entire infrastructure, from mail to advanced spreadsheets, is based on closed standards. The head of the COI, Radosław Maćkiewicz, makes it clear: Poland spends too much money on Microsoft software, and these funds could support an indigenous IT ecosystem.

    The other side of the coin: The spectre of billion-zloty systems

    When the administration talks about ‘building its own solutions’, a red light goes on in the private sector. The history of Polish public IT is full of projects whose costs ran into billions and whose quality left much to be desired. A symbol of these concerns is the ZUS Comprehensive Information System (KSI ZUS). According to figures from recent years, the maintenance and development of this system over a six-year cycle (2015-2020) cost the state nearly PLN 2.8 billion. What is more, current contracts for the maintenance of KSI ZUS alone amount to hundreds of millions of zlotys (e.g. Asseco’s offer for nearly 350 million zlotys).

    Critics rightly ask: should a state that struggles to manage such behemoths efficiently embark on building an ecosystem from scratch to compete with Microsoft 365, which has been fine-tuned over decades? A modern office suite is far more than a word processor: building one involves hundreds of thousands of hours of developer work, security testing and cloud integration. There is a real risk that the ‘national alternative’ will become another bottomless pit, with billions of public money disappearing into it and the end product lagging behind market standards.

    At the same time, it is worth bearing in mind expert assessments of the quality of software built by the COI, for instance the mCitizen app. Although the application is popular, reports by the Defence CSIRT have pointed to security gaps, such as ‘dead code’ or vulnerabilities in the library supply chain. Scaling these problems up to a system on which every official in the country depends for their daily work raises understandable fears.

    Lesson from The Hague: Why sovereignty is not just about savings

    Arguments about costs are only part of the equation, however. The geopolitical turning point came in May 2025, when the Prosecutor of the International Criminal Court (ICC) in The Hague reportedly lost access to his Microsoft-provided email account. The official reason was said to be the sanctions imposed by the Donald Trump administration.

    Regardless of the corporation’s subsequent denials, the incident horrified European decision-makers. It became clear that relying on the US SaaS model was not only a matter of convenience, but also an exposure to US extraterritorial law (e.g. CLOUD Act) and the political decisions of a foreign power. It is this fear that is driving migrations today in the German state of Schleswig-Holstein (60,000 posts switching to Linux and LibreOffice) or the Danish Ministry of Digitalisation. In these cases, sovereignty over data is valued more highly than the polished interface of a US-based giant.

    Transparency of savings: The myth of ‘free’ Open Source

    The COI initiative is to be based on open-source solutions. This is key, because it lowers the barrier to entry: there is no need to write everything from scratch, as projects such as LibreOffice, Collabora Online or Nextcloud can be drawn on. However, in the IT business, ‘open’ rarely means ‘cheap in the short term’.

    Transparency of savings requires an honest look at TCO (Total Cost of Ownership). While the cost of Microsoft’s licences (around PLN 38-49m per year for ZUS alone in 2024/25) is easily measurable, the hidden costs of migration are huge. Germany’s Schleswig-Holstein estimates that it will save €15m per year on licences, but will at the same time invest €9m in 2026 alone in the transformation and training process.
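    These trade-offs can be made concrete with a toy break-even model. The sketch below uses the Schleswig-Holstein figures quoted above (€15m per year in licence savings, €9m in one-off transformation spending); the €3m-per-year open-source support cost is a purely hypothetical assumption added for illustration.

```python
def cumulative_net_savings(years: int,
                           licence_savings_per_year: float = 15.0,
                           one_off_migration_cost: float = 9.0,
                           oss_support_per_year: float = 3.0) -> float:
    """Cumulative net savings in EUR millions after `years`.
    Licence savings accrue yearly; the migration cost is paid once;
    the open-source support cost (a hypothetical figure) recurs yearly."""
    return years * (licence_savings_per_year - oss_support_per_year) \
        - one_off_migration_cost

for y in (1, 2, 3, 5):
    print(f"Year {y}: {cumulative_net_savings(y):+.1f} mEUR net")
```

    On these illustrative numbers the migration pays for itself within the first year; the real point of the exercise is that the conclusion is highly sensitive to the assumed recurring support cost.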

    The real cost of the national office suite will be:

    1. Training and adaptation: officials accustomed to Outlook can lose productivity drastically in their first year with the new tool.

    2. Maintenance and SLA: open source requires strong local support teams. Instead of paying Microsoft, we will pay Polish companies (e.g. in a PPP model), which supports the economy but does not necessarily mean a drastic decrease in budget expenditure.

    3. Compatibility: millions of historical .docx documents and advanced .xlsx worksheets must work flawlessly. The cost of fixing formatting errors can run into the millions.
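    The compatibility point lends itself to automation: before any conversion work, millions of legacy files can at least be triaged structurally. The sketch below is an illustrative pre-check (not any official COI tooling): it verifies that a file is a well-formed .docx container, i.e. a ZIP archive containing word/document.xml. True fidelity testing would still require rendering comparisons.

```python
import io
import zipfile

def looks_like_docx(data: bytes) -> bool:
    """Cheap structural pre-check: a .docx file is a ZIP archive
    that contains word/document.xml. This catches corrupt or misnamed
    files, but says nothing about rendering fidelity."""
    try:
        with zipfile.ZipFile(io.BytesIO(data)) as zf:
            return "word/document.xml" in zf.namelist()
    except zipfile.BadZipFile:
        return False

# Build a minimal stand-in archive to demonstrate the check.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("word/document.xml", "<w:document/>")

print(looks_like_docx(buf.getvalue()))             # → True
print(looks_like_docx(b"plain text, not a zip"))   # → False
```

    A batch run of such a check over an archive is cheap and gives an early estimate of how much of the document stock will need manual attention.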

    European trend: Lyon and Schleswig-Holstein lead the way

    Poland is not alone. Lyon, France’s third-largest city, is already deploying OnlyOffice and Linux on 10,000 workstations, arguing not only for sovereignty but also for ecology: open software allows older hardware to be used for longer. Meanwhile, the German openDesk project, developed by ZenDiS under the aegis of the German Interior Ministry, is becoming a ready-made standard for the whole of Europe.

    This is where the opportunity for the COI lies: not to reinvent the wheel, but to become a Polish integrator of European sovereign solutions. Using Collabora Online in conjunction with the Polish government cloud would avoid the fate of ZUS’s KSI, while guaranteeing that citizens’ data never leave the country.

    Value for business

    The COI initiative should be read as a call for diversification. A complete abandonment of Microsoft in commercial enterprises is unlikely today, but building ‘hybrid resilience’ is slowly becoming a necessity.

    This is a huge opportunity for the Polish IT sector. The shift from a model of selling licences (where most of the margin goes to the US) to a model of high-margin implementation and maintenance services around open source can be a ‘flywheel’ for domestic integrators. However, business needs to keep a close eye on the state. If the ‘national package’ is locked within the walls of a single institution, it will become an expensive monument. If, on the other hand, it is built on transparent public-private partnerships and open standards, it can become the foundation of a modern state that invests in its own intellect instead of paying a ‘digital tax’ to giants.

    The key to success will not be the technology (tools such as LibreOffice and OnlyOffice already exist), but the transparency of how the ‘saved’ millions are spent. True sovereignty is the ability to choose technology freely, not being forced to use software just because it is ‘national’. Poland must prove that it can build systems that are not merely expensive, but above all effective.

  • Davos 2026: no more romanticising AI. Time for hard reality

    Davos 2026: no more romanticising AI. Time for hard reality

    The World Economic Forum in Davos has acted as a barometer of global sentiment for years, but this year’s edition will be remembered as a moment of great technological sobering. If previous years were marked by an almost childlike fascination with the potential of generative artificial intelligence, January 2026 brought a hard landing in the reality of physical limitations. Silicon Valley leaders, financial-sector giants and policymakers abandoned utopian visions in favour of cool engineering and a pragmatic profit-and-loss calculus. Behind the scenes of the Alpine forum, the question was no longer what AI could write, but how to build a planetary infrastructure capable of carrying these ambitions.

    Five-layer foundation and primacy of application

    The foundation for this new perspective was the concept presented by Nvidia’s Jensen Huang during his high-profile debate with Larry Fink. Huang redefined the concept of artificial intelligence, moving away from seeing it as just another economic sector. Instead, he portrayed it as “the greatest infrastructure construction in human history”. The key to understanding this vision is the five-layer cake model, which starts with fundamental energy resources, moves through silicon processors and cloud data centres, and ends with the foundation models and final applications themselves.

    Jensen Huang, CEO of NVIDIA; Larry Fink, CEO of BlackRock / Source: World Economic Forum

    In this architecture, Huang sees a new hierarchy of values. While media attention is focused on layer four – i.e. the models themselves, such as GPT and Claude – the Nvidia CEO has made it clear that it is unprofitable for most global players to compete in this field. Real economic value and return on investment (ROI) will be generated in layer five. It is in the application of intelligence to specific, deep problems such as drug discovery, hyper-optimisation of global logistics or materials engineering that the real growth engine lies. The message to business is clear: stop chasing model parameters, start building solutions that monetise those models.

    Data sovereignty as a new insurance policy

    As artificial intelligence becomes the ‘corporate brain’, a new, almost existential question of sovereignty resounded at Davos. Huang called on nations to build ‘National Intelligence’ and sovereign AI ecosystems, and business leaders extended that call to their own organisations. In 2026, it is becoming clear that organisations cannot afford to fully outsource their knowledge and decision-making processes to external providers.

    Artificial intelligence infrastructure must be treated with the same criticality as national roads or power grids. For corporations, this means retaining intellectual property rights not only to the data, but also to the fine-tuning processes and contextual data. Without their own autonomous intelligence, companies risk becoming mere rentiers of other people’s minds, which in the long term threatens their competitiveness and uniqueness in the market.

    Energy: the ‘to be or not to be’ of global growth

    The physicality of the technology resonated most strongly in the context of the energy crisis, which was the focus of Satya Nadella‘s speech. The Microsoft CEO made a brutally honest diagnosis: GDP growth anywhere in the world will from now on be directly correlated to the cost of energy required to power AI processes. Nadella warned of a loss of ‘public consent’ for the technology if the gigantic energy consumption does not translate into measurable successes in education, health and public sector productivity.

    Larry Fink, CEO, BlackRock and Interim Co-Chair, World Economic Forum and Satya Nadella, CEO, Microsoft / source: World Economic Forum

    The ‘tokens per dollar per watt’ concept he introduced is becoming the new hard currency of the modern economy. This warning coincides with market data showing drastic increases in energy prices, presenting infrastructure executives with a challenge that no algorithm can solve. Elon Musk joined this discussion and, in his characteristic style of combining engineering with visionary thinking, pointed out that the bottleneck is no longer chip manufacturing, but the capacity of electricity transmission networks. His proposal to move data centres into orbit, to be powered directly by solar energy, ceased to be treated at Davos as a curiosity and became a signal of how desperately new power sources will be sought in the next decade.
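    Nadella’s metric can be read as a simple compound efficiency ratio. The sketch below is one illustrative interpretation, not a formula Microsoft has published, and all the numbers in the example are hypothetical.

```python
def tokens_per_dollar_per_watt(tokens_per_second: float,
                               cost_per_hour_usd: float,
                               power_watts: float) -> float:
    """Illustrative reading of 'tokens per dollar per watt':
    tokens produced per hour, divided by hourly cost in dollars,
    divided by power draw in watts."""
    tokens_per_hour = tokens_per_second * 3600
    return tokens_per_hour / cost_per_hour_usd / power_watts

# Hypothetical accelerator: 1,000 tokens/s, $2/hour to run, 700 W draw.
print(round(tokens_per_dollar_per_watt(1000, 2.0, 700), 1))  # → 2571.4
```

    Framed this way, the metric rewards all three levers equally: faster inference, cheaper compute, or lower power draw.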

    Elon Musk, CEO, Tesla; Chief Engineer, SpaceX; CTO, xAI / source: World Economic Forum

    Emerging from pilot purgatory, and the junior crisis

    Despite the mammoth investment, Alphabet’s Ruth Porat hit on a sensitive point for global business: most companies are stuck in ‘pilot purgatory’. Deloitte data confirms this diagnosis – only 25% of organisations have managed to scale their AI projects. The problem turns out to be not a lack of technology, but a growing technical debt and a lack of a coherent data architecture. Davos 2026 sent a clear message: the time of fascination with chatbots is over; what matters now is the complete redefinition of operational processes.

    Dario Amodei, CEO, Anthropic; Demis Hassabis, CEO, Google DeepMind; Zanny Minton Beddoes, Editor-in-Chief, The Economist / source: World Economic Forum

    The biggest concern, however, is the transformation of the labour market. Dario Amodei (Anthropic) and Demis Hassabis (Google DeepMind) unanimously predict the emergence of AI agents capable of replacing junior engineers. This raises a fundamental training crisis: if artificial intelligence eliminates the entry stage, the industry will lose a natural testing ground for future experts. Jensen Huang offered an optimistic takeaway here, however. In his view, AI is “the easiest software to use ever”, and the core competency of the future will not be writing code, but “driving goals”. The IT worker is evolving from the role of craftsman to that of teacher and systems supervisor, forcing companies to transform their technical departments into centres of continuing education.

    AI’s new world order

    Davos 2026 marked the end of an era of digital innocence. Sam Altman himself, despite keeping a low profile, confirmed OpenAI’s commercial maturity, revealing billions of dollars in API revenue and proving that AI has permanently grown into the fabric of business. The overall tone of the forum, however, was one of caution: success in the coming years will depend not on how powerful the models we buy are, but on how intelligently we manage energy, data sovereignty and the transformation of our teams.

    The year 2026 is a time of ‘big clean-up’ and of building the foundations of an architecture that must be as resilient as it is efficient. The race for supremacy is on, but its finish line has shifted from computer screens to power plants and training rooms, where a new definition of human labour is being born.

  • The myth of invulnerability is over. Microsoft has opened the FBI’s BitLocker data vault

    The myth of invulnerability is over. Microsoft has opened the FBI’s BitLocker data vault

    BitLocker encryption has for years been regarded as a standard for data protection. However, Microsoft’s recent collaboration with the FBI in the Guam investigation sheds new light on the illusion of complete privacy in the Windows ecosystem. The Redmond giant has, for the first time, publicly confirmed handing over recovery keys to law enforcement, sending a clear message to business leaders and security officers: trust in cloud defaults can come at a price.

    The mechanism at the heart of the dispute is technically simple but politically complex. BitLocker generates a 48-digit recovery key which, by default, is backed up to Microsoft’s servers under the user’s account. Although the company argues that it receives only about twenty such requests a year and responds only to legitimate warrants, the mere fact that the corporation holds a ‘backup key’ to company laptops changes the perception of risk. Senator Ron Wyden described the practice as irresponsible, pointing out that systems designed with security in mind should not have a ‘back door’ available to the manufacturer.
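    The recovery password itself has a documented structure: eight groups of six digits, where each group is divisible by 11 and, after that division, encodes a 16-bit value, so the eight groups together carry 128 bits. A short sketch of a purely structural validator, assuming that documented format:

```python
def is_valid_recovery_password(password: str) -> bool:
    """Structural check of a BitLocker 48-digit recovery password:
    eight groups of six digits, each group divisible by 11, and each
    group divided by 11 must fit in 16 bits. This validates the format
    only -- it cannot tell whether the key unlocks any given volume."""
    groups = password.replace(" ", "-").split("-")
    if len(groups) != 8:
        return False
    for group in groups:
        if len(group) != 6 or not group.isdigit():
            return False
        value = int(group)
        if value % 11 != 0 or value // 11 >= 2**16:
            return False
    return True

print(is_valid_recovery_password("-".join(["000011"] * 8)))  # → True
print(is_valid_recovery_password("123456"))                  # → False
```

    Note that this checks only the format; where the key is escrowed — locally, in Active Directory, or in a Microsoft account — is precisely the policy decision the dispute is about.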

    From a management perspective, the case highlights the growing gap in security philosophy between the Big Tech giants. While Microsoft maintains an architecture that allows access to keys, competitors such as Apple and Meta are increasingly promoting end-to-end encryption, where even the manufacturer lacks the tools to read user data. With agencies such as ICE unsuccessfully attempting to break BitLocker’s security on their own in 2025, the pressure on software vendors to become the de facto arm of the judiciary has only increased.

    For businesses, the lesson from Guam is practical. Full sovereignty over data requires giving up the convenience of default cloud synchronisation in favour of local key storage on physical media. This, however, places total responsibility on IT departments for the potential loss of access to workstations. In an era of increasing regulation, such as the European debates surrounding ‘Chat Control’, the decision of where the key to a company’s safe physically rests ceases to be a technical detail and becomes a key element of a legal risk management strategy.