Tag: Microsoft

  • Microsoft’s costly mistake: Why January’s patch hit enterprise productivity

    Rarely does the cure prove more dangerous than the disease. But for Microsoft, the start of 2026 has been marked by putting out fires that the company itself has stoked. The just-released emergency update KB5078127 is intended to fix the chaos introduced by January’s ‘Patch Tuesday’ cycle on Windows 11 systems (versions 24H2 and 25H2).

    From a business perspective, the incident is more than just a technical glitch; it is a lesson in the fragility of modern cloud-based workflows. The problems, which began on 13 January, hit the very foundations of everyday work: file access and communication. Users of OneDrive or Dropbox saw applications freeze when trying to save documents. Outlook users were particularly hard hit: errors when saving PST files not only forced reboots of entire systems but also led to lost sent messages and inconsistent email databases.

    For IT departments already grappling with increasing infrastructure complexity, the two-week delay in delivering an effective solution was a wake-up call. Microsoft initially tried to salvage the situation with an optional interim patch to restore basic power functions – after the January update, some computers stopped responding to sleep and shutdown commands. However, this solution did not touch the core of the storage problem.

    The decision to release an ‘out-of-band’ update (outside the standard schedule) suggests that the scale of requests from enterprise customers has exceeded the Redmond giant’s tolerance threshold. KB5078127 is a cumulative update, meaning that technical departments can deploy it automatically, eliminating the effects of previous bugs without having to manually configure each workstation.
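
    For administrators who want the fix on a machine before the management tooling catches up, the standalone package can also be installed by hand. A minimal sketch, assuming the .msu has already been downloaded from the Microsoft Update Catalog (the file path below is illustrative); wusa.exe with its /quiet and /norestart switches is the standard Windows Update Standalone Installer interface:

    ```python
    import subprocess
    from pathlib import Path

    # Ad-hoc, single-machine install of the out-of-band update. The file name
    # is illustrative; fetch the package matching your build (24H2/25H2) from
    # the Microsoft Update Catalog first.
    msu = Path(r"C:\patches\windows11.0-kb5078127-x64.msu")

    subprocess.run(
        ["wusa.exe", str(msu), "/quiet", "/norestart"],  # standard wusa switches
        check=True,
    )
    ```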

    Microsoft’s aggressive release cycle has increasingly come at the expense of stability. In an environment where operating system stability is treated as a public utility, such stumbles undermine confidence in automatic updates, forcing administrators to revert to conservative methods of testing patches in isolated environments before mass deployment.

  • Coincidence or pressure to migrate? Microsoft and the problem with encryption in Outlook

    For thousands of IT managers, the return to work after the new year began with a communications crisis. Classic Outlook users who upgraded to build 19426.20218 in the Current Channel lost the ability to read encrypted messages. Instead of the email content, the system displays a credential verification error and an unreadable `message_v2.rpmsg` attachment. Microsoft confirmed the problem on 6 January and launched an investigation, but at the time of this writing, an official patch had not been released.

    The situation is serious because it affects the business-critical Information Rights Management (IRM) mechanism. Technically, the bug is a regression in the way the mail client processes RPMSG containers when attempting to retrieve usage licences. As a result, secure mail becomes a digital deadlock. Microsoft is for now suggesting ad hoc workarounds, including having senders change the encryption method (via the ‘Options’ ribbon instead of the ‘File’ menu) or rolling back the Office version to build 19426.20186 via the command line, a process too complicated for the average user.
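
    The rollback itself goes through the Office Click-to-Run client. A minimal sketch of that second workaround, assuming the standard OfficeC2RClient.exe location and the usual 16.0-prefixed build string (run from an elevated prompt; the Python wrapper is illustrative):

    ```python
    import subprocess

    # Roll classic Outlook back to the last build before the IRM regression.
    # OfficeC2RClient.exe is the documented Click-to-Run update client; verify
    # the path on your system before relying on it.
    C2R = r"C:\Program Files\Common Files\Microsoft Shared\ClickToRun\OfficeC2RClient.exe"
    TARGET = "16.0.19426.20186"  # build named in the article

    subprocess.run([C2R, "/update", "user", f"updatetoversion={TARGET}"], check=True)
    ```

    After rolling back, administrators typically pause Office updates so the broken build is not reapplied on the next update pass.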

    This failure is part of a worrying trend observed in recent quarters. Although Microsoft officially guarantees support for classic Outlook until at least 2029, the stability of this platform is becoming a matter of debate. This is the third major incident in the last four months – following November’s Exchange Online connectivity issues and October’s disappearance of the application from Windows systems.

    For IT decision-makers, this is a wake-up call. The Redmond giant is aggressively promoting the ‘New Outlook’, a web-based application that continues to meet resistance in corporate environments due to functional shortcomings and data privacy concerns. The increasing frequency of bugs in the classic version can be interpreted in two ways: as a result of the technological debt of the old code or, as more cynical market observers suggest, as a form of quietly winding down the product by lowering the priority of quality control. Regardless of Microsoft’s intentions, companies relying on the classic Exchange architecture need to prepare for the fact that using it will require increasingly frequent administrative interventions.

  • Less AI, more control. Microsoft allows IT administrators to remove Copilot

    Aggressive integration of artificial intelligence into the Windows ecosystem has been a top priority for Microsoft in recent quarters, but the company is beginning to recognise the need for greater flexibility for corporate administrators. The Redmond giant has begun testing a new policy setting that allows IT departments to systemically remove Copilot applications from managed devices, a significant nod to infrastructure managers expecting more complete control over corporate software.

    The new functionality, defined as the `RemoveMicrosoftCopilotApp` policy, hit the Dev and Beta channels this week as part of the Windows Insider programme. The deployment covers systems updated to Build 26220.7535. The mechanism is designed to work with key device fleet management tools such as Microsoft Intune and System Center Configuration Manager (SCCM). Once the policy is activated, the operating system automatically initiates the process of uninstalling the AI assistant, allowing a central ‘clean-up’ of the working environment without the need for intervention on individual workstations.

    It is worth noting, however, that Microsoft is taking a surgical rather than a radical approach. The policy is not a tool to block AI entirely, but rather a mechanism to optimise resources. Removal will only occur in a specific scenario: both the basic version of Copilot and Microsoft 365 Copilot are present on the device, the app was not manually installed by the employee, and the tool has been inactive for the past 28 days. This approach suggests that the aim is to remove ‘dead software’ rather than to restrict access for active users, who, incidentally, retain the ability to reinstall the tool.
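
    The gating logic is compact enough to express directly. A minimal sketch of the removal conditions as described above; the field names are illustrative, not Microsoft’s implementation:

    ```python
    from datetime import datetime, timedelta

    def copilot_app_removable(
        has_copilot_app: bool,
        has_m365_copilot_app: bool,
        user_installed: bool,
        last_used: datetime | None,
    ) -> bool:
        """True only in the narrow scenario the policy targets: both apps
        present, not manually installed, and idle for the past 28 days."""
        idle_cutoff = datetime.now() - timedelta(days=28)
        never_or_stale = last_used is None or last_used < idle_cutoff
        return (
            has_copilot_app
            and has_m365_copilot_app
            and not user_installed
            and never_or_stale
        )

    # An actively used install is kept:
    print(copilot_app_removable(True, True, False, datetime.now()))  # False
    ```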

    In addition to the AI policy changes, the latest build also brings stability fixes, including a solution to problems with File Explorer hanging and with the Windows Update settings menu. However, it is the new application management policy that is the most important signal for the B2B market. The feature remains in testing for now, and no date has yet been set for its widespread release outside the Insider programme, but for IT directors it is a clear sign of a return to more granular control of systems in the generative AI era.

  • Still Edge, or already Copilot? Microsoft blurs boundaries in new interface

    Microsoft is quietly redefining the visual identity of its browser, which could herald a wider shift in design strategy in Redmond. The latest developer versions of Edge (the Canary and Dev channels) see a deep integration of the aesthetics familiar from the Copilot app.

    As Windows Central notes, this is not just a superficial overlay, but a systemic redesign of the interface.

    The experimental design introduces the AI assistant’s signature rounded corners, specific typography and a refreshed colour palette. These changes include key navigation elements, including the context menu, new tab page and settings panel.

    Importantly, the new style is being implemented as the default standard, regardless of whether the user is actively using the AI features.

    The move suggests that Microsoft is moving away from strict adherence to the Fluent Design principles associated with Windows 11 in favour of visual consistency built around the Copilot brand. AI is no longer just a feature; it is becoming the foundation of the UX.

    Currently, the refreshed interface remains in the testing phase and a date for its implementation into the stable version of Edge has not yet been confirmed.

  • Platformisation of cybersecurity in 2026: Strategic necessity or fashionable buzzword?

    For the past decade, the IT world has followed one almost dogmatic rule: buy best-in-class solutions. Chief Information Security Officers (CISOs) built their digital fortresses by selecting individual components like building blocks, where the firewall came from one vendor, the EDR system from another and identity protection from yet another.

    This strategy, known as ‘Best-of-Breed’, has led to a situation where, by 2025, the average large enterprise is managing a complex ecosystem of dozens and sometimes almost a hundred different security tools.

    Today, at the beginning of 2026, we can confidently declare this model bankrupt. It was not the inefficiency of individual applications that killed it, but their fundamental inability to work together at the pace imposed by modern artificial intelligence. Platformisation of cybersecurity is the trend redefining not only IT architecture, but also the financial balance sheets of the world’s largest corporations.

    Response to the demise of the ‘Best-of-Breed’ model

    Until two years ago, having specialised niche tools was a point of pride, indicative of a company’s technological sophistication. Today, it has become an operational nightmare, symbolised by the phenomenon of alert fatigue.

    Security Operations Centres are drowning in information noise as isolated systems generate thousands of notifications that do not add up to a logical whole. Until now, the security analyst has wasted much of his or her working time just switching between consoles and trying to correlate facts manually. When faced with modern attacks using autonomous artificial intelligence, which can perform reconnaissance and attack in fractions of seconds, a human trying to combine data in spreadsheets stands no chance.

    That is why, in 2026, the modern security leader understands that the platformisation of cybersecurity is not an option but an operational necessity, and looks for the most tightly integrated ecosystem, not individual gadgets.

    Why has the platformisation of cybersecurity become a financial and technological priority?

    This transformation is happening now with such rapidity because three key pressure vectors are converging, the most important of which is technology. To implement the AI-based autonomous defence promised by suppliers, the algorithm must have access to the full context of events. It needs to see everything, from the CEO’s laptop to the cloud servers to the gateway logs. If the data is locked up in dozens of separate databases, artificial intelligence remains blind and useless. The platformisation of cybersecurity breaks down these walls, creating a single coherent data lake on which algorithms can effectively identify and neutralise threats.

    Financial pressures are an equally important factor. The end of the era of cheap money has forced companies to brutally review costs, and maintaining relationships with dozens of suppliers means a proliferation of negotiation processes, invoices and costly API integrations.

    By migrating to a single platform, the total cost of technology ownership can be reduced by up to a third, making the CFO the security department’s greatest ally in the consolidation process in 2026.

    Additionally, the market faces a skills gap: finding an expert with deep knowledge of a dozen niche technologies borders on the miraculous. It is much easier and cheaper to source an engineer certified in a single ecosystem, so the platformisation of cybersecurity also eases staffing problems.

    The technological dimension of cybersecurity platformisation: one agent, full automation

    The modern platform is more than just a bundle of products sold at a discount. It is a fundamental change in architecture, with a unified agent as a key innovation. We remember the days when corporate laptops lost performance under the weight of many different security programmes running in the background.

    Effective cybersecurity platformisation involves installing a single lightweight sensor that performs multiple functions simultaneously, from antivirus to vulnerability scanner. The second pillar is the automation built into the core of the system. The platform gains the ability to self-heal, which in practice means that when a suspicious connection is detected, the system is able to automatically disconnect the device from the network and reset the user’s privileges, without involving a human in the process. The analyst only receives a report of the action taken, allowing him or her to focus on more complex tasks.
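
    In code, the pattern is a containment-first event handler. A minimal sketch of the loop described above; every function here is a hypothetical stand-in for a real platform API (EDR, network access control, identity provider), with no specific vendor SDK implied:

    ```python
    def isolate_device(device_id: str) -> None:
        print(f"[EDR] quarantining {device_id}")          # placeholder for an EDR call

    def reset_privileges(user_id: str) -> None:
        print(f"[IdP] revoking sessions for {user_id}")   # placeholder for an IdP call

    def notify_analyst(report: dict) -> None:
        print(f"[SOC] action report: {report}")           # human reviews after the fact

    def on_suspicious_connection(device_id: str, user_id: str) -> dict:
        """Contain first, notify the human afterwards."""
        isolate_device(device_id)
        reset_privileges(user_id)
        report = {"device": device_id, "user": user_id,
                  "actions": ["isolated", "privileges_reset"]}
        notify_analyst(report)
        return report

    on_suspicious_connection("laptop-0042", "jdoe")
    ```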

    How does the platformisation of cybersecurity change the supplier market in 2026?

    The security market in 2026 is beginning to resemble the operating systems market of the 1990s, where the winner takes all. We are seeing an aggressive battle between the main camps, including the hegemon in the form of Microsoft and contenders such as Palo Alto Networks and CrowdStrike. Microsoft is capitalising on its dominance in the office and cloud environment by offering integration that is hard to compete with on price.

    Competitors, on the other hand, are vying for the title of independent alternative, making a series of acquisitions to fill gaps in their portfolio and not functionally diverge from the leader. In this battle of the giants, innovative start-ups offering point solutions suffer the most.

    Corporate customers see that platformisation of cybersecurity has greater long-term benefits, so they are forgoing niche innovations, preferring to wait until their main platform implements similar functionality as part of a standard upgrade.

    Operational risks

    Enthusiasm about simplifying the architecture cannot obscure the systemic risks, which are being talked about increasingly loudly behind the scenes. The biggest risk is total dependence on a single supplier. Once a company has implemented full cybersecurity platformisation, it becomes hostage to the vendor’s pricing policy, and withdrawing from such a relationship is a multi-year and costly process.

    We are already seeing providers who have gained a dominant position start to raise subscription prices. The second risk is creating a single point of failure. If a platform update contains a bug, as has happened in the past, a company can lose protection on every front simultaneously.

    There is also a quality dilemma: platforms are usually good at everything but rarely best at any single aspect. Managers face the choice of accepting a module that is marginally less effective than the market leader’s in exchange for full integration. In 2026, they increasingly accept that trade, as better visibility of the whole compensates for any scoring deficiencies.

    Platformisation of cybersecurity as the foundation of a new strategy

    The trend seen in 2026 is not a passing fad, but a logical response to the evolution of digital threats. In a world where attacks are automated and supported by algorithms, defence systems need to operate as one cohesive organism, not a collection of loose organs.

    The clear lesson for business leaders is that the platformisation of cybersecurity is making simplicity the new benchmark for effective protection. A complex infrastructure is an ideal environment for hackers, so it is high time to sort out the technological clutter, consolidate resources and prepare for a war of algorithms in which the winner will be the one with better data and a more consistent picture, not the one who has accumulated more expensive gadgets.

  • More than instant messaging. A new definition of collaboration according to Microsoft

    Attendees at this year’s Microsoft Ignite conference may have had the mistaken impression that Redmond’s flagship messaging service had given way to other technology initiatives. The number of stage announcements directly related to Teams was surprisingly modest, raising questions behind the scenes about the giant’s priorities. The business reality, however, looks very different. Details published simultaneously on the company’s tech blogs show that what we saw during the main presentations was just the tip of the iceberg. Teams not only remains a priority, but is undergoing one of the most comprehensive evolutions in its history.

    Underpinning this change is ubiquitous artificial intelligence, which, in a ‘platform penetration’ model, becomes the lifeblood of the entire ecosystem. Copilot in Teams gains new analytical powers, allowing it to process chat history, meeting transcripts and calendar data to generate contextual summaries and conclusions. Significantly for the partner channel, Microsoft has launched a public preview of features to improve secure collaboration with external parties, addressing the real needs of companies working in hybrid models.

    The most interesting conclusion from Ignite 2025, however, goes beyond the application layer itself. Microsoft is clearly adopting a holistic approach to collaboration, integrating three pillars: software, hardware (such as dedicated video bars) and advanced security. An example of the latter is the new architecture for blocking executables and other threats before they even reach the communication channels. This is a strategy in direct competition with the model adopted by Cisco, where deep integration of hardware and software is the strength of the offering.

    This positioning puts the competition in a challenging situation. Although players such as Zoom and RingCentral continue to fight for the market, only Google, with its Workspace suite, is able to offer a similar synergy of productivity and communication tools. Microsoft, by combining the capabilities of Teams with the power of Office and imbuing the whole thing with artificial intelligence, is cementing its leadership position.

  • SAS brings synthetic data generator to Microsoft ecosystem

    The strategic partnership between SAS and Microsoft is entering a new phase, focusing on one of the biggest challenges of modern AI: accessing data while maintaining privacy. SAS has made its SAS Data Maker tool available in the Microsoft Azure Marketplace, offering companies a solution to the shortage of secure training data.

    The move is a direct result of SAS’s 2024 acquisition of UK-based Hazy, a pioneer in synthetic data generation. The integration of Hazy’s technology has allowed the US analytics giant to create a tool that replicates the statistical, relational and temporal properties of real-world datasets without revealing any sensitive information. In practice, this means that organisations can train and test new artificial intelligence models, bypassing legal and ethical barriers related to the GDPR or corporate confidentiality.

    SAS Data Maker stands out from the competition with its ‘no-code’ approach. Users can generate data using a graphical interface, which democratises access to advanced techniques such as differential privacy without the need for programmers. The software supports complex structures, including time series processing and operations on multiple tables simultaneously. An important element of the platform is the built-in quality control mechanisms, which allow the fidelity of synthetic data to be visually compared with its original counterparts before being used in a production environment.
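
    Differential privacy itself rests on a compact idea: add noise calibrated to how much one individual can change a query’s answer. A minimal sketch of the textbook Laplace mechanism for a counting query; this illustrates the technique generically and says nothing about Data Maker’s internals:

    ```python
    import numpy as np

    def dp_count(true_count: int, epsilon: float) -> float:
        """Laplace mechanism for a count (sensitivity 1): smaller epsilon
        means stronger privacy and a noisier released value."""
        return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

    # Release an aggregate without exposing whether any single record is in it.
    print(dp_count(true_count=1337, epsilon=0.5))
    ```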

    The debut of Data Maker in Microsoft’s marketplace is another piece of a wider puzzle. Azure already offers SAS Viya Workbench, a development environment that supports model building in SAS, R and Python. The presence of both tools in a single cloud ecosystem is expected to make it easier for enterprises to integrate new solutions into existing workflows without costly technical adjustments.

    As announced, however, the exclusivity for Microsoft is temporary. SAS plans to make Data Maker available on other cloud platforms and to integrate it more deeply with SAS’ flagship Viya analytics platform, confirming the company’s desire to maintain flexibility in multi-cloud environments.

  • Microsoft Poland is tightening its grip. Cichocka and Albin join the board

    Microsoft is tightening its grip on the Polish market with the announcement of significant changes to the local leadership team. At a time when the IT industry is moving from a phase of simple digitisation to complex implementations of artificial intelligence, the Polish branch of the Redmond giant is relying on proven staff. The management team, led by CEO Iwona Szylar, is joined by Kamila Cichocka, taking on the position of Chief Operating Officer (COO), and Rafał Albin, who takes over as Director of Customer Development.

    Kamila Cichocka’s appointment as COO signals an emphasis on operational excellence at a crucial time for the company, shortly after the opening of the Polish cloud region. Cichocka, who has been in the technology industry for more than two decades, brings a rare combination of sales and marketing perspective to the role. For the past 13 years at Microsoft, she has managed, among other things, marketing in the Central Europe region, with responsibility for strategy across dozens of markets. Her renewed, tight focus on operations in Poland is aimed at streamlining internal processes and building new business models. As she points out, technology is now to be the catalyst for change, and her priority becomes building processes that support efficiency, which in the corporate jigsaw is key to maintaining growth momentum.

    In parallel, Rafał Albin’s assumption of the role of Director of Customer Development is a clear signal to the enterprise market and the partner ecosystem. Albin is a Microsoft veteran with 16 years of experience, having worked his way up almost every rung of the management ladder – from consumer product sales in Romania, to partner channel management, to marketing and operations departments. His experience working with partners in the CEE region will be key in his new role, where the main challenge is to turn cloud and AI technologies into real business resilience for customers. Albin is tasked not only with selling technology, but more importantly with supporting the building of competitive advantages for Polish companies, which requires a deep understanding of the specifics of local business.

    Both promotions are part of a broader strategy to stabilise Microsoft’s management team in Poland. Rather than looking externally for talent, the company is tapping into the deep institutional memory of its leaders to more efficiently navigate the era of data-driven transformation and artificial intelligence.

  • The end of the wild west in the cloud. European Union takes 19 IT giants under the microscope

    European Union regulators made an unprecedented move on Tuesday, naming 19 companies – including Amazon Web Services, Google Cloud and Microsoft – as critical service providers for European banking. The decision fundamentally changes the balance of power between Big Tech and financial supervision, moving the relationship from a partnership to a strictly regulated level.

    The move is a direct consequence of the Digital Operational Resilience Act (DORA) regulation coming into force. The new legislation gives European supervisory authorities (EBA, EIOPA, ESMA) the power to directly control technology companies, which until now have only been accountable to their business customers. The regulators make no secret of the fact that their main aim is to mitigate systemic risk. In this era of widespread digitalisation, a failure at one of the leading cloud providers could paralyse a significant part of the European banking system, triggering a domino effect with difficult-to-quantify consequences.

    The list of players targeted by the new surveillance regime is diverse, showing how deeply technology has penetrated finance. In addition to the cloud ‘big three’ (AWS, Google, Microsoft), IBM, market data providers such as Bloomberg and the London Stock Exchange Group (LSEG), as well as telecoms operators, including Orange, and consultancies such as Tata Consultancy Services have also been targeted. Each of these entities will now have to prove that they have an adequate risk management framework in place and that their infrastructure is resilient to cyber attacks and technical failures.

    The industry reaction to the announcement was measured and diplomatic, suggesting that the tech giants had been preparing for this scenario for a long time. Representatives from Microsoft and Google Cloud immediately declared their full willingness to cooperate, emphasising their commitment to cyber security. The LSEG, in turn, openly welcomed the new designation, seeing it as a confirmation of its key role in the ecosystem. Silence has so far been maintained by Bloomberg and Orange, which may indicate ongoing internal analyses of the new regulatory obligations.

    Brussels’ decision is part of a wider global trend of tightening control over critical infrastructure. The European Central Bank explicitly lists technological disruption alongside geopolitical tensions as the main threats to the sector. Similar steps are being taken by the UK, although the legislative process there is lagging behind that of the EU – London does not plan to designate its critical entities until next year. Europe is thus once again becoming a testing ground for new regulatory standards in the technology world.

  • The end of OpenAI exclusivity? Microsoft diversifies portfolio with investment in Anthropic

    The exclusivity in Microsoft’s relationship with OpenAI has officially come to an end. On Tuesday, the Redmond giant, in a rare alliance with Nvidia, sealed a new configuration at the top of the tech industry by investing in Anthropic, a major rival to the maker of ChatGPT. As part of the complex deal, the startup, recently valued at $183 billion, has committed to spend as much as $30 billion on Microsoft’s Azure cloud infrastructure. In return, Nvidia will offer a capital injection of up to $10bn and Microsoft will contribute another $5bn, cementing its role as the ‘armourer’ in the AI war.

    For Satya Nadella, this step is as pragmatic as it is necessary. Although Microsoft’s CEO asserts that OpenAI remains a “critical partner”, the market reads the move unequivocally: Redmond no longer intends to rely on the success of a single model company. Anthropic, which is aggressively capturing the enterprise market and already serves more than 300,000 business customers, becomes the ideal hedge. The deal will bring Claude models to Azure AI Foundry, making Anthropic the only frontier model provider available from all three major cloud operators (AWS, Google and Microsoft).

    The decision comes at a time when OpenAI itself, led by Sam Altman, is seeking greater independence after a major restructuring and a move away from the non-profit model. The startup recently announced a $38 billion cloud deal with Amazon and has racked up ambitious infrastructure plans worth $1.4 trillion. In the face of such astronomical costs – with the industry estimating the cost of 1 gigawatt of AI computing power at $20-25 billion – corporate loyalty gives way to mathematics.

    The main objective of the partnership is to systemically reduce the dependence of the AI economy on OpenAI. However, the structure of these deals is of growing concern to investors. What we have is a closed loop of capital: tech giants invest in startups, which then return these funds in the form of cloud fees, artificially pumping up donor revenues. For Nvidia, it’s an ideal arrangement – whether Claude or ChatGPT wins, demand for Grace Blackwell chips remains insatiable, although the risk of a bubble in the sector is becoming increasingly tangible.

  • Microsoft accelerates AI race – and burns billions of dollars on infrastructure

    Microsoft closed the first quarter of its 2026 fiscal year with a result that, on the one hand, shows that the engines driving its growth – cloud and artificial intelligence – are working, but on the other hand highlights cost and infrastructure challenges. Revenues reached USD 77.7 billion, an increase of 18% year-on-year and above market expectations.

    At the same time, capital expenditure reached a record high of almost USD 35 billion during the quarter, and the company warned that this would be even higher in the current fiscal year.

    It is worth looking at two key aspects – the scale of cloud business growth and the increasing burden of infrastructure costs.

    The ‘Azure and other cloud services’ division reported growth of 40 per cent year-on-year, beating analyst expectations (~38.4 per cent).

    Total revenue for the cloud segment (Microsoft Azure together with other cloud services) was USD 49.1 billion, up 26% year-on-year.

    Importantly, Microsoft’s management indicated that growth could have been even higher had it not been for limitations on the availability of computing power and infrastructure, which the company expects to remain in place until at least June 2026.

    Regardless of the strong performance, it is capital expenditure (CAPEX) that raises the biggest questions. Microsoft is investing heavily in building data centres, purchasing processors, accelerators (GPUs) and increasing power – mainly in response to the demand generated by AI solutions.

    In the release, the company points out that while spending is currently high, it is leading to an increase in future obligations – the backlog of contracts (‘commercial remaining performance obligation’) has reached USD 392 billion, a 51% increase year-on-year.

    As a result – despite strong financial results – Microsoft’s shares fell by around 3-4% after the close of the session, as investors began to weigh the speed and scale of the company’s investments against the risk to their return.

    In the context of the technology market and the IT sales channel, two conclusions are key. First – the cloud and AI sector continues to generate real demand, which opens up very specific opportunities for partners, integrators and infrastructure providers. Second – the scale and pace of investment means that the cost of entry is increasing; someone has to fund it and someone has to deliver the results to return the capital – and here there can be tensions in business models.

    Microsoft is demonstrating that revenue growth is accompanied by investment on a gigantic scale – which on the one hand builds competitive advantage, but on the other raises questions about the rate of return and whether all participants in the AI/cloud ecosystem will be able to benefit. For the IT channel, this is the moment to ask where the integrator or reseller sits in this value chain: at the infrastructure level, or at the level of higher-layer services, where margins may be safer as cost pressures mount.

    In short, Microsoft confirms: cloud and AI are pushing growth. But at the same time, investments are gaining scale to the point where they are transforming the business model from a one-off deployment to a long-term capital commitment. And this sends a signal to both customers and industry partners: it’s time not just to sell ‘service X’, but to build a business around sustainability and scale.

  • Companies need AI to reinforce Zero Trust strategy

    DXC Technology and Microsoft’s study ‘The Trust Report: From Risk Management to Strategic Resilience in Cybersecurity’ involved more than a hundred cybersecurity experts from four continents. The results show unequivocally: the Zero Trust model is becoming the cornerstone of data protection, but only a small proportion of organisations are fully utilising the capabilities of artificial intelligence in this area.

    Among the companies analysed, as many as 83% of those that implemented the Zero Trust model reported a reduction in security incidents and a decrease in remediation and support costs. Meanwhile, only 30% of organisations reported using AI-based tools – for user authentication, for example – making it clear that the potential of AI in cyber security remains largely untapped.

    The report identifies key barriers: 66% of organisations cite outdated IT systems as the main obstacle to implementing Zero Trust, and 72% say it is new, rapidly evolving threats that motivate them to continuously improve Zero Trust policies and practices. Importantly – more than half of companies acknowledge that Zero Trust also brings unexpected benefits, including improving the user experience while strengthening security.

    Three conclusions can be drawn from the perspective of the technology market in Poland. First: the implementation of Zero Trust is no longer an option, but a condition for effective protection – organisations that have implemented it are realistically reducing risks and costs. Second: although AI features constantly in the discourse, in practice only one in three companies declares using it in the security area – which creates space for technology and service providers to occupy a strategic position. Third: the integration of identity, devices, networks, applications and data into a coherent architecture – encompassing not only the technology solutions, but also the security culture and management commitment – is becoming a key element of deployments.

    Commenting on the results, especially in the context of Poland and the CEE region, it is worth emphasising that the transformation towards Zero Trust requires both an upgrade of IT resources and a change in organisational approach. In a region where infrastructure and procedures often remain from an earlier era, the legacy-systems barrier can be more acute than in developed economies. Meanwhile, cybercriminals – aided by AI tools – are increasingly circumventing conventional security, which only adds to the pressure to accelerate change. The DXC and Microsoft report makes it clear that digital security has become a strategic element – not only is the IT department expected to act, but decisions to build resilience must be made at board level.

    The Zero Trust model is gaining stature as a fundamental standard in cyber security. But its full value will only be achieved when combined with AI technologies and in an environment where cultural change and modernisation of the IT environment go hand in hand. For the IT channel companies in Poland, it is a signal that offering solutions and services in the zero trust + AI model can become an important differentiator – especially where the competition still focuses exclusively on classic security solutions.

  • Poland in the TOP 10 most frequently attacked countries in Europe – Microsoft report

    According to the latest Microsoft Digital Defense Report, Poland ranked among the top ten most frequently attacked countries in Europe – 10th on the continent and 27th globally in terms of the number of users affected by cyber attacks. Even more worrying are the figures for the activities of foreign-sponsored groups: in this category, Poland ranks third in Europe, just behind Ukraine and the UK. This is confirmation that Poland has become a permanent fixture on the digital threat map, especially in the context of information warfare and the activity of Russian groups.

    The scale at which cybercrime operates today exceeds the capabilities of traditional defences. Microsoft analyses 100 trillion security signals every day – 28 per cent more than the year before. A global team of 34,000 engineers and 15,000 partners is responsible for systems that identify 38 million identity incidents, scan 5 billion emails and block 4.5 million malicious files. These figures show that cyber security has become a global industry – and a race driven by artificial intelligence.

    State actors: cyber warfare without declaration

    The most serious threats today come from government-sponsored groups. Russian entities, according to Microsoft Threat Intelligence, have increased activity against NATO countries by 25 per cent, targeting not only government institutions, but also the media, energy sector, IT and NGOs. In the background of each wave of attacks, the same goal remains: to destabilise or obtain strategic information.

    Attackers are aided by the fact that many key institutions – hospitals, universities, local authorities – have limited budgets and use outdated infrastructure. The growing number of ransomware attacks demonstrates the brutal logic of cybercriminals: system paralysis often makes the victim pay.

    AI in the hands of both parties

    A new element in the equation is generative artificial intelligence. On the one hand, it makes it easier for cybercriminals to create more convincing phishing campaigns, generate fake content or automate attacks. On the other, it gives defenders the tools to respond faster than ever.

    “Digital technology is helping to level the playing field between perpetrators of attacks and the organisations and users who defend against them. (…) However, it is important to remember that the same technology that supports us is also in the hands of cybercriminals. This reshapes the threat landscape and does not allow us to rest on our laurels,” reminds Krzysztof Malesa of Microsoft’s Polish branch. This is the essence of this year’s report: AI is not a choice, it is a necessity.

    Cyber security as a management issue

    The report makes it clear: cyber security can no longer be just the domain of IT departments. Boards of directors should understand the risks associated with AI and prepare organisations for the coming era of post-quantum cryptography. Future digital conflicts will be resolved not only in server rooms, but also in boardrooms.

    Shared responsibility

    The most sensitive sectors – health, education, administration – will not defend themselves alone. What is needed is public-private cooperation, information sharing and the creation of a coherent legal framework that will be a real deterrent against state actors.

    As a country on the frontline of digital confrontation, Poland must invest not only in technology but in institutional resilience. Because in a world where a cyberattack can paralyse a hospital, digital security is becoming public security.

  • Is Microsoft blocking competition? Qwant’s complaint and the French authority’s decision

    French search engine Qwant has found itself on a collision course with Microsoft – but all indications are that the antitrust regulator there will reject its complaint. It is a seemingly local dispute, but it illustrates well how difficult it is to challenge the giants controlling the search infrastructure, even in Europe, which waves the banner of Big Tech regulation.

    Qwant, which has been using Bing’s results and infrastructure for years, has accused Microsoft of restrictive practices: forcing exclusivity terms, hindering the development of its own search technology and favouritism in the allocation of advertising. In short, it argues that without independent access to data and advertising, it is impossible to build a viable alternative.

    However, the French antitrust authority was said to have suggested at a hearing in June that the application should be rejected. A decision is expected within two weeks, although Qwant promises a further fight in court or at EU level. Microsoft responds briefly: the allegations are baseless and the search market is dominated by Google anyway.

    And this is where the fundamental problem arises – who really controls the search market? Google has more than a 90% share in Europe, but it is Bing that remains the key ‘engine’ provider for most of the smaller players: Qwant, DuckDuckGo, Ecosia or Lilo. Search syndication is a quiet pillar of the market – a few global providers decide who has the data, ads and revenue.
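
    In code, that dependence is stark: a syndicated engine’s backend is little more than a thin client over the provider’s index. A minimal sketch against the long-documented Bing Web Search API v7 (the key is a placeholder, and Microsoft has since curtailed public access to this API):

    ```python
    import requests

    BING_ENDPOINT = "https://api.bing.microsoft.com/v7.0/search"

    def syndicated_search(query: str, api_key: str) -> list[dict]:
        resp = requests.get(
            BING_ENDPOINT,
            headers={"Ocp-Apim-Subscription-Key": api_key},
            params={"q": query, "mkt": "fr-FR"},
            timeout=10,
        )
        resp.raise_for_status()
        # Whoever controls this JSON controls ranking, data and, indirectly, revenue.
        return resp.json().get("webPages", {}).get("value", [])
    ```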

    Qwant is unable to compete without Microsoft, but at the same time accuses it of blocking independence. Such a paradox makes the question of competition no longer about interfaces or privacy, but about infrastructure: who controls access to the web index, to the ad network, to AI models?

    In the context of European regulation – from the DMA to the planned algorithm transparency rules – the Qwant case could prove to be a test case. If regulators find that reliance on a dominant provider does not create a risk of abuse, smaller search engines will remain customers, not competitors.

    Whatever the verdict, one thing is already clear: the era of ‘neutral access’ to search is over. In the age of AI and proprietary language models, the question is whether Europe will build its own infrastructure or merely use American interfaces. Qwant is not just a French problem – it is a test of digital sovereignty in practice.

  • Europe’s answer to hyperscalers? Nscale to supply 200,000 Nvidia chips to Microsoft

    UK start-up Nscale, which specialises in infrastructure for artificial intelligence, has signed an extended deal with Microsoft. As part of the collaboration, the company will supply around 200,000 Nvidia chips to data centres in the US and Europe – one of the largest contracts of its kind announced publicly. According to the Financial Times, the deal could be worth up to $14bn, although Nscale does not comment on financial details.

    It is a move that is part of the global race for computing power, which today defines the real position of AI players. Microsoft – a key investor in OpenAI – is intensively expanding its own infrastructure to become independent of external providers and secure resources for years to come. For Nscale, in turn, this is an opportunity to join the exclusive group of Big Tech infrastructure partners.

    Deliveries will begin in 2025 and will include new data centres in Texas and Portugal, among others. Dell Technologies will also be involved in the contract, suggesting that Nscale will act as an AI infrastructure integrator rather than just a hardware supplier. This follows an earlier project in Norway, where a joint venture between Nscale and Aker is preparing an AI campus with 52,000 Nvidia GPUs for Microsoft.

    Nscale, which in September raised $1.1bn in funding from Aker and Nokia, among others, has consistently positioned itself as the European answer to US-based hyperscalers. The ambition is clear: to build a continental AI infrastructure at scale to rival AWS and Google Cloud.

    In the background, the strategic question remains: will Europe manage to maintain control of key AI resources if the US giants remain the main beneficiaries? The Microsoft contract strengthens Nscale, but at the same time shows how strong the AI market’s dependence on capital and demand from the US is.

  • AI PC: real revolution or the biggest marketing bubble of the decade?

    The PC market, after a period of pandemic-era revival, had stagnated. Innovation seemed merely cosmetic and hardware replacement cycles were lengthening. In this landscape, however, a powerful new catalyst for change has emerged: the AI PC. This is not another fashionable buzzword, but the harbinger of a fundamental transformation in PC architecture that is set to redefine the role of the PC in our lives and initiate a massive hardware replacement cycle.

    But what exactly is an AI PC? It is not simply a computer with access to cloud-based AI services. The definition goes back to the silicon itself. A true AI PC is a device equipped with a specialised, three-element computing architecture: a traditional CPU for general tasks, a powerful GPU for parallel processing and, crucially, an NPU (Neural Processing Unit). It is the NPU, a dedicated and energy-efficient accelerator, that is at the heart of the revolution, enabling AI tasks to be processed efficiently directly on the device, without burdening other components.
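
    For developers, the three-way split is visible at the runtime level: an inference engine enumerates which accelerators it can target on a given machine. A minimal sketch using ONNX Runtime (provider names vary by build; treating the named providers as ‘NPU-like’ is a simplification):

    ```python
    import onnxruntime as ort

    available = set(ort.get_available_providers())
    print(sorted(available))  # e.g. ['CPUExecutionProvider', 'DmlExecutionProvider', ...]

    # QNN targets Qualcomm NPUs, OpenVINO Intel accelerators, Vitis AI AMD ones.
    npu_like = {"QNNExecutionProvider", "OpenVINOExecutionProvider", "VitisAIExecutionProvider"}
    print("Local NPU path available:", bool(npu_like & available))
    ```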

    The key parameter here is performance measured in TOPS (trillions of operations per second). The turning point was Microsoft’s establishment of a threshold of at least 40 TOPS for the NPU itself as a condition for ‘Copilot+ PC’ certification. This strategic move redefined the market, forcing the entire industry into a race to exceed the imposed threshold.
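
    The TOPS figure is simple arithmetic over the accelerator’s multiply-accumulate (MAC) throughput. A quick sketch of the usual INT8 convention, in which each MAC counts as two operations; the unit count and clock below are illustrative, not a real chip’s specification:

    ```python
    def tops(mac_units: int, clock_ghz: float, ops_per_mac: int = 2) -> float:
        """TOPS = MAC units x clock (Hz) x ops per MAC / 1e12."""
        return mac_units * clock_ghz * 1e9 * ops_per_mac / 1e12

    print(f"{tops(16_384, 1.4):.1f} TOPS")  # ~45.9 TOPS, clearing the 40 TOPS bar
    ```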

    This brings us to the main thesis: AI PC is not just a hardware evolution, but a fundamental paradigm shift. We are witnessing a shift from a fully cloud-dependent architecture to a hybrid model in which AI computing power is strategically dispersed between data centres and the end device. This shift carries profound implications for cost, privacy and the entire IT ecosystem.

    Market drivers: why now?

    The sudden emergence of the AI PC category is the result of a confluence of three powerful forces that made moving AI to the device not only possible, but necessary.

    A technological necessity: privacy, security and latency

    In an era of increasing awareness of data protection, cloud computing raises concerns. AI PC addresses these challenges by offering analysis of sensitive data directly on the device, enhancing privacy and security.

    What’s more, for real-time applications like live translation, the elimination of delays (latencies) associated with communication with the cloud is crucial to the quality of the user experience.

    Economic impetus: the hidden cost of cloud AI

    The boom in generative AI has revealed a brutal economic truth: while training models is a huge but one-off expense, the real budget ‘eater’ is the cost of inference, i.e. actually using the models.

    Every query to cloud-based AI generates a cost that, at scale, becomes difficult to predict and is a barrier to enterprise adoption of the technology. By moving some of the computing to the end-device, technology giants such as Microsoft are strategically passing on some of the rising operational costs to customers who are investing in new, more expensive hardware.
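
    A back-of-envelope calculation shows why inference, not training, dominates the bill; every number below is a hypothetical placeholder, not vendor pricing:

    ```python
    employees = 10_000
    queries_per_day = 20
    tokens_per_query = 1_500
    usd_per_million_tokens = 5.0
    working_days = 250

    annual_tokens = employees * queries_per_day * tokens_per_query * working_days
    annual_cost = annual_tokens / 1e6 * usd_per_million_tokens
    print(f"~${annual_cost:,.0f} per year")  # ~$375,000, and recurring, unlike training
    ```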

    Market maturity: the boom effect of generative AI

    The explosion in popularity of tools such as ChatGPT has fundamentally changed user expectations. Consumers and business employees alike now expect AI to be an integral part of their everyday tools. The timing coincides perfectly with the natural cycle of post-pandemic hardware replacement and the impending end of Windows 10 support in October 2025, creating the perfect ‘window’ for the introduction of a new product category.

    Battlefield: architects of the new PC era

    The entry of AI PC into the market has sparked the most intense rivalry in the industry for years, with traditional and new players facing off against each other.

    Chipmakers: the war of architectures

    The competition is no longer just between Intel (Core Ultra) and AMD (Ryzen AI) within the same x86 architecture. The real breakthrough is the entry of Qualcomm (Snapdragon X Elite), which brings ARM architecture to mainstream Windows PCs, promising unprecedented energy efficiency. This is the biggest challenge to the ‘Wintel’ duopoly (Windows + Intel) in decades, initiating a fundamental war of architectures – x86 versus ARM – on the same system platform.

    Although Microsoft has created an advanced emulation layer, history teaches that this always involves compromises in performance, especially in games and specialised software. It is also worth remembering that the pioneer in this field is Apple, which has been integrating dedicated neural engines into its processors since 2017, exploiting the advantage of full control over hardware and software.

    Software giants: Microsoft as market conductor

    In this revolution, it is not hardware manufacturers but the software giant that is dealing the cards. Microsoft, through Windows and the new Copilot+ feature category, has become the main conductor of the market. By introducing exclusive tools such as Recall (photographic computer memory) or Cocreator (real-time image generation), the company has created real demand for hardware capable of running them locally. Microsoft’s strategy is clear: transform the operating system into a proactive, intelligent assistant.

    The market in figures: growth forecasts and potential

    Market analysts agree: we are standing at the threshold of an exponential increase in AI PC adoption. Although short-term forecasts are being adjusted due to macroeconomic uncertainty, the long-term trend is clear.

    • Canalys predicts that AI PC shipments will reach 48 million units in 2024 (18% of the market) and 205 million by 2028, a compound annual growth rate (CAGR) of 44% (a quick check of this figure follows the list below).
    • Gartner forecasts that AI PC market share will reach 31% in 2025 and exceed 54% in 2026.
    • IDC estimates that AI PCs will account for nearly 60% of the total market by 2027, with a CAGR of 42.1% between 2023 and 2028.
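
    The growth rate implied by the Canalys numbers checks out. A quick verification of the compound annual growth rate over the four years from 2024 to 2028:

    ```python
    # 48M units (2024) growing to 205M (2028) implies roughly 44% CAGR.
    start, end, years = 48e6, 205e6, 4
    cagr = (end / start) ** (1 / years) - 1
    print(f"CAGR: {cagr:.1%}")  # 43.8%, in line with the quoted 44%
    ```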

    Projected Share of AI PCs in Total PC Sales (2024-2028)

    • 2024: 18%
    • 2025: 35%
    • 2026: 55%
    • 2027: 60%
    • 2028: 75%

    This dynamic adoption curve shows that AI PC is not a fad, but a technological standard that will dominate the market before the end of this decade.

    Strategic implications: opportunities and threats

    The move to an AI PC architecture has fundamental implications for the entire IT ecosystem.

    For business: productivity versus security

    The promise of AI PC for business is to leapfrog productivity by automating routine tasks. However, the revolution comes at a price. AI PCs will be 10-15% more expensive, requiring IT departments to analyse their total cost of ownership (TCO). The biggest challenge, however, is security.

    Case study: Microsoft Recall

    Nothing illustrates this better than the controversy surrounding the Microsoft Recall feature. Designed as a computer’s ‘photographic memory’, the original version stored the user’s entire activity history in an unencrypted database. This meant that any malware could steal a victim’s entire digital life in seconds. Public criticism forced Microsoft to redesign the feature, making it disabled by default and adding advanced encryption. The Recall saga is a fundamental lesson: local processing creates powerful new attack vectors, and the promise of privacy is empty without a robust security architecture.

    For the software market and the risk of a “marketing bubble”

    For developers, the emergence of the NPU is an opportunity to create a new generation of ‘AI-native’ applications. On the other hand, the fragmentation of platforms (x86 vs. ARM) creates a risk of chaos and increased developer costs.

    At the same time, a question hovers over the market: are current applications revolutionary enough to justify a mass replacement of hardware? The industry has been searching for decades for a “killer app” – an application so groundbreaking that people buy new hardware for it. For now, the AI PC market does not have a single, obvious ‘killer app’, which fuels fears of a marketing bubble in which promises overtake actual value. However, it is possible that the strength of the AI PC will be the sum of hundreds of small enhancements running in the background that gradually improve the PC experience.

    Analysis of the AI PC market leads to a clear conclusion: we are witnessing more than just another hardware refresh cycle. Driven by the need for privacy, economic pressures and expectations shaped by generative AI, NPU integration is initiating a fundamental paradigm shift in PC architecture.

    This confirms our central thesis: AI PC is a revolution, not an evolution. It is a strategic shift to a hybrid AI architecture that will change not only how computers process information, but also how they interact with us. Predictions clearly point to exponential growth and the inevitable domination of this category in the market.

    The personal computer, for years seen as a mature tool, is on the threshold of reincarnation. It is being transformed from a passive window on the digital world into an intelligent, proactive partner. The biggest challenge for the industry now is not whether this transformation will happen, but how to manage it in a way that is safe, productive and of value to the user. Whether AI PC becomes a true revolution or just an expensive bubble will depend on avoiding the trap where marketing promises trump real-world usability. This is not the end of the history of the personal computer – it is the beginning of a whole new chapter.

  • Trump publicly calls on Microsoft to fire one of its top executives

    President Donald Trump has publicly called on Microsoft to fire Lisa Monaco, its new head of global affairs, putting the tech giant in an extremely difficult position. The attack on the former Democratic administration official is another example of the White House’s increasing pressure on US corporations and is part of a wider trend of hitting those perceived as political opponents.

    Monaco, who joined Microsoft in July, is expected to head up the company’s relationships with governments around the world. Her extensive experience at the Department of Justice, where she served as homeland security adviser in the Obama administration and was deputy attorney general during Joe Biden’s tenure, was expected to be an asset to the company. However, it is this past that has become a source of conflict.

    President Trump has described Monaco as a ‘national security risk’, pointing to the key contracts Microsoft performs for the US government. Earlier in February, her security clearances were revoked and Monaco herself was banned from federal buildings.

    It is worth recalling that she played a role in coordinating the Justice Department’s response to the events of 6 January 2021. The call for her dismissal came just one day after the indictment of former FBI Director James Comey.

    For Microsoft, the situation is an unprecedented challenge. The company has spent months trying to warm up its relationship with the second Trump administration. CEO Satya Nadella recently attended a dinner with the president at the White House, and the technology industry, previously accused by Republicans of bias, has actively sought a platform for dialogue.

    Now the board in Redmond faces a difficult choice. Bowing to pressure and firing Monaco would set a dangerous precedent and could be perceived as capitulation to political pressure. But ignoring the president’s demand risks escalating the conflict, which could jeopardise billions of dollars worth of government contracts and expose the company to further attacks.

    The case of Microsoft is the latest example of the Trump administration’s broader strategy of unprecedented interference in the affairs of corporate America. Previous examples of pressure include Intel, whose CEO had to step down, and Disney, whose ABC station suspended the Jimmy Kimmel show for several days.

    The latest move by the White House is a wake-up call for the entire tech industry – in the current political climate, any personnel decision can become a high-stakes battleground. Microsoft has so far declined to comment.

  • UK the new AI hub? Investment from Microsoft and Nvidia

    UK the new AI hub? Investment from Microsoft and Nvidia

    This is no longer just a trend. It is a technological landing on an unprecedented scale. The just-announced investments by Microsoft and Nvidia in UK AI infrastructure, totalling tens of billions of dollars, signal that the global race for AI dominance is entering a new, brutally capital-intensive phase.

    The UK, with its favourable regulatory environment and strategic location, is emerging as the key battleground in this contest.

    The arms race in the cloud

    The figures speak for themselves. Microsoft has pledged to invest $30 billion by 2028, half of which will be spent on physical data centre expansion.

    Nvidia is adding $15bn (£11bn), in partnership with hyperscaler Nscale and GPU cloud provider CoreWeave, with plans to deploy 120,000 of its latest Blackwell Ultra GPUs.

    These activities are not isolated. They are part of a global boom in infrastructure construction that, according to Dell’Oro Group, lifted data centre spending by 43% year-on-year to a dizzying $158 billion in the second quarter.

    Microsoft alone committed more than $24 billion in the last quarter. It is a gold rush, and the prize is computing power capable of training and serving the next generation of AI models.

    The collaboration between Nvidia, Microsoft, OpenAI and Nscale on the Stargate UK project shows the scale of the ambition. This is not simply another server-room expansion.

    It is an attempt to create a supercomputer and ecosystem from scratch to attract the best talent and most advanced AI projects to the British Isles.

    Why the UK? Geopolitics and data sovereignty

    The sudden influx of such huge capital into one country is no coincidence. In an era of rising geopolitical tensions, privacy concerns and increasingly stringent data sovereignty requirements, the UK is becoming a safe haven for US tech giants.

    It offers a stable regulatory environment, proximity to the European market and status as a global financial centre.

    Competitors have long recognised this potential. In recent years, Amazon has pledged investments of more than $10 billion in its AWS infrastructure in the region.

    Oracle plans to spend $5 billion and Google has just opened a new data centre as part of a nearly $7 billion investment.

    These decisions are a direct response to growing demand from businesses. Gartner predicts that spending on cloud infrastructure will increase by more than 21% this year, to $723 billion. Even more striking is projected spending on artificial intelligence alone, which is expected to reach almost $1.5 trillion in 2025.

    Companies need to build data centres where their customers are and where regulations allow.

    The wave of investment flooding into the UK is more than just money and equipment. It is a strategic move that could permanently change the technological map of Europe.

    1. Consolidating power: These investments cement the position of a few key players – Microsoft, Nvidia, AWS and Google – as the foundation on which the future of AI will be built. It will become increasingly difficult for smaller companies to compete at the infrastructure level.

    2. A new centre of gravity: The UK is becoming the undisputed European leader in computing power for AI. This could attract a wave of startups, researchers and capital, creating a self-perpetuating, Silicon Valley-like ecosystem.

    3. Challenges and risks: Concentrating such massive infrastructure in one place raises questions about energy security and energy demand. It also remains an open question how much of the profit from this revolution will stay in the UK and how much will flow into the accounts of US corporations.

    One thing is certain: the spending frenzy is not slowing down. Microsoft is already announcing further investments in the US and Norway. Nvidia is deploying hundreds of thousands of its processors worldwide.

    The UK has won an important battle to become a key hub in the global AI network. However, the long-term war for talent, innovation and real economic benefit is only just beginning.

  • The clock for Windows 10 is ticking. Companies face an inevitable decision

    The clock for Windows 10 is ticking. Companies face an inevitable decision

    On 14 October 2025, Microsoft will end free support for Windows 10, a date that should be highlighted in red in every IT manager’s calendar.

    After this date, computers running this system will stop receiving key security updates, making them an easy target for cyber attacks.

    Even with the deadline looming, a huge part of the market is still ignoring the change. This is a mistake that could cost companies far more than the price of new licences and hardware.

    Inertia and the illusion of safety

    The market data is clear. Despite the growing popularity of Windows 11, its predecessor still runs on a huge number of machines. According to Statcounter data from August 2025, Windows 10 still powers more than 55% of Windows PCs worldwide. Why the resistance to change?

    The reasons are understandable. Users value Windows 10 for its familiar interface and stability of operation. From a business perspective, migration is a complex project – it involves the cost of buying new hardware, verifying software compatibility and the need to train employees.

    Many companies put off the decision, following the familiar logic: “If the computer still works, why change it?”

    However, this is a dangerous illusion. In the context of cyber security, the argument “it still works” has no value. A system without updates is like a house with the door left wide open: every newly discovered vulnerability will remain unpatched forever, giving attackers constant, easy access to company data. The cost of a single successful ransomware attack or data leak can outweigh the expense of an infrastructure upgrade many times over.

    A technological imperative, not a manufacturer’s whim

    There has been a lot of controversy surrounding the Windows 11 hardware requirements, but this is not due to any ill will on Microsoft’s part. Modern operating systems base their security architecture on features integrated directly into the hardware.

    These include mechanisms such as TPM 2.0 (Trusted Platform Module), which enables encryption at the chipset level, and Secure Boot, which protects the boot process from malware.

    Older computers simply do not have these components, making it impossible to implement a full, multi-layered protection model. Continuing to support incompatible hardware would mean a security compromise that cannot be afforded in today’s threat landscape.
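
    For IT teams planning an audit, both prerequisites can be checked from a script. The following is a minimal, illustrative Python sketch that shells out to two built-in Windows tools – tpmtool and the PowerShell Confirm-SecureBootUEFI cmdlet. It assumes a Windows machine and an elevated prompt, and its output handling is deliberately simplistic; treat it as a starting point, not a finished audit tool.

    ```python
    # Minimal sketch: query two Windows 11 hardware prerequisites from Python.
    # Assumes the built-in `tpmtool` utility and the PowerShell
    # Confirm-SecureBootUEFI cmdlet; both typically require an elevated
    # (administrator) shell, and Confirm-SecureBootUEFI fails on legacy BIOS.
    import subprocess

    def tpm_info() -> str:
        """Return raw TPM details as reported by the built-in tpmtool."""
        result = subprocess.run(
            ["tpmtool", "getdeviceinformation"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    def secure_boot_enabled() -> bool:
        """True if Secure Boot is on; raises on non-UEFI machines."""
        result = subprocess.run(
            ["powershell", "-Command", "Confirm-SecureBootUEFI"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip().lower() == "true"

    if __name__ == "__main__":
        print(tpm_info())                        # look for a "2.0" TPM version line
        print("Secure Boot enabled:", secure_boot_enabled())
    ```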

    What do we gain? New opportunities and competitive advantages

    Migrating to Windows 11 is not only a necessity dictated by security, but also an opportunity to implement tools that make a real difference to productivity.

    • Integration with AI: Functions such as Copilot are deeply integrated into the system, assisting employees in writing texts, analysing data or creating presentations. These are tools that speed up work and automate repetitive tasks.
    • Deeper integration with the cloud: Data synchronisation, backup and collaboration in distributed teams run much more smoothly in Windows 11, which is crucial in the age of hybrid working.
    • Performance and efficiency: New devices with Windows 11 pre-installed are often lighter, more energy-efficient and more powerful, also thanks to support for ARM architecture. This means longer battery life and a better user experience.

    A chaotic last-minute migration is a recipe for disaster. Companies that have not yet started the process should follow a well-thought-out plan:

    1. Infrastructure audit: The first step is to inventory the hardware and software. It is important to identify which devices are compatible with Windows 11 and which need to be replaced, and crucial to check that business-critical applications run correctly on the new system (a minimal inventory-triage sketch follows this list).
    2. Create a schedule: Based on the audit, a detailed migration roadmap should be prepared, identifying which departments or user groups will be switched first. It is worth starting with pilot projects.
    3. Data management and backup: Before migration, it is crucial to create full backups of data. Modern cloud tools make this process much easier, but it requires planning.
    4. Communication and training: Employees need to understand why the change is necessary and how it will affect their daily work. Transparent communication and short training sessions will minimise resistance and fears, ensuring a smooth transition.
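
    As a small illustration of step 1, the sketch below triages a hand-maintained inventory file against two common Windows 11 requirements. The file name and column layout (hostname, tpm_version, ram_gb) are hypothetical examples – every organisation’s inventory tooling differs – but the flagging logic shows the shape of the task.

    ```python
    # Illustrative sketch: flag machines in a hardware inventory that fail
    # two common Windows 11 requirements (TPM 2.0 and at least 4 GB of RAM).
    # The file name and column names below are hypothetical examples.
    import csv

    REQUIRED_TPM = 2.0
    REQUIRED_RAM_GB = 4

    def triage(path: str) -> tuple[list[str], list[str]]:
        """Split hostnames into (ready, needs_replacement) lists."""
        ready, replace = [], []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):  # expects hostname,tpm_version,ram_gb
                ok = (float(row["tpm_version"]) >= REQUIRED_TPM
                      and int(row["ram_gb"]) >= REQUIRED_RAM_GB)
                (ready if ok else replace).append(row["hostname"])
        return ready, replace

    if __name__ == "__main__":
        ready, replace = triage("inventory.csv")
        print(f"{len(ready)} machines ready, {len(replace)} to replace or upgrade")
        for host in replace:
            print("  needs attention:", host)
    ```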

    The end of support for Windows 10 is a fact. Continuing to use it will be an act of conscious acceptance of risk. For companies, the question is no longer ‘whether’ to move to Windows 11, but ‘how’ and ‘when’ to organise the process. The sooner they take action, the more control they will retain over the security and future of their digital infrastructure.

  • Quantum Game of Thrones: who will build the machine that will change the world?

    Quantum Game of Thrones: who will build the machine that will change the world?

    There are moments in the history of technology that redefine the limits of possibility. The mastery of fire, the invention of printing, the digital age – each of these eras was sparked by a fundamental discovery.

    Today we stand on the threshold of another such transformation, which is not simply the evolution of computing power, but the birth of an entirely new paradigm. We are talking about quantum computing.

    The race to build a functional quantum computer is the most important technological and geopolitical duel of the 21st century.

    At stake is the ability to solve problems that are today beyond the reach of the most powerful supercomputers – from designing drugs at the molecular level, to creating revolutionary materials, to breaking almost all modern encryption systems.

    At the heart of this revolution is quantum mechanics, with its principles of superposition and entanglement, which allows a qubit – the quantum equivalent of a bit – to exist in multiple states simultaneously. It is this fundamental difference that gives quantum computers their unimaginable potential.
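
    To make this concrete, a qubit’s state can be written as a two-component complex vector, and quantum gates as matrices acting on it. The NumPy sketch below is purely illustrative – a classical simulation, not how quantum hardware works internally – showing a Hadamard gate putting a qubit into superposition and a CNOT gate entangling two qubits into a Bell state. The exponential cost of such simulations as qubit counts grow is precisely why real quantum hardware matters.

    ```python
    # Illustrative classical simulation of superposition and entanglement.
    import numpy as np

    ket0 = np.array([1, 0], dtype=complex)                       # |0>
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

    plus = H @ ket0              # superposition (|0> + |1>)/sqrt(2)
    print("Amplitudes:", plus)   # measuring yields 0 or 1 with equal probability

    # Entangle two qubits into a Bell state with a CNOT gate.
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)
    bell = CNOT @ np.kron(plus, ket0)   # (|00> + |11>)/sqrt(2)
    print("Bell state:", bell)          # outcomes are perfectly correlated
    ```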

    The year 2025, declared by the UN the International Year of Quantum Science and Technology, is a symbolic turning point. We are no longer in the realm of purely theoretical considerations. We have entered the NISQ (Noisy Intermediate-Scale Quantum) era – a time of imperfect, ‘noisy’ quantum computers that nevertheless grow more capable every year.

    It is a nascent industry, estimated to be worth US$866 million in 2023 and projected to reach US$4.4 billion by 2028.

    Great quantum families: contenders for the crown

    Three powerful players have emerged on the battlefield for the quantum future: Google, IBM and Microsoft. Each is pursuing a different strategy in its bid for the technological throne.

    Google: the alchemists of Mountain View

    Google’s strategy focuses on spectacular, breakthrough demonstrations of power. Their latest weapon is the ‘Willow’ processor, but the real breakthrough lies not in the number of qubits, but in the mastery of error correction.

    Google engineers have announced that they are able to maintain the stability of a logical qubit – that is, a set of physical qubits working together to correct errors – for up to an hour.

    This is a monumental leap compared to the microseconds that were the standard not so long ago. Their claim to the throne is based on being the first to push the boundaries of science, as in 2019 when they were the first to announce the achievement of ‘quantum supremacy’.
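
    The principle behind a logical qubit can be felt with a classical toy model. The sketch below implements a three-bit repetition code with majority-vote decoding; real quantum error correction (surface codes and the like) must also handle phase errors and cannot simply read its qubits out, so treat this strictly as an analogy for how redundancy suppresses errors.

    ```python
    # Toy analogy for a logical qubit: a three-bit repetition code that
    # corrects any single bit-flip by majority vote. With per-bit error
    # probability p, the residual logical error rate falls to roughly 3p^2.
    import random
    from collections import Counter

    def encode(bit: int) -> list[int]:
        return [bit] * 3                   # one logical bit -> three physical bits

    def noisy_channel(bits: list[int], p: float) -> list[int]:
        return [b ^ (random.random() < p) for b in bits]  # flip each with prob p

    def decode(bits: list[int]) -> int:
        return Counter(bits).most_common(1)[0][0]         # majority vote

    trials, p = 100_000, 0.05
    raw = sum(random.random() < p for _ in range(trials))
    coded = sum(decode(noisy_channel(encode(0), p)) != 0 for _ in range(trials))
    print(f"Unprotected error rate: {raw / trials:.4f}")   # ~0.05
    print(f"Encoded error rate:     {coded / trials:.4f}") # ~0.007
    ```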

    IBM: kingdom builders for all

    IBM is playing a very different game. Instead of isolated breakthroughs, they are betting on consistent progress and democratising access to technology. Their roadmap is public and precise, and they plan to make the ‘Nighthawk’ processor available in 2025.

    A key element of their strategy is to integrate quantum computers with classical supercomputers (HPC), creating a hybrid future. The opening of Europe’s first quantum data centre in Germany is a strategic move, bringing quantum resources directly into European industry and academia.

    IBM is not just building a lab experiment; it is creating a business-ready platform accessible through the cloud.

    Microsoft: patient architects in Redmond

    Microsoft took the path of highest risk, but also of potentially highest reward. For decades they had invested in research into the mythical ‘topological qubit’, which would be inherently fault-tolerant.

    While waiting for this technology to mature, they built the powerful Azure Quantum ecosystem, designed to be independent of any particular hardware architecture. Their latest breakthrough is a demonstration of 12 entangled logical qubits with an error rate 800 times lower than that of single physical qubits, achieved in collaboration with Quantinuum.

    The partnership with Atom Computing aims to build “the world’s most powerful quantum machine”, combining their error correction software with promising technology based on neutral atoms.

    The geopolitical great game: the dragon versus the eagle

    The rivalry is moving into the global arena, where it is becoming central to the confrontation between the United States and China. It is a battle for technological hegemony involving billions of dollars of public and private funds.

    The US leads the way in the dynamism of its startup ecosystem, with 77 quantum technology companies operating there. This innovation is driven by gigantic private investment and the research power of the big tech families.

    The federal government also plays a key role by providing significant funding for basic research.

    However, China is playing a long-term, fully state-controlled game and is catching up at shocking speed. According to a report by the Australian Strategic Policy Institute (ASPI), China already leads in 57 of 64 key technologies, including such vital areas as quantum sensors.

    The Middle Kingdom is pursuing a strategy based on gigantic investments in research infrastructure and aims to achieve dominance in the production of mature chips.

    Europe, although a significant player, lags behind the two superpowers in terms of the scale of investment.

    Nevertheless, initiatives such as EuroHPC and the strategic positioning of quantum computers in Poland and Germany are evidence of a coordinated effort to maintain competitiveness.

    Winners’ trophies: industries on the threshold of tomorrow

    Why are governments and corporations investing billions in this technology? The answer lies in the revolutionary applications that await the winners.

    Healthcare and pharmaceuticals: drug design

    One of the most difficult problems for classical computers is the precise simulation of complex molecules. Quantum computers are naturally suited to simulating such systems, and their use could cut the time needed to discover and develop a new drug by as much as 50-70%.

    Pharmaceutical giants such as Roche and Pfizer are actively working with technology companies to prepare for the coming of the quantum era.

    Pfizer’s collaboration with technology company XtalPi, using artificial intelligence as a bridge to full quantum computing, has already reduced the time it takes to calculate the crystal structure of molecules from months to just days.

    Finance: quantum hedge fund

    Financial markets are a world of complex optimisation and risk modelling problems. Quantum algorithms are able to analyse a much larger number of variables and scenarios simultaneously, leading to optimised portfolios and more accurate risk assessment.

    Financial institutions such as JPMorgan and BBVA are already running pilot projects in collaboration with IBM and D-Wave. However, this same power also poses an existential threat. A quantum computer of sufficient scale will be able to crack the encryption algorithms that underpin the security of the entire digital economy.

    This creates an urgent need to implement so-called post-quantum cryptography.
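
    The nature of the threat is easiest to see in miniature. Shor’s algorithm factors a modulus N by finding the period r of f(x) = a^x mod N; the quantum speed-up lies entirely in the period finding, which the illustrative sketch below performs by classical brute force on a toy number – feasible for N = 15, hopeless at RSA key sizes.

    ```python
    # Why a large quantum computer breaks RSA, in miniature: once the period r
    # of a^x mod N is known (and is even), gcd computations reveal the factors.
    from math import gcd

    def find_period(a: int, N: int) -> int:
        """Smallest r > 0 with a^r = 1 (mod N) -- classical brute force."""
        x, r = a % N, 1
        while x != 1:
            x = (x * a) % N
            r += 1
        return r

    N, a = 15, 7                   # toy modulus and a base coprime to it
    r = find_period(a, N)          # r = 4
    assert r % 2 == 0              # Shor's method needs an even period
    y = pow(a, r // 2, N)          # 7^2 mod 15 = 4
    p, q = gcd(y - 1, N), gcd(y + 1, N)
    print(f"period = {r}, so {N} = {p} x {q}")   # 15 = 3 x 5
    ```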

    Materials science and chemistry: engineering the impossible

    The creation of new materials today relies heavily on trial and error. Quantum computers open the way to ‘custom material design’, enabling the precise simulation of the quantum properties of substances before they are ever produced in the laboratory.

    This could lead to breakthroughs such as superconductors that work at room temperature, or catalysts that make industrial processes radically more energy efficient. Companies such as BASF are deeply involved in the research, forming partnerships with startups and academic institutions.

    From supremacy to advantage: the true measure of victory

    Google’s announcement of ‘quantum supremacy’ in 2019 was a milestone, but not a commercial turning point. The problem their computer solved had no practical application. It is therefore crucial to distinguish the terms:

    • Quantum supremacy: proof that a quantum computer can beat a classical one at some task, even a useless one. It is a scientific benchmark without direct commercial relevance.
    • Quantum advantage: the real goal – the ability to solve a useful, real-world business problem faster, cheaper or more accurately than any classical computer.
    • Quantum utility: the pragmatic state we are in today – using imperfect NISQ computers to achieve tangible, though not yet revolutionary, results.

    The change in language itself, moving away from the confrontational term ‘supremacy’ to the more practical terms ‘advantage’ and ‘usability’, is symptomatic of the maturity of the industry as a whole. It marks a shift from pure science to commercial applications.

    The quantum revolution will not come with a bang. It will be a quiet, creeping transformation. The true measure of victory in this game of thrones will not be supremacy, but utility – the number of problems solved and the value created. The time to prepare is not when the throne is won, but now, when the great families are making their first strategic moves. The game has begun.

  • Technology giants’ slip-ups. What can we learn from the biggest failures in the IT world?

    Technology giants’ slip-ups. What can we learn from the biggest failures in the IT world?

    In Silicon Valley, it is said that failure is not a shame, but a badge of honour; proof that one had the courage to take risks. It’s a convenient narrative, but there is a grain of truth behind it. Even the biggest players with almost unlimited budgets – Google, Microsoft or Samsung – have had their share of spectacular stumbles.

    Products that were supposed to revolutionise the market now rest in the technological graveyard. But let us put malice aside and look at these stories to extract universal, timeless business lessons.

    Google Glass – When technology overtakes society

    Do you remember 2012? Google presented the future to the world, and it came in the form of smart glasses. The Glass project, with its futuristic interface projected directly in front of the eye, created a wave of excitement.

    The journalists and developers who joined the $1,500 Explorer programme felt they were touching tomorrow. They could shoot video, take photos and navigate, looking at the world through the lens of data.

    However, the spell broke as quickly as it had been cast. Glass wearers became infamously known as ‘Glassholes’, because the people around them felt under constant surveillance. Is my interlocutor recording me? Is he or she taking my picture right now?

    The lack of a clear answer to these questions created an insurmountable barrier. What’s more, beyond the ‘wow’ effect, no one really knew what the device was actually for in day-to-day use.

    It was a solution in search of a problem – expensive, weird-looking and socially troublesome.

    Business moral: Innovation must be socially acceptable. The most advanced technology will fail if it ignores the cultural context, social norms and real user needs.

    The lesson of Google Glass is simple: it is not enough to ask ‘Can we build this?’; the key questions are ‘Should we?’ and ‘Does anyone need it?’

    Windows Phone – Building a great product in a market vacuum

    Microsoft was late to the smartphone revolution, but when it finally got into the game, it did so with aplomb. Windows Phone was a system that delighted critics. Its tile-based interface, known as Metro UI, was fresh and elegant, and ran incredibly smoothly even on weaker devices.

    To pose a real challenge to the Apple–Google duopoly, the Redmond giant even acquired Nokia’s legendary mobile division. It had a great system and excellent hardware on its hands. What could go wrong?

    Everything around them. The Windows Phone debacle is a textbook example of the problem known as the ‘app gap’. Users did not want a system that lacked Snapchat, the latest games or their banking apps.

    Developers, in turn, did not want to develop software for a platform with a marginal market share. This vicious circle proved deadly. Microsoft built a beautiful and capable car, but forgot about roads, petrol stations and garages.

    Business moral: The product itself, even the best one, is not enough. In today’s world, the ecosystem is king. Users do not buy the device or the system as such – they buy access to millions of apps, services and communities. Without the support of third-party developers and a strong network effect, even the biggest player is doomed to fail.

    Samsung Galaxy Note 7 – When haste leads to spontaneous combustion

    In the second half of 2016, Samsung was riding high. The Galaxy Note 7 was to be the masterpiece crowning its dominance of the Android market and the ultimate ‘iPhone killer’. The device received rave reviews for its symmetrical design, phenomenal screen and the best camera on the market. Sales took off. And then the phones started catching fire.

    Reports of exploding batteries, initially treated as isolated incidents, quickly turned into a global crisis. It turned out that, in the pursuit of the thinnest possible chassis and the rush to beat Apple’s launch, engineers had packed the battery cells too aggressively, leaving them no room to expand naturally.

    Faulty design combined with insufficient quality assurance (QA) testing created a ticking time bomb. A global recall, and bans on carrying the device aboard aircraft, turned into a PR nightmare.

    Business moral: Never sacrifice quality and safety on the altar of time-to-market. Foundations matter more than fireworks. One critical mistake can not only destroy a brilliant product, but also cost a company billions of dollars and – more costly still – years of rebuilding customer trust.

    Golden lessons from the technology graveyard

    The stories of Google Glass, Windows Phone and Galaxy Note 7 are more than curiosities – they are case studies illustrating key dynamics governing the technology market. The Google Glass story shows how even the most advanced technology can fail if it ignores societal needs and norms.

    The case of Windows Phone, on the other hand, proves that in today’s world an isolated product, even a technically polished one, stands little chance against the power of a vibrant ecosystem.

    Finally, the Galaxy Note 7 fiasco is a clear example that rushing and compromising on quality leads to the loss of the most valuable capital – customer trust.

    These painful failures are not a sign of weakness, but a natural part of the innovation process. The ability to learn from them and adapt is what ultimately creates more mature and successful products.