Category: Technology

  • The economics of open source: who pays for the code the world runs on?

    The economics of open source: who pays for the code the world runs on?

    Every day, as we reach for our smartphone, launch our favourite TV series or send a business email, we participate in the quiet miracle of modern technology. Beneath the shiny surface of apps and services lies an invisible foundation – open source software.

    It is millions of lines of code, written, refined and shared with the world for free by a global community. This code is the bloodstream of the internet and the backbone of the AI revolution.

    But this digital world, raised on the idea of freedom and collaboration, conceals a profound paradox. The global economy relies on an infrastructure created largely by volunteers, often balancing on the brink of professional burnout.

    It is as if global trade routes were based on bridges built as a hobby after hours. How long can such a structure last? Who actually pays for the code we all rely on?

    The invisible foundation: our global dependence

    Open source software is no longer an alternative. It has become the default building block of the digital world. Hard data paints a picture of almost total dependence. An analysis by Synopsys in 2024 showed that as much as 96% of the commercial code bases examined contained open source components.

    What’s more, on average, 77% of all code in these applications came from open source. It’s no longer a question of using individual libraries – it’s about building entire systems on a foundation created by the community.

    The scale of this dependency becomes even more striking when looking at the dynamics of consumption. In 2024, it was forecast that the total number of downloads of open source packages would reach the unimaginable figure of 6.6 trillion.

    The npm (JavaScript) ecosystem alone was responsible for 4.5 trillion requests, recording 70% year-on-year growth, while the AI-powered Python ecosystem (PyPI) grew by 87% to reach 530 billion downloads.

    The average commercial application today is a complex mosaic of an average of 526 different open source components. Each has its own life cycle, its own maintainers and its own potential problems.

    Cracks in the foundation: zombie code and a wake-up call called Log4j

    The ubiquity of open source is a double-edged sword. The same ease with which developers can incorporate off-the-shelf components into their projects leads to systemic neglect. The data is alarming: as many as 91% of the commercial code bases surveyed contain components that are ten or more versions out of date.

    This problem leads to so-called ‘zombie code’ – components that have had no development activity for more than two years. This phenomenon affects almost half (49%) of the applications on the market.

    This means that companies are building their critical systems on abandoned projects, without active support and, most importantly, without security patches. The consequence is a ticking time bomb: in just one year, the percentage of code bases containing high-risk security vulnerabilities has increased from 48% to 74%.

    Nothing illustrates this risk better than the December 2021 incident, when the world learned of the Log4j vulnerability. This small, free Java library for logging turned out to be embedded in millions of applications around the world.

    The vulnerability, named Log4Shell, received a maximum criticality rating of 10/10. An attacker could take full control of a server by sending a simple string of characters. US CISA director Jen Easterly called it “one of the most serious vulnerabilities she has seen in her entire career”.

    The Log4j incident became a global wake-up call, making companies brutally aware of how much their security depends on the work of anonymous volunteers.

    Worse still, even three years after the discovery of Log4Shell, up to 13% of all Log4j library downloads are still vulnerable versions. This demonstrates the profound inertia of organisations that fail to update their dependencies even in the face of a well-known, critical threat.

    The human cost of ‘free’ software: the burden of the custodian

    There are people behind every line of code. A model that treats their work as a free resource generates a huge human cost. Salvatore Sanfilippo, the creator of the Redis database, described this phenomenon as the ‘flooding effect’.

    Over time, the stream of emails, GitHub submissions and questions turns into a never-ending flood that leads to guilt over not being able to help everyone.

    The scale of this pressure is illustrated by the example of Jeff Geerling, who looks after more than 200 projects. Each day he receives between 50 and 100 notifications, of which he is only able to deal with a fraction.

    Nolan Lawson, another well-known maintainer, aptly put the emotional weight of this work. Notifications on GitHub are “a constant stream of negativity”. No one opens a notification to praise working code. People only post when something is wrong.

    This chronic pressure leads to burnout, which, in the context of open source, has clearly defined causes: demanding users, low quality contributions, lack of time and, most acutely, lack of remuneration.

    Knowing that work that consumes huge amounts of energy is the foundation for commercial products that make real profits for others is extremely demotivating. As one maintainer put it:

    “My software is free, but my time and attention is not”. Caregiver burnout is not just a personal tragedy. It is a critical risk to the global infrastructure.

    ‘Zombie code’ is a direct, measurable symptom of this crisis at the human level.

    The New Economy of Code: Towards a Sustainable Future

    In the face of these risks, the open source ecosystem is slowly maturing, moving from a volunteer-based model to more sustainable forms of funding.

    1. corporate patrons: strategy, not altruism

    At the forefront of this transformation are the technology giants. Companies such as Google, Microsoft and Red Hat have been the biggest contributors to the open source world for years. Their motivations, however, are not altruistic – they are cold, strategic calculations.

    Joint development of fundamental components (such as operating systems or containerisation) is simply more efficient. This allows them to compete at a higher level, in areas that directly differentiate their products.

    By becoming involved in key projects, corporations can also influence their direction, ensuring alignment with their own strategy.

    2 The power of institutions: the role of foundations

    The second pillar is non-profit foundations such as the Linux Foundation and the Apache Software Foundation. They act as neutral trustees for the most important projects, ensuring their stability and independence from a single corporation.

    They collect contributions from sponsors, creating a budget that allows them to fund key developers and safety audits.

    3 The maker revolution: the GitHub Sponsors model

    Alongside the big players, a new grassroots funding wave has been born. Platforms such as GitHub Sponsors allow direct, recurring contributions from users and companies, creating a revenue stream for maintainers.

    The story of Caleb Porzio, creator of Livewire and AlpineJS tools, is a prime example of the potential of this model. Standing on the brink of burnout, he decided to try his hand at the GitHub Sponsors programme.

    The real breakthrough came when he changed the paradigm: instead of asking for support, he decided to offer his sponsors additional, exclusive value. His secret turned out to be paid screencasts – a series of video tutorials.

    He reserved access to the full library exclusively for backers on GitHub. The effect was spectacular. His annual revenue grew by $80,000 in 90 days and crossed the $1 million threshold in the following years.

    This is a key lesson: a sustainable model does not have to be based on charity, but on building a viable business model around a free, open core.

    From stowaway to stakeholder

    ‘Free’ software has never been free. Its price, hitherto hidden, has been paid with the time, energy and mental health of a global army of volunteers. The model in which we treated their work as an inexhaustible resource is coming to an end.

    It is time for every participant in this ecosystem to undergo a transformation – from a passive ‘stowaway’ to an active stakeholder.

    This requires specific actions. Developers need to practice ‘software hygiene’ – regularly updating dependencies and consciously managing technical debt.

    Companies need to treat open source as a critical part of the supply chain, creating ‘software component inventories’ (SBOMs) and investing in business-critical projects. Investing in open source is not a cost, it is business continuity insurance.

    We stand at the threshold of a new era for open source – an era of professionalisation and sustainability. A future where creators are fairly remunerated and the global digital infrastructure is secure is within our reach. Building it, however, requires a conscious effort from each of us.

  • Mistral AI vs OpenAI – Has Europe gained a viable alternative to ChatGPT?

    Mistral AI vs OpenAI – Has Europe gained a viable alternative to ChatGPT?

    Last week will be remembered as the moment when Europe’s artificial intelligence sector went from defensive to precision technology offensive. In just 48 hours, Paris-based Mistral AI made a series of moves that go beyond mere model updates. By simultaneously launching the Mistral Medium 3.5 model, the Vibe development environment, the Workflows orchestration platform and the Le Chat mode of operation, the company unveiled a complete vertical technology stack (full-stack). For IT decision-makers and business leaders in Europe, the message is clear: digital sovereignty has become a measurable operational and financial category.

    The end of distributed models – Economics Mistral Medium 3.5

    A key element of the new strategy is Mistral Medium 3.5, a 128-billion-parameter scale model released under licence with open weights. From an analytical perspective, its greatest value is not just in its ‘raw power’, but in the unification of capabilities. It is the first Mistral model to combine advanced reasoning, deep instructional understanding and high consistency of generated code within a single parameter set.

    From a business perspective, such integration directly affects the total cost of ownership (TCO). Until now, companies have been forced to maintain a fleet of specialised models: one to analyse legal documents, another to support developers and another for simple classification tasks. Medium 3.5 allows for infrastructure consolidation. Results in benchmarks such as SWE-Bench Verified(77.6%) or tau³-Telecom (91.4%) prove that this model not only matches, but in specific engineering applications outperforms closed systems such as GPT-4o or Claude 3.5.

    Importantly for operations departments, Medium 3.5 can be deployed locally using four H100 or H200 GPUs. This opens the door to building private, secure AI environments inside corporate data centres, eliminating reliance on the latency and pricing policies of external cloud providers.

    From conversation to implementation – Vibe and Workflows

    Mistral AI has rightly diagnosed that the bottleneck for AI adoption in business is no longer the quality of the text generated, but the integration with processes. Vibe and Workflows tools are the answer.

    Vibe addresses a key productivity issue for engineering teams: developer lock-in when AI agents are working. The introduction of remote agents running in parallel in the Mistral cloud, while remaining fully synchronised with the local environment, changes the working paradigm. Integration with GitHub, Jira, Sentry and Slack means that AI ceases to be a ‘question assistant’ and becomes a ‘task performer’ that only notifies the human once the process is complete.

    Workflows, on the other hand, built on the proven Temporary engine (used by Stripe and Netflix, among others), is an orchestration layer that allows the construction of long-term, fault-tolerant workflows. This architecture separates the control plane from the data plane. In practice, this means that a regulated sector company can benefit from advanced process management in the cloud, while the data itself and its processing never leave the client’s secure, local infrastructure. This solution is ideally suited to the needs of players such as ASML or La Banque Postale, who are already automating customs processes or document compliance verification using it.

    Sovereignty as strategic risk management

    In 2026, the argument of digital sovereignty has evolved from an ideological discourse to a hard risk analysis. Statements by UK Secretary of State Liz Kendall or actions by the French Ministry of the Armed Forces point to a growing awareness of the risks posed by the concentration of computing power in the hands of just a few Silicon Valley players.

    For a European technology director, the on-premise model offered by Mistral is an insurance policy against three risks:

    1. political risk: the unpredictability of US export regulations and the impact of the US administration on the availability of AI services in situations of geopolitical tension.

    2 Regulatory risk: The need to strictly comply with RODO, the EU AI Act and the NIS2 and DORA directives. In the financial or healthcare sector, the ‘right to audit’ and full control over the location of data are legal requirements that standard APIs from OpenAI or Anthropic are not always structurally able to fulfil.

    3 Operational risk: Sudden changes in the behaviour of models (so-called model drift) or unilateral modifications of service terms by SaaS providers.

    With 60% of its revenues in Europe, Mistral has a natural interest in adapting to the local regulatory framework, making it a more predictable partner than its US competitors.

    Alliances and financial foundations

    Critics of the European approach have often pointed to a lack of capital and infrastructure. Mistral AI systematically refutes these claims. Institutional funding of €830 million from a consortium of banks (including BNP Paribas, HSBC, MUFG) for the purchase of 13,800 NVIDIA processors is a signal that AI in Europe is becoming an infrastructure asset, not just a speculative one.

    Equally important is Mistral’s incorporation into the NVIDIA Nemotron Coalition. The partnership with Jensen Huang allows Mistral to co-create boundary models on DGX Cloud infrastructure, while keeping them open. It is a strategic balancing act: using the best available hardware while promoting open model scales, driving innovation across the European developer ecosystem.

    Analysis of recent Mistral AI activities leads to three key conclusions for business leaders in Europe:

    • AI is becoming a commodity (Commodity), but control is not: Competitive advantage is built not by simply having access to models, but by being able to integrate them deeply into one’s own infrastructure without the risk of data leakage.
    • Cost optimisation requires flexibility: Open-weighted models allow for fine-tuning of performance to cost. The ability to run a Medium class model on your own servers drastically changes ROI calculations in AI projects.
    • Compliance is an opportunity, not a burden: Companies that choose the path of sovereign AI will pass through the regulatory sieve of the EU AI Act and NIS2 more quickly, gaining the trust of customers in critical sectors.

    Mistral AI is no longer just a ‘European alternative’. In May 2026, it appears as the mature architect of a new technological order in which performance goes hand in hand with autonomy. On the global chessboard of artificial intelligence, Europe, thanks to Mistral, has gained the ability to play its own sovereign game. Companies that recognise this now will gain a strategic resilience that no contract with a supplier from overseas can provide.

  • How to stabilise the grid in the city? Energy storage from Stoen and ZPUE

    How to stabilise the grid in the city? Energy storage from Stoen and ZPUE

    Stoen Operator and ZPUE are implementing a project in Warsaw that pushes the boundaries of the use of energy storage in the Polish electricity infrastructure. Instead of isolated test installations, ten battery-based units integrated directly into medium- and low-voltage (MV/nn) substations are appearing in the capital’s distribution network. This initiative is not just an experiment, but an operational response to the specific challenges of a large agglomeration: dense housing, surging power demand and the dynamic development of RES micro-installations. In this system, the storages take on the role of active voltage stabilisers, becoming an integral part of the daily operation of the system.

    This implementation sheds new light on the evolving role of distribution system operators (DSOs). The shift from passive energy transmission to active management of energy resources is now becoming a business necessity and not just a technological curiosity. The example of Warsaw shows that energy storage is no longer seen as a costly addition to the infrastructure and is starting to be treated as one of the foundations of modern distribution. A key lesson from the Warsaw project is that, in an urban setting, the success of an investment depends not on battery performance alone, but on deep system integration and the ability to operate in different load scenarios.

    It is worth noting several aspects that may determine the effectiveness of similar projects in the future. It seems sensible to move away from point-based design to thinking about the full life cycle of an installation. Taking into account the costs of operation, service and emergency behaviour of the system as early as the planning stage makes it possible to avoid costly adjustments later.

    It is also worth considering closer collaboration between technology providers and operators to develop standards that will facilitate the scaling of solutions in other regions of the country. Rather than waiting for a final regulatory settlement, the market has the most to gain from gathering and sharing operational experience. It is this practical data, gained from working in a living urban organism, that is today’s most valuable asset for energy companies planning long-term investments in network flexibility.

  • Why are AI agents becoming the target of cyber attacks? Trend overview 2026

    Why are AI agents becoming the target of cyber attacks? Trend overview 2026

    Over the past eighteen months, the enterprise sector has moved from a fascination with generative artificial intelligence to a phase of actively implementing it into operational processes. A key trend in this evolution is the shift from passive language models (LLMs) to AI agents – autonomous systems capable not only of generating text but also of performing tasks: writing code, managing email communications, calling APIs or authorising financial transactions. With this agility, however, comes a critical new category of threats: Indirect Prompt Injection (IPI). Recent data from reports by Google and Forcepoint shed new light on the scale and sophistication of these attacks, suggesting that agent systems security will become one of the biggest challenges for chief information security officers (CISOs) in the coming years.

    IPI mechanism: Data as instructions

    Traditional prompt injection attacks relied on direct manipulation of the model by the user (e.g. attempting to ‘jailbreak’ a bot by giving it the command to ignore security). Indirect Prompt Injection is a much more insidious phenomenon. It involves inserting malicious instructions into content that the AI agent processes as input – this could be web pages, PDF documents, emails or code repositories.

    The problem lies in the very architecture of current LLM models, which cannot absolutely separate system instructions (issued by the tool developer) from external data. When an AI agent analyses a web page in search of information, it may come across hidden text, which the model will interpret as a new overarching command. As a result, the attacker takes control of the agent’s logic, instructing it to, for example, send sensitive data to an external server or perform a destructive operation on the user’s file system.

    Analysis of market trends

    Google Security Research researchers, analysing CommonCrawl resources, point to an alarming trend. Between November 2025 and February 2026, there was a 32 per cent increase in the number of detected malicious injection attempts in publicly accessible web resources. This relatively short time frame demonstrates the dynamism with which the criminal community is adapting to new technologies.

    From a market perspective, Google’s observation on cost-benefit calculus is key. Until recently, IPI attacks were considered the realm of academic research – they were difficult to implement and often failed due to the instability of the results generated by AI. Now, with the increased reliability and agility of agents, these attacks are becoming ‘viable’. AI’s ability to autonomously call external tools (tool calling) means that a successful injection of instructions has an immediate and measurable financial or operational impact.

    The Google study allowed the current IPI trials to be categorised into five groups:

    1. Harmless jokes: Attempts to change the tone of an agent’s response.
    2. Helpful tips: Suggesting preferential answers to the model (often on the edge of ethics).
    3. Optimisation for AI (AI-SEO):Hidden phrases to position products in assistants’ responses.
    4. Deterring agents: Instructions prohibiting AI from indexing or summarising a particular page.
    5. Malicious attacks: Data exfiltration and sabotage (deletion of files, destruction of backups).

    Although the latter are often at an experimental stage at present, their increasing complexity suggests that it is only a matter of time before they enter the phase of mass attacks.

    From coding assistants to financial transactions

    The Forcepoint report provides concrete evidence of how IPI manifests itself in professional software and financial tools. Experts have identified ten verified indicators of attacks targeting popular tools such as GitHub Copilot, Cursor and Claude Code.

    The attack scenario is mundane: a programmer uses an AI agent to analyse a library or documentation on an external site. This site contains a hidden AI instruction. When the agent ‘reads’ the site, it is instructed to execute a command in the terminal that destroys local backups. Since the agent has permission to operate on the file system (which is essential in a programmer’s job), the command can be executed without additional verification.

    Even more dangerous are attempts at financial fraud. Forcepoint points to cases where complete transaction instructions are sewn into web content, e.g. PayPal.me links with a predefined amount along with step-by-step instructions on how the agent is to finalise the payment. In systems where AI has access to digital wallets or corporate payment systems, the risk of capital loss becomes immediate.

    The paradox of detection and the challenges for business

    One of the most worrying findings from the Forcepoint report is the so-called detection paradox. The phrases and keywords used by attackers to inject hints are identical to the terminology the cyber security community uses to describe and analyse these threats. This renders simple filters based on word blacklists ineffective – either blocking legitimate expert communications or letting intelligently worded attacks through.

  • Windows K2, Microsoft’s new strategy for dealing with the problems of Windows 11

    Windows K2, Microsoft’s new strategy for dealing with the problems of Windows 11

    In the history of Microsoft’s operating systems, it has rarely been the case that a product still has to prove its worth almost five years after its release. Windows 11, while statistically dominating the market, is at a critical turning point. The project, internally dubbed ‘Windows K2’, is not just a package of technical fixes – it is an admission of flaws in user experience (UX) design and an attempt to regain the trust of the business sector at a time when support for Windows 10 has finally expired.

    Forced statistics: The reality of market 2026

    From an analytical perspective, the current market position of Windows 11 is the result not so much of user enthusiasm as of the inevitability of the software lifecycle. Although the system now controls around two-thirds of the market, a third of the PC fleet still operates on Windows 10 or older versions. In the enterprise sector, this resistance has been particularly pronounced.

    For business, the transition to Windows 11 presented two main barriers: stringent hardware requirements (TPM 2.0 module, newer generations of processors) and operational costs due to the need to train employees and adapt infrastructure. Microsoft, realising the risk of mass migration to alternative ecosystems or extending the life of old hardware, launched the ESU (Extended Security Updates) programme. However, paid support for Windows 10 is only a temporary solution – an expensive ‘stability tax’ that companies pay to avoid a still immature system. The K2 project is supposed to be an argument for investing this money in migration rather than persisting with the past.

    Performance architecture: Tackling “resource intensity”

    One of the most serious criticisms of Windows 11 is its inefficiency in resource management compared to its predecessor. Benchmark tests on identical hardware indicated that Windows 11 shows a greater appetite for RAM, without offering a commensurate increase in productivity in return. For IT departments managing thousands of workstations, this system ‘overweight’ means a shorter hardware lifecycle and higher TCO.

    A key element of the K2 operation is the full integration of the WinUI 3 structure. Microsoft is aiming to unify the interface, which is expected to eliminate historical legacies in the code that slow down File Explorer or the Start Menu. From a business point of view, the smoothness of the interface is not a question of aesthetics, but of ergonomics. Every second of delay in rendering menus or searching for files on a corporate scale translates into measurable efficiency losses.

    An end to ideology in favour of pragmatism

    Over the past few years, Microsoft has tried to impose its vision of the system as a service platform on users, manifesting itself through, among other things:

    • Stiff, limited taskbar.
    • Intrusive suggestions and ads in the Start Menu.
    • Aggressive promotion of Edge, Bing and OneDrive services.

    From a systems administrator’s perspective, this approach is problematic. An operating system in a professional environment should be a transparent tool, not a marketing channel. Pavan Davuluri’s announcements about restoring full functionality to the taskbar (including the ability to position it freely) and reducing unwanted content in the Start Menu demonstrate a return to pragmatism.

    Removing the ‘advertorial’ and intrusiveness of MSN services from the widgets is a step towards regaining the professional nature of the system. Business does not need the weather forecast interspersed with tabloid gossip inside a work tool. The K2 project seems to understand that control of the desktop must return to the user and administrator.

    Copilot: From euphoria to manageable assistance

    Artificial intelligence has become a cornerstone of Microsoft’s strategy, but the way it has been implemented in Windows 11 has been controversial. The integration of Copilot into applications such as Notepad and Paint was seen by many professional users as an unnecessary burden on the system and a potential risk to data confidentiality.

    There is a significant redefinition of the role of AI within the K2 project. Microsoft is moving away from the concept of ‘AI everywhere’ to ‘AI where it makes sense’. For the business sector, the most significant change is the ability to fully manage and disable Copilot functions on computers managed by central policies (GPO/Intune). This is critical for companies in regulated industries (finance, medical, legal) where uncontrolled data flow to the cloud is unacceptable. Copilot is intended to become an optional assistant rather than an integral, non-removable part of the system kernel.

    Repairing the feedback loop

    The Windows 11 release cycle was plagued by unstable updates that could cripple entire departments. Criticism focused on prioritising new features over code quality. As part of Operation K2, Microsoft announced a ‘resuscitation’ of the Windows Insider programme.

    For the business, this signals that the patch testing process will become more rigorous. The promise that Insider feedback will realistically influence the final shape of the update is key to avoiding a Patch Tuesday scenario. Additionally, greater flexibility in deferring updates and streamlining the configuration process for new devices (OOBE) is expected to reduce technical downtime, a direct gain for the operational agility of businesses.

  • Benchmarks won over loyalty: Microsoft bets on Anthropic. A blow for OpenAI

    Benchmarks won over loyalty: Microsoft bets on Anthropic. A blow for OpenAI

    Microsoft ‘s choice of the Claude Mythos model as the foundation for its new software security architecture sets a significant precedent in the Redmond-based technology giant’s strategy. This decision, while at first glance it may appear to be a mere operational adjustment, in reality reveals deeper market shifts in the generative AI sector and changing priorities in digital risk management. Analysing the facts of Anthropic‘s model integration, a clear pattern can be discerned: Microsoft is moving from a phase of fascination with general AI capabilities to a phase of rigorous, benchmarked selection of specialised tools.

    A key reference point for this decision is the CTI-REALM benchmark, co-developed by Microsoft engineers. The fact that Claude Mythos scored highest in it, distancing the GPT-5.4-Cyber model, is a market signal that cannot be ignored. Microsoft, as OpenAI’s largest partner and investor, has shown that pragmatism and hard data, rather than corporate loyalty, wins in critical areas such as cyber security. This strategic approach to model vendor diversification avoids vendor lock-in and ensures access to the most effective solutions in specific niches.

    From a business perspective, integrating Mythos directly into the software development cycle is a classic implementation of the ‘Shift-Left’ strategy. The cost of fixing a vulnerability discovered at the production stage is many times higher than eliminating the bug at the code writing stage. The cited data about the detection of a vulnerability that has existed for 27 years and the success of Mozilla, which identified 271 vulnerabilities thanks to Claude Mythos, are not just technological curiosities. They are concrete indicators of return on investment (ROI). For companies operating on huge collections of legacy code, automating security audits using such high-precision models means saving thousands of hours of high-level professionals and drastically reducing the legal and reputational risks associated with potential data leaks.

    The market reaction to Mythos’ capabilities, manifested, for example, by concern in the banking and insurance sectors and interest from the NSA, suggests that there is a new kind of regulatory risk involved. Claude Mythos is seen as a dual-use technology. The model’s ability to instantaneously map vulnerabilities makes it a defensive tool of unprecedented power, but also a potential offensive instrument. The embargo under consideration by US agencies and the restrictive access under Project Glasswing suggest that in the near future, access to the most advanced cyber security models may be rationed in a similar way to armament or high-end cryptographic technologies. Companies must therefore take into account in their strategies the fact that technological advantage in the area of AI may be limited by state interventions.

    It is also worth noting a painful market lesson for OpenAI. The fact that the release of GPT-5.4-Cyber failed to draw attention away from the Anthropic solution is indicative of the change in expectations of corporate customers. The market has become saturated with promises of versatility; solutions with proven effectiveness in specific usage scenarios are now sought after. Microsoft, by implementing Claude into its 365 applications and its internal processes, de facto legitimises Anthropic as an equal, and in some respects superior, technology partner. This suggests that OpenAI’s dominance may be more fragile than stock market valuations would indicate.

    For Microsoft itself, the move is an attempt to run away from mounting criticism over historical security lapses. Redmond has understood that with the current scale and complexity of the Windows and Azure ecosystem, traditional methods of manual code review are inefficient. Using Claude Mythos as an intelligent filter to verify developers’ work is an attempt to systemically address the problem of technology debt. If Microsoft manages to significantly reduce the number of critical vulnerabilities in its products with this solution, it will set a new market standard to which all SaaS and Cloud players will have to adapt.

  • 14,000 cyber attacks in three months: Why is the 1970s protocol still a big risk for the industry?

    14,000 cyber attacks in three months: Why is the 1970s protocol still a big risk for the industry?

    The security of modern factories and power plants still relies on technology from almost half a century ago, which is becoming a growing concern for global business. The latest report from experts at Cato Networks warns of a wave of cyber attacks targeting industrial controllers (PLCs). Hackers are taking advantage of the fact that the widely used Modbus protocol was developed in the 1970s and has no security features – for someone who knows how to use it, taking control of a networked machine is worryingly easy today.

    Modbus, a communication protocol developed in 1979, is in the spotlight. At the time of its creation, no one assumed that industrial controllers (PLCs) would ever be connected to the public Internet. Modbus was designed with trusted, isolated internal networks in mind. As a result, it was completely devoid of the mechanisms we recognise as elementary today: encryption and authentication. This openness, once an advantage to facilitate system integration, has become an invitation to hackers.

    The scale of the problem is illustrated by data collected by a team led by Dr Guy Waizel and Jacob Osmani. Over just three months in autumn 2025, they identified coordinated activity targeting PLCs, involving more than 14,000 attacked IP addresses in 70 countries. These are not isolated incidents, but a systematic mapping of global industry vulnerabilities.

    The attackers’ strategy is multi-layered and precise. Most of the identified interactions – more than 235,000 requests – involved so-called data extraction. The hackers do not immediately try to destroy machines; instead, they quietly read the contents of registers, learning about process parameters and device configuration. The next step is to ‘fingerprint’ the hardware. By knowing the manufacturer and software version, criminals can match specific security vulnerabilities to a particular machine.

    What starts as innocent information gathering can quickly turn into a catastrophic scenario. To understand the real risks, Cato Networks experts ran a simulation on the Wildcat-Dam project. They demonstrated that, with just a laptop and access to the unsecured Modbus protocol, they were able to take control of the digital logic of the firewall. By manipulating register values, the researchers caused an artificial flood, overriding security limits and remotely opening the dam’s gates.

    The geography of the attacks coincides with the map of global industrial powers. The United States, France and Japan have been the main targets, together accounting for 61 per cent of incidents. It is also worrying that attackers are not confined to one industry. Although the manufacturing sector is the most common victim, traces of intrusion have been found in healthcare facilities, construction and even urban infrastructure management systems. What emerges is a picture of opportunistic hacking: attackers are looking for any available controller that has been recklessly exposed to the public network.

    Technical analysis suggests that some of this activity is coming from infrastructure located in China, although the identity of the actors remains hidden behind intermediary server systems. For business decision-makers, however, the key conclusion is not to identify a specific culprit, but to realise a structural flaw in their own systems.

  • The hangover from euphoria, or how AI agents can blow through a year’s budget in a few hours

    The hangover from euphoria, or how AI agents can blow through a year’s budget in a few hours

    Not so long ago, artificial intelligence was supposed to be the ‘ultimate solution’ to productivity problems – a digital alchemist turning empty process flows into pure efficiency gold. The ball was in full swing and the champagne was pouring from the presentations of the models promised by suppliers.

    Today, however, instead of more breakthroughs in machine reasoning, something far less spectacular is whispered about in the corridors of business conferences: the happiness bill. For it turns out that the ticket of admission to the world of AI was not a one-off fee, but a dynamic, hard-to-tame subscription for the future, the cost of which can rise exponentially overnight.

    What we are witnessing is the birth of ‘token fever’. It’s a state where the enthusiasm of engineers collides with the dismay of CFOs. For decades, we have been accustomed to the SaaS model – predictable, fixed licence fees that were easy to budget for. Generative AI has shattered this order, introducing a ‘probabilistic’ model. Here, a mistake in one agent’s logic or an overly effusive prompt can burn up financial resources faster than traditional cloud infrastructure consumes electricity.

    Uber and a mistake worth billions

    If the tech industry was looking for the ‘canary in the coal mine’, it found it in San Francisco in April 2026. At the IA HumanX conference, Praveen Neppalli Naga, Uber’s CTO, gave a speech that sobered even the biggest optimists. The giant, which had invested an astronomical $3.4 billion in research and development in 2025, faced a wall: its annual budget for artificial intelligence had evaporated in just four months.

    It wasn’t a matter of one misguided investment decision, but a side effect of an engineering fantasy with no brakes. Uber, aiming for aggressive technology adoption, encouraged its developers to use agents like Claude Code en masse. The result? 11% of back-end code was already being generated by artificial intelligence, but the price for this ‘efficiency’ proved deadly. Without proper performance filters and oversight of token consumption, AI ceased to be a lever for savings and became an out-of-control spending engine.

    The case of Uber is a classic example of a ‘tsunami of tokens’. Autonomous agents, entering infinite iteration loops with no clear limits, can burn a fortune in the time it takes to drink an espresso. It’s a painful lesson for any CIO: innovation without financial architecture is just a very expensive hobby. Naga admitted that the company had to go back to the design table to completely redefine its strategy. Any company that deploys AI today without a rigorous profitability analysis risks having its success measured not by margin growth, but by the speed with which it exhausts its own resources.

    Goodbye SaaS, hello volatility

    We are bidding farewell to an era where the IT budget was like a fixed Netflix subscription – predictable, secure and giving a false sense of control. For years, the SaaS model accustomed us to per-user licensing, where the only risk was a surplus of accounts that no one used. Generative AI brutally ends this period of ‘licensing peace of mind’ by introducing a billing model that is more akin to electricity bills during an energy crisis than traditional software.

    The shift from fixed costs to variable costs is a fundamental paradigm shift. In 2024, IT departments were buying AI access in a lump sum. Today, in 2026, vendors such as OpenAI and Anthropic have eliminated unlimited Enterprise plans, introducing dynamic billing for token consumption. The reason is mundane: AI agents have destroyed the distribution curve on which the old business was based. The subscription model only worked when the ‘lec’ users subsidised the ‘intensive’ ones. One, when we started employing autonomous agents, the differences became absurd. Analyses show cases where a user paying $100 a month generated costs of $5,600 in a single billing cycle. A subsidy ratio of 25 to 1 is a straightforward path to supplier bankruptcy, hence the sharp turn towards ‘use-pay’ billing.

    This makes IT spending probabilistic. This radically differentiates AI from the traditional cloud. A forgotten server in AWS generates a fixed, linear cost. A poorly designed prompt or agent without iteration limits, on the other hand, can go into a loop and generate millions of useless tokens in seconds. In this new world, a programmer’s logical error doesn’t end up ‘crashing’ the application – it ends up draining the company account at the speed of light. This means an immediate redesign of IT finance and the abandonment of rigid budget frameworks in favour of flexible management of the ‘economics of inference’.

    Tsunami of tokens – a new unit of risk

    In the modern CIO’s dictionary, a new, much more predatory term has emerged alongside ‘technical debt’: the ‘token tsunami’. This is a phenomenon in which autonomous agents, rather than freeing up staff time, fall into loops of endless iterations, burning up budgets with the intensity of a steel mill. The problem is that a bot, unlike a human, never feels fatigue or shame for duplicating mistakes – it simply consumes resources until it encounters a hard limit or empties its account.

    The scale of the problem is such that even the biggest players have had to revise their dogmas. Gartner is sounding the alarm: by the end of 2027, up to 40% of agent-based AI projects will be cancelled. The reason? Not a lack of vision, but brutal mathematics – rising costs while lacking precise tools to measure real business value.

    Here is where the biggest paradox of 2026 manifests itself: the unit price per token is steadily falling, but the total bill is rising. Indeed, AI agents consume between 5 and even 30 times more units per task than a standard chatbot. This is a classic trap of scale – an efficiency that becomes economically inefficient by its sheer volume. If your AI strategy is based solely on the hope that ‘models will be cheaper’, you’re just building a castle in the sand that the coming tsunami will wash away in one billing cycle. Without rigorous control over what machines process and why, modern IT becomes hostage to its own unbridled computing power.

    AI FinOps – the new alchemy of IT finance

    If you thought Cloud FinOps was challenging, get ready for a no-holds-barred ride. Traditional cloud optimisation was about simple craftsmanship: shutting down unused servers and keeping an eye on instance reservations. AI FinOps is a completely different discipline – it’s probabilistic rather than deterministic resource management. Here, the unit of expenditure is no longer processor man-hours, but the cost of a useful response relative to the cost of an erroneous or ‘hallucinated’ response.

    In 2026, as many as 98% of FinOps teams consider spending on AI as their number one priority. The reason is simple: in the traditional cloud, a technical error rarely leads to an exponential increase in cost. In the world of AI agents, misconfigured prompt logic can burn through budgets faster than you can refresh your dashboard. This is forcing IT leaders to define a new metric – the economics of inference. We no longer count how much a model costs us, but how much the operational success gained from its work costs us.

    And that means rewriting dashboards from scratch. Classic management frameworks such as ITIL 4 or COBIT, while providing a solid base, today require immediate extensions to include prompt lifecycle management or agent iteration limits. AI FinOps is not just about Excel tables; it is a new management philosophy where an engineer must think like an economist and a financier must understand LLM architecture. Without this synergy, buying tokens is akin to pouring rocket fuel into a hole in the tank – the effect is spectacular, but extremely short-lived and frighteningly expensive.

    How not to burn through a decade of innovation

    The time window for non-punitive errors has just slammed shut. To avoid a ‘token tsunami’, organisations need to move from a phase of joyful adaptation to a phase of rigorous architecture. The first and most pressing step is to conduct a token consumption audit – not a general one, but a precise one, broken down by specific teams and use cases. When a query to a model can cost as much as a good cup of coffee, we need to know who is ordering a double espresso without a clear business need.

    The key to financial survival is the implementation of three technical foundations:

    • RAG (Retrieval-Augmented Generation): Providing the model with only the data it actually needs, drastically reducing the token ‘diet’.
    • Specialist models: Abandoning the ‘all-knowing’ giants in favour of smaller, cheaper and finely-trained models for repetitive tasks.
    • Corporate charter for the bot: Establish rigid iteration limits and budgets per agent. This is a matter of elementary financial hygiene.

    We also need to review how our people work with the technology. Identifying the ‘Centaurs’ (experts empowering their AI skills) and eliminating the ‘Automators’ (unreflectively delegating work to a machine) will allow a real increase in ROI. The most expensive and fastest way to waste an innovation budget is to buy millions of tokens just to have teams working exactly as they will in 2022, only with an on-screen chat interface.

     

  • Cyber360 wins NIKard cyber security contract

    Cyber360 wins NIKard cyber security contract

    The Polish company Cyber360 has just finalised a contract to implement advanced security systems at the Stefan Cardinal Wyszynski National Institute of Cardiology (NIKard). This project, funded by the National Reconstruction Plan, sends an important signal to the market: the digitalisation of Polish medicine is entering a phase of maturity in which the security of patient data is treated on a par with modern diagnostics.

    The choice of Cyber360 as the contractor for this task is no coincidence. The company will deliver a solution based on XDR (Extended Detection and Response) technology, which goes beyond traditional anti-virus protection. The system is to monitor not only workstations and servers, but also LAN traffic, user behaviour and cloud applications. A key element of the implementation is the centralisation of log management, which in practice means a reduction in incident response time from hours to minutes. From a medical facility management perspective, such a ‘digital shield’ minimises the risk of operational paralysis, which, in the case of a cardiology institute, could have dire consequences.

    Zbigniew Kniżewski, CEO of Cyber360, emphasises that the aim of the project is to standardise incident management processes. For the public and private sector in Poland, this is an important lesson in adapting to new security standards. The implementation of the contract is part of a broader market trend in which organisations – rather than building costly in-house cyber security teams – are increasingly relying on a turnkey model and external security operations centres (SOCs).

    The collaboration between NIKard and Cyber360 is also proof of how EU NIP funds are really stimulating the local IT sector. The investment is helping Polish medical entities meet the stringent requirements of upcoming regulations such as the NIS2 directive.

  • DeepSeek V4: New AI model optimised for Huawei chips

    DeepSeek V4: New AI model optimised for Huawei chips

    DeepSeek, the Chinese startup that destabilised the AI market last year with its low-cost models, has just made a move of a strictly strategic nature. The release of a familiarisation version of the V4 model demonstrates that the Chinese AI ecosystem is preparing for a permanent disconnect from Western infrastructure.

    A key differentiator of V4 is its strict optimisation for the Huawei Ascend processor architecture. While the Hangzhou-based startup has historically based its success on Nvidia chips, the current turn to domestic solutions is a response to growing regulatory pressure from Washington. Huawei has confirmed that the entire Ascend ‘super node’ product line already supports the new DeepSeek architecture, suggesting deep integration at the hardware-software level to minimise performance losses from not having access to the latest H100 or Blackwell units.

    In terms of content, V4 Pro positions itself at the top of the world. According to the manufacturer, the model outperforms other open-source solutions in general knowledge tests, second only to the closed model Gemini-Pro-3.1 from Google. The strategy of providing a flash and preview version allows the company to collect real-time feedback data, which is essential for calibrating parameters prior to final deployment.

    The market reaction to the launch was immediate and painful for competitors. The stock market listing of rivals such as Zhipu AI and MiniMax saw significant declines, confirming DeepSeek’s dominant position in China’s open-source sector. At the same time, the company finds itself at the centre of a geopolitical cyclone. The White House openly accuses Beijing Labs of systemic intellectual property theft, and DeepSeek itself faces allegations of misuse of data from its OpenAI and Anthropic models.

    For investors, however, DeepSeek remains one of the most promising assets in Asia. The company, controlled by High-Flyer Capital Management, is aiming for a valuation in excess of $20 billion. Interest in taking a stake from giants such as Alibaba and Tencent suggests that Chinese Big Tech sees DeepSeek not just as a technology provider, but as the foundation of a national technology stack.

  • The printer as a ‘Trojan horse’ in the corporate network? How to turn the weakest link into a secure part of the IT ecosystem

    The printer as a ‘Trojan horse’ in the corporate network? How to turn the weakest link into a secure part of the IT ecosystem

    Digital transformation in the SME sector has reached a tipping point, but in this technological rush, one of the most obvious elements of office infrastructure has been forgotten. While the attention of IT departments is focused on securing the cloud, implementing AI and protecting employee laptops, there are ‘sleeper agents’ in the corners of offices – multifunction devices (MFPs). Today, the printer is no longer just a simple peripheral; it is an advanced endpoint with its own processor, hard drive and operating system, permanently connected to the heart of the corporate network.

    This makes printing devices the biggest ‘blind spot’ (blind spot) of modern cyber security. The data is unforgiving: according to Quocirca’s Managed Print Services Landscape report, more than 60% of organisations admitted to having experienced a data security breach linked directly to their print infrastructure in the past year.

    Why do hackers ‘love’ printers so much? The answer is painful in its simplicity. These devices are rarely covered by log monitoring systems (SIEM), their firmware tends to be updated sporadically, and in many companies – horror of horrors – they still operate on default administrator passwords. For a cybercriminal, an unsecured printer is the perfect ‘Trojan horse’ – a silent port of entry that allows them to infiltrate a network without sounding the alarm on major defence systems.

    Anatomy of an attack: How does a printer become a base of operations?

    Today’s cybercriminal rarely attacks the most heavily guarded ‘front door’ of the IT infrastructure. Instead, he or she looks for a side entrance, which increasingly turns out to be an unsecured multifunctional device (MFP). The attack through the printer is a textbook example of a lateral movement strategy – once the device has been infiltrated, the attacker uses it as a base to silently scan the internal network and escalate privileges. Because MFPs rarely come under the magnifying glass of monitoring systems (SIEM), a hacker can spend months intercepting scanned documents or stealing data from the device’s hard drive, remaining completely invisible to traditional anti-viruses.

    Nor should we forget the simplest, physical dimension of risk. Confidential financial reports or personal data left unattended on a receiving tray is an invitation to a data leak, which can have dramatic consequences under the RODO regime. Sharp expert Szymon Trela points out that the foundation of defence here is rigorous configuration hygiene, which still remains the biggest challenge for IT departments:

    “Among the most important mistakes in the configuration of MFPs is the lack of settings to restrict access to the device. It is worth considering defining IP or MAC addresses of devices with print privileges and blocking unused ports, which significantly reduces the field of attack. A very restrictive but effective setting is also to create a list of applications and processes that can communicate with the MFP. The second group of settings are encryption issues – both network communication and data stored by the device, always using the latest versions of the protocols. And finally, automatic system software updates are key. New firmware versions respond to emerging threats and address critical security issues. These updates are downloaded from the manufacturer’s trusted servers, which in the case of Sharp is a standard option for our customers,” – says Szymon Trela, Product Manager at Sharp Systems Business Poland.

    From ‘weakest link’ to active protection

    In 2026, the endpoint protection paradigm has shifted from defensive access blocking towards active analytics and real-time anomaly detection. Modern MFPs have ceased to be passive recipients of data and have become intelligent security sensors. Thanks to the Security by Design architecture, solutions such as integration with antivirus engines (e.g. Bitdefender) or TPM (Trusted Platform Module) modules allow system integrity to be verified at the boot stage. If the system software has been compromised, the device will simply not boot, preventing the spread of infections within the network.

    However, the real revolution is happening in the active monitoring layer. In the age of AI-driven automated attacks, humans cannot react fast enough. Therefore, it is the device itself that must take on the role of gatekeeper. This approach turns the MFP from a potential ‘Trojan horse’ into an advanced defence post that not only protects itself, but also alerts the entire organisation to danger.

    Szymon Trela, Sharp
    source: Sharp

    “There are a number of solutions in modern MFPs that help to monitor IT networks for security. One example is the anti-virus software installed on the device. Its primary task is, of course, to detect viruses that may appear in the print data. But in addition to this function, it also monitors the device’s system software and detects potential attempts to infect it with viruses or malware. In addition to this, it scans all network traffic passing through the device, blocking attempts to use the MFP to break into the corporate network. Of course, any suspicious events can be reported to those responsible. This solution is extremely useful in smaller organisations that do not have dedicated departments responsible for security. Another solution is the detection of attempted DoS attacks. If too many communication attempts from the same IP addresses are detected within a certain time period, the device automatically blocks the suspicious addresses, creating a list of them. This process takes place in the background, but it is also possible to report these events to the relevant people. For corporate customers, it is extremely important to integrate MFPs with SIEM class systems, which report any incidents in real time.” – comments Szymon Trela, Product Manager at Sharp Systems Business Poland.

    The use of anti-virus software directly on the MFP is a ‘game changer’ for the SME sector. In small businesses, where one person often combines the roles of IT manager, administrator and technical support, any automation is at a premium. A device that blocks Denial of Service (DoS) attacks and cuts off suspicious IP addresses on its own acts like an invisible bodyguard.

    For the big players, on the other hand, integration with SIEM systems closes the infrastructure visibility gap that has been treated as an audit blind spot for years. It brings printer logs into the same dashboard as data from servers or firewalls, allowing for full event correlation and instant NIS2-compliant incident response. In this way, the MFP becomes a fully-fledged, active component of the cyber security ecosystem.

    Printer in the NIS2 and RODO regime: Technical standards

    In 2026, ‘compliance’ has become a matter of business survival. The entry into force of the stringent requirements of the NIS2 Directive and the evolving interpretation of RODO have meant that any gap in the infrastructure – including that ‘standing in the corner of the corridor’ – can give rise to severe financial penalties. For an auditor, a printer is no longer a peripheral device; it is a data processing node that must meet so-called state-of-the-art cyber security standards.

    The biggest challenge for security engineers today is to ensure the so-called Root of Trust, i.e. an unchanging foundation of trust in the hardware. Standard software security is not enough. If a device’s firmware is altered by an attacker, no amount of file encryption will help.

    “It is extremely important to have functionalities that guarantee the integrity of the device, i.e. to ensure that the device systems have not been altered in an unauthorised way. For this reason, features that automatically detect the correctness of the system software and BIOS and, if they are changed, automatically restore the correct version are of great importance. This protects the device at the most basic level and ensures overall security. The second extremely important issue is the reporting of any suspicious events to the responsible persons, and it is important, even in the smallest organisation, to designate such persons and establish a procedure to deal with such cases. Finally, it should be noted that the technical aspects are only part of the security problem. In order to manage it properly, especially in the context of RODO, it is necessary to introduce other measures, related to the protection of documents, primarily these are: secure printing and user authorisation.” – says Szymon Trela, Product Manager at Sharp Systems Business Poland.

    The approach mentioned by the expert fits perfectly with the Security by Design concept. The mechanisms of a ‘self-healing’ BIOS (Self-Healing BIOS) is a key parameter that procurement departments should look at today. From a NIS2 perspective, a device that can detect manipulation in its own code and restore a secure version of the software drastically reduces risk in the supply chain.

    However, technology is only half the battle. RODO requires evidence of data protection at every point of contact. That’s why features such as Secure Print, which requires a contactless card to be swiped or a PIN to be entered at the device, are ceasing to be a convenient add-on and becoming an essential means of control. Without them, every payroll or contract left on a collection tray is a potential security incident that, in 2026, you must report to a supervisory authority within 72 hours.

  • Leaked controversial Claude Mythos model. Anthropic investigates security incident

    Leaked controversial Claude Mythos model. Anthropic investigates security incident

    Anthropic, one of the leading forces in the artificial intelligence sector, is facing a serious image and operational challenge. As reported by Bloomberg News, the company’s most advanced model, Claude Mythos Preview, was leaked to a small group of unauthorised users. The incident comes at a crucial time for the startup, which is just positioning its technology as the foundation of a new era of cyber security.

    The leak occurred on 7 April, exactly the day Anthropic announced ‘Project Glasswing’. The initiative was intended to allow selected organisations to test the Mythos model under controlled conditions, mainly to strengthen their defences against digital attacks. Meanwhile, a group of users on a private online forum gained access to the tool almost immediately after the official announcement. Although reports indicate that the model has not been used for criminal purposes to date, the fact that it is regularly used outside the manufacturer’s control raises legitimate concerns.

    A spokesperson for Anthropic confirmed that the company is investigating the matter, pointing to a third-party vendor environment as the likely source of the leak. The incident could complicate Anthropic’s relationship with regulators. Mythos is a model with an unprecedented ability to identify software vulnerabilities. It is a ‘dual-use’ tool – in the hands of defenders it patches systems, but in the hands of hackers it can become a precision weapon. The loss of control of such a powerful resource, even if temporary, reinforces the arguments of advocates of strict oversight of models critical to national security. Anthropic must now prove that it can effectively protect the technology that is supposed to protect the world.

  • eAuditor V10 AI – scalability and flexibility in modern IT management

    eAuditor V10 AI – scalability and flexibility in modern IT management

    eAuditor is an advanced IT security and management platform that brings significant enhancements and new operational capabilities in the V10 AI version. The system offers full freedom of technology choice – from support for open-source databases and containerised solutions to support for alternative virtualisation platforms. It allows you to build high-performance environments tailored to market challenges and optimise costs by moving away from restrictive licensing models.

    Innovations in eAuditor V10 AI

    Learn about the key new features and improvements made to the system:

    • Support for Proxmox virtualisation: Extension of support to open source environments, used among other things as an alternative to VMware.
    • Container-based architecture: Support for Docker, Kubernetes and OpenShift technologies in an on-premise model for instant scalability and easier application management.
    • Native support for PostgreSQL: Implementation of a new database engine allowing full optimisation of operating costs by eliminating the need to purchase MS SQL Server licences.
    • Mobile User Panel: A dedicated Android app that integrates the service request handling processes within the eAuditor and eHelpDesk systems, increasing the availability of technical support.

    Key advantages and benefits of eAuditor V10 AI

    The changes made to eAuditor V10 AI translate directly into business value:

    • lower implementation and maintenance costs – thanks to the use of PostgreSQL and open source technology,
    • better adaptation to market changes – migrating from VMware to Proxmox without losing visibility of the environment,
    • greater infrastructure flexibility – thanks to support for container technologies (Docker, Kubernetes),
    • increased user efficiency – through the introduction of a new interface (GUI) and a Mobile User Panel for Android.

    Source: BTC

  • AI performance crisis. Why is GitHub blocking access to new Copilot accounts?

    AI performance crisis. Why is GitHub blocking access to new Copilot accounts?

    GitHub’s decision to temporarily halt new sign-ups for its Pro, Pro+ and student subscriptions is a rare moment in the world of Big Tech, when the demand for artificial intelligence brutally collides with the physical limitations of the infrastructure. Microsoft, the platform’s owner, admits outright: Copilot has become a victim of its own success. The tool is consuming resources at a rate that the original business model simply did not anticipate.

    What initially looked like a technical problem actually exposes a deeper crisis in the ‘token economy’. Developers have stopped treating Copilot as a simple code autocomplete and have started using it for complex architectural tasks and deep refactoring. Such advanced operations require gigantic computing power and generate costs that are starting to strain GitHub’s margins. The company admitted that the current load “far exceeds” the assumptions on which the subscription plan structure was based.

    The introduction of a lock-in for new users is meant to protect the experience of those who are already paying, but even they must prepare to tighten their belts. GitHub has announced the introduction of strict session and weekly limits, which de facto ends the era of unlimited AI support. The most painful cut for professionals is the depletion of the library of available models. Claude Opus 4.5 and 4.6 have disappeared from the Pro and Pro+ subscriptions, leaving only the latest version 4.7 as the top-of-the-line offering.

    GitHub is openly encouraging developers to ‘save money’ and use smaller, cheaper models more often whenever possible. It’s a strategic shift that will force a new form of hygiene on IT departments – managing token budgets will become just as important as managing cloud budgets.

    The current registration paralysis is probably just a temporary pause needed to reformat the offering. We can expect that when Copilot goes back on sale, its pricing will be much more reflective of real process costs, perhaps moving to a ‘pay-as-you-go’ model for the most demanding tasks. Microsoft is proving that even with unlimited capital, computing capacity remains a scarce resource that must be managed with ruthless discipline.

  • AI can get a PhD in physics, but it won’t read a watch

    AI can get a PhD in physics, but it won’t read a watch

    Artificial intelligence AD 2026 resembles a brilliant polymath who defends his PhD in quantum physics on Monday only to fail a shoelace tying test on Tuesday. According to Stanford University’s latest Artificial Index Report 2026, we have reached a point where algorithms have not only caught up, but overtaken human experts in science and multimodal reasoning. This is no longer evolution; it is a digital blitzkrieg, with the industrial sector producing more than 90 per cent of the leading models and four out of five people at universities treating AI like a third hemisphere brain.

    However, this brilliant picture has a crack in it, which researchers call the ‘jagged frontier’ (jagged frontier). It is a fascinating paradox: a model that solves Olympic mathematics tasks without flinching capitulates before the … the dial of an analogue watch. The example of the Gemini Deep Think, which only reads the time correctly 50.1% of the time, is as comical as it is sobering.

    We are used to thinking of progress as a rising, smooth line. The Stanford report brutally verifies this belief. It shows a technology with almost godlike analytical capabilities, which at the same time stumbles over thresholds that a kindergartner passes effortlessly. This means that we are implementing systems that are at once superhumanly clever and painfully naive. The core competency in IT is no longer ‘implementing AI’ per se, but precisely mapping those invisible cliffs where the machine’s logic ends and its digital myopia begins.

    Peaks of possibility: When an algorithm puts a scientist to shame

    When you look at the hard data from the SWE Bench-Verified test, you get the impression that developers should slowly consider changing their profession to goose farming. A score jumping from 60% to 100% in just twelve months is a complete takeover of the sandbox where humans ruled until recently. AI is now reaching doctoral level in the sciences and crushing the mathematical competition, becoming the analytical partner we have been dreaming of for decades.

    The problem arises, however, when that same digital titan has to look at the wall. Literally. The aforementioned case of Gemini Deep Think and its 50.1 per cent efficiency in reading an analogue clock is a manifestation of the jagged frontier – a phenomenon in which the limit of an algorithm’s capabilities is not a continuous line, but a jagged boundary. The machine reasoning is multi-modal, operating on abstractions we don’t grasp, while stumbling over simple perceptual mechanisms we have mastered at the age of six.

    The same is true of AI agents. Their effectiveness in operational tasks in the OSWorld environment has increased spectacularly – from a niche 12% to an impressive 66%. This sounds like a success, until you realise that in business practice this means an error in one in three attempts. In the structured world of corporate systems, a margin of error of 33% is not ‘progress’, but a massive operational risk.

    This erraticity makes AI like a brilliant pianist who can play the most difficult Liszt sonata, but doesn’t always hit the keys when he is supposed to perform ‘There’s a kitten on a hurdle’. It is this unpredictability, not a lack of computing power, that is the biggest challenge for IT system architects today. We need to learn how to manage technology that is both omniscient and …. disarmingly inattentive.

    The wall you can’t see: Gemini and the unfortunate watch

    The implementation of artificial intelligence in organisations has reached a staggering 88% in 2026. In the business world, this is a result that is close to a plebiscite for breathing room – almost everyone is doing it, because no one wants to stay in a digital stasis. However, this massive flight to the front is taking place to the accompaniment of a worrying grinding of the brakes, or rather a chronic lack of them. The Stanford report sounds the alarm: responsible AI is not advancing at the same pace as its raw capabilities.

    In the last year, the number of documented AI incidents has risen to 362, up from 233 the year before, which should give policymakers pause for thought. These are no longer theoretical mistakes in sterile labs, but real stumbling blocks at the interface between technology and market. To make matters worse, engineers are facing an innovative paragraph 22: safety versus precision. Research shows that attempts to ‘tame’ models and put ethical muzzles on them often result in a decline in their effectiveness. We want AI to be safe, but when it becomes too cautious, it stops delivering the brilliant results we hired it for.

    It’s a classic technology stalemate. Almost all the makers of top models are keen to brag about their performance records, but when it comes to reporting on liability tests, there is suddenly a significant silence in the industry. The IT sector is speeding towards the horizon in a car with seatbelts still in the conceptual stage.

    Business on the brink: 88% adoption and no brakes

    The geopolitical chessboard of AI in 2026 resembles a game in which the incumbent grandmaster, the US, is starting to glance nervously at the clock – and not just because Gemini is having trouble reading it. Although US dollars are still flowing in a broad stream, the technological advantage over China has almost completely melted away. Worse still, the most valuable ammunition in this race – human genius – is beginning to evaporate from Silicon Valley.

    The dramatic 89 per cent drop in the number of AI researchers moving to the US since 2017 (with as much as 80 per cent of this occurring in the last year!) is a painful side-effect of migration policy and the rising cost of H-1B visas. While the US is betting on massive data centres, China is taking the lead in patents, industrial robotics and the number of scientific publications. New dots are also shining on the innovation map: South Korea dominates in patent density, and Singapore and the United Arab Emirates are becoming the training grounds for the world’s fastest technology adoption, leaving the giants behind.

    The open source movement, which effectively democratises access to AI, and the issue of public trust play a key role in this new split. There is a gigantic gap here: 73% of experts see AI as having a bright future, but only 23% of the public share this enthusiasm. Those regions that can tame this fear will win. The European model of regulation, although often criticised for being slow, builds a foundation of trust that is dramatically lacking in the US – with record low levels of faith in government.

    The conclusion? Success in AI is no longer just about having the most powerful model, but about navigating the geopolitical and human fabric in which that model operates. AI is a new form of national sovereignty – and one that is not built on silicon alone, but above all on open doors for talent and wise, trustworthy law.

  • KSC amendment – 38,000 entities under new digital rigour

    KSC amendment – 38,000 entities under new digital rigour

    On 3 April 2026, the Polish regulatory landscape underwent a permanent change, presenting thousands of organisations with a challenge that can no longer be pushed to the operational margins. The amendment to the National Cyber Security System(KSC) Act is not just a bureaucratic update, but above all a signal to management boards that digital security has become an integral part of business responsibility. Estimates from the Ministry of Digitalisation indicate the enormous scale of the changes: the new regulations will cover around 38,000 entities, of which more than 10,000 are private companies operating in sectors critical to the functioning of the state.

    It is crucial to understand the new hierarchy of importance. The legislator has introduced a division between ‘key’ and ‘important’ entities, which determines not only the scope of obligations but also the level of potential financial risk. Key sectors, including energy, banking, transport and digital infrastructure, among others, face penalties of up to €10 million or 2 per cent of revenue. Even those deemed ‘important’ – including food producers, chemicals or waste management companies – could pay up to €7 million for failings. Significantly, the amendment ends the era of impersonal corporate liability; managers sitting on boards of directors will now be directly responsible for breaches.

    The implementation calendar is tight and does not forgive tardiness. Although companies have one year to fully adapt their systems, the first important deadlines are already in the coming months. On 7 May 2026, the self-registration process begins for entities that will not be listed ex officio, with a deadline of 3 October.

    At the same time, the Ministry of Digitalisation announces the publication of detailed requirements for Information Security Management Systems (ISMS), which is expected to unify security standards across the country. In practice, this means an urgent revision of IT strategy and the implementation of advanced technical and organisational measures. For the modern enterprise in Poland, the KSC ceases to be a matter of compliance and becomes a prerequisite for maintaining operational continuity and market confidence in an increasingly dangerous digital environment.

  • Anthropic Mythos: Why is the Bundesbank warning against a new AI model?

    Anthropic Mythos: Why is the Bundesbank warning against a new AI model?

    According to Joachim Nagel, President of the Bundesbank, the financial industry has faced a dilemma in which advanced artificial intelligence ceases to be an assistant and becomes an autonomous tool capable of destabilising global infrastructure.

    The German central bank chief’s concerns centre on Mythos ‘ unprecedented ability to code and identify vulnerabilities. The model demonstrates an almost instinctive proficiency in finding software bugs, which in the hands of cybercriminals could spell the end of security based on ‘legacy systems’. Many financial institutions still operate on IT architectures built decades ago that, while stable, were not designed to fend off attacks generated by a machine that thinks faster than any team of cyber security experts.

    Nagel argues that Anthropic’s current strategy of making Mythos available only to a narrow, select group of companies and organisations creates a dangerous asymmetry. Instead of protecting the market, limited access can exacerbate systemic risk. If only a few have the shield of Mythos’ effectiveness, the rest of the sector is left exposed to the shot, which from a banking supervisor’s perspective is an unacceptable distortion of competition. The demand is clear: all relevant institutions must have access to the same defensive tools to avoid technological stratification, which could lead to a domino effect in the event of a successful attack on the weaker link.

    However, the Bundesbank’s perspective goes beyond mere cyber-security, striking at the foundations of monetary policy. Nagel challenges the widespread optimism that artificial intelligence will be a cure for inflation through increased productivity. On the contrary, he warns of price pressures resulting from the huge demand for investment in AI infrastructure and the drastic increase in the cost of electricity required to power data centres.

    Most intriguing, however, is the warning against ‘tacit collusion by algorithms’. There is evidence to suggest that sophisticated models can autonomously learn to optimise profits by keeping prices above competitive levels, doing so without direct communication between firms.

    For central banks tasked with maintaining price stability, this new form of algorithmic rate setting presents a challenge that will require entirely new regulatory tools. In a world dominated by models such as Mythos, central bankers’ vigilance must now extend not just to spreadsheets but to lines of code themselves.

  • Defence.Hub and WAT join forces in the development of anti-drone systems

    Defence.Hub and WAT join forces in the development of anti-drone systems

    NewConnect-listed Defence.Hub ‘s signing of an agreement with the Military University of Technology to develop the MACS system from Seraphim Defence Systems is a signal that the Polish C-UAS (Counter-UAS) sector is moving from a conceptual phase to hard operational integration.

    For investors following the defence-tech market, this collaboration has a strategic dimension. Defence.Hub is positioning itself as an integration platform, and WAT is not just another technical university. It is an institution directly supervised by the Ministry of Defence, with unique knowledge of the operational requirements of the Polish army. This partnership allows for the validation of technology in near-real conditions, which in the arms industry is a key condition for moving beyond the prototype phase.

    At the heart of the agreement is the MACS platform, a modular drone countermeasure system based on an advanced fusion of sensors and artificial intelligence. In an era of evolving threats, where fibre-optic-controlled FPV drones or drones equipped with autonomous algorithms are on the frontline, traditional jamming methods are becoming insufficient. Collaborative work on the detection, tracking and neutralisation of such objects, supported by edge computing solutions, is expected to make MACS a product that responds to the dynamically changing needs of the modern battlefield.

    From a business perspective, Defence.Hub is building an ecosystem that connects Polish technical thought with capital and institutional backing. Seraphim Defence Systems, as a finalist in the NATO Innovation Challenge 2025, already enjoys international recognition. Substantive support from WAT scientists, who have been shaping Polish military engineering for seven decades, significantly reduces technological risk and accelerates the commercialisation process of dual-use solutions.

    In the current geopolitical situation, the need for anti-drone systems has ceased to be a theoretical consideration and has become a pressing market need. Defence.Hub, by integrating academic competence with the flexibility of technology start-ups, faces the opportunity to create a real counterweight to global players in the CEE region. The success of this venture will be measured not only by the number of signed letters of intent, but above all by the effectiveness of the deployment of ready-made systems in defence structures.

  • Remanufacturing instead of production – Canon business model recognised by analysts

    Remanufacturing instead of production – Canon business model recognised by analysts

    The latest Quocirca Sustainability Leaders 2025 report confirms that Canon is consolidating its leadership position, but the real story lies in the brand perception data and process logistics.

    The most striking indicator is the increase in market confidence. In just one year, the percentage of respondents perceiving Canon as a brand with a strong connection to the environment has risen from 38% to 49%. This is a rare jump in a mature industry, suggesting that long-term investment in ‘remanufacturing’ is beginning to resonate with the needs of a business grappling with new ESG reporting regulations.

    The foundation of this strategy is not new products, but existing ones. Canon has been developing refurbishment processes since 1992, and its factory in Giessen, Germany, has become a benchmark for efficiency in a closed-loop economy. Refurbishing the imageRUNNER ADVANCE ES series, with at least 90 per cent of parts coming from recycled sources, is not only a nod to the planet, but above all an optimisation of the supply chain and material costs. For the business customer, this means access to ‘Certified Used’ standard equipment, which combines reliability with a lower carbon footprint, which is becoming crucial when tendering with environmental requirements.

    Quocirca analysts highlight another aspect: the digitalisation of the service. The move to intelligent remote services drastically reduces the need for physical interventions by technicians. In this way, Canon has killed two birds with one stone – it has reduced transport emissions and increased the operational efficiency of its customers.

    Partnerships with ClimatePartner and a platinum rating from EcoVadis position the Japanese manufacturer as a safe choice in uncertain times. Kyosei’s philosophy of the common good, while sounding ideological, is applied very pragmatically by Canon: from reducing plastic in packaging to an innovative ‘container round use’ method that eliminates empty runs in logistics.

  • SME cyber security 2026: How to build 360° resilience?

    SME cyber security 2026: How to build 360° resilience?

    As we enter the second quarter of 2026, the threat landscape for the SME sector resembles a minefield where the mines themselves can look for a target. According to the latest ENISA Threat Landscape report, cybercrime has undergone the ultimate metamorphosis: from guerrilla attacks to a fully professionalised Ransomware-as-a-Service (RaaS) model. Nowadays, the aggressor does not need to be a brilliant programmer – all they need is a purchased subscription and AI algorithms that scan the network with surgical precision for the smallest cracks.

    The statistics are merciless: as many as 43% of all cyber attacks target small and medium-sized companies directly. Most striking, however, is the distance between risk and preparedness – only 14% of businesses in this sector feel realistically prepared to fend off an incident.

    This is because the notion that security is ‘an IT department problem’ is still being perpetuated. True security requires a radical paradigm shift: moving from protecting the devices themselves to protecting processes, identities and data flows. If you only protect the ‘boxes’, you are leaving the door open to the heart of your business.

    Extended definition of endpoint

    In the traditional security model that prevailed just a few years ago, the ‘endpoint’ was a static and easily defined concept – usually a laptop in an employee’s bag or a workstation connected to a company cable. However, in 2026, this framing is a dangerous oversimplification. Today’s endpoint is any piece of infrastructure with an IP address and access to data resources: from smart CCTV cameras and environmental sensors, to private smartphones (BYOD), to sophisticated printing and document digitisation systems.

    It is the latter, often treated as ‘background devices’, that are becoming a favourite gateway for cybercriminals. The modern MFP is in reality a powerful computer with its own operating system, hard drive and direct access to the user directory. Poorly secured, it becomes the ideal launching point for a lateral movement attack. A hacker does not need to break into the best-protected server; all he needs to do is take control of the printer and, from within it, silently and methodically scan the internal network for vulnerabilities in other devices.

    Understanding these dynamics requires decision-makers in the SME sector to abandon the ‘box protection’ mindset in favour of protecting the entire information flow cycle.

    “In many SME companies, security is still mainly associated with the employee’s laptop and the antivirus installed on it. The problem is that today’s IT environment has long ceased to end with the PC. From our perspective, what is most often overlooked are those elements that “just run in the background” – network devices, servers, printers or access to cloud systems from private devices. A very often underestimated area is also the user accounts themselves – because today it is the identity, not the device, that is the main target of attack. The key change is that a cyber-attack no longer has to ‘enter via a virus’. A single hijacked account or employee inattention is enough. Therefore, classic antivirus, while still necessary, no longer provides the full picture. It protects a fragment of the environment, but does not show what is happening in the entire company ecosystem. And today, security is precisely the ability to combine all these elements into one coherent whole.” – says Roman Porechin, Business Development Manager at Sharp Systems Business Poland.

    Zero Trust architecture as a foundation for SMEs

    The traditional security model, based on building a ‘digital fortress’ and trusting everything inside the corporate network, has become an anachronism. It is worth noting that, at a time when distributed team-based and hybrid working models are becoming popular, the notion of a secure office perimeter no longer exists. A solution that has gone from the enterprise segment to ‘under the thatch’ of smaller companies is the Zero Trust architecture. Its foundation is a simple but relentless principle: ‘never trust, always verify’.

    For the SME sector, implementing Zero Trust is a hard economic calculation. Citing data from IBM’s Cost of a Data Breach report, companies that have implemented this model save an average of USD 1.5 million on the impact of potential data leaks compared to organisations relying on legacy systems.

    However, the biggest barrier to implementing rigorous policies in smaller companies is the fear of decreased efficiency. Decision makers fear that additional layers of verification will turn work into a constant battle with the system. And how are business systems designed to combine high levels of restriction with the fluidity and intuitiveness of working in a hybrid environment?

    Roman Porechin Sharp
    Roman Porechin, Sharp Systems Business Poland

    “At Sharp we take a very practical approach. We start by analysing the way the organisation works, rather than imposing ready-made security policies. We first identify the key processes and access to systems, and then build the policies in such a way that they are least impactful on the user. We place great emphasis on ensuring that the employee has access to exactly what they need – without excessive privileges, but also without unnecessary barriers. In practice, this means, among other things, using mechanisms that simplify work, such as single sign-on or a contextual approach to access. The system itself assesses whether a login is secure and when additional steps are required. In this way, security works ‘in the background’ and the user sees an orderly and predictable environment rather than additional complications. In many cases, customers even notice an improved user experience after implementation, because we eliminate access chaos and unnecessary infrastructure elements,” comments Roman Porechin, Sharp Systems Business Polska.

    From the perspective of the modern SME, Zero Trust is therefore not just a ‘shield’, but an optimisation tool. Rather than building walls that make it difficult for employees themselves to move around, smart systems use contextual security. If an employee logs in from the office at 9am from a trusted laptop, the system will not harass them with ten levels of verification. However, if the same attempt is made at 3am from another continent, the barriers will be immediately raised.

    Infrastructure management and the role of AI

    The SME sector is facing a painful paradox: on the one hand, cyber threats have become more sophisticated than ever; on the other, the shortage of skilled IT staff has reached a critical level. Small and medium-sized companies can rarely afford to maintain their own 24/7 Security Operations Centre (SOC). In this reality, Managed Security Services, the outsourcing of security to specialised partners, has become the dominant model. It allows organisations to benefit from professional security without having to fight for scarce and expensive experts in the labour market.

    Another pillar of modern defence is artificial intelligence, which has ceased to be a marketing buzzword and has become a necessity. Because attacks today are automated and driven by AI, defences must react at machine speed. Predictive systems do not wait for an incident to occur – they analyse billions of signals in real time, detecting anomalies in the behaviour of users or devices before these turn into real data leaks.

    However, in this whole technological arms race, the most serious change has been in the philosophy of risk management itself. However, technology is only part of the success – the change in attitude of decision-makers is key.

    “Until recently, the prevailing approach was ‘let’s protect ourselves so that nothing happens’. Today we know that this is not a realistic assumption. The focus has changed – from prevention alone to the ability to detect and respond quickly. Because, in practice, it is not a question of whether an incident happens, but when and how quickly it is noticed. The companies that do best do not necessarily have the most tools. Instead, they have a structured approach and know what to do when there is a problem. For SME companies with limited budgets, the key is to focus on the fundamentals:
    – securing access to systems,
    – regular updates,
    – a working and tested backup.
    Only on this can the next elements be built. The biggest mistake is to try to ‘buy security’ as a single solution. In practice, it’s always a process and it’s consistency in building it that makes the biggest difference.” – Roman Porechin, Business Development Manager at Sharp Systems Business Poland, concludes.

    Security as a process

    It is thus becoming clear that cyber security has ceased to be a purely ‘technical’ domain and has become a strategic foundation for any modern SME. The most important lesson from our analysis is simple: security is not a product that can be bought and forgotten about, but a process that needs to be managed on an ongoing basis. Predictions for the coming years point to a further escalation of attacks using deep machine learning, which will make the line between a genuine message and a phishing attempt almost invisible to the human eye.

  • The AI 2030 paradox: Why does data investment still not guarantee returns?

    The AI 2030 paradox: Why does data investment still not guarantee returns?

    There is a specific kind of gold rush today. The companies that are winning the race for successful AI implementations are investing up to four times more in the foundations – data quality, management and staff readiness – than the market’s prodigies. These are gigantic outlays that are akin to building an ultra-modern skyscraper. The problem is that despite the luxurious façade, you can still hear the structure creaking in the boardrooms.

    This is where the title paradox manifests itself. Although the money stream flowing towards data ‘hygiene’ is unprecedented, according to Gartner data, only one in three technology leaders are looking to the future with genuine optimism. Only 39% believe that current investments in artificial intelligence will realistically improve the company’s bottom line. What we have, then, is a situation where the biggest players are buying the most expensive insurance policies while still being unsure whether their ship will even make it to port.

    Why is this happening? Because the mandate of data and analytics leadership by 2030 is evolving dramatically. It is no longer about simply ‘owning’ the technology, but about providing the perceptual intelligence and contextual foundations that allow machines to realistically understand the business world. The success of AI has become a challenge of trust and a complete overhaul of the value architecture. Building an AI-first strategy is a pioneering leadership that must face the fact that the old ways of counting profits are no longer compatible with the new algorithmic reality.

    The trap of traditional ROI, or measuring the future with an old ruler

    Trying to measure the potential of AI with a classic ROI is akin to assessing the usefulness of electricity solely through the lens of candlelight savings. In corporate excel sheets, where every investment has to “bounce back” in a few quarters, building deep contextual foundations often looks like an expensive whim. It is this accounting corset – trying to measure the future with an old ruler – that causes anxiety for nearly two-thirds of technology leaders.

    Meanwhile, the modern approach to D&A requires a shift from static ROI to value composition. Leaders who actually set the pace are no longer treating AI as just another ERP module to be ‘fobbed off’. Instead, they are building a value flywheel: a model in which the efficiency gains gained from AI are deliberately and systemically reinvested in the further development of perceptual intelligence and innovation.

    In this view, AI becomes the company’s new operating system, not just a tool for cost optimisation. If an organisation gets stuck in an endless loop of Proof of Concept cycles, looking for ad hoc savings, it will probably never achieve the scale necessary to survive the 2030 transformation. This is because the real value comes not when an algorithm is implemented, but when integrated engineering practices allow trust and context to scale across the enterprise.

    dane

    Foundations are not just about technology

    In 2030, competitive advantage will not be measured by terabytes of data, but by the precision with which machines can interpret it. This is where the new mandate of the D&A leader comes in: to deliver *perceptual intelligence. Until now, the role of the data director has often been reduced to being the custodian of a digital archive; today, he or she must become the architect of the organisation’s ‘collective brain’.

    The technology itself is merely the engine. The real fuel is context, treated as critical infrastructure. AI agents, lacking a deep semantic layer, resemble brilliant chess players playing in total darkness – they have immense computing power, but cannot see the board. Without a trusted contextual foundation, autonomous systems become mere expensive confabulation factories. This is why shifting the centre of gravity from ‘having models’ to ‘designing meaning’ is so crucial.

    Data management is now a steering wheel support system. Pace-setting companies are able to embed privacy and ethics issues directly into the workflows of AI agents. For trust in the world of algorithms is not a sentiment – it is a technical necessity. Without it, every decision made by AI will be fraught with risks that no rational board would accept. A true D&A leader understands that his or her job is no longer to provide dry reports, but to build a foundation on which AI can finally stop guessing and start realistically understanding the business.

    Strategy 2030: AI-first as a state of mind, not a shopping list

    Ultimately, AI-first transformation is not an IT project, but a test of leadership maturity. By 2030, D&A leaders must abandon the role of technology providers in favour of architects of new operating models. True scaling requires the courage to break out of the ‘endless loop of Proof of Concept cycles’ and move to deeply integrated engineering practices. Data, software and context must stop operating in silos – in the new reality, they are one inseparable organism.

    Let us return to the initial paradox: why do only 39% of leaders believe in the financial success of their investments? This scepticism is paradoxically a good sign. It shows that the market is moving out of its phase of childlike admiration for ‘magical’ algorithms and is beginning to understand the scale of the challenge. True return on investment in AI is not a matter of luck, but of consistently building trust and perceptual intelligence.