Tag: AI

  • Why are AI agents becoming the target of cyber attacks? Trend overview 2026

    Why are AI agents becoming the target of cyber attacks? Trend overview 2026

    Over the past eighteen months, the enterprise sector has moved from a fascination with generative artificial intelligence to a phase of actively implementing it into operational processes. A key trend in this evolution is the shift from passive language models (LLMs) to AI agents – autonomous systems capable not only of generating text but also of performing tasks: writing code, managing email communications, calling APIs or authorising financial transactions. With this agility, however, comes a critical new category of threats: Indirect Prompt Injection (IPI). Recent data from reports by Google and Forcepoint shed new light on the scale and sophistication of these attacks, suggesting that agent systems security will become one of the biggest challenges for chief information security officers (CISOs) in the coming years.

    IPI mechanism: Data as instructions

    Traditional prompt injection attacks relied on direct manipulation of the model by the user (e.g. attempting to ‘jailbreak’ a bot by giving it the command to ignore security). Indirect Prompt Injection is a much more insidious phenomenon. It involves inserting malicious instructions into content that the AI agent processes as input – this could be web pages, PDF documents, emails or code repositories.

    The problem lies in the very architecture of current LLM models, which cannot absolutely separate system instructions (issued by the tool developer) from external data. When an AI agent analyses a web page in search of information, it may come across hidden text, which the model will interpret as a new overarching command. As a result, the attacker takes control of the agent’s logic, instructing it to, for example, send sensitive data to an external server or perform a destructive operation on the user’s file system.

    Analysis of market trends

    Google Security Research researchers, analysing CommonCrawl resources, point to an alarming trend. Between November 2025 and February 2026, there was a 32 per cent increase in the number of detected malicious injection attempts in publicly accessible web resources. This relatively short time frame demonstrates the dynamism with which the criminal community is adapting to new technologies.

    From a market perspective, Google’s observation on cost-benefit calculus is key. Until recently, IPI attacks were considered the realm of academic research – they were difficult to implement and often failed due to the instability of the results generated by AI. Now, with the increased reliability and agility of agents, these attacks are becoming ‘viable’. AI’s ability to autonomously call external tools (tool calling) means that a successful injection of instructions has an immediate and measurable financial or operational impact.

    The Google study allowed the current IPI trials to be categorised into five groups:

    1. Harmless jokes: Attempts to change the tone of an agent’s response.
    2. Helpful tips: Suggesting preferential answers to the model (often on the edge of ethics).
    3. Optimisation for AI (AI-SEO):Hidden phrases to position products in assistants’ responses.
    4. Deterring agents: Instructions prohibiting AI from indexing or summarising a particular page.
    5. Malicious attacks: Data exfiltration and sabotage (deletion of files, destruction of backups).

    Although the latter are often at an experimental stage at present, their increasing complexity suggests that it is only a matter of time before they enter the phase of mass attacks.

    From coding assistants to financial transactions

    The Forcepoint report provides concrete evidence of how IPI manifests itself in professional software and financial tools. Experts have identified ten verified indicators of attacks targeting popular tools such as GitHub Copilot, Cursor and Claude Code.

    The attack scenario is mundane: a programmer uses an AI agent to analyse a library or documentation on an external site. This site contains a hidden AI instruction. When the agent ‘reads’ the site, it is instructed to execute a command in the terminal that destroys local backups. Since the agent has permission to operate on the file system (which is essential in a programmer’s job), the command can be executed without additional verification.

    Even more dangerous are attempts at financial fraud. Forcepoint points to cases where complete transaction instructions are sewn into web content, e.g. PayPal.me links with a predefined amount along with step-by-step instructions on how the agent is to finalise the payment. In systems where AI has access to digital wallets or corporate payment systems, the risk of capital loss becomes immediate.

    The paradox of detection and the challenges for business

    One of the most worrying findings from the Forcepoint report is the so-called detection paradox. The phrases and keywords used by attackers to inject hints are identical to the terminology the cyber security community uses to describe and analyse these threats. This renders simple filters based on word blacklists ineffective – either blocking legitimate expert communications or letting intelligently worded attacks through.

  • Why PCHE chips are key to the next stage of artificial intelligence development

    Why PCHE chips are key to the next stage of artificial intelligence development

    If it seems like the semiconductor market is back in the spotlight, that’s because it really is. ASML, the world’s leading supplier of photolithography systems, recently reported that the company’s share value has risen by around 97% in the last six months, reflecting a renewed increase in investment in chip manufacturing. However, behind the headlines is a less high-profile, and perhaps equally important, issue related to managing the heat generated both during chip production and by the AI equipment that depends on them, explains Ben Kitson, director of business development at chemical etch manufacturing company Precision Micro.

    The current cycle is atypical. Technology giants are pouring huge resources into AI data centres, generating unprecedented demand for high-performance hardware. What’s more, much of this computing hardware has already been contracted, according to Simply Wall St.

    This combination poses a real challenge for infrastructure planning, as AI system operators face high power density and unprecedented cooling requirements in their data centres.

    Traditional data centres were designed for racks with power consumption of 5-10 kW, but AI clusters now consume 30-50 kW per rack. Furthermore, advanced GPU and accelerator platforms are now reaching 100-120 kW per rack, meaning that air cooling alone is no longer sufficient.

    Thermal management at the forefront

    Thermal constraints are finally starting to attract attention. In May 2025, semiconductor giant Nvidia announced that hyperscale operators are installing tens of thousands of its latest GPUs every week, and the pace of deployment is set to accelerate further with the introduction of the ‘Blackwell Ultra’ platform.

    According to the company’s public development plan, its next ‘Ruby Ultra’ architecture will allow more than 500 GPUs to be housed in a single server rack with up to 600 kW of power consumption, highlighting the scale of the cooling challenges currently facing artificial intelligence infrastructure.

    Across the AI infrastructure sector, thermal stability has become a key constraint not only in chip design, but also in the infrastructure required to power and cool high-density computing environments.

    High-performance liquid cooling systems and microchannel heat exchangers have ceased to be niche solutions and have become essential components. The same engineering principles – precise control of fluid flow, maximisation of heat transfer and production of compact components with tight tolerances – apply to many applications today.

    The engineering expertise gained in high-precision semiconductor environments is now being applied to printed circuit heat exchanger (PCHE) technology for AI data centres, which is the interface between electronics manufacturing and energy infrastructure.

    Why PCHE systems matter

    PCHE systems are not just a more advanced version of conventional designs such as shell-and-tube or plate-and-frame heat exchangers. They are smaller, lighter and more efficient, making them ideal for space-constrained and high-density installations.

    In data centres, this translates into a higher number of racks per square metre without compromising reliability, while at the same time reducing the energy required to cool the computing equipment.

    Energy efficiency is another factor, as AI workloads are predicted to cause a significant increase in global electricity demand. Goldman Sachs forecasts an increase of up to 165% by 2030, meaning that every watt of energy used for cooling counts.

    Compact, high-performance PCHEs not only save installation space, but also help control energy costs and improve the overall energy efficiency ratio (PUE), becoming a key component of high-density AI infrastructures in hyperscale environments.

    Chemical digestion scaling

    The very qualities that make PCHEs so effective – microchannels, large heat transfer area and tight tolerances – simultaneously make them difficult to manufacture. Conventional machining allows prototyping, but is slow, causes burrs and is not cost-effective for volume production.

    Chemical etching, on the other hand, eliminates these problems by creating all the channels simultaneously over the entire surface of the plate. In this way, precise stress-free structures are achieved, and then the finished heat exchanger plate is created by diffusion welding.

    Chemical etching company Precision Micro has been producing PCHE boards since the technology was introduced to the market in the 1990s. It has a specialist 4,100sq m facility that is capable of processing thousands of boards up to 1.5 metres long and up to 2 mm thick each week. This enables batch production of etched plates and makes the facility one of the largest sheet etching centres of its kind in the world.

    This is because scaling production to thousands of boards requires tightly controlled chemical processes and rigorous quality control. Few suppliers in the world have the expertise, production capacity and process control system necessary to mass-produce etched PCHE boards.

    Pressure on the supply chain

    Producing PCHE boards in high volumes requires significant capital investment and advanced technological processes. Although new production capacity is emerging in Asian markets, many OEMs in Europe and North America continue to emphasise reliability, process repeatability and quality as key criteria when sourcing precision components.

    Working with established regional partners can reduce logistical complexity, improve intellectual property protection and ensure consistent quality, especially when supply chains are looking for local suppliers of core competencies.

    Etched flow plates and high-performance heat exchangers are an essential, but often invisible, part of the AI ecosystem. Through precise temperature control, they help data centres maintain high-density computing racks without the risk of overheating and enable reliable and efficient scalability of AI infrastructure.

    This is the hidden reality behind the renewed increase in investment in chip manufacturing. Innovation is not just driven by smaller transistors, new node geometries or more efficient GPUs. They also depend on the physical infrastructure that enables these technologies to operate reliably at industrial scale.

    PCHE chips may not attract as much attention as chips or artificial intelligence models, but they underpin the performance, efficiency and scalability of both. Where every watt of energy and every fraction of a degree of temperature counts, precision thermal hardware is quietly enabling the progress of one of the fastest growing technology cycles of the last decade.

    Source: Precision Micro

  • The end of Microsoft’s monopoly on OpenAI. What does the new agreement mean for the market?

    The end of Microsoft’s monopoly on OpenAI. What does the new agreement mean for the market?

    The most influential partnership in the history of artificial intelligence has just undergone a fundamental transformation. Microsoft and OpenAI have announced a renegotiation of the terms of their partnership, ending Azure’s previous exclusivity to offer ChatGPT creator models. The new agreement paves the way for the startup to have a direct presence in the ecosystems of Microsoft’s biggest competitors, including Amazon Web Services and Google Cloud. While the original deal, backed by a $13 billion investment, defined the current AI landscape, both parties recognised that the existing formula had become too cramped for their growing ambitions.

    Strategic foundations for change

    Under the new arrangement, Microsoft will remain OpenAI’s primary cloud partner until 2032, and the startup has committed to spend at least $250 billion on Azure services. The Redmond giant retains priority rights to deploy new products, but loses its sales monopoly. In return, Microsoft has secured a 20 per cent share of OpenAI’s revenue by 2030, importantly including if the startup achieves so-called artificial general intelligence (AGI). Previous provisions would have allowed OpenAI to stop paying Microsoft when it made the technological leap to AGI, which was a significant risk for the investor. At the same time, Microsoft stops sharing profits with OpenAI from offering their models within Azure, simplifying the giant’s financial structure.

    The loosening of ties is a move dictated by the maturity of the market. OpenAI, as it prepares to go public, needs to demonstrate its ability to scale its enterprise business beyond a single vendor’s infrastructure, especially in a clash with the rising Anthropic. From Microsoft’s perspective, giving up some control of OpenAI’s model distribution is the price of taking off the burden of funding the giant infrastructure needed by the startup and, perhaps most importantly, easing pressure from antitrust authorities in the US and Europe. Satya Nadella’s strategy is evolving towards diversification; Microsoft is increasingly promoting its own models and third-party solutions within Copilot, reducing the critical dependence on a single technology provider.

    It is worth noting the increasing freedom to build multi-cloud strategies. It seems a good direction to review current contracts with cloud providers for upcoming AWS Bedrock or Google Vertex AI deployments, which will optimise costs and reduce latency. It is also worth monitoring the pace of Microsoft’s in-house models, as their growing role in Copilot 365 may soon offer better value for money than standard external models.

  • Benchmarks won over loyalty: Microsoft bets on Anthropic. A blow for OpenAI

    Benchmarks won over loyalty: Microsoft bets on Anthropic. A blow for OpenAI

    Microsoft ‘s choice of the Claude Mythos model as the foundation for its new software security architecture sets a significant precedent in the Redmond-based technology giant’s strategy. This decision, while at first glance it may appear to be a mere operational adjustment, in reality reveals deeper market shifts in the generative AI sector and changing priorities in digital risk management. Analysing the facts of Anthropic‘s model integration, a clear pattern can be discerned: Microsoft is moving from a phase of fascination with general AI capabilities to a phase of rigorous, benchmarked selection of specialised tools.

    A key reference point for this decision is the CTI-REALM benchmark, co-developed by Microsoft engineers. The fact that Claude Mythos scored highest in it, distancing the GPT-5.4-Cyber model, is a market signal that cannot be ignored. Microsoft, as OpenAI’s largest partner and investor, has shown that pragmatism and hard data, rather than corporate loyalty, wins in critical areas such as cyber security. This strategic approach to model vendor diversification avoids vendor lock-in and ensures access to the most effective solutions in specific niches.

    From a business perspective, integrating Mythos directly into the software development cycle is a classic implementation of the ‘Shift-Left’ strategy. The cost of fixing a vulnerability discovered at the production stage is many times higher than eliminating the bug at the code writing stage. The cited data about the detection of a vulnerability that has existed for 27 years and the success of Mozilla, which identified 271 vulnerabilities thanks to Claude Mythos, are not just technological curiosities. They are concrete indicators of return on investment (ROI). For companies operating on huge collections of legacy code, automating security audits using such high-precision models means saving thousands of hours of high-level professionals and drastically reducing the legal and reputational risks associated with potential data leaks.

    The market reaction to Mythos’ capabilities, manifested, for example, by concern in the banking and insurance sectors and interest from the NSA, suggests that there is a new kind of regulatory risk involved. Claude Mythos is seen as a dual-use technology. The model’s ability to instantaneously map vulnerabilities makes it a defensive tool of unprecedented power, but also a potential offensive instrument. The embargo under consideration by US agencies and the restrictive access under Project Glasswing suggest that in the near future, access to the most advanced cyber security models may be rationed in a similar way to armament or high-end cryptographic technologies. Companies must therefore take into account in their strategies the fact that technological advantage in the area of AI may be limited by state interventions.

    It is also worth noting a painful market lesson for OpenAI. The fact that the release of GPT-5.4-Cyber failed to draw attention away from the Anthropic solution is indicative of the change in expectations of corporate customers. The market has become saturated with promises of versatility; solutions with proven effectiveness in specific usage scenarios are now sought after. Microsoft, by implementing Claude into its 365 applications and its internal processes, de facto legitimises Anthropic as an equal, and in some respects superior, technology partner. This suggests that OpenAI’s dominance may be more fragile than stock market valuations would indicate.

    For Microsoft itself, the move is an attempt to run away from mounting criticism over historical security lapses. Redmond has understood that with the current scale and complexity of the Windows and Azure ecosystem, traditional methods of manual code review are inefficient. Using Claude Mythos as an intelligent filter to verify developers’ work is an attempt to systemically address the problem of technology debt. If Microsoft manages to significantly reduce the number of critical vulnerabilities in its products with this solution, it will set a new market standard to which all SaaS and Cloud players will have to adapt.

  • Alphabet invests $40bn in Anthropic. Is it fighting for control with Amazon?

    Alphabet invests $40bn in Anthropic. Is it fighting for control with Amazon?

    Alphabet, Google’s parent company, has announced its intention to invest up to $40 billion in Anthropic, a startup that for the Mountain View giant is both a key cloud customer and one of its fiercest competitors in the race for supremacy in artificial intelligence.

    The structure of this deal reflects the new reality of funding the AI sector, where capital is closely tied to specific outcomes. Google will put up $10 billion in cash at a $350 billion valuation for the startup. The remaining 30 billion will only be deployed once the developers of the Claude model achieve rigorous performance targets. For Alphabet, this is not only an investment of capital, but above all an attempt to forge closer ties with an entity that has emerged as a leader in niches where Google is still searching for its identity.

    The move comes just days after Amazon pledged its own $25 billion cash injection to Anthropic. A situation where two of the world’s biggest cloud providers are bidding for the same startup shows how desperately tech giants need the success of external models to drive sales of their own computing infrastructure.

    Anthropic’s driving force is no longer just the promise of secure artificial intelligence, but real financial results. The company’s annual revenue has just surpassed the $30 billion barrier, an impressive jump from the $9 billion recorded at the end of 2025. Investors are responding enthusiastically, with some offers from the venture capital market valuing the company at up to $800 billion. Underpinning this growth is Claude Code, a tool that dominates the software segment, and Anthropic’s Cowork agent, whose plug-ins have recently caused jitters in the stock markets, driving down the valuations of traditional SaaS software companies.

    Anthropic’s greatest challenge, however, remains its ‘hunger for power’. Scaling the models requires infrastructure of a scale never seen before. The startup is securing this through multi-year agreements with Broadcom and CoreWeave, as well as an ambitious $50 billion plan to build its own data centres in the US.

    The market is divided into specialised tools and Anthropic, with its focus on coding and autonomous agents, is proving that it is possible to successfully challenge general-purpose models. Alphabet, by investing in Anthropic, is buying itself an insurance policy in case the startup’s approach proves to be the target business standard.

  • The hangover from euphoria, or how AI agents can blow through a year’s budget in a few hours

    The hangover from euphoria, or how AI agents can blow through a year’s budget in a few hours

    Not so long ago, artificial intelligence was supposed to be the ‘ultimate solution’ to productivity problems – a digital alchemist turning empty process flows into pure efficiency gold. The ball was in full swing and the champagne was pouring from the presentations of the models promised by suppliers.

    Today, however, instead of more breakthroughs in machine reasoning, something far less spectacular is whispered about in the corridors of business conferences: the happiness bill. For it turns out that the ticket of admission to the world of AI was not a one-off fee, but a dynamic, hard-to-tame subscription for the future, the cost of which can rise exponentially overnight.

    What we are witnessing is the birth of ‘token fever’. It’s a state where the enthusiasm of engineers collides with the dismay of CFOs. For decades, we have been accustomed to the SaaS model – predictable, fixed licence fees that were easy to budget for. Generative AI has shattered this order, introducing a ‘probabilistic’ model. Here, a mistake in one agent’s logic or an overly effusive prompt can burn up financial resources faster than traditional cloud infrastructure consumes electricity.

    Uber and a mistake worth billions

    If the tech industry was looking for the ‘canary in the coal mine’, it found it in San Francisco in April 2026. At the IA HumanX conference, Praveen Neppalli Naga, Uber’s CTO, gave a speech that sobered even the biggest optimists. The giant, which had invested an astronomical $3.4 billion in research and development in 2025, faced a wall: its annual budget for artificial intelligence had evaporated in just four months.

    It wasn’t a matter of one misguided investment decision, but a side effect of an engineering fantasy with no brakes. Uber, aiming for aggressive technology adoption, encouraged its developers to use agents like Claude Code en masse. The result? 11% of back-end code was already being generated by artificial intelligence, but the price for this ‘efficiency’ proved deadly. Without proper performance filters and oversight of token consumption, AI ceased to be a lever for savings and became an out-of-control spending engine.

    The case of Uber is a classic example of a ‘tsunami of tokens’. Autonomous agents, entering infinite iteration loops with no clear limits, can burn a fortune in the time it takes to drink an espresso. It’s a painful lesson for any CIO: innovation without financial architecture is just a very expensive hobby. Naga admitted that the company had to go back to the design table to completely redefine its strategy. Any company that deploys AI today without a rigorous profitability analysis risks having its success measured not by margin growth, but by the speed with which it exhausts its own resources.

    Goodbye SaaS, hello volatility

    We are bidding farewell to an era where the IT budget was like a fixed Netflix subscription – predictable, secure and giving a false sense of control. For years, the SaaS model accustomed us to per-user licensing, where the only risk was a surplus of accounts that no one used. Generative AI brutally ends this period of ‘licensing peace of mind’ by introducing a billing model that is more akin to electricity bills during an energy crisis than traditional software.

    The shift from fixed costs to variable costs is a fundamental paradigm shift. In 2024, IT departments were buying AI access in a lump sum. Today, in 2026, vendors such as OpenAI and Anthropic have eliminated unlimited Enterprise plans, introducing dynamic billing for token consumption. The reason is mundane: AI agents have destroyed the distribution curve on which the old business was based. The subscription model only worked when the ‘lec’ users subsidised the ‘intensive’ ones. One, when we started employing autonomous agents, the differences became absurd. Analyses show cases where a user paying $100 a month generated costs of $5,600 in a single billing cycle. A subsidy ratio of 25 to 1 is a straightforward path to supplier bankruptcy, hence the sharp turn towards ‘use-pay’ billing.

    This makes IT spending probabilistic. This radically differentiates AI from the traditional cloud. A forgotten server in AWS generates a fixed, linear cost. A poorly designed prompt or agent without iteration limits, on the other hand, can go into a loop and generate millions of useless tokens in seconds. In this new world, a programmer’s logical error doesn’t end up ‘crashing’ the application – it ends up draining the company account at the speed of light. This means an immediate redesign of IT finance and the abandonment of rigid budget frameworks in favour of flexible management of the ‘economics of inference’.

    Tsunami of tokens – a new unit of risk

    In the modern CIO’s dictionary, a new, much more predatory term has emerged alongside ‘technical debt’: the ‘token tsunami’. This is a phenomenon in which autonomous agents, rather than freeing up staff time, fall into loops of endless iterations, burning up budgets with the intensity of a steel mill. The problem is that a bot, unlike a human, never feels fatigue or shame for duplicating mistakes – it simply consumes resources until it encounters a hard limit or empties its account.

    The scale of the problem is such that even the biggest players have had to revise their dogmas. Gartner is sounding the alarm: by the end of 2027, up to 40% of agent-based AI projects will be cancelled. The reason? Not a lack of vision, but brutal mathematics – rising costs while lacking precise tools to measure real business value.

    Here is where the biggest paradox of 2026 manifests itself: the unit price per token is steadily falling, but the total bill is rising. Indeed, AI agents consume between 5 and even 30 times more units per task than a standard chatbot. This is a classic trap of scale – an efficiency that becomes economically inefficient by its sheer volume. If your AI strategy is based solely on the hope that ‘models will be cheaper’, you’re just building a castle in the sand that the coming tsunami will wash away in one billing cycle. Without rigorous control over what machines process and why, modern IT becomes hostage to its own unbridled computing power.

    AI FinOps – the new alchemy of IT finance

    If you thought Cloud FinOps was challenging, get ready for a no-holds-barred ride. Traditional cloud optimisation was about simple craftsmanship: shutting down unused servers and keeping an eye on instance reservations. AI FinOps is a completely different discipline – it’s probabilistic rather than deterministic resource management. Here, the unit of expenditure is no longer processor man-hours, but the cost of a useful response relative to the cost of an erroneous or ‘hallucinated’ response.

    In 2026, as many as 98% of FinOps teams consider spending on AI as their number one priority. The reason is simple: in the traditional cloud, a technical error rarely leads to an exponential increase in cost. In the world of AI agents, misconfigured prompt logic can burn through budgets faster than you can refresh your dashboard. This is forcing IT leaders to define a new metric – the economics of inference. We no longer count how much a model costs us, but how much the operational success gained from its work costs us.

    And that means rewriting dashboards from scratch. Classic management frameworks such as ITIL 4 or COBIT, while providing a solid base, today require immediate extensions to include prompt lifecycle management or agent iteration limits. AI FinOps is not just about Excel tables; it is a new management philosophy where an engineer must think like an economist and a financier must understand LLM architecture. Without this synergy, buying tokens is akin to pouring rocket fuel into a hole in the tank – the effect is spectacular, but extremely short-lived and frighteningly expensive.

    How not to burn through a decade of innovation

    The time window for non-punitive errors has just slammed shut. To avoid a ‘token tsunami’, organisations need to move from a phase of joyful adaptation to a phase of rigorous architecture. The first and most pressing step is to conduct a token consumption audit – not a general one, but a precise one, broken down by specific teams and use cases. When a query to a model can cost as much as a good cup of coffee, we need to know who is ordering a double espresso without a clear business need.

    The key to financial survival is the implementation of three technical foundations:

    • RAG (Retrieval-Augmented Generation): Providing the model with only the data it actually needs, drastically reducing the token ‘diet’.
    • Specialist models: Abandoning the ‘all-knowing’ giants in favour of smaller, cheaper and finely-trained models for repetitive tasks.
    • Corporate charter for the bot: Establish rigid iteration limits and budgets per agent. This is a matter of elementary financial hygiene.

    We also need to review how our people work with the technology. Identifying the ‘Centaurs’ (experts empowering their AI skills) and eliminating the ‘Automators’ (unreflectively delegating work to a machine) will allow a real increase in ROI. The most expensive and fastest way to waste an innovation budget is to buy millions of tokens just to have teams working exactly as they will in 2022, only with an on-screen chat interface.

     

  • Intel is back in the game – results above expectations and massive share gains

    Intel is back in the game – results above expectations and massive share gains

    After years of strategic drift and management missteps, Intel under Lip-Bu Tan is beginning to prove that its turnaround plan is more than just aggressive cost-cutting. Its latest second-quarter revenue guidance, settling in at $14.3 billion, not only beat Wall Street’s expectations, but triggered a euphoric 19 per cent rise in share value. This signals that the former Silicon Valley icon has found its path in a world dominated by artificial intelligence.

    A strategic shift towards CPUs and AI agents

    Key to Intel’s optimism is a paradigm shift in the data centre sector. While the first phase of the AI boom undeniably belonged to Nvidia’s GPUs, used to train powerful models, the market is now entering the deployment (inference) phase. This is where Intel’s CPUs are regaining relevance. In an architecture based on autonomous AI agents, requiring advanced reasoning and handling complex workloads, traditional CPUs are proving to be an indispensable part of the infrastructure. Lip-Bu Tan makes it clear that this demand is not just wishful thinking, but a real trend coming from the major cloud providers.

    Partnership with Musk as foundation of foundry

    The biggest image and technology victory of recent days, however, is securing Tesla as a key customer for the upcoming 14A technology process. Elon Musk’s participation in the Terafab project is a massive credibility boost for Intel’s manufacturing business (Intel Foundry). The partnership aims to create next-generation processors for robotics and data centres, directly challenging TSMC’s dominance. While financial details remain confidential, the strategic alliance with players such as Musk, Nvidia and SoftBank gives Intel the fuel it needs to transform itself into a modern, contract chip foundry.

    A risky road to 2030

    Despite its financial success in the first quarter, where adjusted earnings per share were 29 cents, Intel is still treading on thin ice. The transformation from ‘old giant’ to ‘nimble foundry athlete’ requires not only breaking through manufacturing bottlenecks, but also maintaining the pace of innovation in the face of increasing competition from AMD and ARM. For investors, however, the current valuation may be an attractive entry point. If Intel successfully manages demand for silicon in the coming robotics era, today’s ‘high-stakes gamble’ could end with the company returning to the throne of technological empire.

  • Japan sets up task force against Mythos AI threats

    Japan sets up task force against Mythos AI threats

    When Anthropic announced that its latest AI model, Mythos, had identified thousands of previously unknown security vulnerabilities in operating systems, Silicon Valley was in an uproar. But it was in Tokyo, the heart of Asia’s conservative financial system, that the most concrete policy decision was made. Finance Minister Satsuki Katayama announced the creation of a special task force to secure Japan’s banking sector against a new era of threats generated by artificial intelligence.

    For the market, Japan’s move means that the traditional approach to cyber security based on cycles of patching holes is about to become history. The new entity includes key state institutions, including the Financial Services Agency and the Bank of Japan, as well as private giants and exchange operator Japan Exchange Group. The scale of this coalition reflects the seriousness of the situation: Mythos is not just another language model, but a tool capable of detecting and exploiting software vulnerabilities at a speed that human administrators cannot match.

    For the financial sector, this is a critical scenario. Banks, despite modern interfaces, still rely heavily on a complex, multi-layered IT architecture, the elements of which still remember previous decades. The interconnectedness of transactional systems means that a single breakout can have a knock-on effect. Katayama rightly points out that in a world of real-time operations, a digital crisis immediately translates into a loss of confidence in the market and real losses of liquidity.

    Although there have been no incidents directly related to the Mythos model to date, Japan’s pre-emptive action sets a new regulatory standard. Regulators in the US and Europe have also issued warnings, suggesting banks urgently review their defences. However, it was the Japanese administration that was the first to openly acknowledge that there was a ‘crisis at hand’.

    Executives in the fintech and banking sectors should take note of the fact that AI has dramatically reduced the amount of time that a security vulnerability remains a theoretical threat. Security investments should now evolve towards autonomous systems capable of responding at the same speed that models such as Mythos can strike. The fight for financial stability in 2026 is no longer about whether a system will be attacked, but whether it will have time to repair itself before the market sees an anomaly.

  • DeepSeek V4: New AI model optimised for Huawei chips

    DeepSeek V4: New AI model optimised for Huawei chips

    DeepSeek, the Chinese startup that destabilised the AI market last year with its low-cost models, has just made a move of a strictly strategic nature. The release of a familiarisation version of the V4 model demonstrates that the Chinese AI ecosystem is preparing for a permanent disconnect from Western infrastructure.

    A key differentiator of V4 is its strict optimisation for the Huawei Ascend processor architecture. While the Hangzhou-based startup has historically based its success on Nvidia chips, the current turn to domestic solutions is a response to growing regulatory pressure from Washington. Huawei has confirmed that the entire Ascend ‘super node’ product line already supports the new DeepSeek architecture, suggesting deep integration at the hardware-software level to minimise performance losses from not having access to the latest H100 or Blackwell units.

    In terms of content, V4 Pro positions itself at the top of the world. According to the manufacturer, the model outperforms other open-source solutions in general knowledge tests, second only to the closed model Gemini-Pro-3.1 from Google. The strategy of providing a flash and preview version allows the company to collect real-time feedback data, which is essential for calibrating parameters prior to final deployment.

    The market reaction to the launch was immediate and painful for competitors. The stock market listing of rivals such as Zhipu AI and MiniMax saw significant declines, confirming DeepSeek’s dominant position in China’s open-source sector. At the same time, the company finds itself at the centre of a geopolitical cyclone. The White House openly accuses Beijing Labs of systemic intellectual property theft, and DeepSeek itself faces allegations of misuse of data from its OpenAI and Anthropic models.

    For investors, however, DeepSeek remains one of the most promising assets in Asia. The company, controlled by High-Flyer Capital Management, is aiming for a valuation in excess of $20 billion. Interest in taking a stake from giants such as Alibaba and Tencent suggests that Chinese Big Tech sees DeepSeek not just as a technology provider, but as the foundation of a national technology stack.

  • AI performance crisis. Why is GitHub blocking access to new Copilot accounts?

    AI performance crisis. Why is GitHub blocking access to new Copilot accounts?

    GitHub’s decision to temporarily halt new sign-ups for its Pro, Pro+ and student subscriptions is a rare moment in the world of Big Tech, when the demand for artificial intelligence brutally collides with the physical limitations of the infrastructure. Microsoft, the platform’s owner, admits outright: Copilot has become a victim of its own success. The tool is consuming resources at a rate that the original business model simply did not anticipate.

    What initially looked like a technical problem actually exposes a deeper crisis in the ‘token economy’. Developers have stopped treating Copilot as a simple code autocomplete and have started using it for complex architectural tasks and deep refactoring. Such advanced operations require gigantic computing power and generate costs that are starting to strain GitHub’s margins. The company admitted that the current load “far exceeds” the assumptions on which the subscription plan structure was based.

    The introduction of a lock-in for new users is meant to protect the experience of those who are already paying, but even they must prepare to tighten their belts. GitHub has announced the introduction of strict session and weekly limits, which de facto ends the era of unlimited AI support. The most painful cut for professionals is the depletion of the library of available models. Claude Opus 4.5 and 4.6 have disappeared from the Pro and Pro+ subscriptions, leaving only the latest version 4.7 as the top-of-the-line offering.

    GitHub is openly encouraging developers to ‘save money’ and use smaller, cheaper models more often whenever possible. It’s a strategic shift that will force a new form of hygiene on IT departments – managing token budgets will become just as important as managing cloud budgets.

    The current registration paralysis is probably just a temporary pause needed to reformat the offering. We can expect that when Copilot goes back on sale, its pricing will be much more reflective of real process costs, perhaps moving to a ‘pay-as-you-go’ model for the most demanding tasks. Microsoft is proving that even with unlimited capital, computing capacity remains a scarce resource that must be managed with ruthless discipline.

  • The Meta’s new strategy. Employees teach their successors AI

    The Meta’s new strategy. Employees teach their successors AI

    Inside Menlo Park, a fundamental shift in the definition of white-collar work is currently taking place. The Met, led by Mark Zuckerberg, is implementing a system that transforms the daily activities of engineers and managers into the raw material for building autonomous AI agents. The programme, called the Model Capability Initiative (MCI), is not only a new monitoring tool, but above all a signal that Silicon Valley is entering a new, aggressive phase of automation.

    According to internal company notes, MCI records mouse movements, clicks and keystrokes of employees in the US. The tool also takes occasional snapshots of the screen to teach AI models the subtleties of human interaction with the software – from handling keyboard shortcuts to navigating complex drop-down menus. What was previously an intuitive human craft becomes a training data set.

    Meta’s technical director, Andrew Bosworth, leaves no illusions about the purpose of this initiative, currently operating under the Agent Transformation Accelerator (ATA) programme. The company’s vision is of a world where AI agents do most of the work and the role of humans is reduced to that of supervisor and equalizer. To achieve this, Meta must first ‘clone’ the behavioural patterns of its top professionals.

    This strategy is inextricably linked to a deep restructuring of the workforce structure. Meta is not only planning further job cuts, but is also blurring the lines between traditional roles by introducing the universal title of ‘AI developer’. The creation of the Applied AI (AAI) team aims to create systems capable of writing, testing and shipping code independently. In this model, the software engineer ceases to be a developer and becomes a teacher of the algorithm, ultimately replacing it in repeatable processes.

    However, the initiative raises serious questions about the limits of surveillance in the white-collar sector. While real-time tracking of movements has so far been the domain of logistics staff or delivery drivers, the transfer of these methods to engineering offices sets a precedent. Legal experts point to a profound gap between the liberal approach in the US and the strict regulations in Europe. While this surveillance is legally permissible in the US, in the European Union, GDPR regulations and national labour protection laws would likely prevent the implementation of MCI on such a scale.

    Meta spokesperson Andy Stone assures that the data is not used to assess performance and that the company has safeguards in place to protect sensitive content. But for the business market, the lesson is clear: Meta is putting everything on the line. If the ‘Agent Transformation’ experiment succeeds, the company will gain an efficiency advantage that competitors may not be able to make up for without similarly compromising the privacy of their staff.

  • AI can get a PhD in physics, but it won’t read a watch

    AI can get a PhD in physics, but it won’t read a watch

    Artificial intelligence AD 2026 resembles a brilliant polymath who defends his PhD in quantum physics on Monday only to fail a shoelace tying test on Tuesday. According to Stanford University’s latest Artificial Index Report 2026, we have reached a point where algorithms have not only caught up, but overtaken human experts in science and multimodal reasoning. This is no longer evolution; it is a digital blitzkrieg, with the industrial sector producing more than 90 per cent of the leading models and four out of five people at universities treating AI like a third hemisphere brain.

    However, this brilliant picture has a crack in it, which researchers call the ‘jagged frontier’ (jagged frontier). It is a fascinating paradox: a model that solves Olympic mathematics tasks without flinching capitulates before the … the dial of an analogue watch. The example of the Gemini Deep Think, which only reads the time correctly 50.1% of the time, is as comical as it is sobering.

    We are used to thinking of progress as a rising, smooth line. The Stanford report brutally verifies this belief. It shows a technology with almost godlike analytical capabilities, which at the same time stumbles over thresholds that a kindergartner passes effortlessly. This means that we are implementing systems that are at once superhumanly clever and painfully naive. The core competency in IT is no longer ‘implementing AI’ per se, but precisely mapping those invisible cliffs where the machine’s logic ends and its digital myopia begins.

    Peaks of possibility: When an algorithm puts a scientist to shame

    When you look at the hard data from the SWE Bench-Verified test, you get the impression that developers should slowly consider changing their profession to goose farming. A score jumping from 60% to 100% in just twelve months is a complete takeover of the sandbox where humans ruled until recently. AI is now reaching doctoral level in the sciences and crushing the mathematical competition, becoming the analytical partner we have been dreaming of for decades.

    The problem arises, however, when that same digital titan has to look at the wall. Literally. The aforementioned case of Gemini Deep Think and its 50.1 per cent efficiency in reading an analogue clock is a manifestation of the jagged frontier – a phenomenon in which the limit of an algorithm’s capabilities is not a continuous line, but a jagged boundary. The machine reasoning is multi-modal, operating on abstractions we don’t grasp, while stumbling over simple perceptual mechanisms we have mastered at the age of six.

    The same is true of AI agents. Their effectiveness in operational tasks in the OSWorld environment has increased spectacularly – from a niche 12% to an impressive 66%. This sounds like a success, until you realise that in business practice this means an error in one in three attempts. In the structured world of corporate systems, a margin of error of 33% is not ‘progress’, but a massive operational risk.

    This erraticity makes AI like a brilliant pianist who can play the most difficult Liszt sonata, but doesn’t always hit the keys when he is supposed to perform ‘There’s a kitten on a hurdle’. It is this unpredictability, not a lack of computing power, that is the biggest challenge for IT system architects today. We need to learn how to manage technology that is both omniscient and …. disarmingly inattentive.

    The wall you can’t see: Gemini and the unfortunate watch

    The implementation of artificial intelligence in organisations has reached a staggering 88% in 2026. In the business world, this is a result that is close to a plebiscite for breathing room – almost everyone is doing it, because no one wants to stay in a digital stasis. However, this massive flight to the front is taking place to the accompaniment of a worrying grinding of the brakes, or rather a chronic lack of them. The Stanford report sounds the alarm: responsible AI is not advancing at the same pace as its raw capabilities.

    In the last year, the number of documented AI incidents has risen to 362, up from 233 the year before, which should give policymakers pause for thought. These are no longer theoretical mistakes in sterile labs, but real stumbling blocks at the interface between technology and market. To make matters worse, engineers are facing an innovative paragraph 22: safety versus precision. Research shows that attempts to ‘tame’ models and put ethical muzzles on them often result in a decline in their effectiveness. We want AI to be safe, but when it becomes too cautious, it stops delivering the brilliant results we hired it for.

    It’s a classic technology stalemate. Almost all the makers of top models are keen to brag about their performance records, but when it comes to reporting on liability tests, there is suddenly a significant silence in the industry. The IT sector is speeding towards the horizon in a car with seatbelts still in the conceptual stage.

    Business on the brink: 88% adoption and no brakes

    The geopolitical chessboard of AI in 2026 resembles a game in which the incumbent grandmaster, the US, is starting to glance nervously at the clock – and not just because Gemini is having trouble reading it. Although US dollars are still flowing in a broad stream, the technological advantage over China has almost completely melted away. Worse still, the most valuable ammunition in this race – human genius – is beginning to evaporate from Silicon Valley.

    The dramatic 89 per cent drop in the number of AI researchers moving to the US since 2017 (with as much as 80 per cent of this occurring in the last year!) is a painful side-effect of migration policy and the rising cost of H-1B visas. While the US is betting on massive data centres, China is taking the lead in patents, industrial robotics and the number of scientific publications. New dots are also shining on the innovation map: South Korea dominates in patent density, and Singapore and the United Arab Emirates are becoming the training grounds for the world’s fastest technology adoption, leaving the giants behind.

    The open source movement, which effectively democratises access to AI, and the issue of public trust play a key role in this new split. There is a gigantic gap here: 73% of experts see AI as having a bright future, but only 23% of the public share this enthusiasm. Those regions that can tame this fear will win. The European model of regulation, although often criticised for being slow, builds a foundation of trust that is dramatically lacking in the US – with record low levels of faith in government.

    The conclusion? Success in AI is no longer just about having the most powerful model, but about navigating the geopolitical and human fabric in which that model operates. AI is a new form of national sovereignty – and one that is not built on silicon alone, but above all on open doors for talent and wise, trustworthy law.

  • Anthropic Mythos: Why is the Bundesbank warning against a new AI model?

    Anthropic Mythos: Why is the Bundesbank warning against a new AI model?

    According to Joachim Nagel, President of the Bundesbank, the financial industry has faced a dilemma in which advanced artificial intelligence ceases to be an assistant and becomes an autonomous tool capable of destabilising global infrastructure.

    The German central bank chief’s concerns centre on Mythos ‘ unprecedented ability to code and identify vulnerabilities. The model demonstrates an almost instinctive proficiency in finding software bugs, which in the hands of cybercriminals could spell the end of security based on ‘legacy systems’. Many financial institutions still operate on IT architectures built decades ago that, while stable, were not designed to fend off attacks generated by a machine that thinks faster than any team of cyber security experts.

    Nagel argues that Anthropic’s current strategy of making Mythos available only to a narrow, select group of companies and organisations creates a dangerous asymmetry. Instead of protecting the market, limited access can exacerbate systemic risk. If only a few have the shield of Mythos’ effectiveness, the rest of the sector is left exposed to the shot, which from a banking supervisor’s perspective is an unacceptable distortion of competition. The demand is clear: all relevant institutions must have access to the same defensive tools to avoid technological stratification, which could lead to a domino effect in the event of a successful attack on the weaker link.

    However, the Bundesbank’s perspective goes beyond mere cyber-security, striking at the foundations of monetary policy. Nagel challenges the widespread optimism that artificial intelligence will be a cure for inflation through increased productivity. On the contrary, he warns of price pressures resulting from the huge demand for investment in AI infrastructure and the drastic increase in the cost of electricity required to power data centres.

    Most intriguing, however, is the warning against ‘tacit collusion by algorithms’. There is evidence to suggest that sophisticated models can autonomously learn to optimise profits by keeping prices above competitive levels, doing so without direct communication between firms.

    For central banks tasked with maintaining price stability, this new form of algorithmic rate setting presents a challenge that will require entirely new regulatory tools. In a world dominated by models such as Mythos, central bankers’ vigilance must now extend not just to spreadsheets but to lines of code themselves.

  • Algorithms instead of a glass ball. In 2026, is the purchasing manager’s intuition an anachronism?

    Algorithms instead of a glass ball. In 2026, is the purchasing manager’s intuition an anachronism?

    For years, the ‘nose’ ruled in purchasing departments. It was this famous merchant’s intuition, built up over decades of negotiation, that allowed opportunities to be sensed and reefs to be avoided. Today, however, relying solely on instinct is becoming akin to forecasting the weather from the flight of swallows in the middle of a cyclone.

    According to the latest WEF Risk Report, we have entered an ‘age of competition’ in which threats are colliding with each other at a speed that the human mind cannot process on its own. The statistics are merciless: as many as 99% of experts predict that the coming years will be “turbulent” or even “stormy”. The scenario of calm and stability has become an exoticism reserved for only 1% of the greatest optimists.

    Regulatory changes, cost spikes and staff shortages are hitting supply chains. At the same time, traditional methods are failing. Today, no one asks anymore if there will be disruption – the question is how quickly we will react to it. So clinging to the old school of ‘feeling the market’ is not bravery, but a risky mismatch with reality.

    In order to ride out this storm, we must acknowledge that intuition is not enough today. To navigate effectively, purchasing departments need to swap the glass ball for precision analytics.

    Procurement 4.0 – from Excel to the prediction engine

    Until recently, the purchasing department was seen as a corporate ‘back office’ – a place where the main task was to painstakingly cut costs and keep an eye on invoices. Today, this role is undergoing a major metamorphosis. Procurement is a strategic engine that generates real value for the entire organisation.

    This change did not come from a vacuum. The companies that have coped best with the crises of recent years had one thing in common: they were digitised. It was then that it was understood that supply chain resilience does not depend on luck, but on the quality of the information they have. However, simply collecting data is only half the battle. The real challenge of 2026 is not the lack of information, but its dispersion.

    Most companies have mountains of data, but they are trapped in so-called ‘silos’ – separate sheets and systems that do not talk to each other. Modern procurement acts as a bridge connecting these scattered points. It ensures that the manager is no longer just looking in the rear-view mirror, analysing historical spend in Excel. He or she begins to look through the windscreen, using technology to anticipate upcoming events.

    This is where a new competitive advantage is born. Turning scattered facts into a coherent strategy makes it possible not only to respond to crises, but to stay one step ahead of them. In essence, it is gratifying that technology has ceased to be a luxury – it has become a tool to turn the chaos of uncertainty into measurable risks that can be managed effectively.

    AI – new optics

    Implementing artificial intelligence in purchasing departments can be associated with a technological fad. Nothing could be further from the truth. In 2026, AI is a powerful analytical engine that sees what, to the human eye, remains hidden in a jumble of thousands of tables. It is a digital detective that can connect the dots between scattered data.

    What does this look like in practice? Three areas that redefine the daily work of purchasing departments are key:

    • Predicting demand: AI has stopped looking only in the rear-view mirror. Instead of only analysing historical spending, it models future scenarios. It takes into account market trends, social changes and even weather forecasts, providing precise answers before the question of stock is asked.
    • Supplier risk assessment: Instead of waiting to be informed of counterparty problems, algorithms monitor warning signals in real time. They catch financial fluctuations or geopolitical tensions, allowing you to change your strategy before the supply chain is disrupted.
    • Cycle optimisation: Thanks to the automation of tedious processes and intelligent recommendations, purchasing cycles are shortening dramatically. What used to require days of analysis and dozens of emails now happens almost seamlessly.

    Artificial intelligence integration is the process of turning the chaos of data into a strategic advantage. It allows procurement to stop guessing and start knowing. AI does not replace humans here – it gives them the best possible fuel to make accurate decisions.

    Business cost of delay

    Time in business moves much faster than the pages on the calendar suggest. While 2030 seems like a distant future, the fact is that we will welcome that year in just 14 quarters, and from a technology perspective, that time will pass faster than you might think. The data is inexorable: global investment in artificial intelligence is going into the trillions of dollars. This is not money being spent on futuristic experiments, but real capital being pumped into infrastructure to ensure companies survive in the ‘competitive era’.

    For purchasing managers, the warning signal is clear. Since as many as 80 per cent of leaders in this area consider digital transformation to be their absolute priority, the race for market dominance has long since started. The question is: what about the remaining 20 per cent? For them, the forecasts are harsh. Companies that do not integrate AI and advanced automation into their processes by the end of the decade could collide with an impregnable wall.

    The cost of delay is not just a slightly lower margin. It is the risk of falling out of the loop altogether. Without digital support, purchasing processes will become too slow, too error-prone and simply too expensive compared to competitors who ‘think’ in real time. In 2030, running a large purchasing department without AI support will be akin to trying to send an email using a typewriter. It can be done with sentiment, but the rest of the world will be ahead of us before we have time to put a piece of paper in the drum. Investing in technology today is nothing more than taking out a policy for the future.

    Man in the loop: AI with rules

    Introducing AI into purchasing processes is not a ‘set it and forget it’ project. While algorithms can recalculate millions of scenarios in seconds, humans still need to keep their hand in. Technology devoid of ethics and oversight can become a source of new and unforeseen risks – from misinterpretation of data to lack of transparency with contractors.

    The key to success is to avoid the ‘black box’ syndrome. If the system recommends a sudden change of a key supplier, the manager must understand exactly why. AI in purchasing must be based on trust and accountability. Only then does it become a real support and not a risky dictate of code over common sense.

    What does this mean for the merchant himself? His role is not disappearing, but undergoing a fascinating evolution. He is changing from a person performing repetitive, tedious operations to a strategist and relationship architect. AI is taking over the ‘dirty work’ of analytics, freeing up time to do what a machine (for the time being) can’t: build long-term trust, negotiate creatively and react intuitively in tricky situations.

    At the end of the day, AI won’t go to coffee with a supplier to discuss joint growth plans in uncertain times. The best performers in 2026 are those companies that rely on hybrid intelligence. This is a model where the cool logic of an algorithm provides hard evidence, but it is the human who makes the final decision, taking responsibility for it. In this duo, it is still us holding the baton.

  • The AI 2030 paradox: Why does data investment still not guarantee returns?

    The AI 2030 paradox: Why does data investment still not guarantee returns?

    There is a specific kind of gold rush today. The companies that are winning the race for successful AI implementations are investing up to four times more in the foundations – data quality, management and staff readiness – than the market’s prodigies. These are gigantic outlays that are akin to building an ultra-modern skyscraper. The problem is that despite the luxurious façade, you can still hear the structure creaking in the boardrooms.

    This is where the title paradox manifests itself. Although the money stream flowing towards data ‘hygiene’ is unprecedented, according to Gartner data, only one in three technology leaders are looking to the future with genuine optimism. Only 39% believe that current investments in artificial intelligence will realistically improve the company’s bottom line. What we have, then, is a situation where the biggest players are buying the most expensive insurance policies while still being unsure whether their ship will even make it to port.

    Why is this happening? Because the mandate of data and analytics leadership by 2030 is evolving dramatically. It is no longer about simply ‘owning’ the technology, but about providing the perceptual intelligence and contextual foundations that allow machines to realistically understand the business world. The success of AI has become a challenge of trust and a complete overhaul of the value architecture. Building an AI-first strategy is a pioneering leadership that must face the fact that the old ways of counting profits are no longer compatible with the new algorithmic reality.

    The trap of traditional ROI, or measuring the future with an old ruler

    Trying to measure the potential of AI with a classic ROI is akin to assessing the usefulness of electricity solely through the lens of candlelight savings. In corporate excel sheets, where every investment has to “bounce back” in a few quarters, building deep contextual foundations often looks like an expensive whim. It is this accounting corset – trying to measure the future with an old ruler – that causes anxiety for nearly two-thirds of technology leaders.

    Meanwhile, the modern approach to D&A requires a shift from static ROI to value composition. Leaders who actually set the pace are no longer treating AI as just another ERP module to be ‘fobbed off’. Instead, they are building a value flywheel: a model in which the efficiency gains gained from AI are deliberately and systemically reinvested in the further development of perceptual intelligence and innovation.

    In this view, AI becomes the company’s new operating system, not just a tool for cost optimisation. If an organisation gets stuck in an endless loop of Proof of Concept cycles, looking for ad hoc savings, it will probably never achieve the scale necessary to survive the 2030 transformation. This is because the real value comes not when an algorithm is implemented, but when integrated engineering practices allow trust and context to scale across the enterprise.

    dane

    Foundations are not just about technology

    In 2030, competitive advantage will not be measured by terabytes of data, but by the precision with which machines can interpret it. This is where the new mandate of the D&A leader comes in: to deliver *perceptual intelligence. Until now, the role of the data director has often been reduced to being the custodian of a digital archive; today, he or she must become the architect of the organisation’s ‘collective brain’.

    The technology itself is merely the engine. The real fuel is context, treated as critical infrastructure. AI agents, lacking a deep semantic layer, resemble brilliant chess players playing in total darkness – they have immense computing power, but cannot see the board. Without a trusted contextual foundation, autonomous systems become mere expensive confabulation factories. This is why shifting the centre of gravity from ‘having models’ to ‘designing meaning’ is so crucial.

    Data management is now a steering wheel support system. Pace-setting companies are able to embed privacy and ethics issues directly into the workflows of AI agents. For trust in the world of algorithms is not a sentiment – it is a technical necessity. Without it, every decision made by AI will be fraught with risks that no rational board would accept. A true D&A leader understands that his or her job is no longer to provide dry reports, but to build a foundation on which AI can finally stop guessing and start realistically understanding the business.

    Strategy 2030: AI-first as a state of mind, not a shopping list

    Ultimately, AI-first transformation is not an IT project, but a test of leadership maturity. By 2030, D&A leaders must abandon the role of technology providers in favour of architects of new operating models. True scaling requires the courage to break out of the ‘endless loop of Proof of Concept cycles’ and move to deeply integrated engineering practices. Data, software and context must stop operating in silos – in the new reality, they are one inseparable organism.

    Let us return to the initial paradox: why do only 39% of leaders believe in the financial success of their investments? This scepticism is paradoxically a good sign. It shows that the market is moving out of its phase of childlike admiration for ‘magical’ algorithms and is beginning to understand the scale of the challenge. True return on investment in AI is not a matter of luck, but of consistently building trust and perceptual intelligence.

     

  • Marriage of convenience – How is IT infrastructure forcing a new dialogue between CIO and CFO?

    Marriage of convenience – How is IT infrastructure forcing a new dialogue between CIO and CFO?

    For years, the relationship between the CIO and CFO resembled a long-established marriage, communicating mainly through laconic notes left on the fridge. The CIO would ask for budgets for ‘solutions that no one but him understands’, and the CFO would respond with a question about cost optimisation, treating the server room as a necessary evil – an expensive black box that would be best moved entirely to the cloud and forgotten about.

    This model is about to become history. The latest Deloitte report, based on a survey of leaders from more than 500 US corporations, leaves no illusions: a financial tsunami is coming that cannot be waited out in a silo.

    The projected tripling of AI infrastructure budgets by 2028 is the critical moment when the technology becomes too expensive, too energy-intensive and, most importantly, too strategic to leave its oversight solely in the hands of engineers. When spending on computing power quadruples in a few years, it ceases to be an issue for the IT department and becomes a matter of sovereignty and survival for the entire organisation.

    Blurring boundaries is a painful but fascinating process. The CFO spreadsheet and the CIO hybrid architecture diagram are no longer two different documents. It’s time to abandon translators and diplomatic protocols – the leaders of tomorrow must become bilingual, because a communication error between the ‘boardroom floor’ and the ‘server room’ could cost a fortune.

    Financial culture shock

    For the past decade, the mantra of CFOs has been ‘OpEx above all else’. The public cloud was supposed to be the cure-all for all evil – a flexible cost that could be scaled up or down, avoiding the costly maintenance of in-house ‘server housing’. However, artificial intelligence, with its insatiable appetite for computing power, is brutally verifying this optimism.

    There is a clear conclusion from the Deloitte report: the traditional IT spending model, based on one-off upgrade spurts, is becoming a thing of the past. Instead of cyclical ‘fleet replacement’ projects, IT departments are moving to a model of constant, high and growing annual spending. After all, AI is not a sprint after which you can rest; it is an arms race in which the fuel – i.e. computing power – gets more expensive with each new deployment.

    Interestingly, we are seeing a fascinating twist: the return to favour of the CapEx model. Companies that not long ago were aiming for total ‘hardwarelessness’ are now queuing up for their own GPUs and TPUs. Why? Because at the scale Deloitte is talking about – where the amount of tokens being processed is doubling every year – renting ‘power’ in the cloud is simply becoming economically inefficient.

    For CFOs, this is a real culture shock. They have to accept that having their own physical AI infrastructure becomes a strategic asset, not just an operational ballast. An in-house hybrid server room becomes an insurance policy for the future. Companies stop asking ‘how much is it going to cost us this month’ and start calculating how much computing power they need to own so that their models don’t get stuck in a queue at hyperscalers.

    The ’30 pilots’ trap, or where the money is running away

    The figure of ’30 pilot projects’ sounds impressive in an annual report and looks great on shareholder slides. However, for the CIO-CFO duo, this statistic is first and foremost a wake-up call. Deloitte indicates that by 2028, almost 70% of companies will be conducting such extensive AI trials. The problem is that, with soaring infrastructure costs, spreading resources across thirty different fronts is a straightforward way to cultivate so-called ‘innovation theatre’.

    There is a lot going on in this model, dozens of prototypes are being developed, but none of them get beyond the experimental phase to realistically feed into the profit and loss account. With giants such as Anthropic reserving gigawatts of power for years ahead, smaller players have to demonstrate downright surgical precision in resource allocation.

    This is where the new role of management manifests itself: The CIO and the CFO must jointly act as ‘silicon guardians’. Their job is no longer just to check that the budget is closing, but to build an absolute hierarchy of importance. Each of the 30 pilots should pass the sieve of a hard ROI analysis: does this model really optimise the process or is it just a technological curiosity?

    Any decision to allocate resources to a particular project is a de facto decision about which area the company wants to gain a competitive advantage in and which it is letting go. The real art of management in 2028 will not be how many AI projects can get off the ground, but how many of them can be killed off early enough for the most promising ones to have something to work on.

    New business grammar: Tokens instead of man-hours

    “The boundary between business and technology isn’t just blurring – it’s ceasing to exist” – these words from Chris Thomas of Deloitte should be engraved above the entrance to every modern conference room. The traditional grammar of business, based on man-hours, licences per user or the number of ‘seats’ in a CRM system, is giving way to a new currency: tokens.

    For CFOs, understanding what a token is and how it affects the balance sheet becomes as critical as analysing operating margins. Tokens are the blood in the veins of AI models, and their volume directly translates into computing power requirements. If, as the report predicts, their volume in corporate processes is set to double or triple in the next three years, then the infrastructure discussion is no longer a debate about ‘buying hardware’. It is a debate about the capacity of the entire enterprise and its ability to generate value.

    In this new hand, AI infrastructure is being promoted from the role of a quiet back office to that of a major actor on the frontline of the battle for customers. Companies that are able to effectively manage their own ‘computing portfolio’ – skilfully combining closed, open and proprietary on-premise models – are gaining a flexibility that competitors relying solely on off-the-shelf SaaS services can only dream of.

    Strategic advantage in 2028 will not come from having the best marketing slogans, but from optimising the cost of generating a single intelligent operation. Infrastructure becomes the foundation of innovation: it determines how quickly a company can implement new functions and how deeply it can automate its structures. He who controls access to processors and optimises their use de facto controls the rate at which his business can grow. This is the new economy of scale, in which hardware becomes the hardest of the hard currencies of business.

  • Why does investing in leaders pay off more than AI alone?

    Why does investing in leaders pay off more than AI alone?

    Traditional leadership, based on optimising ‘output’ and overseeing workflow, is becoming an anachronism. Why? Because in these disciplines, algorithms are already unrivalled.

    Leaders face the greatest paradox of digital transformation: the more processes artificial intelligence takes over, the more the human capacity to build trust and make work meaningful becomes a critical bottleneck for organisations.

    Data from the latest McKinsey report (January 2026) exposes the scale of this challenge. While as many as 84% of leaders plan to dramatically expand the role of AI agents in key business verticals this year, at the same time 86% admit that their organisations are not culturally and structurally ready for this change. This gap is not due to a lack of technology, but to ‘leadership debt’ – the lack of a new management framework for teams whose daily routines have been automated.

    Leadership in 2026 is not about managing the delivery of results, but managing the energy, anxiety and creativity of people who have been freed from repetitive tasks. As the machine takes over the ‘what’ and the ‘how’, the role of the leader becomes the categorical and inspiring ‘why’.

    This is where the Empathy Algorithm – the new currency in the world of AI – is born. Leaders who can turn the time savings generated by AI into a space for innovation and deepening relationships will gain an advantage that cannot be copied by any LLM model. The question for boards of directors is no longer: “How to implement AI?”, but: “How to lead people in a world where AI is already everywhere?”.

    From manager to systems architect

    In 2026, the role of the IT leader evolves from that of a ‘resource manager’ to that of a socio-technological systems architect. The traditional division between ‘business’ and ‘IT’ is finally collapsing, to be replaced by an orchestration of hybrid teams in which AI agents and humans share backlogs.

    Gartner’s data (2026) leaves no illusions: by the end of this year, up to 80% of enterprises will have fully operationalised AI in their core business processes. This means that a leader can no longer measure success by the speed of code delivery or the number of closed tickets – these metrics have been ‘hacked’ by algorithmic performance.

    The leader-architect today must answer the question, “Where is the place for unique human judgement in this process?”. According to Forrester’s analysis, in organisations with the highest degree of digital maturity, executives now spend 40% more time on human-machine interaction design than on classic progress monitoring.

    In this leadership model, the biggest challenge is redefining productivity. If AI performs a task in 3 seconds and a human spends 3 hours critically reviewing and ethically monitoring it, those 3 hours are now the company’s most valuable investment. C-level leaders need to learn to defend this ‘slowness’ against boards accustomed to old metrics. The real value no longer lies in the generation of content, but in their curation and accountability for the final decision.

    Why is empathy the new ROI?

    By reducing computational errors on a macro scale, we are shifting the burden of competitiveness and verification of effectiveness to entirely different areas. Today, it is human emotion that is becoming the most unpredictable – and costly – variable in the spreadsheet. In the third act of AI transformation, C-level leaders need to understand that empathy has ceased to be a ‘soft add-on’ and has become a hard mechanism for securing profitability.

    As AI takes over the executive layer, employees are facing a professional identity crisis. The Gallup report ‘State of the Global Workplace 2026‘ points to an alarming trend: global employee engagement, despite technological facilitation, is hovering around just 20%. This ‘meaning deficit’ and sense of being replaceable are costing the global economy nearly $10 trillion a year in lost productivity.

    The conclusion is pragmatic: in an automated environment, leader empathy is the main mechanism for retaining the rarest talent. When operations becomes a commodity, the only barrier against the exodus of experts to competitors is culture and relationships. According to Deloitte (2025/2026), organisations that rely on ‘High-Trust Leadership’ have a 35% lower turnover rate in key R&D and engineering teams.

    Empathy in 2026 is also a catalyst for innovation. An employee who feels safe and understood by a supervisor is more willing to take risks beyond the algorithm’s suggestions. This ‘creative risk’ is the one thing that AI – oriented towards statistical optimisation – cannot fully simulate. Investing in the emotional intelligence of executives is the most effective way today to repay the ‘cultural debt’ and ensure that the organisation remains innovative, not just efficient. In 2026, empathy is the hardest of the soft competencies – it is the fuse that protects the company from dehumanisation and strategic stagnation.

    People as the ultimate differentiator

    As I have already mentioned, the proliferation of AI use on a macro scale means that the technology itself is no longer a source of sustainable competitive advantage. It becomes an ‘entry ticket’ rather than a differentiator. The real difference between market leaders and marauders in 2026 lies in the way organisations integrate the potential of machines with the unique capabilities of humans.

    Deloitte’s ‘Human Capital Trends‘ report makes it clear: organisations that invest in soft skills development and work culture transformation alongside technology implementation achieve 1.8 times better financial results* than companies focused solely on technical optimisation. This proves that technology without the right ‘operating system’ in the form of trained and motivated people is a low-return investment.

    The contemporary framework for C-level is based on the pragmatic principle of 1:5. According to market best practice, for every dollar spent on AI licences and infrastructure, leaders should spend $5 on human transformation: reskilling, upskilling and changing decision-making processes. Overlooking this proportionate outlay leads to the phenomenon of ‘cultural debt’, which cripples innovation faster than any technical debt.

    Investing in Human-Centric AI is a strategic shift in focus from the question “what can the algorithm do?” to “what can our humans do through the algorithm?”. It is this synergy that creates a barrier to entry for competitors that cannot be jumped over by simply buying a new version of an API. In 2026, humans are no longer just machine operators; they are their most important instructors and guardians of the values that build brand uniqueness in the digital noise.

    3 steps to implement an “empathy algorithm”

    Theory must give way to execution. To ensure that the ’empathy algorithm’ does not remain just an attractive buzzword in the annual report, experts believe that C-level leaders in 2026 must implement a concrete operational framework that safeguards human capital in the age of total automation.

    1. Autonomy audit and relocation of talent

    The first step is to identify precisely the processes that AI agents take over 100%. The key, however, is not to reduce FTEs, but to immediately redeploy freed human capital to high-margin tasks. If AI is managing logistics or code testing, your best people need to be redirected to building deep relationships with key partners or designing innovations that the algorithm won’t come up with.

    2. from literacy to proficiency

    In 2026, ‘understanding’ AI is not enough. Leadership requires promoting AI Fluency – a culture of safe experimentation. A leader must create a space where a mistake made while working with technology is not a reason for sanctions, but a valuable data point for optimising the system. This builds psychological safety, without which innovation dies.

    3. radical transparency and an ethical watchdog

    Trust in the age of AI is fragile. Leaders need to put clear rules in place about how algorithms affect job evaluation and career paths. Lack of transparency breeds fear, and fear paralyses effectiveness. The role of the leader is evolving into that of an ethical arbiter, ensuring that technology supports rather than dehumanises the employee.

    The winners of 2026 will not be the organisations with the fastest processors or the largest language models. The winners will be those who understand that technology is merely an amplifier of human intent. True competitive advantage is born where code ends and trust, vision and empathy begin.

  • CFO: 30% of cloud spend is wasteful. How do you get your AI budget back?

    CFO: 30% of cloud spend is wasteful. How do you get your AI budget back?

    For the past decade, migration to the cloud has been synonymous with modernity and inevitability for managements. The promise was simple: flexibility, scalability and – ultimately – cost savings. Today, however, as the enthusiasm for digital transformation clashes with the hard reality of bills from providers such as AWS and Azure, the tone of conversation in finance cabinets is changing radically.

    A picture of growing frustration is emerging from Azul ‘s latest report, with chief financial officers (CFOs) beginning to see the cloud not as an unlimited resource, but as a strategic financial risk that requires top-level intervention.

    The scale of the problem is difficult to ignore. As many as 69% of CFOs admit that between 10% and up to 30% of their spending on cloud infrastructure is pure waste. This means billions leaking through their fingers due to inefficient architecture, unused instances or errors in demand forecasting.

    This is no longer an operational issue that can be delegated to the DevOps department. It’s a structural problem that directly hits the margins and profitability of businesses.

    The timing of this sobering development is no coincidence. The surge in interest in artificial intelligence has dramatically increased demand for computing power, which in turn has pushed up cloud invoices to levels that were not anticipated by last year’s forecasts.

    Nearly 90 per cent of the finance leaders surveyed indicate that infrastructure costs in their organisations are steadily increasing, and for two-thirds of them, oversight of these expenses has become a standing item on the board’s agenda.

    In this new landscape, cloud cost optimisation is no longer seen as ‘belt-tightening’. Instead, it is becoming a strategic lever. CFOs such as Azul’s Scott Sellers note that recouping wasted resources is the fastest way to fund AI innovation.

    In a period of high market volatility, where capital is more expensive than it was a few years ago, companies cannot count on unlimited increases in budgets. They have to look for money within their own structures. For 45% of finance managers, the overriding goal of optimisation is precisely to increase budget flexibility to allow digital projects to be implemented without jeopardising the financial stability of the company.

    The main obstacle, however, remains a lack of transparency. Modern cloud environments are so complex that pinpointing who is spending money in real time, and on what, borders on the miraculous. This ‘technological fog’ makes demand forecasting a guessing game.

    But for finance leaders, whose performance is increasingly linked to operational efficiency, the status quo is unacceptable. 42% of respondents explicitly indicate that margin improvement today depends directly on how efficiently an organisation manages its resources in the cloud.

    The message coming from the market is clear: the period of carefree scaling at any cost is over. We are entering an era of cloud maturity in which those companies that can combine technological ambition with ruthless financial discipline will win.

    The cloud, once seen as an escape from fixed costs, has itself become a burden that, if not properly managed, could slow down the next wave of innovation.

  • A secure environment for AI: Cloudflare introduces Dynamic Workers and Think

    A secure environment for AI: Cloudflare introduces Dynamic Workers and Think

    In Silicon Valley, the artificial intelligence narrative is shifting from simple chatbots to autonomous agents – systems that not only answer questions but perform complex tasks themselves. Cloudflare, traditionally associated with protecting sites from DDoS attacks and content delivery networks, has just made a move that puts it at the centre of this transformation. The expansion of the Agent Cloud platform is a signal that the company wants to become the ‘operating system’ for artificial intelligence.

    A key challenge for businesses deploying AI agents is the security and performance of the code they execute. The Dynamic Workers solution addresses this through isolated environments that run in milliseconds. Unlike heavy containers, Cloudflare’s new architecture allows agents to call APIs or transform data instantly, minimising operational costs and latency, which is critical in scalable enterprise applications.

    However, the real innovation lies in the durability of AI activities. Previous language models have often suffered from a lack of ‘long-term memory’ in the context of complex software projects. Cloudflare introduces Artifacts, a Git-compatible data store that allows agents to manage millions of repositories. This provides artificial intelligence with a permanent workspace, able to clone code, install packages in isolated Linux environments and iterate over projects in a manner similar to a human developer.

    Complementing this vision is the Think framework, integrated into the new SDK. It resolves the fundamental disconnect between the short session time of the AI model and the long-term nature of business tasks. It allows agents to be built capable of running multi-step operations lasting days or weeks, not just seconds.

    Cloudflare ‘s strategy is becoming clear especially with the recent acquisition of Replicate. By integrating a wide catalogue of models – from the latest GPT to open-source solutions – Matthew Prince’s company is no longer just a conduit for data. It is becoming an indispensable building site for a new generation of software, where it is not humans but machine-written code that generates network traffic. For technology leaders, this sends a clear message: the era of static applications is coming to an end, and the race for an infrastructure capable of supporting autonomous AI systems has just entered a decisive phase.

  • NVIDIA introduces Ising – AI as an operating system for quantum processors

    NVIDIA introduces Ising – AI as an operating system for quantum processors

    In the race for quantum supremacy, NVIDIA is making a move that could change the balance of power not only in the labs, but also in the data centres. The NVIDIA Ising family of models just unveiled is the world’s first open attempt to harness artificial intelligence to solve the ‘Achilles’ heel’ of quantum computers: their extreme instability.

    Today’s quantum processors (QPUs) are technologically impressive but business-wise unusable. They generate an error on average once per thousand operations. For the technology to realistically compete with traditional silicon in pharma or logistics, this rate needs to drop to one error per billion. Jensen Huang, Nvidia’s chief executive, makes it clear: AI is not just an add-on here, but an essential ‘operating system’ to manage this fragile architecture.

    Architecture instead of promises

    Instead of building its own quantum computer, NVIDIA is positioning itself as a critical layer provider. The Ising family consists of two specialised tools that hit the industry’s narrowest bottlenecks. The Ising Calibration model uses computer vision technology to automate processor settings. What previously took physicists days of painstaking work, AI can cut down to a few hours.

    Ising Decoding, on the other hand, is a 3D neural network designed for real-time error correction. The results are promising. Compared to the current market standard, pyMatching, Nvidia’s solution shows three times the accuracy and 2.5 times the speed. In a world where milliseconds of delay determine the decay of a quantum state, such an advantage is fundamental.

    Open door strategy

    The decision to make models available in an open source format is a smart business move. By integrating Ising with the existing CUDA-Q platform and NVQLink hardware link, the green giant is creating an ecosystem that will be difficult to disconnect from. Companies and universities can train these models on their own data while retaining full control of the infrastructure, which is crucial for sectors such as cyber security or finance.

  • The architecture of distrust. The only way to safely believe AI

    The architecture of distrust. The only way to safely believe AI

    As estimates of spending on GenAI-type systems soar by nearly 40% a year, the time for joyful partisanship in innovation departments is coming to an end. We are entering an era where the CIO must stop seeing AI as a flashy curiosity and start treating it as a raw, unpredictable and deeply structured operational resource. The problem is that the traditional governance framework, based on static audits and periodic compliance reviews, is crashing against the wall of modern, non-deterministic architectures.

    Beyond the horizon of static control

    Implementing advanced systems, such as search-enhanced generation (RAG) or autonomous agents, is akin to trying to manage a living organism with a washing machine manual. The classical approach to IT security assumed predictability: a specific input generates a specific output. Language models invalidate this principle. This is why the discussion about surveillance needs to be moved from conference rooms straight into code repositories.

    Instead of treating governance as a cumbersome post-factum add-on, technology leaders are being forced to implement governance by design strategies. This is a fundamental change: ethics and security cease to be a wish list written in a PDF document and become a hard technical requirement, as important as bandwidth or server performance. In this new hierarchy of values, it is the system architecture that defines the limits of algorithmic freedom, not the other way around.

    Construction of a stable ecosystem

    The foundation upon which the secure integration of AI into the fabric of the enterprise rests are six technical pillars. Each represents a critical interface between raw computing power and business accountability.

    The first of these is technical guardrails, which act as a proactive fuse. They operate in real-time, filtering requests and responses even before they reach the end user. This is not just content censorship, but an advanced layer of validation that protects against the leakage of sensitive data or unknowing infringement of intellectual property. The level of stringency of these barriers needs to be dynamically scaled against the risk – different rigour applies to the internal bot supporting coding, and others to the system analysing patients’ medical data.

    Equally important is observability, which in the world of AI is evolving far beyond simple server time monitoring. The CIO needs tools to pinpoint the point at which a model starts to ‘drift’ – losing precision or changing inference under the influence of new data. Observability provides fuel for management processes, triggering automatic re-training loops at moments when the algorithm no longer aligns with business reality.

    The third pillar is traceability, a remedy for the ‘black box’ problem. In systems that use data from multiple sources, precise logging of the inference path allows for backward auditing. This makes it possible to determine from which specific document the model formed an erroneous conclusion. This is key to building trust not only among regulators, but especially among business users, who need to know what the suggested strategy is based on.

    The fourth element, centralised AI gateways, brings order to the chaos of access and cost. Acting as the sole point of entry for intelligent services, these gateways allow for precise management of token limits and protection of API keys. Without this level of control, dispersed subscriptions across different departments of a company become a financial and security black hole.

    AI catalogues and technology packaging complement this structure. Catalogues provide a single source of truth for all models and agents running in an organisation, preventing duplication of work and ambiguity of responsibility. Wrappers, on the other hand, allow business logic to be isolated from the underlying model itself. This enables rapid replacement of the technology provider without having to rebuild the entire application ecosystem, which, given the dynamic changes in the language model market, is an insurance policy for the future.

    Integration into the global order

    Building such an advanced architecture does not happen in a vacuum. It must resonate with emerging regulatory frameworks such as the EU AI Act or NIST standards. Aligning technical controls with these regulations allows abstract ethical principles to be transformed into measurable system parameters. This is where responsible AI ceases to be a marketing buzzword and becomes a rigorous code of conduct enshrined in the infrastructure.

    However, it is worth noting that even the most sophisticated automation does not eliminate the need for human supervision. On the contrary, in highly critical scenarios, the architecture should be designed to force human intervention. Defining clear ownership structures for any AI system is the final, critical link in the chain of responsibility.