Tag: AI agents

  • Why are AI agents becoming the target of cyber attacks? Trend overview 2026

    Why are AI agents becoming the target of cyber attacks? Trend overview 2026

    Over the past eighteen months, the enterprise sector has moved from a fascination with generative artificial intelligence to a phase of actively implementing it into operational processes. A key trend in this evolution is the shift from passive language models (LLMs) to AI agents – autonomous systems capable not only of generating text but also of performing tasks: writing code, managing email communications, calling APIs or authorising financial transactions. With this agility, however, comes a critical new category of threats: Indirect Prompt Injection (IPI). Recent data from reports by Google and Forcepoint shed new light on the scale and sophistication of these attacks, suggesting that agent systems security will become one of the biggest challenges for chief information security officers (CISOs) in the coming years.

    IPI mechanism: Data as instructions

    Traditional prompt injection attacks relied on direct manipulation of the model by the user (e.g. attempting to ‘jailbreak’ a bot by giving it the command to ignore security). Indirect Prompt Injection is a much more insidious phenomenon. It involves inserting malicious instructions into content that the AI agent processes as input – this could be web pages, PDF documents, emails or code repositories.

    The problem lies in the very architecture of current LLM models, which cannot absolutely separate system instructions (issued by the tool developer) from external data. When an AI agent analyses a web page in search of information, it may come across hidden text, which the model will interpret as a new overarching command. As a result, the attacker takes control of the agent’s logic, instructing it to, for example, send sensitive data to an external server or perform a destructive operation on the user’s file system.

    Analysis of market trends

    Google Security Research researchers, analysing CommonCrawl resources, point to an alarming trend. Between November 2025 and February 2026, there was a 32 per cent increase in the number of detected malicious injection attempts in publicly accessible web resources. This relatively short time frame demonstrates the dynamism with which the criminal community is adapting to new technologies.

    From a market perspective, Google’s observation on cost-benefit calculus is key. Until recently, IPI attacks were considered the realm of academic research – they were difficult to implement and often failed due to the instability of the results generated by AI. Now, with the increased reliability and agility of agents, these attacks are becoming ‘viable’. AI’s ability to autonomously call external tools (tool calling) means that a successful injection of instructions has an immediate and measurable financial or operational impact.

    The Google study allowed the current IPI trials to be categorised into five groups:

    1. Harmless jokes: Attempts to change the tone of an agent’s response.
    2. Helpful tips: Suggesting preferential answers to the model (often on the edge of ethics).
    3. Optimisation for AI (AI-SEO):Hidden phrases to position products in assistants’ responses.
    4. Deterring agents: Instructions prohibiting AI from indexing or summarising a particular page.
    5. Malicious attacks: Data exfiltration and sabotage (deletion of files, destruction of backups).

    Although the latter are often at an experimental stage at present, their increasing complexity suggests that it is only a matter of time before they enter the phase of mass attacks.

    From coding assistants to financial transactions

    The Forcepoint report provides concrete evidence of how IPI manifests itself in professional software and financial tools. Experts have identified ten verified indicators of attacks targeting popular tools such as GitHub Copilot, Cursor and Claude Code.

    The attack scenario is mundane: a programmer uses an AI agent to analyse a library or documentation on an external site. This site contains a hidden AI instruction. When the agent ‘reads’ the site, it is instructed to execute a command in the terminal that destroys local backups. Since the agent has permission to operate on the file system (which is essential in a programmer’s job), the command can be executed without additional verification.

    Even more dangerous are attempts at financial fraud. Forcepoint points to cases where complete transaction instructions are sewn into web content, e.g. PayPal.me links with a predefined amount along with step-by-step instructions on how the agent is to finalise the payment. In systems where AI has access to digital wallets or corporate payment systems, the risk of capital loss becomes immediate.

    The paradox of detection and the challenges for business

    One of the most worrying findings from the Forcepoint report is the so-called detection paradox. The phrases and keywords used by attackers to inject hints are identical to the terminology the cyber security community uses to describe and analyse these threats. This renders simple filters based on word blacklists ineffective – either blocking legitimate expert communications or letting intelligently worded attacks through.

  • The hangover from euphoria, or how AI agents can blow through a year’s budget in a few hours

    The hangover from euphoria, or how AI agents can blow through a year’s budget in a few hours

    Not so long ago, artificial intelligence was supposed to be the ‘ultimate solution’ to productivity problems – a digital alchemist turning empty process flows into pure efficiency gold. The ball was in full swing and the champagne was pouring from the presentations of the models promised by suppliers.

    Today, however, instead of more breakthroughs in machine reasoning, something far less spectacular is whispered about in the corridors of business conferences: the happiness bill. For it turns out that the ticket of admission to the world of AI was not a one-off fee, but a dynamic, hard-to-tame subscription for the future, the cost of which can rise exponentially overnight.

    What we are witnessing is the birth of ‘token fever’. It’s a state where the enthusiasm of engineers collides with the dismay of CFOs. For decades, we have been accustomed to the SaaS model – predictable, fixed licence fees that were easy to budget for. Generative AI has shattered this order, introducing a ‘probabilistic’ model. Here, a mistake in one agent’s logic or an overly effusive prompt can burn up financial resources faster than traditional cloud infrastructure consumes electricity.

    Uber and a mistake worth billions

    If the tech industry was looking for the ‘canary in the coal mine’, it found it in San Francisco in April 2026. At the IA HumanX conference, Praveen Neppalli Naga, Uber’s CTO, gave a speech that sobered even the biggest optimists. The giant, which had invested an astronomical $3.4 billion in research and development in 2025, faced a wall: its annual budget for artificial intelligence had evaporated in just four months.

    It wasn’t a matter of one misguided investment decision, but a side effect of an engineering fantasy with no brakes. Uber, aiming for aggressive technology adoption, encouraged its developers to use agents like Claude Code en masse. The result? 11% of back-end code was already being generated by artificial intelligence, but the price for this ‘efficiency’ proved deadly. Without proper performance filters and oversight of token consumption, AI ceased to be a lever for savings and became an out-of-control spending engine.

    The case of Uber is a classic example of a ‘tsunami of tokens’. Autonomous agents, entering infinite iteration loops with no clear limits, can burn a fortune in the time it takes to drink an espresso. It’s a painful lesson for any CIO: innovation without financial architecture is just a very expensive hobby. Naga admitted that the company had to go back to the design table to completely redefine its strategy. Any company that deploys AI today without a rigorous profitability analysis risks having its success measured not by margin growth, but by the speed with which it exhausts its own resources.

    Goodbye SaaS, hello volatility

    We are bidding farewell to an era where the IT budget was like a fixed Netflix subscription – predictable, secure and giving a false sense of control. For years, the SaaS model accustomed us to per-user licensing, where the only risk was a surplus of accounts that no one used. Generative AI brutally ends this period of ‘licensing peace of mind’ by introducing a billing model that is more akin to electricity bills during an energy crisis than traditional software.

    The shift from fixed costs to variable costs is a fundamental paradigm shift. In 2024, IT departments were buying AI access in a lump sum. Today, in 2026, vendors such as OpenAI and Anthropic have eliminated unlimited Enterprise plans, introducing dynamic billing for token consumption. The reason is mundane: AI agents have destroyed the distribution curve on which the old business was based. The subscription model only worked when the ‘lec’ users subsidised the ‘intensive’ ones. One, when we started employing autonomous agents, the differences became absurd. Analyses show cases where a user paying $100 a month generated costs of $5,600 in a single billing cycle. A subsidy ratio of 25 to 1 is a straightforward path to supplier bankruptcy, hence the sharp turn towards ‘use-pay’ billing.

    This makes IT spending probabilistic. This radically differentiates AI from the traditional cloud. A forgotten server in AWS generates a fixed, linear cost. A poorly designed prompt or agent without iteration limits, on the other hand, can go into a loop and generate millions of useless tokens in seconds. In this new world, a programmer’s logical error doesn’t end up ‘crashing’ the application – it ends up draining the company account at the speed of light. This means an immediate redesign of IT finance and the abandonment of rigid budget frameworks in favour of flexible management of the ‘economics of inference’.

    Tsunami of tokens – a new unit of risk

    In the modern CIO’s dictionary, a new, much more predatory term has emerged alongside ‘technical debt’: the ‘token tsunami’. This is a phenomenon in which autonomous agents, rather than freeing up staff time, fall into loops of endless iterations, burning up budgets with the intensity of a steel mill. The problem is that a bot, unlike a human, never feels fatigue or shame for duplicating mistakes – it simply consumes resources until it encounters a hard limit or empties its account.

    The scale of the problem is such that even the biggest players have had to revise their dogmas. Gartner is sounding the alarm: by the end of 2027, up to 40% of agent-based AI projects will be cancelled. The reason? Not a lack of vision, but brutal mathematics – rising costs while lacking precise tools to measure real business value.

    Here is where the biggest paradox of 2026 manifests itself: the unit price per token is steadily falling, but the total bill is rising. Indeed, AI agents consume between 5 and even 30 times more units per task than a standard chatbot. This is a classic trap of scale – an efficiency that becomes economically inefficient by its sheer volume. If your AI strategy is based solely on the hope that ‘models will be cheaper’, you’re just building a castle in the sand that the coming tsunami will wash away in one billing cycle. Without rigorous control over what machines process and why, modern IT becomes hostage to its own unbridled computing power.

    AI FinOps – the new alchemy of IT finance

    If you thought Cloud FinOps was challenging, get ready for a no-holds-barred ride. Traditional cloud optimisation was about simple craftsmanship: shutting down unused servers and keeping an eye on instance reservations. AI FinOps is a completely different discipline – it’s probabilistic rather than deterministic resource management. Here, the unit of expenditure is no longer processor man-hours, but the cost of a useful response relative to the cost of an erroneous or ‘hallucinated’ response.

    In 2026, as many as 98% of FinOps teams consider spending on AI as their number one priority. The reason is simple: in the traditional cloud, a technical error rarely leads to an exponential increase in cost. In the world of AI agents, misconfigured prompt logic can burn through budgets faster than you can refresh your dashboard. This is forcing IT leaders to define a new metric – the economics of inference. We no longer count how much a model costs us, but how much the operational success gained from its work costs us.

    And that means rewriting dashboards from scratch. Classic management frameworks such as ITIL 4 or COBIT, while providing a solid base, today require immediate extensions to include prompt lifecycle management or agent iteration limits. AI FinOps is not just about Excel tables; it is a new management philosophy where an engineer must think like an economist and a financier must understand LLM architecture. Without this synergy, buying tokens is akin to pouring rocket fuel into a hole in the tank – the effect is spectacular, but extremely short-lived and frighteningly expensive.

    How not to burn through a decade of innovation

    The time window for non-punitive errors has just slammed shut. To avoid a ‘token tsunami’, organisations need to move from a phase of joyful adaptation to a phase of rigorous architecture. The first and most pressing step is to conduct a token consumption audit – not a general one, but a precise one, broken down by specific teams and use cases. When a query to a model can cost as much as a good cup of coffee, we need to know who is ordering a double espresso without a clear business need.

    The key to financial survival is the implementation of three technical foundations:

    • RAG (Retrieval-Augmented Generation): Providing the model with only the data it actually needs, drastically reducing the token ‘diet’.
    • Specialist models: Abandoning the ‘all-knowing’ giants in favour of smaller, cheaper and finely-trained models for repetitive tasks.
    • Corporate charter for the bot: Establish rigid iteration limits and budgets per agent. This is a matter of elementary financial hygiene.

    We also need to review how our people work with the technology. Identifying the ‘Centaurs’ (experts empowering their AI skills) and eliminating the ‘Automators’ (unreflectively delegating work to a machine) will allow a real increase in ROI. The most expensive and fastest way to waste an innovation budget is to buy millions of tokens just to have teams working exactly as they will in 2022, only with an on-screen chat interface.

     

  • AI agents are the new potential vulnerabilities. How not to lose control of your company’s cyber security?

    AI agents are the new potential vulnerabilities. How not to lose control of your company’s cyber security?

    In the world of technology, the year 2026 will probably go down as the moment when the definition of ‘user’ changed permanently. For years, we took it for granted that there was a human on one side of the screen and a machine on the other, executing commands. Today, this boundary is becoming fluid. The advent of autonomous agents, capable of operating autonomously in network and transaction systems, means that artificial intelligence is no longer just a tool in the hands of an employee. It has become a new autonomous link in the structure of an organisation.

    Autonomy mechanism: Out of sight

    The evolution from simple language models to agent-based systems such as OpenAI Atlas has changed the dynamics of working with data. Today’s business environment is based on processes where AI not only generates reports, but can call APIs on its own, manage logistics or interact with external ecosystems. This shift from a ‘question-answer’ to a ‘goal-implementation’ model takes the burden of repetitive tasks off teams, but also introduces a new layer of complexity.

    In this set-up, the so-called process debt becomes a challenge. It arises discreetly when the automation of successive work steps is carried out without full insight into the logic behind the decisions made by the machines. Unlike human errors, which are usually visible immediately, AI-based system errors can accumulate for years within an organisation as small, undetectable deviations, affecting the ultimate profitability of operations in ways that are difficult to diagnose unequivocally.

    A shift in security: From blocking to identity management

    As AI agents become more autonomous, the traditional approach to security, based on static filters and firewalls, seems to be losing relevance. In 2026, the discussion about protecting corporate assets is shifting towards AI Access Fabric – a concept in which each AI process has its own verifiable identity.

    Instead of asking ‘how to block AI’, organisations are starting to look at ‘how to empower it’. The modern approach is that an AI agent acting on behalf of a company should be subject to the same rigours as any other system user. Classification of data at source and isolation of risky sessions inside agent browsers are becoming standard elements of digital hygiene. This maintains operational fluidity while reducing the risk of external malicious data sources manipulating the model.

    The new role of corporate governance

    The integration of AI systems into the company’s bloodstream has meant that the management of their status (AI-SPM) has naturally become part of the wider corporate governance framework. Compliance with standards such as NIST or ISO is no longer seen as a bureaucratic requirement and is beginning to be regarded as a foundation for operational stability.

    Traceability is becoming a key element of this new structure. The traceability of an agent’s decision path – from data intake to analysis to final action – is today not only a security issue, but also a matter of business transparency. Organisations that rely on transparent workflows are building error-proof systems that cannot be detected by the naked eye.

    The perspective of tomorrow: Strategic symbiosis

    Observing the business landscape of 2026, it is hard not to get the impression that success no longer depends on the sheer scale of AI implementation, but on the quality of the architecture on which it is embedded. Artificial intelligence, operating in a predictable manner and subject to clear governance rules, becomes a catalyst for growth that does not burden organisations with unforeseen risks.

    In this new paradigm, the role of business leaders is evolving. Instead of overseeing technology, they are designing an environment where people and autonomous agents can collaborate within secure, auditable and understandable rules. This is not a revolution in security, it is a new definition of digital maturity for the modern enterprise.

  • Kasparov syndrome in business. Why companies that treat AI as a partner, not a replacement, are winning

    Kasparov syndrome in business. Why companies that treat AI as a partner, not a replacement, are winning

    The story of the relationship between man and machine is often told through the prism of a single event: Garry Kasparov’s loss to supercomputer Deep Blue in 1997. In the popular narrative, this was the moment of the symbolic passing of the baton, the beginning of the dominance of silicon over protein. But from a business and strategic perspective, it is much more interesting what happened afterwards. Chess did not disappear. On the contrary, it has evolved into a model of so-called ‘centaur chess’, where teams made up of a human and an algorithm achieve results that are unattainable by either a standalone grandmaster or a standalone computing engine.

    Today, almost three decades later, the same mechanism is beginning to shape the global economy. We are at a turning point that analysts are increasingly comparing to 1999 and the Cloud Computing revolution. Back then, software distribution and infrastructure scalability were at stake.

    What is now at stake is redefining the very nature of operational work through the implementation of so-called Agentic AI – artificial intelligence based on autonomous agents.

    The key challenge has ceased to be an existential question (“Will AI replace us?”) and has become an architectural question: how do we design an organisation to avoid “digital friction” and effectively integrate silicon agents with human capital?

    Cognitive dissonance: Smart Home, Legacy Office

    The current technological landscape in large organisations is characterised by a specific paradox. The end user – who is also an employee of the corporation – is experiencing unprecedented digital fluidity in his or her private life.

    Consumer applications, supported by advanced algorithms, predict intent, integrate payments, logistics and communication in real time. The experience is holistic and immediate.

    Meanwhile, once logged into company systems, the same user is confronted with the reality of distributed applications, data silos and manual processes. CRM, ERP or HRIS systems often do not communicate seamlessly with each other, forcing a human to play the role of a ‘human API’ that manually moves data from one window to another.

    It is in this gap – between the expectations set by the consumer market and the realities of the enterprise environment – that the demand for a new generation of solutions is born. Agentic AI is no longer just an analytical tool or a text generator. It is an attempt to transfer this consumer fluency and decision-making into the complex bloodstream of the enterprise.

    The 95 per cent trap and the “Agent Gap”

    Enthusiasm for generative artificial intelligence (GenAI) has led to thousands of pilot projects being launched in recent years. However, a cool analysis of the data – corroborated by, among other things, MIT studies or reports from consultancies – indicates a worrying trend. It is estimated that up to 95% of these initiatives do not get beyond the Proof of Concept (PoC) phase and into production environments.

    The reason for this is rarely due to the inadequacy of the language models (LLMs) themselves. These models are ‘intelligent’ enough to understand commands. The structural problem is a lack of integration, a phenomenon referred to as the ‘Agentic Gap’.

    Artificial intelligence in isolation is glamorous, but not very business effective. In order for an AI agent to do real work – for example, to handle a return of goods on its own, change parameters in the supply chain or prepare a personalised B2B offer – it must have access to:

    • Trusted real-time data (Data Layer).
    • Business logic and compliance rules.
    • Possibilities for calling actions in other systems (Action Layer).

    The failure of most implementations stems from trying to overlay modern AI on top of an outdated, unstructured data infrastructure. Without a solid integration foundation, agents remain ‘hallucinatory advisors’ instead of becoming trusted task performers.

    From automation to autonomy: The Agentic Model

    The difference between legacy automation (RPA) and Agentic AI is fundamental. Traditional automation follows a rigid scenario (if/then). Agentic AI has the ability to reason, plan sequences of actions and adapt to changing conditions, while maintaining human-designated safety barriers.

    Implementing the agentic paradigm means moving from a ‘man operates a tool’ model to an orchestration model, where a human manages a swarm of agents. In this new working architecture:

    • Agents take on repetitive tasks, requiring analysis of large data sets and rapid, low-level decision-making.
    • People are migrating towards high-value-added tasks: exception management, strategy, human relations and ethical oversight.

    This is not a zero-sum game where the machine’s gain is the human’s loss. IDC analysis suggests that by 2030, AI-driven digital work will generate a global economic impact of trillions of dollars. This value will not arise from cost reductions (job replacement), but from the reallocation of resources.

    Freeing specialists from the administrative burden allows them to explore business areas that were previously neglected for lack of time or capacity.

    Integration as a new innovation

    The lessons from the current stage of AI development are clear. The time of isolated experiments is coming to an end. Competitive advantage is being built by organisations that can systemically integrate AI into the core of their business.

    The Agentic AI implementation strategy should be based on three pillars:

    1. getting the data layer right: Agents are only as good as the data they work on. Without a unified view of the customer (Customer 360) and the product, implementing AI will only multiply the chaos.

    2. Platformisation: Instead of building your own models from scratch, it is proving more efficient to use platforms that offer a ready-made framework for agents (‘Agentforce’), while ensuring security and regulatory compliance.

    3 The evolution of leadership: the new reality requires the management of hybrid teams. The ability to define goals for agents, audit their work and design processes in which machine and human delegate tasks seamlessly becomes a key competence.

    Garry Kasparov did not turn his back on technology after his defeat. He realised that the chess engine is not a game killer, but a powerful analytical tool that elevates the game.

    In business, we are seeing an analogous process. The question being asked at board meetings today has evolved. It no longer reads: “Should we implement AI?”. It is, “How do we make it so that technology realistically extends our capabilities?”. The answer lies in smart integration and the understanding that in the economy of the future, the winners are not those who have the best AI, but those who can best work with it. Agentic AI is not the end of human work – it is the beginning of higher-value work.