Tag: Google

  • Why are AI agents becoming the target of cyber attacks? Trend overview 2026

    Why are AI agents becoming the target of cyber attacks? Trend overview 2026

    Over the past eighteen months, the enterprise sector has moved from a fascination with generative artificial intelligence to a phase of actively implementing it into operational processes. A key trend in this evolution is the shift from passive language models (LLMs) to AI agents – autonomous systems capable not only of generating text but also of performing tasks: writing code, managing email communications, calling APIs or authorising financial transactions. With this agility, however, comes a critical new category of threats: Indirect Prompt Injection (IPI). Recent data from reports by Google and Forcepoint shed new light on the scale and sophistication of these attacks, suggesting that agent systems security will become one of the biggest challenges for chief information security officers (CISOs) in the coming years.

    IPI mechanism: Data as instructions

    Traditional prompt injection attacks relied on direct manipulation of the model by the user (e.g. attempting to ‘jailbreak’ a bot by giving it the command to ignore security). Indirect Prompt Injection is a much more insidious phenomenon. It involves inserting malicious instructions into content that the AI agent processes as input – this could be web pages, PDF documents, emails or code repositories.

    The problem lies in the very architecture of current LLM models, which cannot absolutely separate system instructions (issued by the tool developer) from external data. When an AI agent analyses a web page in search of information, it may come across hidden text, which the model will interpret as a new overarching command. As a result, the attacker takes control of the agent’s logic, instructing it to, for example, send sensitive data to an external server or perform a destructive operation on the user’s file system.

    Analysis of market trends

    Google Security Research researchers, analysing CommonCrawl resources, point to an alarming trend. Between November 2025 and February 2026, there was a 32 per cent increase in the number of detected malicious injection attempts in publicly accessible web resources. This relatively short time frame demonstrates the dynamism with which the criminal community is adapting to new technologies.

    From a market perspective, Google’s observation on cost-benefit calculus is key. Until recently, IPI attacks were considered the realm of academic research – they were difficult to implement and often failed due to the instability of the results generated by AI. Now, with the increased reliability and agility of agents, these attacks are becoming ‘viable’. AI’s ability to autonomously call external tools (tool calling) means that a successful injection of instructions has an immediate and measurable financial or operational impact.

    The Google study allowed the current IPI trials to be categorised into five groups:

    1. Harmless jokes: Attempts to change the tone of an agent’s response.
    2. Helpful tips: Suggesting preferential answers to the model (often on the edge of ethics).
    3. Optimisation for AI (AI-SEO):Hidden phrases to position products in assistants’ responses.
    4. Deterring agents: Instructions prohibiting AI from indexing or summarising a particular page.
    5. Malicious attacks: Data exfiltration and sabotage (deletion of files, destruction of backups).

    Although the latter are often at an experimental stage at present, their increasing complexity suggests that it is only a matter of time before they enter the phase of mass attacks.

    From coding assistants to financial transactions

    The Forcepoint report provides concrete evidence of how IPI manifests itself in professional software and financial tools. Experts have identified ten verified indicators of attacks targeting popular tools such as GitHub Copilot, Cursor and Claude Code.

    The attack scenario is mundane: a programmer uses an AI agent to analyse a library or documentation on an external site. This site contains a hidden AI instruction. When the agent ‘reads’ the site, it is instructed to execute a command in the terminal that destroys local backups. Since the agent has permission to operate on the file system (which is essential in a programmer’s job), the command can be executed without additional verification.

    Even more dangerous are attempts at financial fraud. Forcepoint points to cases where complete transaction instructions are sewn into web content, e.g. PayPal.me links with a predefined amount along with step-by-step instructions on how the agent is to finalise the payment. In systems where AI has access to digital wallets or corporate payment systems, the risk of capital loss becomes immediate.

    The paradox of detection and the challenges for business

    One of the most worrying findings from the Forcepoint report is the so-called detection paradox. The phrases and keywords used by attackers to inject hints are identical to the terminology the cyber security community uses to describe and analyse these threats. This renders simple filters based on word blacklists ineffective – either blocking legitimate expert communications or letting intelligently worded attacks through.

  • Alphabet invests $40bn in Anthropic. Is it fighting for control with Amazon?

    Alphabet invests $40bn in Anthropic. Is it fighting for control with Amazon?

    Alphabet, Google’s parent company, has announced its intention to invest up to $40 billion in Anthropic, a startup that for the Mountain View giant is both a key cloud customer and one of its fiercest competitors in the race for supremacy in artificial intelligence.

    The structure of this deal reflects the new reality of funding the AI sector, where capital is closely tied to specific outcomes. Google will put up $10 billion in cash at a $350 billion valuation for the startup. The remaining 30 billion will only be deployed once the developers of the Claude model achieve rigorous performance targets. For Alphabet, this is not only an investment of capital, but above all an attempt to forge closer ties with an entity that has emerged as a leader in niches where Google is still searching for its identity.

    The move comes just days after Amazon pledged its own $25 billion cash injection to Anthropic. A situation where two of the world’s biggest cloud providers are bidding for the same startup shows how desperately tech giants need the success of external models to drive sales of their own computing infrastructure.

    Anthropic’s driving force is no longer just the promise of secure artificial intelligence, but real financial results. The company’s annual revenue has just surpassed the $30 billion barrier, an impressive jump from the $9 billion recorded at the end of 2025. Investors are responding enthusiastically, with some offers from the venture capital market valuing the company at up to $800 billion. Underpinning this growth is Claude Code, a tool that dominates the software segment, and Anthropic’s Cowork agent, whose plug-ins have recently caused jitters in the stock markets, driving down the valuations of traditional SaaS software companies.

    Anthropic’s greatest challenge, however, remains its ‘hunger for power’. Scaling the models requires infrastructure of a scale never seen before. The startup is securing this through multi-year agreements with Broadcom and CoreWeave, as well as an ambitious $50 billion plan to build its own data centres in the US.

    The market is divided into specialised tools and Anthropic, with its focus on coding and autonomous agents, is proving that it is possible to successfully challenge general-purpose models. Alphabet, by investing in Anthropic, is buying itself an insurance policy in case the startup’s approach proves to be the target business standard.

  • Meta will overtake Google – Ad revenue forecasts for 2026

    Meta will overtake Google – Ad revenue forecasts for 2026

    For more than a decade, the hierarchy in Silicon Valley was unchanged: Google dominated the digital advertising ecosystem, with Meta in a solid second place. However, according to the latest forecasts from research firm Emarketer, we are approaching a historic turning point. By the end of 2026, the Mark Zuckerberg-led giant is poised to dethrone Alphabet in terms of global net ad revenue, reaching a ceiling of 243.46 billion against a projected 239.54 billion for Google.

    This changing of the guard is not just a matter of numbers, but more importantly a testament to the effectiveness of the transformation that the Met has undergone following the privacy crisis in Apple’s systems. A key driver has been the Advantage+ package, which uses artificial intelligence to automate campaigns. By simplifying setup and optimising ROI, the tool has made marketers more willing to move budgets to where the algorithm does the hardest work for them.

    Meta’s strategic advantage also stems from its growth rate. While Google maintains a steady but more modest rate of 11.9%, Meta is accelerating – from a projected 22.1% in 2025 to 24.1% a year later. The company is effectively monetising new channels such as WhatsApp and Threads, directly hitting the X platform’s position, while Reels is successfully competing for users’ attention with TikTok and YouTube Shorts.

    The lesson for business decision-makers is clear: the advertising market is becoming increasingly consolidated. Although smaller players such as Snap and Pinterest offer unique niches, in times of geopolitical uncertainty, capital is fleeing to the safe havens with the largest reach. Google remains a powerhouse, but its diversification into YouTube Premium subscriptions, while beneficial for financial stability, is weakening its momentum in the direct advertising primacy battle.

    Nevertheless, the dominance of the triopoly of Meta, Google and Amazon – which is expected to control more than 62% of global digital ad spend in 2026 – appears unthreatened by legal issues. Analysts predict that even recent court rulings will not put the brakes on this machine. The race for the leadership seat is entering a decisive phase, and Meta, with its bet on AI and short video content, now seems to have the better leverage in this competition.

  • Hyperscalers are taking over the data centre market. Is this the end of on-premise?

    Hyperscalers are taking over the data centre market. Is this the end of on-premise?

    For decades, the company server room was the technological equivalent of a family castle. It was tangible proof of sovereignty, a safe haven for data and the pride of IT departments that nurtured their own silicon with almost craftsmanlike precision. But the latest predictions from Synergy Research Group plot a scenario in which these digital fortresses become costly open-air museums. By 2031, hyperscalers such as Google, Microsoft and AWS will have seized 67% of global data centre capacity for themselves. What we are seeing is a rapid shift in the centre of gravity of the digital world, necessitated by the brute physics of artificial intelligence.

    The architecture of coercion

    In 2018, enterprises controlled more than half of the world’s computing infrastructure. The prospect of 2031, in which this share shrinks to just 19%, seems at first glance a statistical error. However, the reason for this dip is not an unwillingness to own, but an inability to meet the demands of the new era. Modern AI systems, based on GPUs and specialised chips such as TPUs, require power densities and cooling systems that exceed the design standards of traditional office buildings.

    Hyperscalers are building infrastructure today at fourteen times the scale of just eight years ago. This scale creates a barrier to entry that is impossible for a single organisation to break through. When Satya Nadella announces a doubling of Microsoft’s physical data centre footprint in just two years, he is not talking about building data warehouses, he is talking about creating large-scale innovation reactors. For the average enterprise, trying to catch up to this pace in-house would be akin to building a private power plant network just to power the office kettle.

    The currency of gigawatts and limits

    In the new economic order, capital is no longer the only determinant of development opportunities. The availability of computing power, treated as a scarce and limited resource, is coming to the fore. Strategic partnerships, such as those entered into by Anthropic with Google or OpenAI with AMD, are in fact reservations of energy and silicon for years ahead. In a world dominated by language models and advanced analytics, the ‘power shortage’ referred to by Microsoft’s Amy Hood is becoming a real operational risk for any technology-dependent business.

    This phenomenon is fundamentally changing the role of technology leaders in organisations. The CIO ceases to be a steward of fixed assets and becomes a digital commodity strategist. He or she must operate in a reality where computing power is rationed and its price can skyrocket under local energy considerations. Projected energy price spikes of up to 79% in technology hubs will force a new discipline on business: algorithmic frugality.

    Physical resistance of the cloud

    Although the term ‘cloud’ suggests something ethereal and intangible, its foundations are heavy, loud and raising increasing public opposition. The expansion of technology giants is colliding with the barrier of local politics and ecology. Digital progress is no longer seen as an indisputable good.

    For business, this means a new form of localisation risk. Dependence on one region or supplier coming into conflict with a local community or energy system can become a bottleneck for AI-based product development. This is why more and more companies are attempting to secure operational continuity in the face of growing resentment towards energy-intensive giants.

    Risks of gigantism and opportunities of localism

    The dominance of hyperscale providers brings with it risks that become market opportunities for on-premise proponents. Dependence on a narrow group of suppliers (vendor lock-in) and their vulnerability to local social conflicts or investment blockades – such as those in Wisconsin or Maine – make a diversified in-house infrastructure an insurance policy.

    Opportunities for in-house data centres lie in their ability to adapt where the giants are too sluggish. Local units can deploy innovative heat recovery systems or use niche, green energy sources more quickly, building better relationships with the environment than anonymous, energy-intensive megastructures. This is where ‘edge AI’ is born, processing data where it arises, without the need for costly and slow transfer to global centres.

    Balance as the new overarching strategy

    A comprehensive look at 2031 dictates that we see it not as capitulation but as a new specialisation. The threat to business is not the power of Google or Microsoft, but the lack of an in-house, thoughtful infrastructure strategy. Organisations that indiscriminately abandon their own resources may wake up to a moment when access to innovation is rationed by external suppliers.

    The right chess move today is to reinvest in ‘intelligent on-premise’. This is a smaller but denser infrastructure, optimised for a company’s specific, unique algorithms, while generic computing tasks are delegated to the cloud. This duality allows the company to benefit from the enormity of hyperscalers’ investments, while retaining the hard core that makes the company a sovereign player in the market.

  • Broadcom and Google challenge Nvidia: New AI chip deal by 2031

    Broadcom and Google challenge Nvidia: New AI chip deal by 2031

    Monday’s announcement by Broadcom and Google sheds new light on the balance of power in Silicon Valley. The extension of the partnership until 2031 is not just another supply agreement; it formalises the foundation on which Google is building its alternative to the Nvidia ecosystem.

    The strengthening of the relationship with Broadcom, a key partner in the design of Tensor Processing Units (TPUs), suggests that Google is betting on deep vertical integration. For Broadcom, the contract guarantees long-term revenue in the custom processor (ASIC) segment, which is becoming a safe haven for investors looking for AI exposure outside of Jensen Huang’s portfolio. The market’s reaction – with shares up 3% in after-hours trading – confirms that analysts appreciate this predictability.

    However, the most intriguing piece of this puzzle is Anthropic’s role. The startup, which until recently was mainly seen through the prism of its relationship with Amazon, is now gaining access to 3.5 gigawatts of computing power based on Google processors, starting in 2027. For Anthropic, this is a purely pragmatic move. With revenues crossing the $30 billion barrier in 2026, the company cannot afford to be dependent on a single silicon supplier. Diversifying between AWS Trainium, Google’s TPU and Nvidia’s GPU is a ‘multi-cloud’ strategy taken to the hardware level.

    For the wider business market, this sends a clear message: the era of GPU monoculture is coming to an end. Google is effectively using TPUs as a growth lever for its cloud business, proving that optimising for specific AI workloads can deliver real savings and performance that one-size-fits-all solutions do not offer.

    At the same time, Anthropic’s pledge to invest $50 billion in US computing infrastructure is part of a wider geopolitical trend. With the increasing demand for power, having physical access to power and hardware is becoming a more important barrier to entry than the quality of the algorithms themselves. In this arms race, Broadcom is emerging as the quiet winner, providing the ‘picks and shovels’ for the biggest players, while Google and Anthropic are building an ecosystem capable of challenging the current status quo.

  • Child safety online. Courts hit back at social media giants

    Child safety online. Courts hit back at social media giants

    For nearly three decades, Section 230 of the US Communications Decency Act was the most effective line of defence for technology giants. This provision, which protects platforms from liability for user content, was the foundation on which the giants Meta or Google grew. However, recent jury verdicts in California and New Mexico suggest that the era of impunity based on this provision is coming to an end, and the focus of litigation is shifting from the content itself to the architecture of the systems.

    In Los Angeles, a jury found Meta and Google liable for a young woman’s mental health problems, ordering the payment of $6 million in damages. An even more severe blow fell on Meta in New Mexico, where it was ordered to pay $375 million for misrepresenting the safety of its products and allowing abuse of minors. The key here, however, is not the damages themselves, but the legal strategy: the plaintiffs successfully proved that it was not the specific post or video that was harmful, but the deliberate design of the algorithms and interfaces to addict the user.

    Courts are beginning to distinguish between a platform’s role as a ‘transmitter’ of information and its role as a ‘designer’ of experiences. If these rulings hold up in the appellate processes, every product feature – from the infinite scroll mechanism to recommendation systems – could become the basis for multi-billion dollar lawsuits.

    The risk is not limited to social media. Similar battles are already being fought by Roblox, and experts warn that all platforms hosting user-generated content, including gaming or e-commerce sites, could be targeted.

    Although Meta and Google are announcing a fight in the higher courts, the mood in the US legal system is changing. Even Supreme Court judges are suggesting that Section 230 cannot be a ‘get-out-of-jail-free card’ that exempts companies from elementary concern for the safety of their customers. For technology leaders, the time is coming when an ethical audit of algorithms will become as important as a financial audit. The outcome of the upcoming appeals will not only decide the fate of thousands of pending cases, but will set new rules of the game for the entire digital economy.

  • Google closes Wiz acquisition

    Google closes Wiz acquisition

    After months of speculation and negotiations, Google has finalised the biggest deal in its history, acquiring security platform Wiz for $32 billion. The move is not just a defensive reinforcement of infrastructure, but an aggressive attempt to redefine Google Cloud’s position in the clash with Microsoft Azure and AWS. In a world where AI budgets are growing faster than the security of these systems, Google has bought itself the most effective tool to fight for the trust of corporate customers.

    The decision to leave Wiz as a standalone brand operating inside Google Cloud suggests that Mountain View has learnt lessons from previous, less agile integrations. Wiz has built its power on ‘cloud agnosticism’, offering data protection regardless of whether it rests on Amazon or Microsoft servers. Maintaining this status quo is crucial. Google thus becomes not just a provider of computing power, but a global arbiter of security in multi-cloud environments.

    From a business perspective, the deal fills a critical gap in Google’s offering. While competitors have focused on traditional network monitoring, Wiz has from the outset designed its solutions for the specifics of cloud code and, now most importantly, artificial intelligence models. This integration allows companies to secure the entire AI lifecycle, from training models on massive datasets to their production deployments.

    The security market is undergoing consolidation and Google, by putting $32 billion on the table, is sending a clear message: the future of the cloud belongs to those who can guarantee its integrity in the age of generative AI.

  • Operation BRICKSTORM: When code becomes the target of a cyber attack and trust becomes the most expensive currency

    Operation BRICKSTORM: When code becomes the target of a cyber attack and trust becomes the most expensive currency

    In the classic iconography of cybercrime, the image of the attacker has evolved from the masked amateur hacker to organised crime groups paralysing hospitals for ransom. But the latest data flowing from the Google Threat Intelligence Group’ s 2025 report points to the birth of a new, much more sophisticated era. It is a time when the traditional ‘bank robbery’ – understood as the theft of personal data or outright theft of funds – is giving way to deeply strategic operations. In this new threat landscape, Operation BRICKSTORM is becoming a symbol of change. The attackers are no longer interested only in the contents of the vault; their targets have become the structural plans of the building itself, the schematics of the alarm systems and the fingerprints of the guards.

    Infrastructure as a soft underbelly

    For years, the cyber security narrative has centred around human error. Phishing and social engineering were cited as the main infection vectors, shifting the burden of responsibility to employee training and end-user vigilance. However, 2025 brings a brutal verification of these assumptions. Of the documented ninety zero-day vulnerabilities exploited in the past year, almost half – a record 48 per cent – targeted corporate technologies directly.

    A particular battleground has become edge devices and network products, which are often a kind of ‘no-man’s land’ in modern IT architecture. These devices, although crucial to business continuity, are rarely equipped with advanced detection and response mechanisms such as EDR systems. For espionage groups, especially those linked to state decision-making centres, they have become an ideal entry point. Exploiting a security vulnerability has now become the most common path of first penetration, overtaking even stolen credentials or social engineering attacks in the statistics.

    Strategic Theft: The Anatomy of a BRICKSTORM Operation

    Among the many incidents recorded in the autumn of 2025, Operation BRICKSTORM stands out as heralding a new trend in industrial espionage. Attributed to Chinese state actors, the activities were not limited to the routine collection of customer data. Their targeting vector was intellectual property in its purest form: source code and proprietary software documentation.

    From a business perspective, such a shift in priorities in attackers is a wake-up call of the highest order. After all, stealing source code is not a one-off loss; it is a process that allows attackers to carry out extremely precise reverse engineering. With an insight into the software architecture, groups such as UNC3886 can identify further vulnerabilities, not yet known to anyone, for future operations. This is a mechanism for building a long-term advantage, in which the victim not only loses their unique know-how, but becomes an unwitting testing ground for the next generation of exploits.

    Cascading risks and erosion of market confidence

    Source Kd is the foundation of market valuation and a guarantor of customer confidence. BRICKSTORM incidents carry a cascading risk that extends far beyond the walls of the attacked organisation. Once a technology provider loses control of its blueprints, the threat spills over to the entire ecosystem of its customers. The attacked company becomes, in this set-up, ‘patient zero’ in an epidemic of supply chain attacks.

    It is worth noting that knowledge of upcoming updates, planned functionalities or specific encryption methods contained in the software documentation allows competitors – or hostile state actors – to completely neutralise a brand’s innovative advantage. Product security thus ceases to be a mere technical issue and becomes an integral part of a market survival strategy. The loss of Intellectual Property is often irreversible, and its effects may only manifest themselves in the financial sheets after several years, when competitors manage to implement solutions based on stolen knowledge.

    Commercial zero-day market

    An extremely significant element of the landscape described by Google is the change in the authorship structure of attacks. For the first time in the history of observation, more zero-day vulnerabilities were attributed to commercial surveillance software providers than to classic state-sponsored groups. This phenomenon can be called the democratisation of advanced cyber defence. These entities are selling their services to both governments and private customers, drastically lowering the barrier to entry into the world of the most sophisticated hacking operations.

    From the point of view of the business decision-maker, this means that the profile of the potential adversary has blurred. The threat no longer flows only from the direction of the big powers, but can be funded by any market player who decides to purchase a ready-made ‘surveillance package’. The increase in financially motivated attacks, including those leading to the use of ransomware, confirms that zero-day vulnerabilities have become a common commodity and their exploitation a standard tool in the arsenal of modern economic crime.

    Beyond the limits of the fort

    Since the statistics clearly show the ineffectiveness of the traditional perimeter protection approach, a redefinition of security strategy becomes necessary. Focusing on building ever-higher walls around an organisation makes no sense when almost half of all attacks hit the very foundations of these walls – that is, the network infrastructure and VPN devices.

    The defence strategy should be based on deep value segmentation. Key resources, such as source code repositories, require isolation beyond standard procedures. It becomes necessary to implement a paradigm of limited trust (Zero Trust) not only at the user level, but above all at the level of machine-to-machine communication processes. Monitoring for anomalies inside the network must become a priority, because it is there, in the silence of edge devices, that attackers such as those in BRICKSTORM operations build their long-term presence.

    Arbitrator in the arms race

    In the report described, artificial intelligence is emerging as an accelerator of activity on both sides of the barricade. Attackers are using AI to automate the process of finding vulnerabilities and scaling attacks, reducing the time between the publication of a new technology and its first exploitation to almost zero. In this context, traditional vulnerability management, based on cyclical audits, is becoming an anachronism.

    The only real answer seems to be the use of AI agent-based systems that proactively and autonomously scour their own infrastructure and source code for bugs before they are spotted by an adversary. The race for security in 2026 therefore becomes largely a technological race to see who can integrate intelligent automation into their processes faster and more efficiently. The human role in this set-up is evolving from that of a security operator to a strategist who sets priorities for autonomous defence systems.

  • Big Tech workers vs Pentagon. Military pressure on AI sector sparks resistance

    Big Tech workers vs Pentagon. Military pressure on AI sector sparks resistance

    When US Secretary of Defence Pete Hegseth called the development of artificial intelligence a military arms race in January, relations between the government and Silicon Valley entered a new and turbulent phase. We are now witnessing unprecedented pressure from the US administration on key players in the AI sector, which is being met with increasing resistance from the developers of these technologies themselves.

    A growing conflict has been sparked by an ultimatum issued to Anthropic. The Pentagon is reportedly threatening to use the Defence Production Act to force the company to adapt its language models to the needs of the US military. A refusal would result in the company being deemed a supply chain risk. In response to this pressure, Anthropic has made it clear that it will not make its solutions available for mass surveillance of citizens or to power weapons capable of autonomous killing without close human oversight.

    The situation instantly triggered a wave of solidarity within the competing companies. A group of vetted Google and OpenAI employees have signed a joint petition entitled ‘We will not be divided’. The signatories of the document warn that the Department of Defence is attempting to use classic divisive tactics, hoping to force the tech giants to make concessions that AI security leaders have not agreed to. The initiative aims to create a united industry front. Employees are calling on their companies’ boards to maintain standards and not hand over technology to the military without proper ethical safeguards.

    From a business perspective, the threat of using extraordinary national security powers against private technology entities is an entry into completely uncharted territory. As Dean Ball, former White House technology policy advisor, notes, Anthropic faces the dangerous spectre of quasi-nationalisation or exclusion from the market. This aggressive move by the administration also sends a clear and worrying message to the entire innovation ecosystem, suggesting that doing business with the government carries a huge risk of losing operational independence.

    These developments will define not only the future of weapons contracts in Silicon Valley, but above all the limits of commercialisation and control of the most powerful models of artificial intelligence.

  • Online advertising market under scrutiny. The Belgian authority is checking Google

    Online advertising market under scrutiny. The Belgian authority is checking Google

    Alphabet, Google’s parent company, is facing another regulatory challenge in Europe. This time, the company’s key driving engine, online ad sales, has been targeted. The Belgian competition watchdog has announced that it has opened an investigation, pointing to serious indications suggesting an abuse of market power by the tech giant. Although the case is at an early stage, it sheds light on the growing pressure on a business model that has dominated the digital ecosystem for years.

    From a business perspective, the online ad market is a complex web of tools connecting advertisers to publishers. Google has a strong position at almost every stage of this chain. Belgian officials are investigating whether the current model violates antitrust rules and restricts the free market. Possible regulatory interventions could affect the distribution of advertising revenues and force greater transparency throughout the auction process.

    The situation in Belgium is not an isolated case, but rather part of a wider European puzzle. Mountain View has been repeatedly targeted by EU regulators, resulting in multi-billion dollar fines in recent years. Moreover, the spectre of another European Commission investigation looms on the horizon. According to the company’s recent communication to advertisers, Brussels is looking into concerns about potential unfair overpricing of advertising space.

    The technology company has consistently defended its operating model in an effort to tone down sentiment. Google representatives argue that their advertising systems are the foundations that level the playing field, allowing small and medium-sized businesses to compete effectively with the largest global brands. In addition, they emphasise a key argument for consumers themselves – it is the profits from advertising that allow them to maintain free access to most resources on the web.

    The outcome of the Belgian investigation remains unclear, but it is a clear sign to the industry that the architecture of the AdTech market will be subject to increasingly stringent audits. Companies basing their growth strategies on search engine campaigns should keep a close eye on the actions of European regulators, as they may ultimately remodel the costs and rules under which user attention is bought online.

  • The end of cheap power for AI? Trump hits back at tech giants

    The end of cheap power for AI? Trump hits back at tech giants

    The Trump administration has thrown down the gauntlet to technology giants, confronting them with a dilemma that could redefine the economics of data centres in the US. During his State of the Union address, the president announced that major technology companies would be forced to build their own power supplies to relieve the strain on the national electricity grid. While the rhetoric focuses on protecting consumers’ wallets from rising bills, for the AI and cloud computing sector this marks a shift from a pure consumption model to a role as critical infrastructure developers.

    The move is a direct response to tensions in states such as Virginia and Ohio, where the rapid growth of AI clusters has led to an overloading of local grids. PJM Interconnection, a key operator on the East Coast, has previously suggested that new large-scale energy customers need to bring their own capacity into the system. Now these suggestions are becoming the foundation of hard federal policy.

    For leaders such as Microsoft, Google and Amazon, the announced March meeting at the White House will not just be a courtesy visit. These companies have been investing in renewables for years, but Trump’s new doctrine suggests something much more demanding: physical separation from the public grid or direct financing of new energy generation (so-called *behind-the-meter*). This forces capital-intensive investment, probably towards small modular reactors (SMRs) or advanced energy storage systems, which are currently in the early stages of deployment.

    From a business perspective, Trump is trying to kill two birds with one stone. On the one hand, the administration wants to maintain its lead over China in the AI arms race, which requires gigantic computing power. On the other hand, rising energy prices have become a political burden ahead of the upcoming mid-term elections. By shifting the cost of infrastructure development to Big Tech, the White House is taking this burden off the shoulders of voters, while forcing companies to accelerate innovation in the energy sector.

  • Green light from Brussels: Google closes the acquisition of Wiz

    Green light from Brussels: Google closes the acquisition of Wiz

    The European Commission’s approval of Google ‘s acquisition of Wiz above all signals a shift in the balance of power in the cloud arms race. For $32 billion, Google Cloud is buying what it has failed to fully organically develop over the years: the trust of half of the Fortune 100 companies and the technological ‘seatbelt’ that is becoming the standard in modern infrastructure.

    From Brussels’ perspective, the deal proved surprisingly easy to swallow. The decision by Teresa Ribera, executive vice-president of the EC, is based on a pragmatic assessment of the market, where Google is still chasing Amazon (AWS) and Microsoft (Azure). Brussels felt that absorbing Wiz would not cut off the competition’s oxygen. The key argument was interoperability; multicloud customers would not be trapped in Google’s ecosystem, and sensitive commercial data of competitors integrating with Wiz would remain protected. It is a rare case where antitrust authorities see consolidation as an opportunity to make the number three player more competitive against the dominant leaders.

    For the business world, this deal is a lesson in patience and the brutal valuation of security. Back in 2024, Assaf Rappaport, CEO of Wiz, rejected a $23 billion offer, baiting investors with visions of an IPO. A year and a half later, Google came back to the table with a premium of $9 billion. The jump in valuation from 12 billion (during the May 2024 funding round) to 32 billion reflects a new market reality: in the age of AI and distributed infrastructure, security is no longer an add-on, but a foundation for which corporations are willing to pay any price.

    The integration of Wiz with Google Cloud is a purely offensive move. Google is not buying revenue – those at $350 million are a drop in the ocean of need – but it is buying relationships with executives from some of the world’s largest companies. If Google manages to keep its Wiz solutions agnostic, it could become the main guarantor of security in multi-cloud architectures, monetising the protection of data stored with its biggest rivals. It’s a risky but logical gamble for the highest stakes in the digital economy.

  • Google and AWS want to go local. IT giants battle it out for the European cloud market

    Google and AWS want to go local. IT giants battle it out for the European cloud market

    For the past decade, the technology world has fed us a vision of digital cosmopolitanism. Cloud computing was supposed to be a transnational entity, an ethereal layer of innovation that, like Roman aqueducts, provides life-giving resources regardless of latitude. We believed in ‘Cloud Anywhere’, in stateless clusters and an architecture for which national borders were merely an annoying artefact of the analogue past.

    However, 2026 brings a painful wake-up call. According to Gartner‘s latest forecasts, global spending on cloud sovereigns will increase by 35.6%, reaching a not inconsiderable $80 billion. This is no mere market correction. This is the moment when digital globalism collides with the hard wall of geopolitics, and the Seattle and Mountain View giants – hitherto the priests of universalism – must hastily learn their local dialects.

    The anatomy of a concession

    Rarely in the history of IT have the major players voluntarily abandoned economies of scale. The foundation of the power of AWS or Microsoft Azure was unification: one technology stack, one operating model, one global management system. But today’s landscape, dominated by the fear of losing their ‘digital autonomy’, is forcing them into a process that could be called controlled fragmentation.

    The launch of AWS European Sovereign Cloud or the Sovereign Core platform from IBM are acts of capitulation to the hard law of sovereignty. They are an attempt to answer a fundamental question: who has the last word when the cloud operating system needs to be rebooted and the encryption keys are of interest to a foreign jurisdiction?

    Survival strategy

    The most interesting phenomenon, however, is how deftly the technology giants are adapting to the role of ‘local suppliers’. We are seeing a fascinating market spectacle: companies that epitomise American technological dominance are entering into alliances with national telecoms champions in Europe or Asia. Partnerships with T-Systems in Germany or Orange in France are nothing short of ‘yowhite-labeling’ of trust.

    For the business customer, this is a paradoxical situation. On the one hand, he receives the promise of Silicon Valley-like innovation, on the other, the guarantee that data will not leave his backyard. But has there really been a change under this mask? Critics point to the problem of the U.S. CLOUD Act, which in theory allows U.S. services to access data managed by U.S.-based companies, regardless of the location of the server. Hyperscalers are doubling and tripling to prove that technical barriers render this law useless. It is a technological arms race where credibility is at stake.

    80 billion reasons to play locally

    Why do the giants opt for the engineering nightmare of maintaining separate sovereign regions? The answer is: because they have no choice. Gartner predicts that by the end of 2026, organisations will move 20% of their existing workloads from global public clouds to local providers. This is a gigantic capital outflow.

    Spending growth of 35.6% is being driven by critical sectors: governments, banking, energy. These are industries that have stopped believing in the ‘goodwill’ of global corporations. As trust erodes to the point where government organisations begin to consider whether geopolitical tensions could lead to sudden service cuts, sovereignty has become the new KPI for boards.

    Gartner’s Rene Buest rightly points out that the aim is to ‘keep wealth generation within its own borders’. Data has become the new oil, and the sovereign cloud is the local refinery. Countries have realised that by allowing data to flow freely to global centres, they are losing not only control, but also the potential to build their own AI models and innovations.

    Sovereignty tax

    However, this new reality carries a hidden cost. We need to talk openly about a ‘sovereignty tax’. Localised solutions, cut off from global networks, will inherently be more expensive to maintain. Moreover, they may suffer from so-called ‘technology lag’. The latest AI services, the most advanced language models or analytics functions tend to debut in major cloud regions. Sovereign enclaves may receive them with a delay of several months or even a year.

    Business is therefore faced with a dilemma: maximum innovation or absolute control?

    Will the mask become a face?

    The year 2026 will go down as the moment when cloud computing finally lost its innocence. The hyperscalers, donning the masks of local providers, made a masterstroke – instead of fighting regulation, they decided to capitalise on it.

    However, it is important to remember that data sovereignty is not just a question of where the server stands, but who has the authority to operate it and who controls the platform’s source code.

  • Gambling for 185 billion: How Google is buying its way back to the top of AI

    Gambling for 185 billion: How Google is buying its way back to the top of AI

    An era has dawned in the world of Big Tech where billion-dollar investments are no longer just a spreadsheet item, but a statement of survival. Alphabet, Google’s parent company, has just announced that it plans to almost double its capital expenditure (CAPEX) in the coming year, targeting an astronomical range of $175 to $185 billion. This is a bold move, given that as recently as 2025 the figure was hovering around $91 billion, and market analysts were expecting a much more modest increase.

    This unprecedented escalation in spending on data centres and network infrastructure is a direct response to the bottlenecks in computing power that are currently holding the sector back. Sundar Pichai, CEO of Alphabet, has made it clear: demand for artificial intelligence is outstripping the company’s current supply capacity. These investments are intended to lay the foundations for further expansion of the Gemini model, which in its third iteration has managed to reverse the narrative of Google as a technological marauder.

    A new hierarchy of hyperscalers

    For market observers, the most striking dynamic is that of the cloud division. Google Cloud, recording a 48% increase in revenue (to US$17.7 billion), has officially gone from being a “promising project” to being a real threat to the position of Azure and AWS. For the first time in years, Google Cloud’s growth rate was clearly ahead of Microsoft’s, allowing the company to cement its status as a fully-fledged hyperscaler. This success is confirmed by the partnership with Apple and the fact that 2,800 corporations have already purchased a total of 8 million paid Gemini licences.

    Risk vs. real returns

    Investors, although initially concerned about the aggressive cash drain, seem to have become convinced of Pichai’s vision. Despite temporary volatility in the share price following the results announcement, Alphabet’s financial fundamentals remain solid. Quarterly revenue of $113.8 billion and earnings per share beating analyst forecasts suggest that AI is no longer just an experiment, but a viable revenue engine.

    Artificial intelligence has even begun to redefine the company’s core business – search. With Gemini, Google is able to monetise complex, long queries that previously eluded traditional advertising algorithms. With 750 million users of the Gemini assistant per month, Alphabet is proving it can scale new tools quickly. In the current economic climate, the message from Mountain View is clear: Google is not only participating in the AI arms race, but intends to fund it with ruthless efficiency, even at the expense of short-term margins.

  • $2 billion in play. Google has avoided a financial knockout in a privacy dispute

    $2 billion in play. Google has avoided a financial knockout in a privacy dispute

    Alphabet can breathe a sigh of relief, at least for a moment. A federal judge in San Francisco, Richard Seeborg, has dismissed claims by consumers demanding that the Mountain View giant return $2.36 billion in allegedly undue profits. The amount was said to be a penalty for collecting data from users who knowingly turned off activity tracking features in apps. While the verdict protects Google’ s financial balance sheet from being drastically depleted, it also sheds light on the systemic tensions between the analytics-driven business model and the growing demands for privacy.

    Friday’s decision follows a September trial in which a jury found Google guilty of secretly collecting activity data on millions of people. At the time, $425 million in damages was awarded – a significant but symbolic sum compared to the astronomical $31 billion originally sought by the plaintiffs. A key victory for Google in the latest iteration of the litigation is the rejection of the ‘disgorgement’ mechanism, i.e. the forced surrender of profits generated by the disputed practices. Judge Seeborg found that the plaintiffs had failed to provide sufficient evidence of “irreparable harm” to justify such a severe penalty or an immediate injunction to halt data processing.

    For executives in the technology sector, the Rodriguez v. Google case sets a significant precedent. Google argued that forcibly blocking the collection of data linked to user accounts could ‘cripple’ the analytics services used by millions of third-party developers. This shows how deeply tracking mechanisms are woven into the Android ecosystem and the digital advertising infrastructure more broadly. Google’s financial victory, however, does not mean the end of its image and legal problems. The judge upheld the status of a class action involving 98 million users, meaning that the battle over the definition of ‘consent’ in the world of Big Tech will continue in the appellate courts.

    In a landscape dominated by increasingly stringent regulations such as Europe’s RODO and California’s CCPA, this case highlights the giants’ determination to defend the integrity of their data engines. Although Google avoided the bleakest scenario this time, the line between necessary analytics and invasion of privacy remains one of the costliest flashpoints in the technology-legal relationship.

  • Will Google save the AI in the iPhone? The new Siri is set to change everything

    Will Google save the AI in the iPhone? The new Siri is set to change everything

    Apple is making one of the riskiest decisions in its recent software history, with plans to completely overhaul Siri later this year. According to reports from Bloomberg News, the Cupertino-based giant intends to replace the assistant’s current, outdated interface with a new solution codenamed ‘Campos’. The move is not just a product update, but a strategic necessity after last year’s implementation of the ‘Apple Intelligence’ suite was met with a cool reception from the market and investors, who are still waiting for a convincing answer to the dominance of OpenAI and Microsoft.

    A key piece of this puzzle is Tim Cook’s surprising pragmatism. Apple, a company renowned for its vertical integration and self-sufficiency, struck a deal with Google earlier this month. Campos is to be powered by Gemini models, a significant win for Alphabet, but also a signal that Apple’s own language models are not yet ready to compete independently at the highest level. The new chatbot is to be based on technology comparable to Gemini 3, known internally at Apple as ‘Apple Foundation Models version 11’. This technological background is expected to allow Siri to switch seamlessly between voice and text modes, offering deep integration with iOS, iPadOS and macOS that has been missing from the ecosystem so far.

    For businesses and developers, this represents a potential paradigm shift in the way users interact with apps on iPhones. If Campos does indeed offer the level of contextual understanding known from leading LLM models, the ‘app economy’ could evolve towards an ‘action economy’ performed directly by the assistant.

    Apple’s ambitions go beyond smartphones alone, however. In parallel to the software work, a new category of hardware is being developed in Cupertino’s labs. The Information reports on a wearable device being designed – a pin powered by artificial intelligence, equipped with cameras and microphones. Although this device is not expected to be released until 2027, this indicates a long-term strategy in which AI frees the user from the screen. For now, however, the priority remains saving Siri’s reputation and proving to Wall Street that Apple can still define standards in the tech industry.

  • One billion dollars for Siri’s new brain: Strategic alliance between Apple and Google

    One billion dollars for Siri’s new brain: Strategic alliance between Apple and Google

    In a rare but pragmatic move that redefines the balance of power in Silicon Valley, Apple has finalised a billion-dollar-a-year deal with Google. The decision ends months of speculation about the future of Siri and confirms that Gemini models will become the fundamental engine driving the assistant in the iPhone ecosystem. For Apple, it’s a way to quickly catch up in the AI race without the gargantuan cost of building its own underlying infrastructure from scratch.

    A key element of the agreement is a move away from a ‘one size fits all’ model. Rather than relying on a single giant algorithm, Apple will implement a multi-model strategy, taking advantage of the flexibility of the upcoming Gemini 3 series. The foundation is to be a modified, high-performance model with 1.2 billion parameters, designed for fast query processing. This approach allows Apple to precisely match computing power to the complexity of the task, drastically reducing the operational costs and infrastructure needed to handle billions of queries per day. Siri will now seamlessly switch between variants – from a lightweight Flash model to advanced Pro versions – depending on whether the user is setting a timer or needs help with multi-step trip planning.

    The technical aspect of the collaboration reveals the hybrid nature of Apple’s new architecture. Although heavier workloads will be processed in Google Cloud, the Cupertino-based company has set tough privacy conditions. Apple Intelligence will continue to prioritise local processing and its own Private Cloud Compute system for sensitive data, ensuring that user information is deleted immediately after processing and that Google has no permanent access to it.

    Importantly, the new agreement does not exclude existing players. The partnership with OpenAI around ChatGPT integration remains in place, suggesting that Apple is building an agnostic model in which it is the user or context that decides the choice of AI tool. The reaction of the markets was immediate and euphoric. News of the deal pushed Alphabet’s market capitalisation above the historic $4 trillion threshold, joining Google’s owner in the elite club of tech giants with an unprecedented valuation. For Google, this is not only a cash injection, but more importantly, the ultimate confirmation of the dominance of their technology in the mobile world.

  • AI as critical infrastructure. How Gemini 3 is changing the enterprise operating model

    AI as critical infrastructure. How Gemini 3 is changing the enterprise operating model

    The tech world is alive with headlines about the new leap in performance Google is offering with the launch of Gemini 3. Benchmarks, token processing speeds and ‘human’ conversational fluency, however, are just a facade. The real revolution – and the associated risks – is taking place quietly, in the architecture of IT systems. Experts are increasingly loudly pointing out that with this update, artificial intelligence is no longer just a tool in the hands of an employee. It is becoming the operational backbone of the enterprise, and this is completely changing the rules of the game in cyber security.

    Until now, the relationship between business and generative artificial intelligence has resembled working with a capable intern. Models such as early versions of Copilot or ChatGPT were helpers: they summarised reports, prompted the content of emails, generated code. If the ‘intern’ made a mistake, the consequences were limited and easy to catch. With the advent of the Gemini 3 era, this metaphor loses its meaning. We are no longer dealing with an assistant, but with a new operational foundation.

    AI leaves the chat window

    Google makes no secret of the fact that full integration is the goal. Gemini 3 is not just a chatbot in a browser window; it is a technology that permeates the working environment. What is being created is what could be called a unified AI grid. In this ecosystem, the model’s interactions extend to email, cloud documents, storage and collaboration tools.

    The most important change that IT managers need to understand is the transition of AI to an ‘active infrastructure’ role. The system does not passively wait for the user’s command. With native integrations, the model is constantly ‘listening’, processing and combining facts from the company’s various data sources. This is a huge process convenience, but at the same time the point at which AI becomes the new security perimeter (security perimeter). Every document that the model has access to becomes part of this perimeter – a point that must be protected with the same rigour as email servers or databases were once protected.

    An agent who can do too much?

    Gemini 3 accelerates the trend of equipping AI with agentic capabilities. This is a key term for understanding today’s threat landscape. The model is no longer just there to answer questions (Q&A), but is capable of autonomous action. It can transcribe documents, forward them, respond to content in the inbox and even control APIs.

    This is where the risk of operational depth comes in. The attack surface grows exponentially, going beyond classic security controls. If an agent’s permissions are configured too broadly – which often happens in the rush to implement innovations – and its actions are not verified by a human-in-the-loop, the company exposes itself to uncontrollable processes. A misinterpretation of one email can set off a chain of events in ERP or CRM systems that will be costly and difficult to undo.

    PDF as a weapon, or invisible attacks

    In the new reality, traditional firewalls and EDR (Endpoint Detection and Response) systems are proving insufficient. Why? Because the threat no longer comes in the form of a `.exe’ file or a malicious script, but in semantic form.

    We are talking about the phenomenon of Indirect Prompt Injection. This is a technique in which the attacker does not need to crack passwords or take over a user account. All they need to do is craft a document – such as a PDF CV or a web page – that contains hidden instructions for the AI model. When Gemini 3 processes such a file (e.g. summarising it for an HR employee), it will execute the instructions sewn into it. The user will not see anything suspicious, but the model can exfiltrate the data or change the parameters of its work in the background.

    Moreover, this problem scales with multimodality. Since Gemini 3 ‘sees’ and ‘hears’, any data format becomes an attack vector.

    Audio: transcriptions of recordings may contain commands that are inaudible or unintelligible to humans, but interpretable to AI as system commands.

    Image: manipulated screenshots or images can influence a model’s decisions in ways that classic content security filters cannot detect.

    Therefore, treating malicious media as viable attack vectors, rather than scientific curiosities, is becoming a necessity for SecOps teams.

    A race against time and cost

    Despite these threats, business is not slowing down. Security readiness reports (GenAI Security Readiness) are ringing alarm bells: companies are deploying AI far faster than they can secure it. There is often a lack of basic ‘guardrails’ (guardrails), monitoring of agent activities or testing pipelines to check resistance to hostile attacks (adversarial testing).

    However, a technical and economic nuance is worth noting. Initial analyses indicate that Gemini 3 in the Pro Preview version shows a high degree of robustness in terms of security, provided it is configured appropriately – e.g. with enforced security prioritisation and a self-assessment layer. However, such a configuration comes at a price: it drastically increases the computational effort (and cloud costs).

    In comparison, competitor models such as Claude 4.5 Haiku offer similar levels of security at significantly lower operating costs. This presents IT decision-makers with a dilemma: should they invest in a powerful but ‘heavy’ model and its security features, or seek optimisation? The key conclusion, however, is one: a model alone is not a security strategy. Even the best algorithm without proper configuration, prompt engineering and multi-layered security features will remain vulnerable to attacks.

    New task for the board

    The lessons from the Gemini 3 capability analysis are clear: the responsibility for AI is shifting from innovation departments directly onto the shoulders of the board of directors and CISOs. The decisive question to ask suppliers and IT teams is no longer: “How intelligent is this model?”. It is the question of 2023.

    Artificial intelligence today creates an extreme edge in corporate security. If companies allow it to grow into their processes without being aware of the risk of ‘underestimated dependency’, the consequences could be far-reaching. Gemini 3 is a powerful tool, but it is up to us to make it the foundation of success or the weakest link in the security chain.

  • Ukraine says no to OpenAI. Builds sovereign AI on Google infrastructure

    Ukraine says no to OpenAI. Builds sovereign AI on Google infrastructure

    Ukraine has taken the strategic decision to build its own Large Language Model (LLM). The project, implemented by the Ministry of Digital Transformation in partnership with operator Kyivstar, aims to create a system based on the open Gemma architecture from Google. The initiative sends a clear signal that Kyiv is seeking to make its critical systems – both civilian and military – independent of commercial solutions such as ChatGPT or Chinese technologies.

    The decision to choose Google came after an analysis of competing solutions, including the Llama model from Meta and Europe’s Mistral AI. Deputy Minister of Digitalisation Oleksandr Bornyakov stresses that a key factor was the need to avoid dependence on proprietary systems from external suppliers. Indeed, the Ukrainian military plans to deeply integrate AI into its battlefield management systems, which excludes the risk of relying on foreign ‘black boxes’.

    The architecture of the project involves a hybrid infrastructure approach. Initial training of the model will take place on Google’s secure computing clusters outside Ukraine to speed up the learning process. Ultimately, however, the finished system will be transferred to Kyivstar’s local infrastructure, providing Kyiv with full data sovereignty. This is particularly important in the context of the planned use of the model for court records, state archives and services for 23 million citizens.

    Misha Nestor, product director at Kyivstar, points to pragmatic reasons for building an in-house tool. Current global LLM models struggle with the region’s specific linguistic context, often generating errors in translating legal documents or failing to cope with local dialects that are a mix of Ukrainian, Russian and Bulgarian. The new model, fed by data from more than 90 government institutions, is expected to accurately handle these nuances, including minority languages such as Crimean Tatar.

    The venture is not without risk. The developers expect massive cyber attacks from Russia as soon as the system is launched. Nevertheless, the Ukrainian project could set a precedent for other smaller countries, showing how to use open source technologies to build strategic independence in the age of artificial intelligence.

  • From Apple to Alphabet: Warren Buffett’s late turn to AI infrastructure

    From Apple to Alphabet: Warren Buffett’s late turn to AI infrastructure

    Ruben Dalfovo, Investment Strategist at Saxo, writes in an analysis that for years Warren Buffett’s history with Google was a cautionary tale. He openly admitted that not buying their shares was a serious mistake, even though he saw the company turning internet search into a tool to monetise advertising. Now, just months before handing over the helm to Greg Abel, Berkshire Hathaway has quietly bought a stake worth billions of dollars in Google‘s parent company, Alphabet.

    Alphabet’s Class C shares closed the 17 November session at $285.60, up 3.11% on the day, after reports of a new package helped push the stock to record levels. The stock is up around 50% since the start of 2025 and is the best among the so-called Magnificent Seven this year. When an investor known for avoiding following fashionable trends buys shares in a market leader whose price is approaching historic highs, the obvious question arises: what does he see that the rest of us might not?

    From Apple to Alphabet

    Berkshire has been a net seller of equities for twelve consecutive quarters, including the last. In that time, it has sold about $12.5bn of securities while buying about $6.4bn of shares, while allowing its cash holdings to grow to a record $381.7bn. This is not the behaviour of a man who thinks everything is cheap. However, there has been a major reshuffling within this overall reduction.

    In the past quarter, Berkshire reduced its stake in Apple by around 15% and its position in Bank of America by around 6%. Alphabet, meanwhile, emerges as a new player in the top ten club. Regulatory documents and portfolio statements show that the holding now ranks roughly tenth in Berkshire’s equity portfolio, behind such classics as American Express, CocaCola and Chevron. This is a rare situation in the technology industry for a conglomerate that for decades has held shares in railways, insurers and basic goods companies, rather than fast-growing software giants.

    What has changed, however, is not so much Buffett’s principles as the companies themselves. Apple, which he has always described as a consumer brand, now operates in a world where hardware upgrades rely heavily on AI-based features. At the same time, Alphabet is looking less and less like a speculative technology company and more like a sprawling infrastructure for the digital economy, with advertising and cloud revenues that are surprisingly stable for something built from lines of code.

    Saxo

    Alphabet as AI infrastructure, not a gadget story

    Alphabet is at the point where its artificial intelligence ambitions meet traditional cash generation. In the third quarter of 2025, the company reported around $102bn in revenue, above forecasts, and profits also beat expectations. The main driver of growth has been Google Cloud, which has evolved from a ‘nice-to-have’ into a business driver as companies developing AI rent its computing power.

    In addition, Gemini models and AI-enhanced search reach hundreds of millions of users today. These tools run on a global network of data centres, proprietary chips and fibre optics, the expansion of which this year will consume a total of more than $90 billion in capital expenditure. Put simply: Alphabet wants to provide the ‘shovels and picks’ for the gold rush around AI.

    The partnership with Anthropic adds another dimension. Google has invested billions in the startup and has entered into a major chip supply and cloud services agreement, which should direct future computing workloads to the Google Cloud. Berkshire’s package gives the company indirect exposure to this ecosystem: every Anthropic query run on Google’s infrastructure strengthens Alphabet’s position as an AI infrastructure.

    The key point is that this expansion is based on a strong balance sheet. Alphabet is valued at around 25 times expected earnings, cheaper than some other megacaps, and continues to generate solid free cash flow from the search engine and from YouTube. This cash can fund data centres and still support share buybacks, which suits investors who prefer ‘great companies at fair prices’.

    What does the ‘vote of confidence’ from Buffett really mean?

    Buffett’s purchase of Alphabet is more than a simple seal of approval. It’s a concrete thesis about where AI profits will be concentrated. Alphabet makes its money from search ads, YouTube, maps, the app shop and the cloud. AI is not a separate product here. It is an enhancement that can increase user engagement and monetisation opportunities in existing businesses.

    It is also a clear shift towards infrastructure rather than devices. Apple is betting on on-device intelligence, but is yet to refine its AI business model. Alphabet is already monetising AI through cloud contracts, advertising tools and office software. Reducing its position in Apple while increasing its stake in Alphabet suggests that Berkshire sees more future value of AI in data centres and platforms than in handset replacement cycles.

    Finally, it is important to remember that ‘technology’ is not a single category. Alphabet may share the index with fast-growing artificial intelligence start-ups, but its competitive advantage, cash-generating ability and diversified revenues place it closer to companies that have consistently multiplied capital over the years, exactly the kind of companies Buffett has always favoured.

    Risks that even Berkshire cannot ignore

    The risks are real. Alphabet could over-invest in AI computing capacity if customers slow down projects or competitors snatch up big contracts. Google Cloud’s growth rate, margins and long-term investment guidelines are worth watching. The second threat remains regulation. Tougher antitrust or privacy laws in the US and Europe could hit search and advertising profitability or force changes in how data is used.

    AI strategy execution will also be key. Alphabet stumbled at the start of the race for generative AI and is still playing catch-up, trying to regain the initiative with Gemini and other models. If users or corporate customers prefer the tools of the competition, all this spending on chips and data centres could result in lower-than-expected returns, even despite Buffett’s presence in the shareholding.

    Closing the buckle: what Buffett’s bet on Alphabet really teaches

    Buffett has said for years that not buying Google shares was one of his big mistakes. Alphabet was up for grabs, with search engine profits and solid cash flow, while he remained on the sidelines. By buying the shares, now that he is preparing to hand over as CEO, he is doing more than just a ‘neat final move’. It’s a discreet signal of what he thinks sustainable value in AI will look like.

    For ordinary investors, the conclusion is not ‘buy what Buffett is buying’. The point is that even the most traditional value investor is happy to have exposure to AI – as long as it is built into a diversified platform generating strong cash flows that can be understood and rationally justified. The market will continue to argue whether Alphabet’s price is too high, too low or just right. A better question is one that Buffett has been asking for seven decades: which companies can you hold in good times and bad because you understand how they make money and why they can survive?

    – Artificial intelligence remains one of the fastest growing market segments, while being distinguished by high volatility, risk of overheating and intense competition. The sector often reacts with rapid valuation movements, and the dynamics of innovation mean that market positions of leaders can change rapidly. Investing in stable assets outside the AI sector helps to keep a portfolio balanced, even when sentiment towards innovation declines. Therefore, in the long term, it is important to ensure portfolio diversification, e.g. by combining a bold approach to new technologies with sound and disciplined risk management, says Aleksander Mrózek, CEE key account relationship manager at Saxo Bank.

  • Google chief warns: AI bubble could burst. “No one is safe”.

    Google chief warns: AI bubble could burst. “No one is safe”.

    It is rare in the world of tech giants for the CEO of a company gaining 46 per cent on the stock market this year to publicly invoke the spectre of the ‘irrational euphoria’ of the dotcom bubble era. However, Sundar Pichai, Alphabet’s CEO, opted for this surprising act of candour in an interview with the BBC. Although investors are still aggressively betting on Google’s ability to compete with OpenAI, Pichai admits straightforwardly: there are elements of irrationality evident in the market, and in the event of a correction, no company – including Google – will emerge unscathed.

    Pichai’s words come at a time when US technology company valuations are beginning to weigh on broader indices, and analysts are increasingly asking loudly about the viability of giant investments in artificial intelligence. While Alphabet’s chief executive believes his company will weather the potential storm, his caution contrasts with the company’s aggressive investment strategy. Google is not slowing down, as evidenced by its investment commitment in the UK announced in September. The conglomerate plans to spend £5 billion on infrastructure expansion, including new data centres and funding for London’s DeepMind lab.

    This move has not only a technological dimension, but also a geopolitical one. The start of training AI models in the UK fits in with Prime Minister Keir Starmer’s ambition to make the UK the third superpower in artificial intelligence after the US and China. However, this technological arms race comes at a price, which the industry is reluctant to talk about. Pichai confirmed that the ‘huge’ energy demands of the new infrastructure will force a delay in Alphabet’s climate targets. Achieving net-zero emissions is slipping, giving way to the need for the computing power required to maintain leadership.

    Pichai’s statement is a clear signal to the market: Google is ready for the long march and the costs – both financial and environmental – but management in Mountain View realises that the current bull market is based on fragile foundations. The question is no longer whether the market is overheated, but how deep the correction will be when investor sentiment finally cools down.