Category: Global

  • Apple is looking for an alternative to TSMC. Talks with Intel and Samsung

    Apple is looking for an alternative to TSMC. Talks with Intel and Samsung

    Apple has entered into preliminary talks with Intel and Samsung Electronics over the potential production of its core processors. According to reports from Bloomberg, executives from the Cupertino giant have already visited Samsung’s Texas factory and held independent consultations with Intel. Although negotiations are at an early stage and have not translated into concrete orders, the move is aimed at creating an alternative to Taiwan’s TSMC. The decision comes in the shadow of Tim Cook’s warnings about supply constraints on advanced chips, which have negatively impacted iPhone sales. The situation is compounded by the fact that Apple’s upcoming smartphone processors use technology shared with its most coveted AI chips.

    Apple’s actions lead to a clear conclusion. The market’s deep dependence on a single supplier, as TSMC has become, raises powerful operational risks, especially in an era of massive demand for artificial intelligence architectures that are drastically shrinking available capacity. At the same time, Apple’s scepticism about the reliability standards and scale of alternative suppliers exposes a brutal truth: TSMC’s technological and logistical advantage creates a barrier that competitors cannot quickly overcome.

    The strategic need to review purchasing processes in the high-tech sector is worth noting. Business leaders should calculate long-term deficits in state-of-the-art lithography nodes and treat diversification not as a fallback option but as a permanent part of the strategy. It is advisable to develop closer collaboration with alternative manufacturing partners early on in the design and R&D phase. Such an approach will minimise technological risks and make the hardware architecture more flexible, effectively securing the company’s business continuity in the face of further supply crises.

  • Samsung workers strike. CEO warns of crisis

    Samsung workers strike. CEO warns of crisis

    Samsung Electronics board chairman Shin Je-yoon has issued an internal memo to employees, calling for an amicable resolution to the wage dispute. The upcoming 18-day union strike, scheduled for 21 May, is aimed at winning higher bonuses based on profits from the AI memory segment. Management warns that operational paralysis at South Korea’s largest manufacturer by revenue will hit investors, trigger an outflow of foreign capital and weaken the domestic currency. However, the key risk remains a loss of confidence from global customers and a flight to competitors at a critical market juncture.

    The escalation of this conflict reflects a deeper, structural problem in the technology sector, where workers are increasingly demanding a direct share of the profits generated by the artificial intelligence revolution. Lessons learnt from the current impasse indicate that a possible production outage will not be limited to Samsung’s internal losses. An interruption in the supply of HBM and DRAM components will immediately destabilise global supply chains, impacting the margins and schedules of leading Silicon Valley giants and delaying the deployment of AI infrastructure around the world.

    In the current situation, it is worth noting the need to revise existing incentive models to respond more flexibly to profit spikes in the most stressed divisions. It would be advisable to develop mechanisms for transparent dialogue about remuneration structure before negotiations enter a phase that makes compromise impossible. From the perspective of long-term competitiveness, it seems a sensible step to balance wage pressures with maintaining investment capacity in R&D. Ultimately, the priority remains to protect operational continuity, as this determines market position in the absolute technology race.

  • The economics of open source: who pays for the code the world runs on?

    The economics of open source: who pays for the code the world runs on?

    Every day, as we reach for our smartphone, launch our favourite TV series or send a business email, we participate in the quiet miracle of modern technology. Beneath the shiny surface of apps and services lies an invisible foundation – open source software.

    It is millions of lines of code, written, refined and shared with the world for free by a global community. This code is the bloodstream of the internet and the backbone of the AI revolution.

    But this digital world, raised on the idea of freedom and collaboration, conceals a profound paradox. The global economy relies on an infrastructure created largely by volunteers, often balancing on the brink of professional burnout.

    It is as if global trade routes were based on bridges built as a hobby after hours. How long can such a structure last? Who actually pays for the code we all rely on?

    The invisible foundation: our global dependence

    Open source software is no longer an alternative. It has become the default building block of the digital world. Hard data paints a picture of almost total dependence. An analysis by Synopsys in 2024 showed that as much as 96% of the commercial code bases examined contained open source components.

    What’s more, on average, 77% of all code in these applications came from open source. It’s no longer a question of using individual libraries – it’s about building entire systems on a foundation created by the community.

    The scale of this dependency becomes even more striking when looking at the dynamics of consumption. In 2024, it was forecast that the total number of downloads of open source packages would reach the unimaginable figure of 6.6 trillion.

    The npm (JavaScript) ecosystem alone was responsible for 4.5 trillion requests, recording 70% year-on-year growth, while the AI-powered Python ecosystem (PyPI) grew by 87% to reach 530 billion downloads.

    The average commercial application today is a complex mosaic of an average of 526 different open source components. Each has its own life cycle, its own maintainers and its own potential problems.

    Cracks in the foundation: zombie code and a wake-up call called Log4j

    The ubiquity of open source is a double-edged sword. The same ease with which developers can incorporate off-the-shelf components into their projects leads to systemic neglect. The data is alarming: as many as 91% of the commercial code bases surveyed contain components that are ten or more versions out of date.

    This problem leads to so-called ‘zombie code’ – components that have had no development activity for more than two years. This phenomenon affects almost half (49%) of the applications on the market.

    This means that companies are building their critical systems on abandoned projects, without active support and, most importantly, without security patches. The consequence is a ticking time bomb: in just one year, the percentage of code bases containing high-risk security vulnerabilities has increased from 48% to 74%.

    Nothing illustrates this risk better than the December 2021 incident, when the world learned of the Log4j vulnerability. This small, free Java library for logging turned out to be embedded in millions of applications around the world.

    The vulnerability, named Log4Shell, received a maximum criticality rating of 10/10. An attacker could take full control of a server by sending a simple string of characters. US CISA director Jen Easterly called it “one of the most serious vulnerabilities she has seen in her entire career”.

    The Log4j incident became a global wake-up call, making companies brutally aware of how much their security depends on the work of anonymous volunteers.

    Worse still, even three years after the discovery of Log4Shell, up to 13% of all Log4j library downloads are still vulnerable versions. This demonstrates the profound inertia of organisations that fail to update their dependencies even in the face of a well-known, critical threat.

    The human cost of ‘free’ software: the burden of the custodian

    There are people behind every line of code. A model that treats their work as a free resource generates a huge human cost. Salvatore Sanfilippo, the creator of the Redis database, described this phenomenon as the ‘flooding effect’.

    Over time, the stream of emails, GitHub submissions and questions turns into a never-ending flood that leads to guilt over not being able to help everyone.

    The scale of this pressure is illustrated by the example of Jeff Geerling, who looks after more than 200 projects. Each day he receives between 50 and 100 notifications, of which he is only able to deal with a fraction.

    Nolan Lawson, another well-known maintainer, aptly put the emotional weight of this work. Notifications on GitHub are “a constant stream of negativity”. No one opens a notification to praise working code. People only post when something is wrong.

    This chronic pressure leads to burnout, which, in the context of open source, has clearly defined causes: demanding users, low quality contributions, lack of time and, most acutely, lack of remuneration.

    Knowing that work that consumes huge amounts of energy is the foundation for commercial products that make real profits for others is extremely demotivating. As one maintainer put it:

    “My software is free, but my time and attention is not”. Caregiver burnout is not just a personal tragedy. It is a critical risk to the global infrastructure.

    ‘Zombie code’ is a direct, measurable symptom of this crisis at the human level.

    The New Economy of Code: Towards a Sustainable Future

    In the face of these risks, the open source ecosystem is slowly maturing, moving from a volunteer-based model to more sustainable forms of funding.

    1. corporate patrons: strategy, not altruism

    At the forefront of this transformation are the technology giants. Companies such as Google, Microsoft and Red Hat have been the biggest contributors to the open source world for years. Their motivations, however, are not altruistic – they are cold, strategic calculations.

    Joint development of fundamental components (such as operating systems or containerisation) is simply more efficient. This allows them to compete at a higher level, in areas that directly differentiate their products.

    By becoming involved in key projects, corporations can also influence their direction, ensuring alignment with their own strategy.

    2 The power of institutions: the role of foundations

    The second pillar is non-profit foundations such as the Linux Foundation and the Apache Software Foundation. They act as neutral trustees for the most important projects, ensuring their stability and independence from a single corporation.

    They collect contributions from sponsors, creating a budget that allows them to fund key developers and safety audits.

    3 The maker revolution: the GitHub Sponsors model

    Alongside the big players, a new grassroots funding wave has been born. Platforms such as GitHub Sponsors allow direct, recurring contributions from users and companies, creating a revenue stream for maintainers.

    The story of Caleb Porzio, creator of Livewire and AlpineJS tools, is a prime example of the potential of this model. Standing on the brink of burnout, he decided to try his hand at the GitHub Sponsors programme.

    The real breakthrough came when he changed the paradigm: instead of asking for support, he decided to offer his sponsors additional, exclusive value. His secret turned out to be paid screencasts – a series of video tutorials.

    He reserved access to the full library exclusively for backers on GitHub. The effect was spectacular. His annual revenue grew by $80,000 in 90 days and crossed the $1 million threshold in the following years.

    This is a key lesson: a sustainable model does not have to be based on charity, but on building a viable business model around a free, open core.

    From stowaway to stakeholder

    ‘Free’ software has never been free. Its price, hitherto hidden, has been paid with the time, energy and mental health of a global army of volunteers. The model in which we treated their work as an inexhaustible resource is coming to an end.

    It is time for every participant in this ecosystem to undergo a transformation – from a passive ‘stowaway’ to an active stakeholder.

    This requires specific actions. Developers need to practice ‘software hygiene’ – regularly updating dependencies and consciously managing technical debt.

    Companies need to treat open source as a critical part of the supply chain, creating ‘software component inventories’ (SBOMs) and investing in business-critical projects. Investing in open source is not a cost, it is business continuity insurance.

    We stand at the threshold of a new era for open source – an era of professionalisation and sustainability. A future where creators are fairly remunerated and the global digital infrastructure is secure is within our reach. Building it, however, requires a conscious effort from each of us.

  • Mistral AI vs OpenAI – Has Europe gained a viable alternative to ChatGPT?

    Mistral AI vs OpenAI – Has Europe gained a viable alternative to ChatGPT?

    Last week will be remembered as the moment when Europe’s artificial intelligence sector went from defensive to precision technology offensive. In just 48 hours, Paris-based Mistral AI made a series of moves that go beyond mere model updates. By simultaneously launching the Mistral Medium 3.5 model, the Vibe development environment, the Workflows orchestration platform and the Le Chat mode of operation, the company unveiled a complete vertical technology stack (full-stack). For IT decision-makers and business leaders in Europe, the message is clear: digital sovereignty has become a measurable operational and financial category.

    The end of distributed models – Economics Mistral Medium 3.5

    A key element of the new strategy is Mistral Medium 3.5, a 128-billion-parameter scale model released under licence with open weights. From an analytical perspective, its greatest value is not just in its ‘raw power’, but in the unification of capabilities. It is the first Mistral model to combine advanced reasoning, deep instructional understanding and high consistency of generated code within a single parameter set.

    From a business perspective, such integration directly affects the total cost of ownership (TCO). Until now, companies have been forced to maintain a fleet of specialised models: one to analyse legal documents, another to support developers and another for simple classification tasks. Medium 3.5 allows for infrastructure consolidation. Results in benchmarks such as SWE-Bench Verified(77.6%) or tau³-Telecom (91.4%) prove that this model not only matches, but in specific engineering applications outperforms closed systems such as GPT-4o or Claude 3.5.

    Importantly for operations departments, Medium 3.5 can be deployed locally using four H100 or H200 GPUs. This opens the door to building private, secure AI environments inside corporate data centres, eliminating reliance on the latency and pricing policies of external cloud providers.

    From conversation to implementation – Vibe and Workflows

    Mistral AI has rightly diagnosed that the bottleneck for AI adoption in business is no longer the quality of the text generated, but the integration with processes. Vibe and Workflows tools are the answer.

    Vibe addresses a key productivity issue for engineering teams: developer lock-in when AI agents are working. The introduction of remote agents running in parallel in the Mistral cloud, while remaining fully synchronised with the local environment, changes the working paradigm. Integration with GitHub, Jira, Sentry and Slack means that AI ceases to be a ‘question assistant’ and becomes a ‘task performer’ that only notifies the human once the process is complete.

    Workflows, on the other hand, built on the proven Temporary engine (used by Stripe and Netflix, among others), is an orchestration layer that allows the construction of long-term, fault-tolerant workflows. This architecture separates the control plane from the data plane. In practice, this means that a regulated sector company can benefit from advanced process management in the cloud, while the data itself and its processing never leave the client’s secure, local infrastructure. This solution is ideally suited to the needs of players such as ASML or La Banque Postale, who are already automating customs processes or document compliance verification using it.

    Sovereignty as strategic risk management

    In 2026, the argument of digital sovereignty has evolved from an ideological discourse to a hard risk analysis. Statements by UK Secretary of State Liz Kendall or actions by the French Ministry of the Armed Forces point to a growing awareness of the risks posed by the concentration of computing power in the hands of just a few Silicon Valley players.

    For a European technology director, the on-premise model offered by Mistral is an insurance policy against three risks:

    1. political risk: the unpredictability of US export regulations and the impact of the US administration on the availability of AI services in situations of geopolitical tension.

    2 Regulatory risk: The need to strictly comply with RODO, the EU AI Act and the NIS2 and DORA directives. In the financial or healthcare sector, the ‘right to audit’ and full control over the location of data are legal requirements that standard APIs from OpenAI or Anthropic are not always structurally able to fulfil.

    3 Operational risk: Sudden changes in the behaviour of models (so-called model drift) or unilateral modifications of service terms by SaaS providers.

    With 60% of its revenues in Europe, Mistral has a natural interest in adapting to the local regulatory framework, making it a more predictable partner than its US competitors.

    Alliances and financial foundations

    Critics of the European approach have often pointed to a lack of capital and infrastructure. Mistral AI systematically refutes these claims. Institutional funding of €830 million from a consortium of banks (including BNP Paribas, HSBC, MUFG) for the purchase of 13,800 NVIDIA processors is a signal that AI in Europe is becoming an infrastructure asset, not just a speculative one.

    Equally important is Mistral’s incorporation into the NVIDIA Nemotron Coalition. The partnership with Jensen Huang allows Mistral to co-create boundary models on DGX Cloud infrastructure, while keeping them open. It is a strategic balancing act: using the best available hardware while promoting open model scales, driving innovation across the European developer ecosystem.

    Analysis of recent Mistral AI activities leads to three key conclusions for business leaders in Europe:

    • AI is becoming a commodity (Commodity), but control is not: Competitive advantage is built not by simply having access to models, but by being able to integrate them deeply into one’s own infrastructure without the risk of data leakage.
    • Cost optimisation requires flexibility: Open-weighted models allow for fine-tuning of performance to cost. The ability to run a Medium class model on your own servers drastically changes ROI calculations in AI projects.
    • Compliance is an opportunity, not a burden: Companies that choose the path of sovereign AI will pass through the regulatory sieve of the EU AI Act and NIS2 more quickly, gaining the trust of customers in critical sectors.

    Mistral AI is no longer just a ‘European alternative’. In May 2026, it appears as the mature architect of a new technological order in which performance goes hand in hand with autonomy. On the global chessboard of artificial intelligence, Europe, thanks to Mistral, has gained the ability to play its own sovereign game. Companies that recognise this now will gain a strategic resilience that no contract with a supplier from overseas can provide.

  • How Zebra Technologies has been building a global partner ecosystem for a decade

    How Zebra Technologies has been building a global partner ecosystem for a decade

    In the technology industry, ten years is a whole era. Zebra Technologies is just celebrating the anniversary of its PartnerConnect programme, which provides a good opportunity to look at how the business collaboration model has evolved in the digital age. What started in 2016 as an attempt to integrate dispersed partners is today a powerful network of more than 10,000 players worldwide, from resellers to innovative software developers.

    From the beginning, the programme has relied on a channel-first strategy, i.e. growth through the success of its partners. Throughout this decade, Zebra has not only delivered hardware, but built the framework for the entire ecosystem, introducing specialisations in areas such as RFID, vision systems and AI-based automation. The success of this strategy is confirmed not only by the numbers, but also by industry accolades.

    Today, in IT, a product alone, even the best, is not enough. Real value is created where technology meets industry-specific expertise – whether in logistics or healthcare. Zebra understood that it could not solve every customer problem on the front line alone. Instead, it created a platform that allows partners to overbuild their own high-margin services on the foundation of their technology.

    Greg Williams, VP Channel EMEA w Zebra Technologies Corporation

    “Partners play a key role in how we solve problems and deliver value to our customers by offering solutions that digitise, automate and deploy intelligence into frontline operations,” says Greg Williams, VP Channel EMEA at Zebra Technologies Corporation. “The PartnerConnect programme reinforces our channel-first strategy and the development of an ecosystem that meets the needs of today’s customers and AI-driven transformation.”

    When choosing technology providers, it is worth paying attention to whether they offer only tools or an entire environment to support data development and integration. It seems reasonable to look for partners that invest in long-term relationships and specialisations, as these are the ones that guarantee that the implemented solution will not get old after one season. It is also worth considering a greater focus on solutions that provide real-time data insights, as these, combined with the right software, build a real competitive advantage in the modern market. A good direction is to bet on ecosystems that promote the exchange of competencies – working with a specialised integrator often yields better results than trying to implement universal solutions on your own.

  • Buy European: China announces retaliation for new EU law

    Buy European: China announces retaliation for new EU law

    For decades, the European economy has been based on a paradigm of maximum openness, often at the expense of its own industrial base. Today, we are witnessing a historic turnaround. The regulations proposed by Brussels – from tougher cyber security standards to the ‘Buy European’ (Industrial Accelerator Act) – are not just a defensive response to global turmoil, but above all an ambitious plan to reclaim Europe’s role as a technological leader. Beijing’s vehement opposition, which has taken the form of diplomatic warnings in recent days, is the best evidence that the European Union has finally begun to define its national interests effectively.

    Power diplomacy: Brussels begins to speak with one voice

    China’s Ministry of Commerce and diplomats in Beijing accuse the EU of ‘double standards’ and violating free trade rules. But from an analytical perspective, what Beijing calls discrimination, for European business is a levelling of the playing field. For years, Chinese giants have benefited from subsidies and a protected internal market, expanding in Europe on terms that were unattainable for EU companies in China.

    China’ s current diplomatic offensive – letters to the European Commission and lobbying in capitals – confirms that the EU’s de-risking strategy has real leverage. The EU is ceasing to be merely a market and is becoming a standard-setter, which in the long term will ensure greater predictability and operational stability within the community.

    Cyber security as a foundation for trust

    A key pillar of the new strategy is the elimination of components from ‘high-risk’ suppliers in critical sectors. China is calling for these definitions to be removed, seeing them as a barrier to companies like Huawei. However, technological sovereignty is not a luxury but a cornerstone of national security, especially in times as geopolitically unstable as the present.

    From a market perspective, this process is stimulating a new wave of innovation within the EU:

    • Support for homegrown integrators: Reducing the share of non-trusted suppliers opens up space for European companies such as Ericsson and Nokia, as well as the growing Open RAN sector.
    • Integrity by design: European safety standards are becoming a global quality certificate, which could become a new export asset for EU technology.

    Industrial Accelerator Act: A new era for European innovation

    The ‘Buy European’ law is not an act of protectionism, but a strategy to build a healthy industrial ecosystem. Using public procurement to promote local manufacturing and low-carbon standards is a mechanism that aims to:

    1. Stimulating the energy transition: Promoting goods with a low carbon footprint forces global suppliers to innovate, while giving a technological edge to European manufacturers.
    2. Protection of intellectual property: Beijing’s opposition to technology transfer legislation shows that the EU is effectively safeguarding its most valuable assets against uncontrolled leakage of know-how.

    The introduction of a requirement for EU-produced content in public contracts is not a barrier, but an invitation to real investment on the continent. Companies that choose to build factories and research centres in Europe will gain stable and preferential access to one of the world’s largest markets.

    Investment in stability

    Although China is threatening ‘countermeasures’, the analysis of economic interdependence indicates that both sides have too much at stake to bring about a full-blown rupture of relations. For business, the following conclusions are key:

    • Reshoring and Nearshoring: building industrial sovereignty in the EU will shorten supply chains, drastically reducing the geopolitical risks that have destabilised production in recent years.
    • Growth of the local R&D sector: The need to replace some imported technologies with our own solutions will force an increase in R&D spending, which will raise the competitiveness of the European IT sector within a decade.
    • New partnerships: Diversifying suppliers (e.g. towards India or Vietnam) in response to Chinese restrictions will make European companies more resilient to economic blackmail.

    Empowerment through sovereignty

    Building the ‘Digital Fortress of the EU’ is in fact building the foundations for a modern, independent and competitive economy. Transitional tensions with Beijing are the natural result of correcting long-standing imbalances. For European entrepreneurs, Brussels’ current course means a return to the highest stakes game – not as sub-suppliers, but as technology owners and standard setters.

    Strategic autonomy does not mean isolation, but the right to choose partners on their own terms. In the long term, it is this assertiveness that will make Europe a more attractive and credible place to do business, where innovation goes hand in hand with security and values.

  • Why are AI agents becoming the target of cyber attacks? Trend overview 2026

    Why are AI agents becoming the target of cyber attacks? Trend overview 2026

    Over the past eighteen months, the enterprise sector has moved from a fascination with generative artificial intelligence to a phase of actively implementing it into operational processes. A key trend in this evolution is the shift from passive language models (LLMs) to AI agents – autonomous systems capable not only of generating text but also of performing tasks: writing code, managing email communications, calling APIs or authorising financial transactions. With this agility, however, comes a critical new category of threats: Indirect Prompt Injection (IPI). Recent data from reports by Google and Forcepoint shed new light on the scale and sophistication of these attacks, suggesting that agent systems security will become one of the biggest challenges for chief information security officers (CISOs) in the coming years.

    IPI mechanism: Data as instructions

    Traditional prompt injection attacks relied on direct manipulation of the model by the user (e.g. attempting to ‘jailbreak’ a bot by giving it the command to ignore security). Indirect Prompt Injection is a much more insidious phenomenon. It involves inserting malicious instructions into content that the AI agent processes as input – this could be web pages, PDF documents, emails or code repositories.

    The problem lies in the very architecture of current LLM models, which cannot absolutely separate system instructions (issued by the tool developer) from external data. When an AI agent analyses a web page in search of information, it may come across hidden text, which the model will interpret as a new overarching command. As a result, the attacker takes control of the agent’s logic, instructing it to, for example, send sensitive data to an external server or perform a destructive operation on the user’s file system.

    Analysis of market trends

    Google Security Research researchers, analysing CommonCrawl resources, point to an alarming trend. Between November 2025 and February 2026, there was a 32 per cent increase in the number of detected malicious injection attempts in publicly accessible web resources. This relatively short time frame demonstrates the dynamism with which the criminal community is adapting to new technologies.

    From a market perspective, Google’s observation on cost-benefit calculus is key. Until recently, IPI attacks were considered the realm of academic research – they were difficult to implement and often failed due to the instability of the results generated by AI. Now, with the increased reliability and agility of agents, these attacks are becoming ‘viable’. AI’s ability to autonomously call external tools (tool calling) means that a successful injection of instructions has an immediate and measurable financial or operational impact.

    The Google study allowed the current IPI trials to be categorised into five groups:

    1. Harmless jokes: Attempts to change the tone of an agent’s response.
    2. Helpful tips: Suggesting preferential answers to the model (often on the edge of ethics).
    3. Optimisation for AI (AI-SEO):Hidden phrases to position products in assistants’ responses.
    4. Deterring agents: Instructions prohibiting AI from indexing or summarising a particular page.
    5. Malicious attacks: Data exfiltration and sabotage (deletion of files, destruction of backups).

    Although the latter are often at an experimental stage at present, their increasing complexity suggests that it is only a matter of time before they enter the phase of mass attacks.

    From coding assistants to financial transactions

    The Forcepoint report provides concrete evidence of how IPI manifests itself in professional software and financial tools. Experts have identified ten verified indicators of attacks targeting popular tools such as GitHub Copilot, Cursor and Claude Code.

    The attack scenario is mundane: a programmer uses an AI agent to analyse a library or documentation on an external site. This site contains a hidden AI instruction. When the agent ‘reads’ the site, it is instructed to execute a command in the terminal that destroys local backups. Since the agent has permission to operate on the file system (which is essential in a programmer’s job), the command can be executed without additional verification.

    Even more dangerous are attempts at financial fraud. Forcepoint points to cases where complete transaction instructions are sewn into web content, e.g. PayPal.me links with a predefined amount along with step-by-step instructions on how the agent is to finalise the payment. In systems where AI has access to digital wallets or corporate payment systems, the risk of capital loss becomes immediate.

    The paradox of detection and the challenges for business

    One of the most worrying findings from the Forcepoint report is the so-called detection paradox. The phrases and keywords used by attackers to inject hints are identical to the terminology the cyber security community uses to describe and analyse these threats. This renders simple filters based on word blacklists ineffective – either blocking legitimate expert communications or letting intelligently worded attacks through.

  • Why PCHE chips are key to the next stage of artificial intelligence development

    Why PCHE chips are key to the next stage of artificial intelligence development

    If it seems like the semiconductor market is back in the spotlight, that’s because it really is. ASML, the world’s leading supplier of photolithography systems, recently reported that the company’s share value has risen by around 97% in the last six months, reflecting a renewed increase in investment in chip manufacturing. However, behind the headlines is a less high-profile, and perhaps equally important, issue related to managing the heat generated both during chip production and by the AI equipment that depends on them, explains Ben Kitson, director of business development at chemical etch manufacturing company Precision Micro.

    The current cycle is atypical. Technology giants are pouring huge resources into AI data centres, generating unprecedented demand for high-performance hardware. What’s more, much of this computing hardware has already been contracted, according to Simply Wall St.

    This combination poses a real challenge for infrastructure planning, as AI system operators face high power density and unprecedented cooling requirements in their data centres.

    Traditional data centres were designed for racks with power consumption of 5-10 kW, but AI clusters now consume 30-50 kW per rack. Furthermore, advanced GPU and accelerator platforms are now reaching 100-120 kW per rack, meaning that air cooling alone is no longer sufficient.

    Thermal management at the forefront

    Thermal constraints are finally starting to attract attention. In May 2025, semiconductor giant Nvidia announced that hyperscale operators are installing tens of thousands of its latest GPUs every week, and the pace of deployment is set to accelerate further with the introduction of the ‘Blackwell Ultra’ platform.

    According to the company’s public development plan, its next ‘Ruby Ultra’ architecture will allow more than 500 GPUs to be housed in a single server rack with up to 600 kW of power consumption, highlighting the scale of the cooling challenges currently facing artificial intelligence infrastructure.

    Across the AI infrastructure sector, thermal stability has become a key constraint not only in chip design, but also in the infrastructure required to power and cool high-density computing environments.

    High-performance liquid cooling systems and microchannel heat exchangers have ceased to be niche solutions and have become essential components. The same engineering principles – precise control of fluid flow, maximisation of heat transfer and production of compact components with tight tolerances – apply to many applications today.

    The engineering expertise gained in high-precision semiconductor environments is now being applied to printed circuit heat exchanger (PCHE) technology for AI data centres, which is the interface between electronics manufacturing and energy infrastructure.

    Why PCHE systems matter

    PCHE systems are not just a more advanced version of conventional designs such as shell-and-tube or plate-and-frame heat exchangers. They are smaller, lighter and more efficient, making them ideal for space-constrained and high-density installations.

    In data centres, this translates into a higher number of racks per square metre without compromising reliability, while at the same time reducing the energy required to cool the computing equipment.

    Energy efficiency is another factor, as AI workloads are predicted to cause a significant increase in global electricity demand. Goldman Sachs forecasts an increase of up to 165% by 2030, meaning that every watt of energy used for cooling counts.

    Compact, high-performance PCHEs not only save installation space, but also help control energy costs and improve the overall energy efficiency ratio (PUE), becoming a key component of high-density AI infrastructures in hyperscale environments.

    Chemical digestion scaling

    The very qualities that make PCHEs so effective – microchannels, large heat transfer area and tight tolerances – simultaneously make them difficult to manufacture. Conventional machining allows prototyping, but is slow, causes burrs and is not cost-effective for volume production.

    Chemical etching, on the other hand, eliminates these problems by creating all the channels simultaneously over the entire surface of the plate. In this way, precise stress-free structures are achieved, and then the finished heat exchanger plate is created by diffusion welding.

    Chemical etching company Precision Micro has been producing PCHE boards since the technology was introduced to the market in the 1990s. It has a specialist 4,100sq m facility that is capable of processing thousands of boards up to 1.5 metres long and up to 2 mm thick each week. This enables batch production of etched plates and makes the facility one of the largest sheet etching centres of its kind in the world.

    This is because scaling production to thousands of boards requires tightly controlled chemical processes and rigorous quality control. Few suppliers in the world have the expertise, production capacity and process control system necessary to mass-produce etched PCHE boards.

    Pressure on the supply chain

    Producing PCHE boards in high volumes requires significant capital investment and advanced technological processes. Although new production capacity is emerging in Asian markets, many OEMs in Europe and North America continue to emphasise reliability, process repeatability and quality as key criteria when sourcing precision components.

    Working with established regional partners can reduce logistical complexity, improve intellectual property protection and ensure consistent quality, especially when supply chains are looking for local suppliers of core competencies.

    Etched flow plates and high-performance heat exchangers are an essential, but often invisible, part of the AI ecosystem. Through precise temperature control, they help data centres maintain high-density computing racks without the risk of overheating and enable reliable and efficient scalability of AI infrastructure.

    This is the hidden reality behind the renewed increase in investment in chip manufacturing. Innovation is not just driven by smaller transistors, new node geometries or more efficient GPUs. They also depend on the physical infrastructure that enables these technologies to operate reliably at industrial scale.

    PCHE chips may not attract as much attention as chips or artificial intelligence models, but they underpin the performance, efficiency and scalability of both. Where every watt of energy and every fraction of a degree of temperature counts, precision thermal hardware is quietly enabling the progress of one of the fastest growing technology cycles of the last decade.

    Source: Precision Micro

  • Windows K2, Microsoft’s new strategy for dealing with the problems of Windows 11

    Windows K2, Microsoft’s new strategy for dealing with the problems of Windows 11

    In the history of Microsoft’s operating systems, it has rarely been the case that a product still has to prove its worth almost five years after its release. Windows 11, while statistically dominating the market, is at a critical turning point. The project, internally dubbed ‘Windows K2’, is not just a package of technical fixes – it is an admission of flaws in user experience (UX) design and an attempt to regain the trust of the business sector at a time when support for Windows 10 has finally expired.

    Forced statistics: The reality of market 2026

    From an analytical perspective, the current market position of Windows 11 is the result not so much of user enthusiasm as of the inevitability of the software lifecycle. Although the system now controls around two-thirds of the market, a third of the PC fleet still operates on Windows 10 or older versions. In the enterprise sector, this resistance has been particularly pronounced.

    For business, the transition to Windows 11 presented two main barriers: stringent hardware requirements (TPM 2.0 module, newer generations of processors) and operational costs due to the need to train employees and adapt infrastructure. Microsoft, realising the risk of mass migration to alternative ecosystems or extending the life of old hardware, launched the ESU (Extended Security Updates) programme. However, paid support for Windows 10 is only a temporary solution – an expensive ‘stability tax’ that companies pay to avoid a still immature system. The K2 project is supposed to be an argument for investing this money in migration rather than persisting with the past.

    Performance architecture: Tackling “resource intensity”

    One of the most serious criticisms of Windows 11 is its inefficiency in resource management compared to its predecessor. Benchmark tests on identical hardware indicated that Windows 11 shows a greater appetite for RAM, without offering a commensurate increase in productivity in return. For IT departments managing thousands of workstations, this system ‘overweight’ means a shorter hardware lifecycle and higher TCO.

    A key element of the K2 operation is the full integration of the WinUI 3 structure. Microsoft is aiming to unify the interface, which is expected to eliminate historical legacies in the code that slow down File Explorer or the Start Menu. From a business point of view, the smoothness of the interface is not a question of aesthetics, but of ergonomics. Every second of delay in rendering menus or searching for files on a corporate scale translates into measurable efficiency losses.

    An end to ideology in favour of pragmatism

    Over the past few years, Microsoft has tried to impose its vision of the system as a service platform on users, manifesting itself through, among other things:

    • Stiff, limited taskbar.
    • Intrusive suggestions and ads in the Start Menu.
    • Aggressive promotion of Edge, Bing and OneDrive services.

    From a systems administrator’s perspective, this approach is problematic. An operating system in a professional environment should be a transparent tool, not a marketing channel. Pavan Davuluri’s announcements about restoring full functionality to the taskbar (including the ability to position it freely) and reducing unwanted content in the Start Menu demonstrate a return to pragmatism.

    Removing the ‘advertorial’ and intrusiveness of MSN services from the widgets is a step towards regaining the professional nature of the system. Business does not need the weather forecast interspersed with tabloid gossip inside a work tool. The K2 project seems to understand that control of the desktop must return to the user and administrator.

    Copilot: From euphoria to manageable assistance

    Artificial intelligence has become a cornerstone of Microsoft’s strategy, but the way it has been implemented in Windows 11 has been controversial. The integration of Copilot into applications such as Notepad and Paint was seen by many professional users as an unnecessary burden on the system and a potential risk to data confidentiality.

    There is a significant redefinition of the role of AI within the K2 project. Microsoft is moving away from the concept of ‘AI everywhere’ to ‘AI where it makes sense’. For the business sector, the most significant change is the ability to fully manage and disable Copilot functions on computers managed by central policies (GPO/Intune). This is critical for companies in regulated industries (finance, medical, legal) where uncontrolled data flow to the cloud is unacceptable. Copilot is intended to become an optional assistant rather than an integral, non-removable part of the system kernel.

    Repairing the feedback loop

    The Windows 11 release cycle was plagued by unstable updates that could cripple entire departments. Criticism focused on prioritising new features over code quality. As part of Operation K2, Microsoft announced a ‘resuscitation’ of the Windows Insider programme.

    For the business, this signals that the patch testing process will become more rigorous. The promise that Insider feedback will realistically influence the final shape of the update is key to avoiding a Patch Tuesday scenario. Additionally, greater flexibility in deferring updates and streamlining the configuration process for new devices (OOBE) is expected to reduce technical downtime, a direct gain for the operational agility of businesses.

  • Data centre spending peaks. How is AI driving infrastructure construction?

    Data centre spending peaks. How is AI driving infrastructure construction?

    Market forecasts for the technology sector are rarely so clear-cut. According to the latest data from analyst firm Gartner, global IT spending will reach $6.31 trillion in 2026. This is evidence of a shift in the centre of gravity of global business. The 13.5 per cent year-on-year increase, significantly higher than previous estimates, is a direct result of the artificial intelligence infrastructure arms race.

    A foundation of concrete and silicon: Exploding the data centre sector

    The most glaring point in the report is the dynamics of investment in data centres. Gartner predicts that spending in this segment will grow by 55.8% in 2026, surpassing the $788 billion barrier. To understand the scale of this phenomenon, it is important to look at it through the lens of technological change: we are not dealing with a simple expansion of existing resources, but with a complete reconfiguration of computing architecture.

    Traditional data centres, optimised for data storage and standard business applications, are giving way to HPC facilities. These are designed for the specific requirements of graphics processing units (GPUs) and TPUs, which are at the heart of modern AI. The surge in investment extends not only to the servers themselves, but also to advanced liquid cooling systems, high-density power infrastructure and enabling technologies, without which scaling large-scale language models (LLMs) would be impossible.

    In parallel, the IT services segment, infrastructure deployments and the IaaS model will generate a turnover of $1.87 trillion. This suggests that the market is ripe for consuming computing power in a hybrid model, where physical infrastructure goes hand in hand with specialised management.

    The dominance of hyperscalers: The computing oligopoly

    A phenomenon of a structural nature is the increasing concentration of computing power in the hands of a few players. By 2031, hyperscalers – mainly Microsoft, Google (Alphabet) and AWS (Amazon) – are forecast to control as much as 67% of global data centre capacity.

    This year alone, these three giants plan to spend more than $500 billion on capital expenditure related to AI infrastructure. Such gigantic outlays create a barrier to entry almost impossible for new players to overcome. For businesses, this means that they have to strategically choose a cloud provider that de facto becomes a partner in delivering a data-driven competitive advantage.

    We are also seeing a new geopolitical map of IT investment. Microsoft’s $25 billion investment in Australia or Meta’s construction of the world’s 32nd data centre show that the availability of stable energy sources and space is becoming more important than proximity to traditional business clusters.

    Strategic alliances and supply chain

    Analysis of recent market deals sheds light on the direction in which the industry is heading. Anthropic’s agreements with Google and Broadcom to supply TPU (Tensor Processing Unit) power from 2027 onwards point to the growing importance of proprietary chips to make the giants independent of the dominance of third-party processor suppliers.

    Even the biggest players need flexibility and specialised GPU cloud providers to cope with surges in computing power demand, as evidenced by Meta’s $21 billion partnership with CoreWeave. The biggest profits will be generated not by the AI developers themselves, but by the companies supplying the ‘components’ of this revolution – from accelerator manufacturers to power suppliers.

    Market insights for business

    In the context of the upcoming 2026 Investment Summit, business leaders should consider three key lessons:

    1. Infrastructure as a bottleneck: A 55.8% increase in spending on data centres suggests that access to computing power may become a scarce commodity. Companies planning large-scale AI deployments need to secure infrastructure resources in advance to avoid product development downtime.
    2. The need for cost optimisation: With IT spending reaching $6 trillion, efficiency becomes key. The shift from generic cloud solutions to AI-optimised infrastructure (such as IaaS supported by TPUs/GPUs) will determine the margins of digital projects.
    3. A new ecosystem of suppliers: Companies such as Broadcom and CoreWeave are worth watching. They represent a new category of technology partners who, through specialisation, are able to provide the components needed to scale AI faster and cheaper than traditional hardware suppliers.
  • Asseco South Eastern Europe publishes results: Leap in profitability

    Asseco South Eastern Europe publishes results: Leap in profitability

    In the first quarter of 2026, Asseco South Eastern Europe (ASEE) proved that in the mature technology sector, the key to success is not just to aggressively grow revenues, but to rigorously improve profitability. The company’s results for the first three months of the year show a clear disparity between scale growth and profit dynamics. While consolidated revenues grew by a solid 9% to PLN 434.5 million, net profit attributable to shareholders of the parent company shot up by an impressive 33% to PLN 47.5 million.

    This jump in efficiency is primarily due to the Banking Solutions segment. The Group was able to translate the increased scale of operations into real margin improvement, which, with EBITDA up 13% (to PLN 84.8 million), suggests deep cost optimisation within the regional operations. Importantly, this growth is almost entirely organic. Despite last year’s acquisitions, the newly acquired companies contributed just €0.6m to revenues. This means that ASEE’s growth engine is running at full capacity based on existing, already integrated resources, rather than by ‘buying’ results.

    Analysing the structure of these figures, one can conclude that the company has entered a phase of mature monetisation of previous investments in the Balkan region and Turkey. The focus on the banking sector and authentication technologies is proving to be an extremely apt strategy in an era of accelerated digitalisation of financial services in this part of Europe. The dynamics of operating profit, which grew by 18%, confirms that ASEE’s business model is highly scalable – the company is able to generate significantly higher profits without a commensurate increase in operating expenses.

    From a business perspective, it is worth noting the potential inherent in the integration of new entities. Although their current impact on the group’s bottom line is marginal, they represent strategic beachheads for future expansion. It seems reasonable to keep a close eye on the pace of integration of these assets into the group’s ecosystem in the coming quarters, as they could become further fuel for margins. Investors and management may also want to consider a greater focus on diversification in contact centre and cyber security solutions. This will preserve the resilience of the results in a possibly saturated market for traditional banking systems. Maintaining current cost discipline, while subtly scaling new assets, appears to be the optimal path to sustaining market leadership in the region.

  • Big Tech vs Australia. New law to force platforms to pay publishers

    Big Tech vs Australia. New law to force platforms to pay publishers

    Australia is once again becoming a global testing ground in the state-BigTech relationship. The government in Canberra has announced plans to introduce a ‘News Bargaining Incentive’ – a mechanism to replace the existing, ineffective 2021 regulations. The new regulation presents giants such as Meta, Alphabet and TikTok with a stark choice: either negotiate commercial deals with local publishers, or face a tax of 2.25% of their local revenues.

    According to the bill, which is expected to come into force in July 2025, the proceeds of the new levy will not go into the general state budget, but will be redirected directly to media organisations. The key criterion for the distribution of funds is to be the number of journalists employed, in order to promote real content creation and not just coverage. Prime Minister Anthony Albanese, despite warnings from the US administration about possible retaliatory tariffs, emphasises the sovereignty of Australian economic policy.

    Australia’s move is a shift away from a soft negotiation model to hard fiscalism. The previous system allowed platforms to avoid payment by extinguishing contracts or, in extreme cases, blocking news content, something the Met has already tested in 2021. The current proposal is much harder to neutralise from an operational level – a tax on revenue is a cost that cannot be avoided with a simple algorithm change.

    However, the geopolitical risks are worth noting. Donald Trump’s announcements of tariffs on countries that tax US technology companies suggest that local journalism protection could become the trigger for a wider trade conflict. For the technology sector, this represents a period of increased volatility and the need to review strategies for presence in markets with strong protectionist tendencies.

  • The end of Microsoft’s monopoly on OpenAI. What does the new agreement mean for the market?

    The end of Microsoft’s monopoly on OpenAI. What does the new agreement mean for the market?

    The most influential partnership in the history of artificial intelligence has just undergone a fundamental transformation. Microsoft and OpenAI have announced a renegotiation of the terms of their partnership, ending Azure’s previous exclusivity to offer ChatGPT creator models. The new agreement paves the way for the startup to have a direct presence in the ecosystems of Microsoft’s biggest competitors, including Amazon Web Services and Google Cloud. While the original deal, backed by a $13 billion investment, defined the current AI landscape, both parties recognised that the existing formula had become too cramped for their growing ambitions.

    Strategic foundations for change

    Under the new arrangement, Microsoft will remain OpenAI’s primary cloud partner until 2032, and the startup has committed to spend at least $250 billion on Azure services. The Redmond giant retains priority rights to deploy new products, but loses its sales monopoly. In return, Microsoft has secured a 20 per cent share of OpenAI’s revenue by 2030, importantly including if the startup achieves so-called artificial general intelligence (AGI). Previous provisions would have allowed OpenAI to stop paying Microsoft when it made the technological leap to AGI, which was a significant risk for the investor. At the same time, Microsoft stops sharing profits with OpenAI from offering their models within Azure, simplifying the giant’s financial structure.

    The loosening of ties is a move dictated by the maturity of the market. OpenAI, as it prepares to go public, needs to demonstrate its ability to scale its enterprise business beyond a single vendor’s infrastructure, especially in a clash with the rising Anthropic. From Microsoft’s perspective, giving up some control of OpenAI’s model distribution is the price of taking off the burden of funding the giant infrastructure needed by the startup and, perhaps most importantly, easing pressure from antitrust authorities in the US and Europe. Satya Nadella’s strategy is evolving towards diversification; Microsoft is increasingly promoting its own models and third-party solutions within Copilot, reducing the critical dependence on a single technology provider.

    It is worth noting the increasing freedom to build multi-cloud strategies. It seems a good direction to review current contracts with cloud providers for upcoming AWS Bedrock or Google Vertex AI deployments, which will optimise costs and reduce latency. It is also worth monitoring the pace of Microsoft’s in-house models, as their growing role in Copilot 365 may soon offer better value for money than standard external models.

  • Benchmarks won over loyalty: Microsoft bets on Anthropic. A blow for OpenAI

    Benchmarks won over loyalty: Microsoft bets on Anthropic. A blow for OpenAI

    Microsoft ‘s choice of the Claude Mythos model as the foundation for its new software security architecture sets a significant precedent in the Redmond-based technology giant’s strategy. This decision, while at first glance it may appear to be a mere operational adjustment, in reality reveals deeper market shifts in the generative AI sector and changing priorities in digital risk management. Analysing the facts of Anthropic‘s model integration, a clear pattern can be discerned: Microsoft is moving from a phase of fascination with general AI capabilities to a phase of rigorous, benchmarked selection of specialised tools.

    A key reference point for this decision is the CTI-REALM benchmark, co-developed by Microsoft engineers. The fact that Claude Mythos scored highest in it, distancing the GPT-5.4-Cyber model, is a market signal that cannot be ignored. Microsoft, as OpenAI’s largest partner and investor, has shown that pragmatism and hard data, rather than corporate loyalty, wins in critical areas such as cyber security. This strategic approach to model vendor diversification avoids vendor lock-in and ensures access to the most effective solutions in specific niches.

    From a business perspective, integrating Mythos directly into the software development cycle is a classic implementation of the ‘Shift-Left’ strategy. The cost of fixing a vulnerability discovered at the production stage is many times higher than eliminating the bug at the code writing stage. The cited data about the detection of a vulnerability that has existed for 27 years and the success of Mozilla, which identified 271 vulnerabilities thanks to Claude Mythos, are not just technological curiosities. They are concrete indicators of return on investment (ROI). For companies operating on huge collections of legacy code, automating security audits using such high-precision models means saving thousands of hours of high-level professionals and drastically reducing the legal and reputational risks associated with potential data leaks.

    The market reaction to Mythos’ capabilities, manifested, for example, by concern in the banking and insurance sectors and interest from the NSA, suggests that there is a new kind of regulatory risk involved. Claude Mythos is seen as a dual-use technology. The model’s ability to instantaneously map vulnerabilities makes it a defensive tool of unprecedented power, but also a potential offensive instrument. The embargo under consideration by US agencies and the restrictive access under Project Glasswing suggest that in the near future, access to the most advanced cyber security models may be rationed in a similar way to armament or high-end cryptographic technologies. Companies must therefore take into account in their strategies the fact that technological advantage in the area of AI may be limited by state interventions.

    It is also worth noting a painful market lesson for OpenAI. The fact that the release of GPT-5.4-Cyber failed to draw attention away from the Anthropic solution is indicative of the change in expectations of corporate customers. The market has become saturated with promises of versatility; solutions with proven effectiveness in specific usage scenarios are now sought after. Microsoft, by implementing Claude into its 365 applications and its internal processes, de facto legitimises Anthropic as an equal, and in some respects superior, technology partner. This suggests that OpenAI’s dominance may be more fragile than stock market valuations would indicate.

    For Microsoft itself, the move is an attempt to run away from mounting criticism over historical security lapses. Redmond has understood that with the current scale and complexity of the Windows and Azure ecosystem, traditional methods of manual code review are inefficient. Using Claude Mythos as an intelligent filter to verify developers’ work is an attempt to systemically address the problem of technology debt. If Microsoft manages to significantly reduce the number of critical vulnerabilities in its products with this solution, it will set a new market standard to which all SaaS and Cloud players will have to adapt.

  • Layoffs at Big Tech 2026 – why the Meta and Microsoft are cutting jobs

    Layoffs at Big Tech 2026 – why the Meta and Microsoft are cutting jobs

    Silicon Valley is going through a painful but precise tissue replacement operation. While investors are reacting enthusiastically to new stock market records, thousands of Met and Microsoft employees are finding out that their roles are becoming redundant in the new algorithm-oriented world order. What we are seeing is no longer just an echo of the Pocovid correction, but a fundamental shift in strategic priorities.

    The Met has just announced a 10 per cent reduction in its workforce, which, combined with the elimination of unfilled vacancies, means the removal of nearly 14,000 jobs from the labour market. However, a deeper financial analysis of Mark Zuckerberg’s company reveals a second bottom to this decision. The company plans to increase capital expenditure to as much as $135 billion in 2027, focusing on building data centres and developing Superintelligence Labs.

    This is a classic example of aggressive reallocation of resources: billions saved on the ‘traditional’ workforce are funding an artificial intelligence arms race. Behind the scenes, however, there is talk of the phenomenon of “AI-washing” – conveniently attributing redundancies to technological advances to cover up the 2020-2022 recruitment mistakes.

    Microsoft in Redmond, on the other hand, is employing a more subtle but equally telling tactic. For the first time in its history, the giant has opted for a voluntary departure programme targeting around 7% of its US workforce. The ‘sum 70’ criterion (combining age and seniority) suggests that the company wants to slim down the structure of costly, experienced managers whose competencies may not be suited to the era of generative models. At the same time, Microsoft is simplifying the reward and bonus system, giving executives more leeway to reward the talent that realistically drives new business divisions.

    This trend is not isolated – Amazon, Intel or Cisco are following a similar path. There is a clear lesson for the business world: operational efficiency in 2026 is no longer about having the largest teams, but about building the most scalable systems. The technology labour market is no longer a safe haven, becoming a testing ground for a new definition of corporate productivity.

  • More expensive servers and smartphones? How the war in the Middle East is crippling production

    More expensive servers and smartphones? How the war in the Middle East is crippling production

    While Silicon Valley’s attention is focused on the architecture of the latest GPUs, the real threat to the pace of artificial intelligence development has manifested itself in the petrochemical sector. Recent disruptions in the Middle East, including the hit to the Saudi Jubail complex, have exposed the heavy dependence of global electronics on a narrow set of feedstock suppliers.

    A key flashpoint has been the stalled production of high-purity polyphenylene resin (PPE). This material is essential for the laminates in modern printed circuit boards (PCBs), the backbone of everything from smartphones to powerful AI servers. The fact that SABIC accounts for around 70% of the world’s supply of this component means that any break in its Gulf Coast facilities immediately resonates with factories in South Korea and China.

    The effects are tangible and costly. In April alone, PCB prices rose by 40% compared to March, which overlapped with the ongoing copper boom. Copper foil, which accounts for nearly 60% of raw material costs in wafer production, has become 30% more expensive this year. For manufacturers such as South Korea’s Daeduck Electronics, which supplies Samsung and AMD, this situation has forced a complete shift in management priorities. Instead of negotiating contracts with customers, operations directors now spend most of their time securing chemical supplies. Waiting times for epoxy resins have increased dramatically – from three to as much as fifteen weeks.

    The AI infrastructure sector is feeling the most pressure. Multilayer circuit boards used in data centres are many times more expensive than standard models, and prices can exceed 13,000 yuan per square metre. Despite this, cloud providers seem ready to accept these increases. With talk of the PCB market growing to nearly $96 billion by 2026, key players are prioritising continuity of supply over margins.

  • DeepSeek and Chinese AI – Why is the State Department warning allies?

    DeepSeek and Chinese AI – Why is the State Department warning allies?

    US diplomacy is entering a new phase of offensive against Chinese artificial intelligence leaders. The State Department has issued global guidelines to its outposts, ordering them to warn foreign governments about the practices of companies such as DeepSeek, Moonshot AI and MiniMax. The crux of the dispute is no longer just access to processors, but the process of so-called distillation, which Washington explicitly calls the theft of American technological thought.

    From a business perspective, distillation is a tempting shortcut. It allows smaller, cheaper-to-operate models to be trained on the results generated by powerful systems such as those from OpenAI. For Chinese startups, it’s a way to erode the US advantage at a fraction of the research cost. However, according to the US administration, this process not only copies intellectual architecture, but is done without authorisation, hitting Silicon Valley’s commercial foundations.

    DeepSeek’s situation is key here. The startup, which recently electrified the market with its V3 model, has just unveiled the V4 version, optimised for Huawei hardware. This is a clear signal of building an independent ecosystem that challenges the hegemony of Nvidia and Microsoft. While DeepSeek has consistently denied using synthetic data from OpenAI, US lawmakers have received reports suggesting the opposite: deliberately replicating the behaviour of models in order to clone them.

    Washington alerts that ‘distilled’ models often lack built-in fuses and controls, making them unpredictable for corporate use. At the same time, many Western institutions are already banning the use of DeepSeek tools, citing data privacy concerns.

    The timing of this escalation is no coincidence. The escalation in rhetoric comes just weeks before President Donald Trump’s planned visit to Beijing. The dispute over AI intellectual property becomes a bargaining chip in a broader technology war, which, after a brief period of relaxation, is again gaining momentum. The choice of AI model supplier is ceasing to be a purely technical decision and is becoming a statement in a growing geopolitical conflict.

  • Alphabet invests $40bn in Anthropic. Is it fighting for control with Amazon?

    Alphabet invests $40bn in Anthropic. Is it fighting for control with Amazon?

    Alphabet, Google’s parent company, has announced its intention to invest up to $40 billion in Anthropic, a startup that for the Mountain View giant is both a key cloud customer and one of its fiercest competitors in the race for supremacy in artificial intelligence.

    The structure of this deal reflects the new reality of funding the AI sector, where capital is closely tied to specific outcomes. Google will put up $10 billion in cash at a $350 billion valuation for the startup. The remaining 30 billion will only be deployed once the developers of the Claude model achieve rigorous performance targets. For Alphabet, this is not only an investment of capital, but above all an attempt to forge closer ties with an entity that has emerged as a leader in niches where Google is still searching for its identity.

    The move comes just days after Amazon pledged its own $25 billion cash injection to Anthropic. A situation where two of the world’s biggest cloud providers are bidding for the same startup shows how desperately tech giants need the success of external models to drive sales of their own computing infrastructure.

    Anthropic’s driving force is no longer just the promise of secure artificial intelligence, but real financial results. The company’s annual revenue has just surpassed the $30 billion barrier, an impressive jump from the $9 billion recorded at the end of 2025. Investors are responding enthusiastically, with some offers from the venture capital market valuing the company at up to $800 billion. Underpinning this growth is Claude Code, a tool that dominates the software segment, and Anthropic’s Cowork agent, whose plug-ins have recently caused jitters in the stock markets, driving down the valuations of traditional SaaS software companies.

    Anthropic’s greatest challenge, however, remains its ‘hunger for power’. Scaling the models requires infrastructure of a scale never seen before. The startup is securing this through multi-year agreements with Broadcom and CoreWeave, as well as an ambitious $50 billion plan to build its own data centres in the US.

    The market is divided into specialised tools and Anthropic, with its focus on coding and autonomous agents, is proving that it is possible to successfully challenge general-purpose models. Alphabet, by investing in Anthropic, is buying itself an insurance policy in case the startup’s approach proves to be the target business standard.

  • 14,000 cyber attacks in three months: Why is the 1970s protocol still a big risk for the industry?

    14,000 cyber attacks in three months: Why is the 1970s protocol still a big risk for the industry?

    The security of modern factories and power plants still relies on technology from almost half a century ago, which is becoming a growing concern for global business. The latest report from experts at Cato Networks warns of a wave of cyber attacks targeting industrial controllers (PLCs). Hackers are taking advantage of the fact that the widely used Modbus protocol was developed in the 1970s and has no security features – for someone who knows how to use it, taking control of a networked machine is worryingly easy today.

    Modbus, a communication protocol developed in 1979, is in the spotlight. At the time of its creation, no one assumed that industrial controllers (PLCs) would ever be connected to the public Internet. Modbus was designed with trusted, isolated internal networks in mind. As a result, it was completely devoid of the mechanisms we recognise as elementary today: encryption and authentication. This openness, once an advantage to facilitate system integration, has become an invitation to hackers.

    The scale of the problem is illustrated by data collected by a team led by Dr Guy Waizel and Jacob Osmani. Over just three months in autumn 2025, they identified coordinated activity targeting PLCs, involving more than 14,000 attacked IP addresses in 70 countries. These are not isolated incidents, but a systematic mapping of global industry vulnerabilities.

    The attackers’ strategy is multi-layered and precise. Most of the identified interactions – more than 235,000 requests – involved so-called data extraction. The hackers do not immediately try to destroy machines; instead, they quietly read the contents of registers, learning about process parameters and device configuration. The next step is to ‘fingerprint’ the hardware. By knowing the manufacturer and software version, criminals can match specific security vulnerabilities to a particular machine.

    What starts as innocent information gathering can quickly turn into a catastrophic scenario. To understand the real risks, Cato Networks experts ran a simulation on the Wildcat-Dam project. They demonstrated that, with just a laptop and access to the unsecured Modbus protocol, they were able to take control of the digital logic of the firewall. By manipulating register values, the researchers caused an artificial flood, overriding security limits and remotely opening the dam’s gates.

    The geography of the attacks coincides with the map of global industrial powers. The United States, France and Japan have been the main targets, together accounting for 61 per cent of incidents. It is also worrying that attackers are not confined to one industry. Although the manufacturing sector is the most common victim, traces of intrusion have been found in healthcare facilities, construction and even urban infrastructure management systems. What emerges is a picture of opportunistic hacking: attackers are looking for any available controller that has been recklessly exposed to the public network.

    Technical analysis suggests that some of this activity is coming from infrastructure located in China, although the identity of the actors remains hidden behind intermediary server systems. For business decision-makers, however, the key conclusion is not to identify a specific culprit, but to realise a structural flaw in their own systems.

  • The hangover from euphoria, or how AI agents can blow through a year’s budget in a few hours

    The hangover from euphoria, or how AI agents can blow through a year’s budget in a few hours

    Not so long ago, artificial intelligence was supposed to be the ‘ultimate solution’ to productivity problems – a digital alchemist turning empty process flows into pure efficiency gold. The ball was in full swing and the champagne was pouring from the presentations of the models promised by suppliers.

    Today, however, instead of more breakthroughs in machine reasoning, something far less spectacular is whispered about in the corridors of business conferences: the happiness bill. For it turns out that the ticket of admission to the world of AI was not a one-off fee, but a dynamic, hard-to-tame subscription for the future, the cost of which can rise exponentially overnight.

    What we are witnessing is the birth of ‘token fever’. It’s a state where the enthusiasm of engineers collides with the dismay of CFOs. For decades, we have been accustomed to the SaaS model – predictable, fixed licence fees that were easy to budget for. Generative AI has shattered this order, introducing a ‘probabilistic’ model. Here, a mistake in one agent’s logic or an overly effusive prompt can burn up financial resources faster than traditional cloud infrastructure consumes electricity.

    Uber and a mistake worth billions

    If the tech industry was looking for the ‘canary in the coal mine’, it found it in San Francisco in April 2026. At the IA HumanX conference, Praveen Neppalli Naga, Uber’s CTO, gave a speech that sobered even the biggest optimists. The giant, which had invested an astronomical $3.4 billion in research and development in 2025, faced a wall: its annual budget for artificial intelligence had evaporated in just four months.

    It wasn’t a matter of one misguided investment decision, but a side effect of an engineering fantasy with no brakes. Uber, aiming for aggressive technology adoption, encouraged its developers to use agents like Claude Code en masse. The result? 11% of back-end code was already being generated by artificial intelligence, but the price for this ‘efficiency’ proved deadly. Without proper performance filters and oversight of token consumption, AI ceased to be a lever for savings and became an out-of-control spending engine.

    The case of Uber is a classic example of a ‘tsunami of tokens’. Autonomous agents, entering infinite iteration loops with no clear limits, can burn a fortune in the time it takes to drink an espresso. It’s a painful lesson for any CIO: innovation without financial architecture is just a very expensive hobby. Naga admitted that the company had to go back to the design table to completely redefine its strategy. Any company that deploys AI today without a rigorous profitability analysis risks having its success measured not by margin growth, but by the speed with which it exhausts its own resources.

    Goodbye SaaS, hello volatility

    We are bidding farewell to an era where the IT budget was like a fixed Netflix subscription – predictable, secure and giving a false sense of control. For years, the SaaS model accustomed us to per-user licensing, where the only risk was a surplus of accounts that no one used. Generative AI brutally ends this period of ‘licensing peace of mind’ by introducing a billing model that is more akin to electricity bills during an energy crisis than traditional software.

    The shift from fixed costs to variable costs is a fundamental paradigm shift. In 2024, IT departments were buying AI access in a lump sum. Today, in 2026, vendors such as OpenAI and Anthropic have eliminated unlimited Enterprise plans, introducing dynamic billing for token consumption. The reason is mundane: AI agents have destroyed the distribution curve on which the old business was based. The subscription model only worked when the ‘lec’ users subsidised the ‘intensive’ ones. One, when we started employing autonomous agents, the differences became absurd. Analyses show cases where a user paying $100 a month generated costs of $5,600 in a single billing cycle. A subsidy ratio of 25 to 1 is a straightforward path to supplier bankruptcy, hence the sharp turn towards ‘use-pay’ billing.

    This makes IT spending probabilistic. This radically differentiates AI from the traditional cloud. A forgotten server in AWS generates a fixed, linear cost. A poorly designed prompt or agent without iteration limits, on the other hand, can go into a loop and generate millions of useless tokens in seconds. In this new world, a programmer’s logical error doesn’t end up ‘crashing’ the application – it ends up draining the company account at the speed of light. This means an immediate redesign of IT finance and the abandonment of rigid budget frameworks in favour of flexible management of the ‘economics of inference’.

    Tsunami of tokens – a new unit of risk

    In the modern CIO’s dictionary, a new, much more predatory term has emerged alongside ‘technical debt’: the ‘token tsunami’. This is a phenomenon in which autonomous agents, rather than freeing up staff time, fall into loops of endless iterations, burning up budgets with the intensity of a steel mill. The problem is that a bot, unlike a human, never feels fatigue or shame for duplicating mistakes – it simply consumes resources until it encounters a hard limit or empties its account.

    The scale of the problem is such that even the biggest players have had to revise their dogmas. Gartner is sounding the alarm: by the end of 2027, up to 40% of agent-based AI projects will be cancelled. The reason? Not a lack of vision, but brutal mathematics – rising costs while lacking precise tools to measure real business value.

    Here is where the biggest paradox of 2026 manifests itself: the unit price per token is steadily falling, but the total bill is rising. Indeed, AI agents consume between 5 and even 30 times more units per task than a standard chatbot. This is a classic trap of scale – an efficiency that becomes economically inefficient by its sheer volume. If your AI strategy is based solely on the hope that ‘models will be cheaper’, you’re just building a castle in the sand that the coming tsunami will wash away in one billing cycle. Without rigorous control over what machines process and why, modern IT becomes hostage to its own unbridled computing power.

    AI FinOps – the new alchemy of IT finance

    If you thought Cloud FinOps was challenging, get ready for a no-holds-barred ride. Traditional cloud optimisation was about simple craftsmanship: shutting down unused servers and keeping an eye on instance reservations. AI FinOps is a completely different discipline – it’s probabilistic rather than deterministic resource management. Here, the unit of expenditure is no longer processor man-hours, but the cost of a useful response relative to the cost of an erroneous or ‘hallucinated’ response.

    In 2026, as many as 98% of FinOps teams consider spending on AI as their number one priority. The reason is simple: in the traditional cloud, a technical error rarely leads to an exponential increase in cost. In the world of AI agents, misconfigured prompt logic can burn through budgets faster than you can refresh your dashboard. This is forcing IT leaders to define a new metric – the economics of inference. We no longer count how much a model costs us, but how much the operational success gained from its work costs us.

    And that means rewriting dashboards from scratch. Classic management frameworks such as ITIL 4 or COBIT, while providing a solid base, today require immediate extensions to include prompt lifecycle management or agent iteration limits. AI FinOps is not just about Excel tables; it is a new management philosophy where an engineer must think like an economist and a financier must understand LLM architecture. Without this synergy, buying tokens is akin to pouring rocket fuel into a hole in the tank – the effect is spectacular, but extremely short-lived and frighteningly expensive.

    How not to burn through a decade of innovation

    The time window for non-punitive errors has just slammed shut. To avoid a ‘token tsunami’, organisations need to move from a phase of joyful adaptation to a phase of rigorous architecture. The first and most pressing step is to conduct a token consumption audit – not a general one, but a precise one, broken down by specific teams and use cases. When a query to a model can cost as much as a good cup of coffee, we need to know who is ordering a double espresso without a clear business need.

    The key to financial survival is the implementation of three technical foundations:

    • RAG (Retrieval-Augmented Generation): Providing the model with only the data it actually needs, drastically reducing the token ‘diet’.
    • Specialist models: Abandoning the ‘all-knowing’ giants in favour of smaller, cheaper and finely-trained models for repetitive tasks.
    • Corporate charter for the bot: Establish rigid iteration limits and budgets per agent. This is a matter of elementary financial hygiene.

    We also need to review how our people work with the technology. Identifying the ‘Centaurs’ (experts empowering their AI skills) and eliminating the ‘Automators’ (unreflectively delegating work to a machine) will allow a real increase in ROI. The most expensive and fastest way to waste an innovation budget is to buy millions of tokens just to have teams working exactly as they will in 2022, only with an on-screen chat interface.

     

  • Intel is back in the game – results above expectations and massive share gains

    Intel is back in the game – results above expectations and massive share gains

    After years of strategic drift and management missteps, Intel under Lip-Bu Tan is beginning to prove that its turnaround plan is more than just aggressive cost-cutting. Its latest second-quarter revenue guidance, settling in at $14.3 billion, not only beat Wall Street’s expectations, but triggered a euphoric 19 per cent rise in share value. This signals that the former Silicon Valley icon has found its path in a world dominated by artificial intelligence.

    A strategic shift towards CPUs and AI agents

    Key to Intel’s optimism is a paradigm shift in the data centre sector. While the first phase of the AI boom undeniably belonged to Nvidia’s GPUs, used to train powerful models, the market is now entering the deployment (inference) phase. This is where Intel’s CPUs are regaining relevance. In an architecture based on autonomous AI agents, requiring advanced reasoning and handling complex workloads, traditional CPUs are proving to be an indispensable part of the infrastructure. Lip-Bu Tan makes it clear that this demand is not just wishful thinking, but a real trend coming from the major cloud providers.

    Partnership with Musk as foundation of foundry

    The biggest image and technology victory of recent days, however, is securing Tesla as a key customer for the upcoming 14A technology process. Elon Musk’s participation in the Terafab project is a massive credibility boost for Intel’s manufacturing business (Intel Foundry). The partnership aims to create next-generation processors for robotics and data centres, directly challenging TSMC’s dominance. While financial details remain confidential, the strategic alliance with players such as Musk, Nvidia and SoftBank gives Intel the fuel it needs to transform itself into a modern, contract chip foundry.

    A risky road to 2030

    Despite its financial success in the first quarter, where adjusted earnings per share were 29 cents, Intel is still treading on thin ice. The transformation from ‘old giant’ to ‘nimble foundry athlete’ requires not only breaking through manufacturing bottlenecks, but also maintaining the pace of innovation in the face of increasing competition from AMD and ARM. For investors, however, the current valuation may be an attractive entry point. If Intel successfully manages demand for silicon in the coming robotics era, today’s ‘high-stakes gamble’ could end with the company returning to the throne of technological empire.