Tag: AMD

  • Satellite connectivity and AI: AMD collaborates with NEC and NASA on new technologies

    Satellite connectivity and AI: AMD collaborates with NEC and NASA on new technologies

    As NASA shifts its focus from short exploration missions to a sustained presence on the moon, the frontline of the battle for technological supremacy moves to where data latency becomes a critical bottleneck. In the new reality, where distance from Earth’s server rooms makes ongoing data analysis impossible, the key to success is the ‘intelligent edge’ (edge computing) – and it is here that AMD sees its chance to define the standards of the new space age.

    The Santa Clara giant’s strategy is based on a simple premise: in order for America to lead in space, it must have an edge in the production of advanced chips capable of operating in extreme conditions. The traditional approach of sending raw data back to Earth is no longer efficient for projects such as the NISAR mission or the Artemis programme. The solution is the Versal series of adaptive SoCs, which combine programmable logic with AI engines, allowing information to be processed directly on board the spacecraft.

    For commercial partners such as Blue Origin, the choice of AMD technology is not only a question of performance, but above all flexibility. The flight computers powering the Mark 2 lander test vehicle need to be ready to update AI algorithms after launch, something that was previously impossible with rigid hardware architectures. The ability to reconfigure systems in orbit allows the mission to be optimised in response to unforeseen challenges, dramatically increasing the return on investment for multi-year space programmes.

    The application of these technologies goes beyond NASA’s ambitions. Japan’s NEC is using adaptive AMD chips to build a constellation of optical communications satellites, which is set to revolutionise data routing in extraterrestrial space. This shows that the competition for silicon in space is not just a matter of national prestige, but a real market for infrastructure services.

    AMD’s success on Mars, where FPGAs supported the Perseverance rover’s navigation, provides a solid foundation of confidence. However, the real test for the company will be the coming decade, when autonomous systems will have to cope with radiation and extreme temperatures without support from ground control. In this high-margin sector, where reliability is more valuable than raw computing power, AMD is positioning itself as an essential architect of the new orbital data economy.

  • AMD and Meta: $60bn contract will change the balance of power in AI

    AMD and Meta: $60bn contract will change the balance of power in AI

    Meta Platforms, the giant ruled by Mark Zuckerberg, has struck a $60 billion deal with AMD. At first glance, this is a classic chip supply contract to secure the infrastructure for ambitious AI projects. However, a deeper analysis of the structure of this deal reveals a mechanism that is increasingly worrying investors: a return to so-called closed-loop deals.

    Capital for silicon

    The key element of the deal is not the amount itself, but Meta’s right to take up to 10% of AMD shares. The warrant-based mechanism, which vests as AMD shares reach certain price targets (up to US$600), means Meta ceases to be a mere customer and becomes a strategic co-owner.

    For AMD CEO Lisa Su, this is a powerful vote of confidence. Acquiring such a big player helps to challenge Nvidia’s dominance, especially in the upcoming MI450 chip cycle. The market reacted enthusiastically, lifting AMD’s share price by 6%, while Nvidia saw a slight decline. However, critics such as analysts at Hargreaves Lansdown rightly point out that having to give away a tenth of the company suggests that AMD still needs to ‘buy’ its market share, rather than relying solely on organic demand.

    Diversification as an insurance policy

    Meta’s strategy is clear: independence from a single supplier. While the company still buys millions of processors from Nvidia and develops its own chips, the alliance with AMD gives it direct influence over hardware architecture. New chips are to be optimised for inference – a stage that experts believe will soon eclipse the market for just training models in terms of revenue generation.

    This partnership is part of a wider trend in which Big Tech – with almost unlimited capital at its disposal – is taking control of the supply chain. Similar moves by Alphabet towards Anthropic, or AMD’s earlier pacts with OpenAI, create a web of mutual capital ties.

  • Market cools enthusiasm towards AMD

    Market cools enthusiasm towards AMD

    Wednesday’s 13% drop in AMD shares is not just a reaction to the numbers, but more importantly a signal of growing scepticism about the pace at which the leadership-chasing players can realistically monetise the AI revolution. Although AMD forecast first-quarter revenue of around $9.8 billion – formally beating analysts’ consensus – investors saw cracks in the report that not even an unexpected cash injection from China could bridge.

    The foundation of the concern is that without the $390 million from licensed chip sales to the Chinese market, AMD’s key data centre segment would have failed to meet market expectations. For analysts, this is evidence that the organic growth momentum in AI may not be as resilient to shocks as management paints it. While Nvidia aggressively defends its territories, AMD has to contend with a new front: the growing dominance of custom chips designed in-house by technology giants.

    AMD’s situation is complicated by the macroeconomic context and strategic partnerships of its competitors. Google’s deal with Anthropic to supply billions of dollars worth of processors is a clear signal that the market is looking for alternatives, but not necessarily where AMD would like to see them. What’s more, AMD’s market valuation – hovering around 33 times future earnings – today seems like a burden compared to the much lower multiples of infrastructure partners such as Super Micro Computer. The latter, by raising its annual forecasts, has become the beneficiary of the optimism that the Santa Clara chipmaker has lacked.

    CEO Lisa Su remains calm, announcing a sharp acceleration of shipments to OpenAI and other key players in the second half of the year. The ‘patient offensive’ strategy is based on the assumption that global memory shortages will not hit production, and that demand for next-generation servers will eventually translate into hard profits. But there is a clear lesson for business: in the age of AI, the mere promise of technology is not enough. What matters is the ability to execute rapidly and to withstand attempts by major customers to diversify their supply.

  • Physics versus marketing. What do you really gain by investing in 1.8nm and 3nm processors?

    Physics versus marketing. What do you really gain by investing in 1.8nm and 3nm processors?

    Intel is bringing out the heavy guns in the form of third-generation Core Ultra processors, known as Panther Lake, which are based on 18A, or 1.8 nanometre, technology. On the other side of the market barricade is AMD with its Ryzen chips, baked in TSMC’s Taiwanese factories using a 3nm process. On paper, Intel’s advantage seems crushing, suggesting a technology almost half the size and more modern. However, in the CFO’s portfolio, this difference may prove to be a statistical error. In a world where ‘nanometre’ has become a brand rather than a measurement, business must learn to look at what really drives performance, ignoring the labels on the boxes.

    When IT managers look at the specifications of new laptops or servers, their gaze naturally goes to the numbers, because in the technology industry, smaller usually means better, faster and more economical. Manufacturers are well aware of this, which is why the arms race in the semiconductor sector has moved from the physics labs to the marketing departments. To make an informed purchasing decision for 2025-2026, you need to understand where the engineering ends and the wordplay begins.

    The grand illusion of the nanometre

    For decades, the IT industry has operated with a simple and understandable currency. Back in 1995, when we talked about the 350 nm technology process, it meant that the gate of a transistor on a silicon wafer was actually 350 nanometres long. The engineer and the salesman spoke the same language, and the node name was a direct reflection of physical reality. However, this order broke down in the late 1990s with the introduction of new technologies for building microtransistors, which broke the direct link between the node name and the physical dimension of the components.

    Today, names such as ‘Intel 4’, ‘18A’ (meaning 18 angstroms) or ‘TSMC N3’ are predominantly trade names. Treating them as a technical measure of length is a mistake that can lead to misleading business conclusions. It is a situation analogous to the automotive market, where the model designation of a car, for example the BMW 330, no longer necessarily denotes a three-litre engine. The number now serves to position the product in the range, rather than to describe its technical parameters precisely.

    For business, this means that the approach to analysing offerings needs to change. The fact that one processor is labelled ‘1.8 nm’ and another ‘3 nm’ does not automatically mean that the former is physically much smaller. In fact, the differences may be minimal and, in extreme cases, the packing density relationship may even be the opposite of what the numbers suggest.

    The hard currency of silicon

    Since nanometres are conventional, an informed investor or IT manager should look at other metrics. If we look under the hood of Panther Lake processors or the latest Ryzen processors, we find objective parameters that PR departments are reluctant to talk about, but which are crucial for engineers. These are, first and foremost, Gate Pitch, which is the minimum distance between individual transistors, and Metal Pitch, denoting the minimum distance between the copper paths connecting these components.

    Analysis of this hard data leads to surprising conclusions. Comparing the current generation of processes, it appears that the Intel 4 technology and the competing TSMC N4 have almost identical physical characteristics, with a gate pitch oscillating between 50 and 51 nanometres. Despite the different trade names, the packing density of the technologies is very similar. The future looks even more interesting, with Intel promoting an 18A process suggesting 1.8 nm, while TSMC is preparing to implement a 2 nm process. Paradoxically, according to many technical analyses, it is the Taiwanese ‘2 nm’ that may offer higher transistor density than the US solution. Intel compensates with marketing that suggests leadership, but in practice the two giants are going head to head, and their nodes will end up roughly equivalent in real-world performance.

    Physics translates into costs

    Although the labels are confusing, the technological advances are real and central to the total cost of ownership (TCO). Regardless of the nomenclature, the drive towards denser transistor packing is driven by the inexorable laws of physics, as a smaller transistor with a shorter path between source and drain requires a lower voltage to switch its logic state. For the company, this translates directly into energy efficiency and thermal performance.

    The chip, made using a newer, denser process, uses less power for the same load. On the scale of a single laptop, this means an extra hour of battery life during a business trip, while on the scale of a data centre, it translates into thousands of zlotys of savings on electricity bills. The thermal aspect is equally important, as less power consumption means less heat generated. This allows the processors to run at higher frequencies without the risk of thermal throttling, ensuring more stable operation of demanding applications. Therefore, Intel Panther Lake will be inherently better than its predecessor not because of the name ’18A’, but because the engineers have actually improved the physical structure of the chip, which is also true for AMD using TSMC improvements.
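    The power relationship behind those savings can be sketched numerically. A minimal Python illustration, assuming hypothetical capacitance, voltage, frequency and electricity-price figures (none of them from the article), of how the CMOS dynamic-power relation P ≈ C·V²·f turns a lower switching voltage into an energy saving:

```python
def dynamic_power(capacitance_f, voltage_v, frequency_hz):
    """Approximate CMOS dynamic power: P = C * V^2 * f."""
    return capacitance_f * voltage_v ** 2 * frequency_hz

def annual_energy_cost(power_w, price_per_kwh=0.25):
    """Cost of running a part flat-out for a year at an assumed tariff."""
    kwh_per_year = power_w * 24 * 365 / 1000
    return kwh_per_year * price_per_kwh

# Hypothetical chips: the denser node switches at 0.9x the voltage,
# at the same frequency and effective capacitance.
old = dynamic_power(1e-9, 1.00, 3.0e9)   # 3.0 W for this toy block
new = dynamic_power(1e-9, 0.90, 3.0e9)

saving = 1 - new / old
print(f"power reduction: {saving:.0%}")   # 0.9^2 = 0.81 -> 19% less power
print(f"annual cost, old: {annual_energy_cost(old):.2f}")
```

    Because voltage enters squared, even a modest voltage drop compounds into a disproportionate power saving, which is exactly why the physical node improvements matter regardless of what the node is called.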

    The strategic trap of the single supplier

    There is another element of business risk in this technological jigsaw puzzle, related to incompatibility. Intel’s, TSMC’s and Samsung’s manufacturing processes have diverged dramatically, with each giant using different chip production methods, deploying technologies such as FinFET or RibbonFET at different times. This means that chip designers such as AMD and NVIDIA are firmly tied to their chosen factory and cannot move production to a competitor overnight. Adapting a design to another factory is a process that takes up to a year and incurs huge costs. When choosing a hardware platform for a company, decision makers are therefore choosing not just a processor, but the entire supply chain, where the stability of the manufacturing partner becomes a strategic factor, more important than the marketing name of a nanometre.

    We are approaching the point where comparing processors solely on the basis of lithography becomes pointless. Intel Panther Lake and the upcoming Ryzen generations will be powerful chips, but their value to business is not based on the labels on the box. When planning infrastructure purchases, the key indicator should be the performance-per-watt ratio. It is this parameter that determines whether an investment in new hardware will translate into real productivity gains and reduced operating costs for the business.

  • Intel regains the initiative and business says ‘check’ to AI hype. Key takeaways from CES 2026

    Intel regains the initiative and business says ‘check’ to AI hype. Key takeaways from CES 2026

    This year’s CES in Las Vegas brought a turnaround in the semiconductor industry that has been rare in recent years. After a period of catching up, Intel seems to have regained technological pre-eminence, with direct implications for B2B purchasing strategies. “Blue” dominates the narrative with Panther Lake chips (Intel Core Ultra Series 3), manufactured using the Intel 18A technology process. This is the equivalent of 2nm technology, which its main competitor AMD has yet to bring to the mass market, offering instead a refreshed architecture in the form of the Ryzen AI 400.

    The response from OEMs was immediate and unequivocal. Lenovo stepped up its partnership with Intel, promoting the ‘Aura Edition’ line as exclusive to this architecture, particularly in the premium ThinkPad and ThinkBook segments. HP, on the other hand, has taken an agnostic stance, offering a choice between Intel, AMD and ARM architecture from Qualcomm within the same SKU on the EliteBook X G2. This is a pragmatic approach that shifts the burden of architecture choice directly to the enterprise customer.

    An interesting development is the apparent cooling of enthusiasm for ARM architecture in the Windows ecosystem. Despite the launch of cheaper Snapdragon X2 variants, Qualcomm has failed to dominate the conversation behind the scenes this year. The attention of the business sector, after a brief flirtation with alternatives, seems to be returning to the proven x86 architecture. ARM is instead enjoying spectacular success in the server and HPC segments, where Nvidia unveiled the Vera and Rubin processors, cementing its position in AI infrastructure.

    The event that may define B2B marketing for the coming quarters, however, is a change in Dell’s rhetoric. Kevin Terwilliger, head of product at Dell, has openly admitted that business customers do not make purchasing decisions based on the presence of ‘AI’ in a product name. The company has drastically reduced the use of this acronym in its new portfolio, including the reactivated XPS line. This is a sobering counterpoint to competitors such as MSI, which continues to experiment with complex naming like ‘Pro Max AI+’.

    Despite technological optimism, the spectre of cost hangs over the market. Specific prices in euros for entry-level configurations were missing in Las Vegas. Given the shortage of DDR5 memory and the rising cost of silicon wafers in the lowest lithographic processes, IT purchasing departments should prepare for the return of innovation to come at a high price.

  • AMD offensive: Helios platform and announcement of 1000-fold performance boost

    AMD offensive: Helios platform and announcement of 1000-fold performance boost

    Dr Lisa Su’s speech at CES 2026 was not just a standard product presentation. The presence on stage of representatives from OpenAI, Blue Origin and the Director of the White House Office of Science and Technology Policy clearly signals a shift in the company’s positioning.

    AMD is no longer just a component supplier, becoming a strategic pillar of the ‘Genesis Mission’ – the US initiative to advance science through AI. This sends a clear signal to the market: in the geopolitical technology race, Santa Clara takes a front-row seat.

    The main focus of the speech was the prediction that global computing power will exceed 10 yottaFLOPS in the next five years. AMD’s answer to this challenge is the Helios platform.

    This unified rack system, integrating Instinct accelerators, EPYC processors and Pensando networking solutions, is expected to offer performance of 3 exaFLOPS in a single rack. However, it is the 2027 announcements that have electrified the data centre sector. The upcoming Instinct MI500 accelerators, based on the CDNA 6 architecture and 2-nanometre process, are expected to deliver a thousand-fold increase in AI performance over current solutions, setting an aggressive new path forward for cloud infrastructure.
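    A quick back-of-the-envelope check puts those two figures side by side (the 10-yottaFLOPS and 3-exaFLOPS values are the ones quoted above; everything else is arithmetic):

```python
EXA = 10 ** 18
YOTTA = 10 ** 24

global_target_flops = 10 * YOTTA   # forecast global compute in five years
helios_rack_flops = 3 * EXA        # one Helios rack

# How many racks of today's density would that forecast correspond to?
racks_needed = global_target_flops / helios_rack_flops
print(f"{racks_needed:,.0f} racks")
```

    Roughly 3.3 million racks at current Helios density, which makes clear why a projected thousand-fold per-accelerator jump, rather than simply building more racks, is the lever AMD is reaching for.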

    An equally important battle is being fought over the ‘edge of the network’ and tools for developers. The introduction of the Ryzen AI Halo, a developer MiniPC with a 60 TFLOPS GPU, is an attempt to lower the entry threshold for AI engineers who need local computing power.

    Complementing this strategy are the new Ryzen AI 400 processors for consumer and business laptops, which will debut in offerings from leading manufacturers such as Dell and Lenovo as early as the first quarter of this year.

    The whole ecosystem is tied together by software, hitherto the Achilles’ heel of Nvidia’s competitors. The new ROCm 7.2, integrated with the popular ComfyUI, is seeing spikes in downloads, suggesting that developers are beginning to realistically adopt AMD’s open environment.

    The line-up is rounded off by the gaming segment, where the Ryzen 7 9850X3D processor is expected to compete effectively with Intel’s top chips thanks to 3D V-Cache technology.

  • AMD undercuts Intel, but the Santa Clara giant remains in the lead

    AMD undercuts Intel, but the Santa Clara giant remains in the lead

    Competition in the x86 processor market is gathering pace, with AMD consistently consolidating its position at Intel’s expense. Although Intel continues to dominate, controlling around 70 per cent of the market, the latest Mercury Research analysis confirms that AMD has effectively secured close to 30 per cent share for itself.

    AMD’s growth rate is not uniform across all segments, however. The company’s biggest successes have been in the desktop market, where its share grew by an impressive five percentage points to 33.6 per cent in the last quarter. It is much quieter in the mobile segment – here the shares of both players remained almost unchanged, and Intel even managed to record a token increase of 0.4 percentage points.

    The server market also looks interesting, as it compensated both manufacturers for a weaker quarter in the PC segment, probably caused by uncertainty over import duties. In data centres, AMD is also going from strength to strength, increasing its share by 3.5 percentage points and now controlling 27.8% of this strategic market.

    According to Mercury Research, AMD’s gains are partly due to the fact that the company is ‘delivering faster’, while Intel’s attention has been diverted away from entry-level processors. Still, the numbers don’t lie – Intel’s dominance is still undisputed, and AMD has a very long way to go to realistically threaten its leadership position.

    It seems, however, that Intel, perhaps dormant in recent years, has definitely awoken. The conglomerate, under the leadership of Pat Gelsinger, is undergoing a profound transformation, without hesitation cutting off divisions that are no longer central to the company’s strategy. The aim is to restore former operational efficiency.

    Mercury Research’s analysis focuses exclusively on the x86 duo. However, it is important to bear in mind the third player, the Arm architecture, which is already estimated to control around 10% of the total processor market and is increasingly daring to enter the game in the backyard hitherto reserved for Intel and AMD.

  • AMD throws down the gauntlet to Nvidia. Target: $100 billion from AI

    AMD throws down the gauntlet to Nvidia. Target: $100 billion from AI

    AMD shares gained 7% on Wednesday, increasing the company’s capitalisation by more than $26 billion. The reason for investors’ enthusiasm was the bold new strategic goal announced during the analyst day: to reach $100 billion in annual revenue from the data centre segment. This is a direct challenge thrown down to Nvidia, which currently dominates the red-hot AI accelerator market.

    AMD CEO Lisa Su estimates that the data centre chip market alone could reach $1 trillion by 2030. This is a forecast that includes general-purpose processors, networking chips and AI accelerators. AMD is not defenceless in this battle. The company is highlighting key partnerships, including with OpenAI and Oracle, which are already expected to generate significant revenue and open the door to talks with other hyperscale giants.

    The technological weapons in this battle are to be the next-generation MI400 chips and the Helios integrated system, which are expected to hit the market in 2026. AMD’s plan is to gain significant market share, as reflected in its internal forecasts. The company expects 60% compound annual growth rate (CAGR) in its data centre business and 35% growth across the company over the next three to five years.

    This ambition can also be seen in the target earnings per share (EPS) of $20. This is a bold statement, given that the LSEG consensus for 2025 is for earnings of just $2.68 per share.
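    The arithmetic behind those targets is straightforward to sanity-check. A hedged Python sketch (the $2.68 and $20 EPS figures and the 60% CAGR come from the article; the five-year horizon is an assumption, since management gave a three-to-five-year range):

```python
def project(base, cagr, years):
    """Compound a base figure at a constant annual growth rate."""
    return base * (1 + cagr) ** years

# What constant annual EPS growth gets from the $2.68 consensus to $20
# in an assumed five years?
eps_cagr = (20 / 2.68) ** (1 / 5) - 1
print(f"implied EPS growth: {eps_cagr:.1%}")   # ~49.5% per year

# Growth multiple implied by the stated 60% data-centre CAGR:
for years in (3, 5):
    print(years, round(project(1.0, 0.60, years), 2))
```

    In other words, the EPS target implicitly assumes roughly 50% compound annual growth, and the 60% data-centre CAGR multiplies today's revenue by about 4x over three years and 10x over five, which is why analysts call the targets aspirational.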

    Some analysts are taking a cautious approach to these announcements. Stacy Rasgon of Bernstein described the targets as “somewhat aggressive and aspirational”. He noted that ultimate success will depend on AMD’s ability to actually take market share with Helios and move from being a marginal AI player to a viable competitor. AMD executives are clearly going on the offensive in an attempt to change the market narrative.

    Meanwhile, market leader Nvidia appears unfazed by the challenge. On the day AMD’s plans were announced, Nvidia’s shares also recorded a quiet 1.5% rise.

  • Oracle and AMD join forces: new AI cloud with MI450 chips

    Oracle and AMD join forces: new AI cloud with MI450 chips

    Oracle and AMD are stepping up their collaboration, setting their sights on future generations of AI infrastructure. According to the companies’ announcement, Oracle will begin offering cloud services based on upcoming AMD MI450 chips – processors designed specifically for artificial intelligence workloads. The first 50,000 chips will hit Oracle’s data centres in the third quarter of 2026, with the scale of deployments expected to grow in subsequent years.

    This partnership is part of a growing market pressure: technology companies are racing to secure enough computing power to train ever larger AI models. Oracle – until now mainly associated with business software and databases – has been consistently rebuilding its cloud for AI, targeting the segment of customers looking for alternatives to AWS, Azure or Google Cloud. Its partnership with AMD gives it an advantage: the ability to offer high-performance ‘AI superclusters’ based on Helios designs, competitive with Nvidia solutions.

    For AMD, this is yet another strategic deal confirming its ambitions in the AI market, hitherto dominated by Nvidia. Following last week’s news of chip deliveries to OpenAI, the deal with Oracle reinforces the narrative that the MI450 is set to be a viable alternative to the H100 or B200. What’s more, AMD has been designing the MI450 in collaboration with OpenAI, which signals that the chips were being developed for generative models with a huge appetite for computing power.

    The market reacted immediately: the AMD share price rose more than 3% before the session, against a broader sector decline linked to concerns about US-China trade relations. Oracle saw a slight correction, which could be interpreted as a cool reaction by investors to the rising costs of infrastructure expansion.

    Hovering in the background is an even bigger storyline: OpenAI. According to reports, Sam Altman’s startup was expected to commit to buying up to $300 billion worth of Oracle’s computing power over five years – which would be one of the largest cloud deals in history. If confirmed, Oracle would become the infrastructure pillar of generative artificial intelligence.

  • OpenAI chooses AMD. AI processor contract worth tens of billions

    OpenAI chooses AMD. AI processor contract worth tens of billions

    The AMD and OpenAI partnership is more than just another hardware supply deal. It is a strategic reshuffle at the top of the artificial intelligence industry that positions AMD as a viable alternative to the dominant Nvidia and provides the ChatGPT developer with powerful resources for the future. The multi-year contract not only involves the supply of hundreds of thousands of AI chips, but also gives OpenAI the option to take up to 10% of AMD’s shares.

    For AMD, this is a transformative moment. The deal, which is expected to generate tens of billions of dollars in annual revenue, is the strongest endorsement yet of the competitiveness of their AI chips and software. The contract covers the deployment of chips with a total capacity of six gigawatts, starting in the second half of 2026. The first phase involves OpenAI building a one gigawatt data centre based on the upcoming MI450 series of processors. AMD estimates that the ripple effect from this deal could generate more than $100 billion in new revenue for the company over four years as other players follow the lead.

    From OpenAI’s perspective, the deal is a key part of its strategy to secure the computing power needed to train increasingly advanced models. Instead of relying solely on Nvidia, with whom it also has a giant contract, the company is diversifying its suppliers, which gives it greater flexibility and negotiating power. At the same time, the partnership with AMD does not change other plans, such as developing its own silicon chips or working with Microsoft.

    The most interesting element of the deal is its financial structure. AMD has granted OpenAI warrants that entitle it to purchase up to 160 million shares in the company at a symbolic 1 cent apiece. The activation of further tranches of warrants is contingent on the achievement of milestones, including AMD shares reaching a target price of up to $600. Such a mechanism makes OpenAI not just a customer, but a partner with a direct interest in AMD’s market success. It signals that the era of monopoly is coming to an end in the AI chip market and the time for real competition is beginning.
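    The mechanics of such penny warrants can be sketched in a few lines (the 160 million shares, 1-cent strike and $600 milestone are the figures reported above; the all-tranches-vested scenario is a simplifying assumption for illustration):

```python
def warrant_value(shares, market_price, strike=0.01):
    """Intrinsic value of warrants: shares * (market price - strike)."""
    return shares * max(market_price - strike, 0.0)

total_shares = 160_000_000   # up to 160M shares at a 1-cent strike

# If the final $600 milestone were reached and every tranche had vested:
value_at_600 = warrant_value(total_shares, 600.0)
print(f"${value_at_600 / 1e9:.1f}B of intrinsic value")
```

    At a near-zero strike, the warrant is effectively a free equity stake whose value tracks the share price one-for-one, which is why it aligns OpenAI's interests so tightly with AMD's market performance.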

  • AI accelerator market in Europe: digital sovereignty vs. Nvidia’s dominance

    AI accelerator market in Europe: digital sovereignty vs. Nvidia’s dominance

    The generative artificial intelligence (GenAI) revolution has created an insatiable demand for computing power, fundamentally changing data centre architectures. Traditional processors (CPUs), for decades the heart of computing, have become the bottleneck for large language models (LLMs) and other GenAI systems. In response to this challenge, a new class of specialised hardware was born: AI accelerators.

    The end of the CPU era and the birth of a new paradigm

    The problem with CPUs in the context of AI lies not in their speed, but in a fundamental architectural mismatch. Optimised for sequential execution of complex tasks, they have only a few powerful cores. Meanwhile, deep learning algorithms require massive parallel processing – performing trillions of simple operations simultaneously. This is a task for which graphics processing units (GPUs), equipped with thousands of smaller cores, are ideally suited.
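    The contrast can be illustrated with a toy example: the same dot product computed by one sequential worker and by several workers on independent chunks. Pure-Python threads gain no real speed here, so the sketch illustrates only the work decomposition that GPUs exploit at massive scale, not an actual speed-up:

```python
from concurrent.futures import ThreadPoolExecutor

def dot(a, b):
    """Sequential dot product -- one 'core' doing every multiply-add."""
    return sum(x * y for x, y in zip(a, b))

def dot_parallel(a, b, workers=4):
    """The same work split into independent chunks, GPU-style."""
    n = len(a)
    step = (n + workers - 1) // workers
    chunks = [(a[i:i + step], b[i:i + step]) for i in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(lambda c: dot(*c), chunks))

a = list(range(1000))
b = list(range(1000))
assert dot(a, b) == dot_parallel(a, b)   # identical result, partitioned work
```

    The key property is that each chunk is independent: no worker waits on another until the final sum. Deep learning workloads have this shape almost everywhere, which is exactly what thousands of small GPU cores are built to exploit.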

    Alongside GPUs, which have become the standard for model training, even more specialised units have emerged. Neural processing units (NPUs) are a broad category of chips designed from the ground up with AI in mind, prioritising energy efficiency, making them crucial for edge AI applications. Tensor processing units (TPUs), on the other hand, are Google’s proprietary ASICs, optimised for its software ecosystem and massive cloud computing.

    This paradigm shift is driving a market in Europe with huge potential. Valued at around €4.88 billion in 2024, the European AI accelerator market is expected to grow to nearly €43 billion by 2033, with an impressive compound annual growth rate (CAGR) of 27.4%.
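    Those three figures are mutually consistent, which is easy to verify (all numbers are taken from the market estimate above):

```python
base = 4.88          # EUR billion, 2024 valuation
cagr = 0.274         # 27.4% compound annual growth
years = 2033 - 2024  # nine-year horizon

# Compound the 2024 base forward to 2033.
projected = base * (1 + cagr) ** years
print(f"~EUR {projected:.1f}bn by 2033")   # ~EUR 43.1bn, matching the forecast
```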

    Unique European Drivers: Politics meets market demand

    The European accelerator market is shaped by a unique combination of bottom-up commercial demand and top-down strategic initiatives, which sets it apart from markets in the US or Asia.

    On the one hand, AI adoption is growing in key sectors such as healthcare, automotive and finance. Already 13.5% of businesses in the EU are using AI technologies, and the entire European AI market (software, hardware and services) is growing at a rate of more than 33% per year.

    On the other hand, the European Union is pursuing an ambitious programme to strengthen its digital and technological sovereignty. Geopolitical concerns and the desire for independence from non-EU suppliers have led to powerful investment mechanisms:

    • EU Chips Act: This initiative aims to mobilise more than €43 billion in public and private investment to double Europe’s share of global semiconductor production from 10% to 20% by 2030. Attracting investment to build advanced factories, such as Intel’s and TSMC’s plants in Germany, is crucial for future accelerator production in Europe.
    • AI Continent Action Plan: this €200 billion plan aims to create a sovereign, pan-European AI ecosystem. Its key element is the InvestAI initiative, which is expected to mobilise €20 billion to build 4-5 ‘AI Gigafactories’ – each equipped with more than 100,000 advanced AI chips.
    • EuroHPC and ‘AI Factories’: The European High Performance Computing Joint Undertaking (EuroHPC JU) is investing billions of euros to build a fleet of supercomputers. Around these, 13 ‘AI Factories’ are being built to democratise access to computing power for startups and SMEs, stimulating innovation and creating guaranteed demand for infrastructure.

    The competitive landscape: Nvidia’s dominance and the strategies of the contenders

    The data centre accelerator market is close to a monopoly. Nvidia controls around 98% of the global market in terms of units shipped, and its real advantage is its mature CUDA software ecosystem, used by 5 million developers. This creates a powerful lock-in effect, making it difficult for competitors to gain share.

    Nevertheless, the contenders are pursuing well thought-out strategies:

    • AMD: Positions itself as a major high-performance alternative. The Instinct MI300 series of accelerators is intended to compete with Nvidia’s offerings, with a key selling point being the open ROCm software platform, aimed at breaking the CUDA monopoly.
    • Intel: It is betting on price competition with Gaudi accelerators (to be 50% cheaper than Nvidia’s H100) and an open oneAPI ecosystem.
    • Google (TPU): It does not sell the chips directly, but uses them as a key differentiator for its cloud platform, offering an excellent performance-to-cost ratio for specific AI workloads.

    Against this backdrop, European players such as the UK’s Graphcore and France’s Blaize are also emerging, focusing on niches such as novel architectures (IPUs) or energy-efficient chips for Edge AI.

    The growth trilemma: Cost, energy and talent

    Despite the optimistic outlook, the European market faces three fundamental barriers that create a strategic trilemma for decision-makers.

    Cost and availability: The price of a single high-end accelerator, such as the Nvidia H100, is up to US$40,000, making building your own AI infrastructure prohibitive for most companies. Additionally, global supply chains are vulnerable to disruption and export controls, which threatens project continuity.

    Energy and ESG: Data centres dedicated to AI consume four to five times more energy than traditional ones. Data centre energy consumption in Europe is forecast to almost triple by 2030. This is at odds with the EU’s ambitious sustainability goals, such as the Energy Efficiency Directive, which imposes an obligation to reduce energy consumption.

    Talent: Europe is facing a critical shortage of AI and HPC professionals. The skills gap is slowing down innovation and preventing companies from effectively using even the infrastructure they already have, empowering global cloud providers.

    Future trends: From possession to access, from monolith to module

    Looking ahead to 2030, the market will be shaped by three key trends:

    • The dominance of the ‘Compute-as-a-Service’ model: Due to the aforementioned trilemma, most companies will not buy accelerators, but rent access to them. This model, pursued by both public ‘AI Factories’ and commercial cloud providers, transforms huge capital expenditure (CAPEX) into predictable operating costs (OPEX).
    • Software battle: The long-term structure of the market will depend on the success of open standards, such as ROCm and oneAPI, in breaking the dominance of CUDA. Avoiding dependence on a single vendor is a powerful motivator for the industry as a whole.

    • New hardware architectures: To overcome physical limitations, the industry is moving towards chiplets – smaller, specialised silicon dies combined into a single package. This allows for greater modularity and lower costs. In the longer term, the revolution could come from photonic computing, which uses light instead of electrons and promises orders-of-magnitude gains in throughput and energy efficiency.
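    The CAPEX-to-OPEX shift behind the Compute-as-a-Service model can be made concrete with a back-of-the-envelope calculation. The figures below are illustrative assumptions (the US$40,000 purchase price cited above, plus a hypothetical US$4/hour rental rate), not vendor quotes, and they ignore power, cooling and staff costs:

```python
# Illustrative only: all figures are assumptions, not vendor quotes.
CAPEX_PER_GPU = 40_000.0   # assumed purchase price of one high-end accelerator (USD)
CLOUD_RATE = 4.0           # assumed on-demand rental rate for a comparable GPU (USD/hour)

def breakeven_hours(capex: float, hourly_rate: float) -> float:
    """Hours of rented GPU time that cost as much as buying the card outright."""
    return capex / hourly_rate

hours = breakeven_hours(CAPEX_PER_GPU, CLOUD_RATE)
print(f"Renting matches the purchase price after {hours:,.0f} GPU-hours "
      f"(~{hours / (24 * 365):.1f} years of continuous use).")
```

    At these assumed rates, renting only overtakes buying after roughly 10,000 GPU-hours – more than a year of round-the-clock use – which is why bursty or exploratory workloads favour the rental model while sustained training farms still justify CAPEX.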

    Strategic lessons for technology leaders

    The European AI accelerator market is an arena where global technology competition meets unique political and regulatory ambitions. For technology and innovation directors, this means navigating a complex ecosystem.

    The key strategic question is shifting from “which accelerator to buy?” to “how to strategically access computing power?”. The answer requires balancing performance, cost, sovereignty and sustainability. Success in the GenAI era will not depend on simply having the latest hardware, but on the ability to intelligently use both public initiatives and private innovation to build a sustainable competitive advantage in a unique European market.

  • Nvidia invests in Intel – the king of revolution shakes hands with a sinking rival

    Nvidia invests in Intel – the king of revolution shakes hands with a sinking rival

    In Silicon Valley, there are alliances that seem natural, and those that shake the foundations of the entire industry. The collaboration between Nvidia and Intel announced yesterday undoubtedly falls into the latter category.

    This is no mere business deal; it is a strategic pact made by two former fierce rivals. We are witnessing a historic moment in which the leader of the AI revolution (Nvidia) shakes hands with the giant that almost slept through this revolution (Intel).

    The question we must ask ourselves goes far beyond corporate press releases: are we witnessing the birth of a synergy that will drive innovation, or rather the creation of a powerful duopoly that will cement the market for years and marginalise competition?

    An act of desperation or a masterstroke?

    To understand the significance of this alliance, it is necessary to look at the position from which each player is starting. Intel, once the undisputed king of silicon, has been struggling for years. Problems transitioning to smaller process nodes, repeated delays and intensifying competition have cost the company its leadership position, ceding ground to rivals such as Nvidia and Samsung.

    At a time of rapid growth in GPU-driven artificial intelligence, Intel’s dominance in the CPU segment has proved insufficient. From this perspective, the pact with Nvidia looks like a desperate attempt to get back into the highest stakes game.

    This is an acknowledgement that, without leading AI technology, Intel is unable to compete alone on the all-important innovation front.

    For Nvidia, on the other hand, this move is pure, calculated strategy. Jensen Huang’s company utterly dominates the data centre and AI accelerator segments. But its next goal is to conquer a market where Intel still holds hegemony: personal computers.

    By natively integrating its graphics chips (in the form of RTX chiplets) with Intel processors, Nvidia gains access to the vast x86 ecosystem. It is a shrewd move that lets it enter the AI PC segment through the front door, bypassing the need to build everything from scratch.

    Nvidia is not just buying $5bn of Intel shares; it is buying the decades of experience, customer base and distribution channels of its former rival.

    What does AMD say about this?

    Every great alliance creates not only winners but also losers. In this case, the company with the most to lose is obvious: AMD. Under Lisa Su’s leadership, AMD has done the near impossible, becoming a viable alternative to both Intel in the processor market (Ryzen series) and Nvidia in the graphics card market (Radeon series).

    The company deftly manoeuvred between the two giants, taking market share away from them.

    Now, however, AMD faces a nightmare scenario – a battle against a united front. Imagine laptops and workstations in which Intel’s processor and Nvidia’s graphics chip are integrated at the silicon level.

    This synergy can offer performance and power efficiency that standalone AMD products will find extremely difficult to compete with. It’s no longer a battle on two separate fronts; it’s a clash with an emerging technological behemoth that controls key elements of the PC platform.

    From competition to a new order

    In the IT industry, there is talk of the phenomenon of ‘coopetition’ – cooperation between competitors in specific areas. However, the Nvidia and Intel deal appears to be something much deeper. This is not a temporary project, but the foundation for a new market order.

    The aim is to create a hardware platform that is so integrated and optimised that it becomes the de facto standard for anyone serious about artificial intelligence on PCs and servers.

    The long-term consequences could be devastating for market diversity. If the Nvidia-Intel duo dominates the AI PC segment, software manufacturers will begin to optimise their applications specifically for this architecture, further marginalising alternatives.

    We will be in a situation where real choice will be limited and innovations outside this ecosystem may find it extremely difficult to break through to a mass audience.

    A golden cage for the consumer?

    Undoubtedly, in the short term, this collaboration will produce exciting products. Computers will become more powerful and AI-based functions more accessible. But in the long term, we risk entering a ‘golden cage’ – an ecosystem so perfect and integrated that we will not want or be able to leave it.

    History teaches us that when competition weakens, innovation suffers and prices rise.

    Nvidia’s surprising move is not only strategically astute. It shows that the IT industry pursues growth at all costs, and that grabbing as much of the market as possible remains the overriding priority. The move may save Intel from the worst; the question is at what cost.

    The VMware and Broadcom merger showed that the imposition of a new order can be painful for markets. And the growing concentration of IT power is, show trials aside, not effectively constrained by state bodies. On the contrary: today governments treat technological monopolists as an element of geopolitical advantage, which in the age of the digital race is logical, albeit short-sighted and potentially damaging in the long term.

  • AI accelerator market: NVIDIA, AMD, Intel – the battle for supremacy

    AI accelerator market: NVIDIA, AMD, Intel – the battle for supremacy

    At North Carolina State University, robotic arms precisely mix chemicals while streams of data flow through systems in real time. This ‘self-driving laboratory’, an AI-powered platform, discovers new materials for clean energy and electronics not in years, but in days.

    Collecting data 10 times faster than traditional methods, it observes chemical reactions like a full-length film rather than a single snapshot. This is not science fiction; it is the new reality of scientific discovery.

    This incredible leap is being driven by a new kind of computing engine: specialised AI accelerator chips. These are the ‘silicon brains’ of the revolution. Moore’s law, the old paradigm of doubling computing power in general-purpose systems, has given way to a new law of exponential progress, driven by massive parallel processing.
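    The limits of this parallel paradigm are usefully captured by Amdahl’s law, which bounds the overall speed-up when only part of a workload parallelises. A minimal sketch, with an assumed (illustrative) 99%-parallel workload:

```python
def amdahl_speedup(parallel_fraction: float, workers: int) -> float:
    """Amdahl's law: overall speed-up when a fraction of the work parallelises."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / workers)

# Even with 100,000 parallel units, a 1% serial residue caps the speed-up near 100x.
for workers in (10, 1_000, 100_000):
    print(f"{workers:>7} workers -> {amdahl_speedup(0.99, workers):6.1f}x")
```

    This is why accelerator design focuses as much on removing serial bottlenecks – memory bandwidth, interconnect, data movement – as on adding more cores.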

    The crux of the story, however, is more complex. While AI algorithms are the software of a new scientific era, the physical hardware – the AI chips – has become the fundamental enabler of progress and, paradoxically, also its biggest bottleneck.

    The ability to discover a new life-saving drug or design a more efficient solar cell is today inextricably linked to a hyper-competitive, multi-billion dollar corporate arms race and a fragile geopolitical landscape in which access to these chips is a tool of global power.

    Anatomy of a boom: who is building silicon brains?

    The boom in generative artificial intelligence has created an insatiable demand for computing power. It’s not just chatbots, but foundational models that underpin a new wave of scientific research. This demand has transformed a niche market into a global battlefield for dominance.

    Reigning champion: NVIDIA

    NVIDIA has established itself as a key architect of the AI revolution, as evidenced by its stunning financial results. The data centre division, the heart of the company’s AI business, reported revenues of $41.1bn in a single quarter, up 56% year-on-year.

    This dominance is based on successive generations of powerful architectures such as Hopper and now Blackwell, which are core hardware for technology giants such as Microsoft, Meta and OpenAI.

    An energetic contender: AMD

    AMD is positioning itself not as a distant number two, but as a serious and fast-growing competitor. The company reported record data centre revenue of US$3.5bn in Q3 2024, a massive 122% year-on-year increase, driven by strong adoption of its Instinct series GPU accelerators.

    Significantly, major cloud service providers and companies such as Microsoft and Meta are actively deploying MI300X accelerators from AMD, signalling a desire to have a viable alternative to NVIDIA. The company forecasts that its data centre GPU revenue will exceed US$5bn in 2024.

    The gambit of the historical giant: Intel

    Intel’s situation presents a strategic challenge. Although the company claims that its Gaudi 3 accelerators offer a better price/performance ratio than NVIDIA’s H100, it is struggling to gain market share.

    Intel missed its $500m revenue target for Gaudi in 2024, citing slower-than-expected adoption due to issues with transitioning between product generations and, crucially, challenges with ‘ease of use of the software’.

    Analysis of this data reveals deeper trends. Firstly, the AI hardware market is not just a race for components, but a war of platforms. Intel’s difficulties with software point to the real battlefield: the ecosystem. NVIDIA’s CUDA platform has more than a decade’s head start, creating a deep ‘moat’ of developer tools, libraries and expertise.

    Competitors are not just selling silicon; they need to convince the whole world of science and development to learn a new programming language. Secondly, the AI boom is leading to vertical integration of the data centre.

    Not only does NVIDIA dominate the GPU market, but following its acquisition of networking company Mellanox in 2020, it has also become the leader in Ethernet switches, recording sales growth of 7.5x year-on-year.

    NVIDIA is no longer just selling chips; it is selling a complete, optimised ‘AI factory’ design, creating an even stronger lock-in effect.

    From lab to reality: scientific breakthroughs powered by silicon

    This unprecedented computing power is fueling a revolution in the way we do research, leading to breakthroughs that seemed impossible just a few years ago.

    The medicine of tomorrow

    The traditional drug discovery process, which takes 10 to 15 years, is being dramatically shortened. DeepMind CEO Demis Hassabis predicts that AI will reduce this time to “a matter of months”.

    Isomorphic Labs, a subsidiary of DeepMind, is using AI to model complex biological systems and predict drug-protein interactions. Researchers at Virginia Tech have developed an AI tool called ProRNA3D-single that creates 3D models of protein-RNA interactions – key to understanding viruses and neurological diseases such as Alzheimer’s.

    Moreover, a new tool from Harvard, PDGrapher, goes beyond the ‘one target, one drug’ model. It uses a graph neural network to map the entire complex system of a diseased cell and predicts combinations of therapies that can restore it to health.

    High-resolution climate

    In the past, accurate climate modelling required a supercomputer. Today, AI models such as NeuralGCM from Google can run on a single laptop. This model, trained on decades of weather data, helped predict the arrival of the monsoon in India months in advance, providing key forecasts to 38 million farmers.

    A new AI model from the University of Washington is able to simulate 1,000 years of Earth’s climate in just one day on a single processor – a task that would take a supercomputer 90 days.

    Companies like Google DeepMind (WeatherNext), NVIDIA (Earth-2) and universities like Cambridge (Aardvark Weather) are building fully AI-driven systems that are faster, more efficient and often more accurate than traditional models.

    Alchemy of the 21st century

    As mentioned at the outset, AI is creating autonomous labs that accelerate materials discovery by a factor of ten or more. The paradigm shifts from searching existing materials to generating entirely new ones.

    AI models, such as MatterGen from Microsoft, can design new inorganic materials with desired properties from scratch. This ability to ‘reverse engineer’, where scientists identify a need and AI proposes a solution, has been the holy grail of materials science.

    These examples illustrate a fundamental change in the scientific method itself. The computer has ceased to be merely a tool for analysis; it has become an active participant in the generation of hypotheses. The role of the scientist is evolving into a curator of powerful generative systems.

    This accelerates the discovery cycle exponentially and allows scientists to explore a much larger ‘problem space’ than was ever possible for humans.

    Geopolitical storm and a new division of the world

    As the importance of these silicon brains grows, they are becoming the most valuable strategic resource of the 21st century – the new oil, crucial for economic competitiveness and scientific leadership.

    US strategy: “small garden, high fence”

    The US has implemented a ‘small garden, high fence’ strategy, introducing export controls aimed at slowing China’s ability to develop advanced AI. These restrictions apply not only to the chips themselves (such as NVIDIA’s H100), but also to the hardware required to manufacture them (from companies such as the Dutch ASML).

    This hit the Chinese semiconductor industry in the short term, causing equipment shortages and ‘crippling’ its production capacity.

    China’s determined response

    China’s response has been multi-pronged: massive investment in its domestic semiconductor industry and the use of its own economic leverage by restricting exports of key rare earth elements. The case study is Huawei.

    Despite being crippled by sanctions, the company has developed its own Ascend line of AI chips (910B/C/D), which are now seen as a viable alternative to NVIDIA products in China.

    In response, the US government has toughened its stance, declaring that the use of these chips anywhere in the world violates US export controls, escalating the technological divide.

    A study by Oxford University reveals a harsh reality: advanced GPUs are heavily concentrated in just a few countries, mainly in the US and China. The US leads the way in access to state-of-the-art chips, while much of the world is in ‘computing deserts’.

    This situation leads to unintended consequences. US export controls, designed to slow China down, have become an ‘inadvertent accelerator of innovation’ for China, forcing Beijing to build a completely independent technology stack.

    A decade from now, the world may have two completely separate, incompatible AI stacks, fundamentally dividing global research.

    The cloud as the great equaliser?

    There is a powerful counter-argument: cloud computing democratises access to elite AI. Platforms such as Amazon Web Services (AWS), Microsoft Azure and Google Cloud offer AI-as-a-Service (AIaaS), allowing a university or startup to rent the same powerful GPUs that OpenAI uses.

    The cloud giants offer rich ecosystems. AWS provides services such as SageMaker for building models and Bedrock for access to leading foundation models. Google Cloud promotes democratisation with tools such as Vertex AI, designed for minimal complexity.

    Microsoft Azure is tightly integrating AI into its ecosystem through Azure AI Foundry, offering access to more than 1,700 models and running dedicated ‘AI for Science’ research labs.

    However, the promise of access must be set against the harsh reality of cost. Training a state-of-the-art model is prohibitively expensive, with estimates as high as USD 78 million for GPT-4 and USD 191 million for Gemini Ultra. This leads to a ‘two-tier democracy’ in AI research.

    On the one hand, any researcher with a grant can access world-class AI tools. This is a democratisation of application. On the other hand, the ability to train a new large-scale foundational model from scratch remains the exclusive domain of a handful of actors: the cloud providers themselves and their key partners.

    This is the centralisation of creation. The cloud ‘democratises’ AI in the same way that a public library democratises access to books. Anyone can read them, but only a few have the resources to write and publish them.

    A future written in silicon

    The breathtaking pace of scientific discovery in medicine, climatology and materials science is a direct consequence of the massive industrial and geopolitical mobilisation around a single technology: the AI accelerator.

    Progress has become fragile and deeply interdependent. Scientific breakthrough is no longer just a function of a brilliant mind. It now also depends on the quarterly financial reports of NVIDIA and AMD, the trade policies enacted in Washington and Beijing, the stability of the supply chain passing through Taiwan and the pricing models of AWS, Google and Microsoft.

    We have entered an era where the future is literally written in silicon. The great challenges of our time – curing disease, fighting climate change, creating a sustainable future – will be solved with these new tools.

    But who will be able to wield them, and for what purpose, remains the most important and unresolved question of the 21st century. The next great scientific revolution will be televised live, but the rights to broadcast it are currently being negotiated in the boardrooms of corporations and the corridors of global power.

  • IBM and AMD join forces. Goal: Quantum supercomputers

    IBM and AMD join forces. Goal: Quantum supercomputers

    Technology giants IBM and AMD have announced a strategic partnership to integrate the power of quantum computing into classical supercomputers.

    The collaboration will focus on creating hybrid architectures that combine IBM’s leadership in quantum technology with AMD’s expertise in high performance computing (HPC) and AI accelerators.

    The partnership is expected to lead to open, scalable platforms that could redefine the future of advanced computing. The idea behind this project is to create so-called quantum-centric supercomputers.

    In such a model, quantum processors will operate in tandem with a classical HPC infrastructure driven by CPUs, GPUs and FPGAs from AMD.

    The hybrid concept assumes that complex computational problems will be broken down into parts and solved by the technology that is best suited for this.

    For example, a quantum computer could tackle the simulation of the behaviour of atoms and molecules at the quantum level – a task not feasible for classical machines – while supercomputers based on the AMD architecture would analyse huge result data sets and support processes using artificial intelligence.

    Such synergies are expected to allow real-world problems to be solved in areas such as drug and material discovery, logistics or the optimisation of complex systems at a scale and speed not previously possible.
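    The division of labour described above can be sketched as a classical optimisation loop that repeatedly calls a quantum co-processor for the one sub-task it handles best. In the minimal sketch below, the quantum evaluation is a stand-in stub (a simple trigonometric energy landscape) – a real hybrid workflow would replace it with a circuit execution through a framework such as Qiskit; all names and values are illustrative:

```python
import math

def quantum_energy(theta: float) -> float:
    # Stand-in for a quantum processor evaluating a molecule's energy for a
    # given circuit parameter; in a real hybrid workflow this would be a
    # circuit execution on quantum hardware (e.g. via Qiskit primitives).
    return math.cos(theta) + 0.1 * math.cos(3 * theta)

def classical_minimise(f, lo=0.0, hi=2 * math.pi, steps=1000):
    # Classical HPC side: sweep a parameter grid and keep the best result.
    best_theta = min((lo + i * (hi - lo) / steps for i in range(steps + 1)), key=f)
    return best_theta, f(best_theta)

theta, energy = classical_minimise(quantum_energy)
print(f"best parameter ~{theta:.3f}, estimated ground-state energy ~{energy:.3f}")
```

    The point of the pattern is the tight loop: the classical side proposes parameters and digests results at HPC scale, while the quantum side evaluates only the piece that is intractable classically.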

    The companies are exploring how to integrate AMD’s technologies with IBM’s quantum systems to accelerate a new class of algorithms.

    One of the key aspects of the collaboration is expected to be the use of AMD hardware for real-time quantum error correction – a fundamental challenge on the road to building stable, fault-tolerant quantum computers.

    The first demonstration showing how IBM’s quantum systems work with AMD technology is planned for later this year. The partners also intend to develop open-source ecosystems, such as Qiskit, to facilitate the creation of algorithms for new hybrid supercomputers.

    These activities are part of IBM’s broader strategy, which already includes similar integrations with the Fugaku supercomputer in Japan and collaborations with the likes of Cleveland Clinic and Lockheed Martin.