Tag: Intel

  • Apple is looking for an alternative to TSMC. Talks with Intel and Samsung

    Apple has entered into preliminary talks with Intel and Samsung Electronics over the potential production of its core processors. According to reports from Bloomberg, executives from the Cupertino giant have already visited Samsung’s Texas factory and held independent consultations with Intel. Although negotiations are at an early stage and have not translated into concrete orders, the move is aimed at creating an alternative to Taiwan’s TSMC. The decision comes in the shadow of Tim Cook’s warnings about supply constraints on advanced chips, which have negatively impacted iPhone sales. The situation is compounded by the fact that Apple’s upcoming smartphone processors use technology shared with its most coveted AI chips.

    Apple’s actions lead to a clear conclusion. The market’s deep dependence on a single supplier, which TSMC has become, poses serious operational risks, especially in an era of massive demand for artificial intelligence hardware that is drastically shrinking available capacity. At the same time, Apple’s scepticism about the reliability standards and scale of alternative suppliers exposes a brutal truth: TSMC’s technological and logistical advantage creates a barrier that competitors cannot quickly overcome.

    The strategic need to review purchasing processes in the high-tech sector is worth noting. Business leaders should plan for long-term shortages of capacity on state-of-the-art lithography nodes and treat diversification not as a fallback option but as a permanent part of strategy. It is advisable to develop closer collaboration with alternative manufacturing partners early on, in the design and R&D phase. Such an approach minimises technological risk and makes the hardware architecture more flexible, effectively securing business continuity in the face of further supply crises.

  • Intel is back in the game – results above expectations and massive share gains

    After years of strategic drift and management missteps, Intel under Lip-Bu Tan is beginning to prove that its turnaround plan is more than just aggressive cost-cutting. Its latest second-quarter revenue guidance, settling in at $14.3 billion, not only beat Wall Street’s expectations, but triggered a euphoric 19 per cent rise in share value. This signals that the former Silicon Valley icon has found its path in a world dominated by artificial intelligence.

    A strategic shift towards CPUs and AI agents

    Key to Intel’s optimism is a paradigm shift in the data centre sector. While the first phase of the AI boom undeniably belonged to Nvidia’s GPUs, used to train powerful models, the market is now entering the deployment (inference) phase. This is where Intel’s CPUs are regaining relevance. In an architecture based on autonomous AI agents, requiring advanced reasoning and handling complex workloads, traditional CPUs are proving to be an indispensable part of the infrastructure. Lip-Bu Tan makes it clear that this demand is not just wishful thinking, but a real trend coming from the major cloud providers.

    Partnership with Musk as foundation of foundry

    The biggest reputational and technological victory of recent days, however, is securing Tesla as a key customer for the upcoming 14A technology process. Elon Musk’s participation in the Terafab project is a massive credibility boost for Intel’s manufacturing business (Intel Foundry). The partnership aims to create next-generation processors for robotics and data centres, directly challenging TSMC’s dominance. While financial details remain confidential, the strategic alliance with players such as Musk, Nvidia and SoftBank gives Intel the fuel it needs to transform itself into a modern, contract chip foundry.

    A risky road to 2030

    Despite its financial success in the first quarter, where adjusted earnings per share were 29 cents, Intel is still treading on thin ice. The transformation from ‘old giant’ to ‘nimble foundry athlete’ requires not only breaking through manufacturing bottlenecks, but also maintaining the pace of innovation in the face of increasing competition from AMD and ARM. For investors, however, the current valuation may be an attractive entry point. If Intel successfully manages demand for silicon in the coming robotics era, today’s ‘high-stakes gamble’ could end with the company returning to the throne of technological empire.

  • The Lip-Bu Tan effect: Is Intel finally going out on a limb?

    After years of structural problems and strategic missteps, Intel seems to finally be catching the wind in its sails. Investors, who have watched the tech giant’s melting lead anxiously for the past quarters, are beginning to believe in CEO Lip-Bu Tan’s turnaround plan. The numbers speak for themselves: the company’s shares are up an impressive 84 per cent in 2025, outclassing the benchmark semiconductor index, which has gained 42 per cent in that time.

    The foundation for this optimism, however, is not just market speculation, but real changes to the capital and operating structure. Strategic cash injections – $5 billion from Nvidia and $2 billion from SoftBank, backed by US government commitment – proved crucial. This gave Tan the necessary financial flexibility to combat the ‘bloated management structure’ and accelerate the transformation of the manufacturing model. The market responded enthusiastically, with at least ten brokerages raising their recommendations on the company in the past two months.

    Data centres remain the driving force behind the results. According to LSEG data, Intel is expected to report a more than 30 per cent jump in revenue in this segment, reaching $4.43 billion. Paradoxically, the AI boom, which initially pushed Intel onto the defensive, is now stimulating demand for its traditional server processors, which are needed to work alongside competitors’ GPUs. Analysts are even predicting double-digit price increases for server processors in 2026, heralding improved margins in the long term.

    The picture is not without its cracks, however. Rebuilding PC market position is still a challenge. Intel is losing share to AMD and the Arm architecture, and the global rise in memory prices – which account for up to 30 per cent of the material cost of a PC – could chill demand for new laptops. UBS analysts even forecast a 4 per cent decline in PC shipments in 2026.

    However, the biggest test for Tan’s strategy remains production in 18A lithography. Although the company has started shipping ‘Panther Lake’ chips made in its own factories, yield rates are still at levels that limit wide availability to external customers such as Broadcom and Nvidia. The pressure on profitability is evident – adjusted gross margin is expected to fall to 36.5 per cent. Intel therefore faces a clear choice: it must prove that it can produce cutting-edge chips not only for itself, but also for the market, before investor confidence runs out.

  • Physics versus marketing. What do you really gain by investing in 1.8nm and 3nm processors?

    Intel is bringing out the heavy guns in the form of third-generation Core Ultra processors, known as Panther Lake, which are based on 18A, or 1.8 nanometre, technology. On the other side of the market barricade is AMD with its Ryzen chips, baked in TSMC’s Taiwanese factories using a 3nm process. On paper, Intel’s advantage seems crushing, suggesting a technology almost half the size and more modern. From a CFO’s perspective, however, this difference may amount to little more than a rounding error. In a world where ‘nanometre’ has become a brand rather than a measurement, business must learn to look at what really drives performance, ignoring the labels on the boxes.

    When IT managers look at the specifications of new laptops or servers, their gaze naturally goes to the numbers, because in the technology industry, smaller usually means better, faster and more economical. Manufacturers are well aware of this, which is why the arms race in the semiconductor sector has moved from the physics labs to the marketing departments. To make an informed purchasing decision for 2025-2026, you need to understand where the engineering ends and the wordplay begins.

    The grand illusion of the nanometre

    For decades, the IT industry has operated with a simple and understandable currency. Back in 1995, when we talked about the 350 nm technology process, it meant that the gate of a transistor on a silicon wafer was actually 350 nanometres long. The engineer and the salesman spoke the same language, and the node name was a direct reflection of physical reality. However, this order broke down in the late 1990s with the introduction of new technologies for building microtransistors, which broke the direct link between the node name and the physical dimension of the components.

    Today, names such as ‘Intel 4’, ‘18A’ (meaning 18 angstroms) or ‘TSMC N3’ are predominantly trade names. Treating them as a technical measure of length is a mistake that can lead to misleading business conclusions. It is a situation analogous to the automotive market, where the model designation of a car, for example the BMW 330, no longer necessarily denotes a three-litre engine. The number now serves to position the product in the range, rather than to describe its technical parameters precisely.

    For business, this means that the approach to analysing offerings needs to change. The fact that one processor is labelled ‘1.8 nm’ and another ‘3 nm’ does not automatically mean that the former is physically much smaller. In fact, the differences may be minimal and, in extreme cases, the packing density relationship may even be the opposite of what the numbers suggest.

    The hard currency of silicon

    Since nanometres are conventional, an informed investor or IT manager should look at other metrics. If we look under the hood of Panther Lake processors or the latest Ryzen processors, we find objective parameters that PR departments are reluctant to talk about, but which are crucial for engineers. These are, first and foremost, Gate Pitch, which is the minimum distance between individual transistors, and Metal Pitch, denoting the minimum distance between the copper paths connecting these components.

    Analysis of this hard data leads to surprising conclusions. Comparing the current generation of processes, it appears that the Intel 4 technology and the competing TSMC N4 have almost identical physical characteristics, with a gate pitch oscillating between 50 and 51 nanometres. Despite the different trade names, the packing density of the technologies is very similar. The future looks even more interesting, with Intel promoting an 18A process suggesting 1.8 nm, while TSMC is preparing to implement a 2 nm process. Paradoxically, according to many technical analyses, it is the Taiwanese ‘2 nm’ that may offer higher transistor density than the US solution. Intel’s naming suggests leadership, but in practice the two giants are neck and neck, and their nodes should deliver broadly comparable real-world performance.
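The pitch arithmetic above can be sketched in a few lines: to a first order, transistor density scales with the inverse of gate pitch times metal pitch. The pitch values below are illustrative assumptions for the sake of the calculation, not vendor-published figures.

```python
# Transistor density scales roughly with 1 / (gate pitch x metal pitch).
# Pitch values are illustrative assumptions, not vendor-confirmed specs.

def relative_density(gate_pitch_nm: float, metal_pitch_nm: float) -> float:
    """Approximate transistors per square micron (first-order estimate)."""
    return 1e6 / (gate_pitch_nm * metal_pitch_nm)

nodes = {
    "Intel 4": (50.0, 30.0),  # (gate pitch nm, metal pitch nm) - assumed
    "TSMC N4": (51.0, 28.0),  # assumed
}

for name, (gp, mp) in nodes.items():
    print(f"{name}: ~{relative_density(gp, mp):.0f} transistors/µm²")
```

With numbers this close, the marketing labels (‘1.8 nm’ vs ‘3 nm’) tell you almost nothing that this two-line calculation would not contradict.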

    Physics translates into costs

    Although the labels are confusing, the technological advances are real and central to total cost of ownership (TCO). Regardless of the nomenclature, the drive towards denser transistor packing is driven by the inexorable laws of physics, as a smaller transistor with a shorter path between source and drain requires a lower voltage to switch its logic state. For the company, this translates directly into energy efficiency and thermal performance.

    The chip, made using a newer, denser process, uses less power for the same load. On the scale of a single laptop, this means an extra hour of battery life during a business trip, while on the scale of a data centre, it translates into thousands of zlotys of savings on electricity bills. The thermal aspect is equally important, as less power consumption means less heat generated. This allows the processors to run at higher frequencies without the risk of thermal throttling, ensuring more stable operation of demanding applications. Therefore, Intel Panther Lake will be inherently better than its predecessor not because of the name ’18A’, but because the engineers have actually improved the physical structure of the chip, which is also true for AMD using TSMC improvements.
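The voltage argument follows from the dynamic-power relation P ≈ C·V²·f: because voltage enters squared, even a modest supply-voltage drop pays off disproportionately. A minimal sketch with made-up capacitance, voltage and clock figures:

```python
# Dynamic switching power is proportional to C * V^2 * f.
# All inputs below are invented for illustration.

def dynamic_power(c_farads: float, v_volts: float, f_hz: float) -> float:
    return c_farads * v_volts**2 * f_hz

old = dynamic_power(1e-9, 1.0, 3e9)  # older node at 1.0 V
new = dynamic_power(1e-9, 0.9, 3e9)  # denser node at 0.9 V, same clock

print(f"power saved: {1 - new / old:.0%}")  # V^2 scaling -> ~19% saving
```

A 10 per cent voltage cut yields roughly a 19 per cent power saving at the same frequency, which is exactly the battery-life and cooling headroom described above.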

    The strategic trap of the single supplier

    There is another element of business risk in this technological jigsaw puzzle, related to incompatibility. Intel’s, TSMC’s and Samsung’s manufacturing processes have diverged dramatically, with each giant using different chip production methods, deploying technologies such as FinFET or RibbonFET at different times. This means that chip designers such as AMD and NVIDIA are firmly tied to their chosen factory and cannot move production to a competitor overnight. Adapting a design to another factory is a process that takes up to a year and incurs huge costs. When choosing a hardware platform for a company, decision makers are therefore choosing not just a processor, but the entire supply chain, where the stability of the manufacturing partner becomes a strategic factor, more important than the marketing name of a nanometre.

    We are approaching the point where comparing processors solely on the basis of lithography becomes pointless. Intel Panther Lake and the upcoming Ryzen generations will be powerful chips, but their value to business is not based on the labels on the box. When planning infrastructure purchases, the key indicator should be the performance-per-watt ratio. It is this parameter that determines whether an investment in new hardware will translate into real productivity gains and reduced operating costs for the business.
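The closing recommendation is easy to operationalise: divide a benchmark score by measured package power and compare candidates on that ratio. A toy example with invented scores and wattages:

```python
# Performance per watt = benchmark score / measured package power.
# Scores and wattages below are invented inputs for illustration.

def perf_per_watt(score: float, watts: float) -> float:
    return score / watts

candidates = {
    "laptop_a": (12000, 28.0),  # lower score, frugal power draw
    "laptop_b": (13500, 45.0),  # higher score, much hungrier
}

best = max(candidates, key=lambda name: perf_per_watt(*candidates[name]))
print(best)  # laptop_a: the slower chip wins on efficiency
```

Note that the chip with the higher absolute score loses here, which is the whole point of ranking by efficiency rather than by headline performance or node name.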

  • Intel regains the initiative and business says ‘check’ to AI hype. Key takeaways from CES 2026

    This year’s CES in Las Vegas brought a turnaround in the semiconductor industry that has been rare in recent years. After a period of catching up, Intel seems to have regained technological pre-eminence, with direct implications for B2B purchasing strategies. “Blue” dominates the narrative with Panther Lake chips (Intel Core Ultra Series 3), manufactured using the Intel 18A technology process. This is the equivalent of 2nm technology, which its main competitor AMD has yet to bring to the mass market, offering instead a refreshed architecture in the form of the Ryzen AI 400.

    The response from OEMs was immediate and unequivocal. Lenovo stepped up its partnership with Intel, promoting the ‘Aura Edition’ line as exclusive to this architecture, particularly in the premium ThinkPad and ThinkBook segments. HP, on the other hand, has taken an agnostic stance, offering a choice between Intel, AMD and ARM architecture from Qualcomm within the same SKU on the EliteBook X G2. This is a pragmatic approach that shifts the burden of architecture choice directly to the enterprise customer.

    An interesting development is the apparent cooling of enthusiasm for ARM architecture in the Windows ecosystem. Despite the launch of cheaper Snapdragon X2 variants, Qualcomm has failed to dominate the conversation behind the scenes this year. The attention of the business sector, after a brief flirtation with alternatives, seems to be returning to the proven x86 architecture. ARM is instead enjoying spectacular success in the server and HPC segments, where Nvidia unveiled the Vera and Rubin processors, cementing its position in AI infrastructure.

    The event that may define B2B marketing for the coming quarters, however, is a change in Dell’s rhetoric. Kevin Terwilliger, head of product at Dell, has openly admitted that business customers do not make purchasing decisions based on the presence of ‘AI’ in a product name. The company has drastically reduced the use of this acronym in its new portfolio, including the reactivated XPS line. This is a sobering counterpoint to competitors such as MSI, which continues to experiment with complex naming like ‘Pro Max AI+’.

    Despite technological optimism, the spectre of cost hangs over the market. Specific prices in euros for entry-level configurations were missing in Las Vegas. Given the shortage of DDR5 memory and the rising cost of silicon wafers in the lowest lithographic processes, IT purchasing departments should prepare for the return of innovation to come at a high price.

  • Intel Core Ultra Series 3: End of Hyper-Threading and debut of 18A processor

    This year’s CES in Las Vegas set the stage for one of the most important tests in Intel’s recent history. The presentation of the Core Ultra Series 3 chips, known by the codename Panther Lake, is not only a refresh of the product portfolio, but above all a demonstration of the operational maturity of the 18A technology process.

    Manufactured in Intel’s own facilities, the chips, based on 18 angstrom lithography (equivalent to 2 nm at TSMC), are intended to be a technological answer to the dominance of Asian foundries, offering transistors with higher density and energy efficiency.

    The new architecture brings significant changes to the silicon design. Intel has decided to abandon multi-threading (Hyper-Threading) technology altogether. Instead, the new laptop chips are based on a physical combination of performance (P), efficiency (E) and low-power (LPE) cores, where each core supports exactly one thread.

    The minimum TDP is set at 25 watts, but the flexibility of the power configuration means that the final performance of the laptop will largely depend on the engineering decisions of the OEMs.

    The model range itself is becoming increasingly labyrinthine for the sales channel. A distinction has been made between standard chips and those with an ‘X’ suffix, which signals the presence of the more powerful Intel Arc Pro B390 graphics chip.

    Importantly for integrators, individual models vary drastically in the number of PCIe lanes available, which can complicate the design of motherboards for specific configurations with discrete graphics cards.

    In the background of the technology launch, the political and marketing context resonates clearly. Intel is heavily emphasising the US pedigree of the new processors – from design to manufacturing – a clear nod to the new US administration. From a usage perspective, despite the presence of NPUs in all models, the leap in performance for AI tasks seems evolutionary rather than revolutionary.

    While the ‘AI PC’ slogan continues to dominate the marketing message, the real value of Panther Lake will be verified by the market at the end of the month, when the first devices hit shop shelves. The key for the IT industry, however, will be not so much the AI slogan itself, but to confirm whether the 18A process actually allows Intel to return to the performance throne.

    The Intel Core Ultra Series 3 family comprises the following models at launch:

    Processor     | Cores (P+E+LPE) | GHz (max.) | GPU cores | NPU TOPS | PCIe lanes | TDP max. (W)
    --------------|-----------------|------------|-----------|----------|------------|-------------
    Ultra X9 388H | 16 (4+8+4)      | 5.1        | 12        | 50       | 12         | 65.8
    Ultra 9 368H  | 16 (4+8+4)      | 4.9        | 4         | 50       | 20         | 65.8
    Ultra X7 368H | 16 (4+8+4)      | 5.0        | 12        | 50       | 12         | 65.8
    Ultra 7 366H  | 16 (4+8+4)      | 4.8        | 4         | 50       | 20         | 65.8
    Ultra 7 365   | –               | 4.8        | 4         | 49       | 12         | 55
    Ultra X7 358H | 16 (4+8+4)      | 4.8        | 12        | 50       | 12         | 65.8
    Ultra 7 365H  | 16 (4+8+4)      | 4.7        | 4         | 50       | 20         | 65.8
    Ultra 7 365   | 8 (4+0+4)       | 4.7        | 4         | 49       | 12         | 55
    Ultra 5 338H  | 12 (4+4+4)      | 4.7        | 10        | 47       | 12         | 65.8
    Ultra 5 336H  | 12 (4+4+4)      | 4.6        | 4         | 47       | 20         | 65.8
    Ultra 5 355   | 8 (4+0+4)       | 4.6        | 4         | 47       | 12         | 55
    Ultra 5 325   | 8 (4+0+4)       | 4.5        | 4         | 47       | 12         | 55
    Ultra 5 332   | 6 (2+0+4)       | 4.4        | 2         | 46       | 12         | 55
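One practical consequence of dropping Hyper-Threading is that thread counts in capacity planning now equal physical core counts. A quick sanity check over a few of the (P+E+LPE) breakdowns from the table above (entries transcribed by hand, so treat them as illustrative):

```python
# With Hyper-Threading gone, one core = one thread on Panther Lake.
# (P, E, LPE) breakdowns transcribed from the launch table.

lineup = {
    "Ultra X9 388H": (4, 8, 4),
    "Ultra 5 338H":  (4, 4, 4),
    "Ultra 5 332":   (2, 0, 4),
}

for name, (p, e, lpe) in lineup.items():
    cores = p + e + lpe
    threads = cores  # no SMT: thread count equals physical core count
    print(f"{name}: {cores} cores / {threads} threads")
```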

  • MSI introduces Panther Lake chips. Prestige and Modern series with 18A lithography

    MSI has used this year’s CES to thoroughly refresh its business portfolio, but what’s most significant is not in the new case design, but in the silicon. The Taiwanese manufacturer is one of the first to launch machines based on Intel Core Ultra Series 3 processors, more widely known as Panther Lake.

    For the partner channel and the industry as a whole, this is an important signal – these chips are the first market test for Intel’s new 18A technology process, which is expected to define the power efficiency of mobile devices in the coming quarters.

    In the Prestige and Modern series, the visual changes are evolutionary, but the specifications aim high. The Prestige 14 AI+ and 16 AI+ models, also available in “Flip” convertible variants, have received Intel Core Ultra X9 388H processors.

    The ‘X’ designation indicates the presence of an integrated but relatively powerful Intel Arc B390 graphics chip. Combined with OLED matrices and robust 81 Wh batteries, MSI is positioning these machines as tools capable of handling not only office packages, but also more demanding graphics tasks.

    The smallest representative of the family, the Prestige 13 AI+, may cause the most discussion. MSI engineers managed to get the weight down to an impressive 899 grams while retaining the Core Ultra 9 processor and 2.8K OLED panel. Here, however, is where the technological risk comes in: a battery capacity of just 53 Wh paired with such a high-resolution panel could prove a challenge for battery life.

    The market success of this model will be a direct test of whether Intel’s promises of a leap in energy efficiency at the 18A node hold up in reality.

    For the corporate mainstream, MSI has prepared a refreshed Modern series (14S and 16S models). Here, the focus is on pragmatism: Core Ultra 7 processors, a metal finish and a full set of ports, including the now rare native Ethernet, which is still an asset in an office environment.

    Significantly, despite parallel launches from AMD, MSI is betting exclusively on the ‘blues’ in its current business lineup, with no models based on competing chips announced in these series. Sales of the flagship Prestige series are due to start on 27 January.

  • Judicial ‘discount’ for Intel. Giant to pay EU 1/3 less

    For Intel, a giant currently struggling through one of the most difficult restructurings in its history, any positive financial news is at a premium. On Wednesday, the General Court of the European Union provided the Californian company with a rare recent reason to be pleased, deciding to significantly reduce its antitrust fine. While the manufacturer’s culpability in blocking competition was not challenged, the size of the fine was mitigated, ending another chapter in a legal saga that has lasted nearly two decades.

    The case, under reference T-1129/23, goes back to the period of the aggressive battle for dominance in the x86 processor market between Intel and Advanced Micro Devices (AMD). At the centre of the dispute were practices between 2002 and 2006, which the European Commission identified as so-called naked restrictions. The mechanism involved payments to key OEM partners – HP, Acer and Lenovo – in return for withholding or deliberately delaying the launch of computers equipped with competitors’ chips.

    Originally, in 2009, Brussels imposed a then record fine of €1.06 billion on Intel. After years of court battles, this mammoth sum was overturned, but in 2023 the Commission came back with a new fine, set at €376 million. It was this decision that the US manufacturer appealed, arguing that the sanction was disproportionate to the actual harm of the act.

    The judges in Luxembourg upheld part of the defence’s arguments. The reasoning of the judgment indicated that the amount of €376 million did not adequately reflect the gravity of the infringement. The Court noted the limited scope of the conduct, which involved a relatively small number of devices, and the fact that the anti-competitive conduct was not continuous – the evidence pointed to a 12-month gap between incidents. As a result, the fine was reduced by around a third, to just under €237 million.

    For the channel market and the IT industry, the ruling is an important signal. It confirms that European regulators remain relentless in protecting competition rules, even if enforcement processes drag on for years. On the other hand, the court’s decision shows that the European Commission needs to calibrate penalties precisely, based on hard data on the scale of infringements and not just on the overall market position of an entity.

    The decision is not yet final. Both Intel and the European Commission have the option to appeal to the EU Court of Justice, which could prolong this legal marathon. However, in the current macroeconomic situation and with Intel’s tight budget, the saving of nearly €140 million is a significant boost, even if it is only a partial victory in a case that casts a shadow over the company’s reputation from its days of absolute dominance.

  • AMD undercuts Intel, but the Santa Clara giant remains in the lead

    Competition in the x86 processor market is gathering pace, with AMD consistently consolidating its position at Intel’s expense. Although Intel continues to dominate, controlling around 70 per cent of the market, the latest Mercury Research analysis confirms that AMD has effectively secured close to 30 per cent share for itself.

    AMD’s growth rate is not uniform across all segments, however. The company’s biggest successes have been in the desktop market, where its share grew by an impressive five percentage points to 33.6 per cent in the last quarter. It is much quieter in the mobile segment – here the shares of both players remained almost unchanged, and Intel even managed to record a token gain of 0.4 percentage points.

    The server market also looks interesting, having compensated both manufacturers for a weaker quarter in the PC segment, probably caused by uncertainty over import duties. In data centres, AMD is also going from strength to strength, increasing its share by 3.5 percentage points and now controlling 27.8 per cent of this strategic market.

    According to Mercury Research, AMD’s gains are partly due to the fact that the company is ‘delivering faster’, while Intel’s attention has been diverted away from entry-level processors. Still, the numbers don’t lie – Intel’s dominance is still undisputed, and AMD has a very long way to go to realistically threaten its leadership position.

    It seems, however, that Intel, perhaps dormant in recent years, has definitely awoken. The conglomerate, under the leadership of Pat Gelsinger, is undergoing a profound transformation, without hesitation cutting off divisions that are no longer central to the company’s strategy. The aim is to restore former operational efficiency.

    Mercury Research’s analysis focuses exclusively on the x86 duo. However, it is important to bear in mind the third player, the Arm architecture, which is already estimated to control around 10% of the total processor market and is increasingly daring to enter the game in the backyard hitherto reserved for Intel and AMD.

  • Intel is betting on a single standard. Future Xeon 7 chips exclusively with 16 memory channels

    Intel has made a significant adjustment to its server processor roadmap, opting to strategically simplify its offerings. The company has confirmed that the upcoming Xeon 7 family of chips, known by the codename Diamond Rapids, will not be offered in previously planned, more budget variants with eight memory channels. Instead, the entire product line, which is expected to debut in 2026, will be based exclusively on a platform with sixteen memory channels.

    Simplification as a competitive strategy

    The move is being communicated by Intel as a ‘simplification’ of the ecosystem. In practice, this means abandoning market segmentation by memory bandwidth in favour of establishing a new high standard. For the manufacturer, this means fewer platforms to validate and support. For customers, including large data centres, the benefit is expected to be access to maximum memory bandwidth even in theoretically lower Xeon 7 configurations. This is crucial in the era of AI and HPC workloads, where fast data access is as important as the processing power of the cores.

    The decision is also a direct response to the actions of competitors. For the past generations, Intel, with its eight channels, has lagged behind AMD. Competing Epyc chips (9004 and 9005 series) offer a unified 12-channel memory controller. Introducing a new server platform with just eight channels in 2026 would be strategically difficult to defend. By jumping straight to sixteen channels, Intel is not only catching up, but also trying to impose a new performance standard on the market.
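The bandwidth argument can be made concrete with the standard peak-bandwidth formula: channels × transfer rate × bytes per transfer. DDR5-6400 on a 64-bit channel is an assumed configuration for illustration only; the article does not specify Diamond Rapids memory speeds.

```python
# Theoretical peak memory bandwidth = channels x MT/s x bytes per transfer.
# DDR5-6400 on a 64-bit (8-byte) channel is an assumed configuration.

def peak_bandwidth_gbs(channels: int, mt_per_s: int, bus_bits: int = 64) -> float:
    """Peak bandwidth in GB/s (decimal gigabytes)."""
    return channels * mt_per_s * (bus_bits // 8) / 1000

print(f" 8 channels: {peak_bandwidth_gbs(8, 6400):.1f} GB/s")   # 409.6 GB/s
print(f"16 channels: {peak_bandwidth_gbs(16, 6400):.1f} GB/s")  # 819.2 GB/s
```

Doubling the channel count doubles the theoretical ceiling, which is why Intel's jump from eight to sixteen channels leapfrogs AMD's twelve rather than merely matching it.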

    Waiting for the new architecture

    The Diamond Rapids (Xeon 7) platform itself will be geared towards maximum performance, offering up to 192 P-cores based on the Panther Cove architecture. It is scheduled for market release in the second half of 2026. Prior to that, in early 2026, Intel plans to introduce Xeon 6+ (Clearwater Forest) chips, produced on the Intel 18A process and focused on energy-efficient E cores. The cancellation of the 8-channel Diamond Rapids shows that Intel is willing to sacrifice the lower end of the price spectrum in favour of hitting the premium segment harder and fighting for leadership in a key indicator for data centres – memory bandwidth.

  • Intel loses AI chief to OpenAI

    Intel has confirmed a significant personnel change in its key segment. Sachin Katti, former chief technology officer (CTO) and head of the AI group, is leaving the company to join OpenAI. This is a strategic transfer in the midst of a global war for talent specialising in artificial intelligence. At OpenAI, Katti is to be responsible for designing and building the computing infrastructure to support artificial general intelligence (AGI) research, reporting directly to OpenAI president Greg Brockman.

    For Intel, the timing of the departure is difficult. The company is busily trying to catch up with Nvidia (NVDA) in the lucrative data centre AI accelerator market. Although Intel’s processors are widely used in server systems, the company is still struggling to create a chip that can realistically compete with Nvidia’s dominant chips produced by TSMC. The loss of a key executive responsible for this strategy complicates the company’s efforts.

    The duties of Katti, a former Stanford professor who has worked at Intel for about four years, will be taken over temporarily by Lip-Bu Tan. Tan serves as CEO of Intel and is a veteran of the semiconductor industry (he previously led Cadence). Intel assures in an official announcement that AI remains “one of its highest strategic priorities” and that the company is focused on executing its roadmap.

  • Intel celebrates the success of accountants, not engineers

    Euphoria flooded the market after Intel’s latest results. Shares up 90% in 2025, quarterly earnings per share of 23 cents (against an expected 1 cent) and gross margins of 40% look like the return of the king. Investors, buoyed by the AI-PC hype and a fresh injection of $15 billion, are opening the champagne.

    However, we must ask the question: what are we really celebrating?

    The answer is simple: we celebrate the success of accountants, not engineers. Intel’s recent results are not the fruit of regained technological dominance, but the result of “drastic cost-cutting measures” introduced by its new CEO, Lip-Bu Tan. It is an illusion of success that masks a strategic retreat and a tacit admission of defeat in the crucial race for the future of chip manufacturing.

    A financial miracle

    Let’s look at where this impressive profit came from. It doesn’t come from revolutionary new products that beat Nvidia to the AI market. It comes from cuts.

    Firstly, Intel is ending the year with a workforce more than a fifth smaller than last year’s. Secondly, the company is aggressively selling off assets – including a 51% stake in Altera, a company acquired in 2015 for $16.7 billion.

    It’s a cold financial calculation. The new CEO, Tan, is doing exactly what he was hired for: putting out the fire his predecessor left behind.

    Let’s remember that Pat Gelsinger’s ambitious plans to turn Intel into a TSMC-like contract manufacturer led the company to its first annual loss since 1986. Tan has radically scaled back these costly ambitions. Current profits are therefore not growth, but stopping the haemorrhage.

    Rescue, not reward

    Giant investments were also in the spotlight: $5 billion from Nvidia, $2 billion from SoftBank and an unprecedented $8.9 billion from the US government in exchange for a 10% stake.

    Let’s not be fooled. This is not a reward for a market leader. It is a rescue for a company that is trying to maintain its dominance and whose repeated attempts to break into the AI chip market have failed.

    The investment by Nvidia – the rival that dethroned Intel in the AI segment – is a strategic bet, not an act of faith. It is an attempt to secure access to manufacturing capacity in the West and to influence the development of CPUs, which (ironically) are essential in AI servers to support… Nvidia GPUs.

    Even more telling is the intervention of the US government. The purchase of a 10% stake, which came after President Donald Trump called for Tan’s resignation over his links to China, is not a market move. It is a geopolitical intervention. Intel has become a national security asset; a company too important to fail but too weak to win on its own.

    Time bomb: the truth about the 18A process

    While Wall Street analysts were getting excited about gross margins, the key message came from Intel’s CFO himself, Dave Zinsner.

    When asked about the foundation of Intel’s future competitiveness – the 18A manufacturing process – Zinsner openly admitted that the process would not give Intel the level of margins it currently needs.

    That already sounds bad. To make matters worse, moments later Zinsner said the process would not be ready at a level “acceptable to the industry” until 2027.

    This is the true picture of Intel, hidden behind a facade of good quarterly results. The 18A process is not just another design. It was supposed to be the answer to TSMC and Samsung’s dominance. It is the technology that was meant to return Intel to the throne of manufacturing leadership. The admission that it won’t be ready until 2027 is a disaster. In the semiconductor industry, that is an eternity. It means that for the next two to three years Intel will remain far behind technologically.

    Retreat from dominance

    So what will Intel do if it cannot compete with TSMC? CEO Tan has a new vision: to create a ‘central engineering group’ that will offer specially designed chips to external customers such as Google or Amazon.

    To put it bluntly: Intel is giving up the battle for global dominance in mass production. Instead, it will try to become a niche supplier of expensive, custom solutions, competing with the likes of Broadcom and Marvell Technologies. This is a radical but, it seems, necessary lowering of ambition.

    Stable patient, not healthy leader

    Intel 3.0, led by Lip-Bu Tan, is a financially more stable company than the shaky giant under Pat Gelsinger. A $15 billion injection and brutal cost-cutting bought the company time.

    But the investors who are buying shares today are not buying a technology leader. They are buying a company that has just admitted that its key manufacturing technology is years behind schedule. They are buying a company that is selling off assets and giving up the battle for the throne to become a premium service provider to other giants.

    Ironically, the current ‘high-end problem’ mentioned by CFO Zinsner – i.e. demand outstripping supply – is largely due to data centres having to upgrade CPUs (Intel’s) to keep up with advanced AI chips (Nvidia’s).

    Intel is no longer leading the AI revolution. It has become an indispensable but nonetheless secondary parts supplier for it. This is not the return of the king. It is the beginning of life as a strategic asset, kept alive by rivals and the government, whose main goal is no longer domination but survival.

  • Intel beats forecasts thanks to cost cuts, but struggles with 18A production

    Intel beats forecasts thanks to cost cuts, but struggles with 18A production

    Intel significantly beat analysts’ expectations for third-quarter profit as a direct result of the “drastic cost-cutting measures” implemented by its new CEO, Lip-Bu Tan. In response to the results and strategic investments, the company’s shares surged nearly 90% in 2025, recovering from a 60% decline last year.

    It was the first earnings announcement since the company raised multi-billion dollar funding, described as a ‘lifeline’. Investors in Intel included Nvidia (US$5 billion), SoftBank (US$2 billion) and the US government, which took a 10% stake for US$8.9 billion. The funds are intended to support the company in its competitive battle with AMD in the PC and server market and in its so far unsuccessful attempts to enter the AI market, dominated by Nvidia.

    Tan’s new strategy departs from the costly ambitions of predecessor Pat Gelsinger to compete with TSMC. Cuts have included cutting staff by more than a fifth. Intel will now focus on creating custom chips for external customers, competing with Broadcom and Marvell.

    Despite strong demand, which according to chief financial officer Dave Zinsner currently outstrips supply, the company faces significant challenges. Zinsner admitted that the capacity of the key 18A manufacturing process is insufficient and is unlikely to “reach industry acceptable levels” until 2027.

  • PW eSkills: government programme for digital competences has gained new partners

    PW eSkills: government programme for digital competences has gained new partners

    The PW eSkills programme, launched by the Ministry of Digitalisation, has expanded its group of partners to include four more entities: iCodeTrust Sp. z o.o., Intel Corporation, the Polish Development Fund and the University of Lodz. The inclusion of these organisations is another sign that the development of digital competences is becoming a cross-sectoral issue, bringing together science, business and administration.

    The PW eSkills initiative works in five areas: raising the level of digital competences, promoting their development and good practices, supporting equality and diversity in the ICT sector, strengthening cooperation between ICT educators and creating new initiatives and recommendations. The programme is open to government and local administrations, NGOs, entrepreneurs, the research and education community and competence sector councils.

    The involvement of these four institutions brings specific capabilities to PW eSkills: Intel, as a technology player, can contribute competences in data processing and artificial intelligence; the University of Lodz represents the academic background of research and education; the Polish Development Fund, as a financial institution, can assist with support mechanisms; and iCodeTrust can provide services, training and certification. Deputy Minister of Digitalisation Paweł Olszewski emphasised that “digital competences are becoming a shared priority” and that the new partners will facilitate the building of a society prepared for digital transformation.

    In the context of the challenges posed by the dynamically changing technological environment – automation, artificial intelligence, changes in work patterns – the programme is gaining importance as part of the national digital education strategy. The Polish market can thus better respond to the demand for ICT competences, which affects both the economy and the reduction of digital exclusion.

    The key question now is: how to effectively translate the partners’ declarations into concrete operational outcomes – available courses, internships, certifications – and how to measure the programme’s impact on the labour market and society.

  • Intel takes another approach to AI. “Not everything for everyone”.

    Intel takes another approach to AI. “Not everything for everyone”.

    Intel is back in the game for the AI market, announcing a new data centre GPU, Crescent Island, at the Open Compute Summit. The chip is due to debut next year and, according to the company’s statement, will be optimised not for impressive benchmarks but for operating economics: cost per token, energy efficiency and AI model inference.

    This is a clear signal of a change in strategy. After years of failed approaches – from the abandoned Gaudi line to the frozen Falcon Shores project – Intel is betting on pragmatism. “We don’t want to build everything for everyone. We are focusing on inference,” stressed the company’s CTO, Sachin Katti. Translated into the language of the market: Intel will not fight Nvidia where Nvidia is strongest, namely in training giant models. Instead, it is targeting the stage where it makes money from enterprise-scale AI implementations.

    Technically, Crescent Island bets on 160GB of LPDDR5X memory – slower than the HBM used by competitors. It’s a trade-off: lower peak performance, but potentially better availability and cost per watt. The chip is based on Intel’s consumer GPU architecture, which suggests a shorter deployment cycle and lower manufacturing risk. Key details are still missing, however: which process technology (TSMC’s or Intel’s own fabs?) and what the real TCO is against AMD Instinct or Nvidia Hopper/Blackwell chips.
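    The operating-economics pitch above can be made concrete with a back-of-the-envelope model. All figures in the sketch below – card prices, power draw, token throughput, electricity price, utilisation – are hypothetical assumptions for illustration, not published specifications of any Intel, AMD or Nvidia product:

```python
# Illustrative inference-TCO sketch. Every number here is an assumption,
# not a real spec for Crescent Island, H100, MI300 or any other card.
def cost_per_million_tokens(card_price_usd, lifetime_years,
                            power_kw, electricity_usd_per_kwh,
                            tokens_per_second, utilization=0.6):
    """Amortised hardware cost plus energy, per one million tokens served."""
    hours = lifetime_years * 365 * 24 * utilization
    total_tokens = tokens_per_second * 3600 * hours
    energy_cost = power_kw * hours * electricity_usd_per_kwh
    return (card_price_usd + energy_cost) / total_tokens * 1_000_000

# Hypothetical comparison: a cheaper, slower LPDDR-based card versus
# a pricier HBM card with much higher throughput.
budget = cost_per_million_tokens(12_000, 4, 0.45, 0.12, 4_000)
hbm    = cost_per_million_tokens(40_000, 4, 0.70, 0.12, 15_000)
```

    With these assumed numbers the pricier HBM card actually wins on cost per token – which is exactly why sticker price alone says little, and why Intel’s “cost per token” framing only works if availability and utilisation tilt the equation its way.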

    In a market that has suffered from a chronic GPU shortage since the release of ChatGPT, room for a third player obviously exists – but patience is running out. Hyperscaler customers want annual launches, interoperability and an open ecosystem. Intel promises exactly that: modularity and the ability to mix chips from different vendors. It’s a defensive/offensive move – if it can’t win solo, it wants to be indispensable as the CPU in every AI system. This is borne out by the recent deal with Nvidia, which invested $5bn and took an approximate 4% stake in Intel.

    Will it be enough? Intel is playing for time and a second chance. AI is no longer a power race – it’s starting to be an economics race. If Crescent Island proves ‘performance per dollar’, Intel can get back to the table. If not – it will remain a factory for the other winners of this revolution.

  • Panther Lake: Intel’s new processor is set to reverse the downward trend in the laptop segment

    Panther Lake: Intel’s new processor is set to reverse the downward trend in the laptop segment

    Intel has announced that this Thursday it will reveal technical details of its new mobile chip, Panther Lake, the company’s first product to be manufactured entirely on the 18A process. It’s an important moment: Intel is trying to regain investor confidence after a series of stumbles at advanced technology nodes.

    Panther Lake is the flagship processor for premium laptops, designed for high performance and energy efficiency. Power consumption is expected to be around 30 per cent lower than in the current generation, with computing power up to 50 per cent higher in certain scenarios. The chips are expected to hit the market in early 2026.

    Intel last week held a series of technical presentations and tours of its Arizona facilities, including the new Fab 52 plant to support Panther Lake production. The company detailed its redesigned AI engine, CPU and GPU cores, power technologies and microarchitecture.

    However, the key challenge – production yield – remains in doubt. According to earlier reports, the yield for Panther Lake remains low: around 10 per cent was reportedly achieved during the summer, up from around 5 per cent at the end of last year. It was not stated, however, how many of the units produced met the highest quality requirements.

    This point is key: if Intel does not improve yields to levels that allow profitable mass production, even impressive technical performance may not be enough.
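    Why yield is so decisive for profitability can be shown with a simple calculation. The wafer cost and die count below are hypothetical assumptions (Intel publishes neither for 18A); only the roughly 5 and 10 per cent yield figures come from the reports cited above:

```python
# Hypothetical figures for illustration only; Intel does not disclose
# 18A wafer costs or Panther Lake die counts.
def cost_per_good_die(wafer_cost_usd, dies_per_wafer, yield_rate):
    """Cost of each sellable die when only yield_rate of candidates work."""
    good_dies = dies_per_wafer * yield_rate
    return wafer_cost_usd / good_dies

wafer_cost = 20_000   # assumed cost of one processed 18A wafer
dies = 300            # assumed candidate dies per wafer

low  = cost_per_good_die(wafer_cost, dies, 0.05)  # ~5% yield: ~$1,333 per die
mid  = cost_per_good_die(wafer_cost, dies, 0.10)  # ~10% yield: ~$667 per die
high = cost_per_good_die(wafer_cost, dies, 0.60)  # mature-node target: ~$111
```

    Doubling yield from 5 to 10 per cent halves the cost per good die, yet still leaves it several times above what a mature node delivers – which is the gap Intel has to close before mass production becomes profitable.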

    The market situation does not make the task any easier. Intel reported a loss of US$2.9 billion in the second quarter and warned it could suspend work on the future 14A node if it does not find a key customer. At the same time, the company raised funding from SoftBank and Nvidia, and a US government grant was converted into a 9.9 per cent stake in Intel under the CHIPS Act.

    Panther Lake could be the key in Intel’s new strategy – either a symbol of rebirth or a reminder of how far the company has fallen behind the competition. On Thursday, we will learn more technical details that could give the first clear indication of whether the 18A process is ready to become the foundation of Intel’s new era.

  • AI accelerator market in Europe: digital sovereignty vs. Nvidia’s dominance

    AI accelerator market in Europe: digital sovereignty vs. Nvidia’s dominance

    The generative artificial intelligence (GenAI) revolution has created an insatiable demand for computing power, fundamentally changing data centre architectures. Traditional processors (CPUs), for decades the heart of computing, have become the bottleneck for large language models (LLMs) and other GenAI systems. In response to this challenge, a new class of specialised hardware was born: AI accelerators.

    The end of the CPU era and the birth of a new paradigm

    The problem with CPUs in the context of AI lies not in their speed, but in a fundamental architectural mismatch. Optimised for sequential execution of complex tasks, they have only a few powerful cores. Meanwhile, deep learning algorithms require massive parallel processing – performing trillions of simple operations simultaneously. This is a task for which graphics processing units (GPUs), equipped with thousands of smaller cores, are ideally suited.

    Alongside GPUs, which have become the standard for model training, even more specialised units have emerged. Neural processing units (NPUs) are a broad category of chips designed from the ground up with AI in mind, prioritising energy efficiency, which makes them crucial for edge AI applications. Tensor processing units (TPUs), on the other hand, are Google’s proprietary ASICs, optimised for its software ecosystem and massive cloud computing.

    This paradigm shift is driving a market in Europe with huge potential. Valued at around €4.88 billion in 2024, the European AI accelerator market is expected to grow to nearly €43 billion by 2033, an impressive compound annual growth rate (CAGR) of 27.4%.
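    The quoted growth rate can be checked directly from the article’s own endpoints – €4.88 billion in 2024 to roughly €43 billion in 2033:

```python
# Verify the stated CAGR from the endpoints given in the text:
# €4.88bn in 2024 growing to ~€43bn by 2033 (a 9-year span).
def cagr(start_value, end_value, years):
    """Compound annual growth rate as a fraction."""
    return (end_value / start_value) ** (1 / years) - 1

growth = cagr(4.88, 43.0, 2033 - 2024)
print(f"{growth:.1%}")   # → 27.4%
```

    The endpoints and the quoted 27.4% CAGR are mutually consistent.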

    Unique European Drivers: Politics meets market demand

    The European accelerator market is shaped by a unique combination of bottom-up commercial demand and top-down strategic initiatives, which sets it apart from markets in the US or Asia.

    On the one hand, AI adoption is growing in key sectors such as healthcare, automotive and finance. Already 13.5% of businesses in the EU are using AI technologies, and the entire European AI market (software, hardware and services) is growing at a rate of more than 33% per year.

    On the other hand, the European Union is pursuing an ambitious programme to strengthen its digital and technological sovereignty. Geopolitical concerns and the desire for independence from non-EU suppliers have led to powerful investment mechanisms:

    • EU Chips Act: This initiative aims to mobilise more than €43 billion in public and private investment to double Europe’s share of global semiconductor production from 10% to 20% by 2030. Attracting investment to build advanced factories, such as Intel’s and TSMC’s plants in Germany, is crucial for future accelerator production in Europe.
    • AI Continent Action Plan: this €200 billion plan aims to create a sovereign, pan-European AI ecosystem. Its key element is the InvestAI initiative, which is expected to mobilise €20 billion to build 4-5 ‘AI Gigafactories’ – each equipped with more than 100,000 advanced AI chips.
    • EuroHPC and ‘AI Factories’: The European High Performance Computing Joint Undertaking (EuroHPC JU) is investing billions of euros to build a fleet of supercomputers. Around these, 13 ‘AI Factories’ are being built to democratise access to computing power for startups and SMEs, stimulating innovation and creating guaranteed demand for infrastructure.

    The competitive landscape: Nvidia’s dominance and the strategies of the contenders

    The data centre accelerator market is close to a monopoly. Nvidia controls around 98% of the global market in terms of units shipped, and its real advantage is its mature CUDA software ecosystem, used by 5 million developers. This creates a powerful lock-in effect, making it difficult for competitors to gain share.

    Nevertheless, the contenders are pursuing well thought-out strategies:

    • AMD: Positions itself as a major high-performance alternative. The Instinct MI300 series of accelerators is intended to compete with Nvidia’s offerings, with a key selling point being the open ROCm software platform, aimed at breaking the CUDA monopoly.
    • Intel: It is betting on price competition with Gaudi accelerators (to be 50% cheaper than Nvidia’s H100) and an open oneAPI ecosystem.
    • Google (TPU): It does not sell the chips directly, but uses them as a key differentiator for its cloud platform, offering an excellent performance-to-cost ratio for specific AI workloads.

    Against this backdrop, European players such as the UK’s Graphcore and France’s Blaize are also emerging, focusing on niches such as novel architectures (IPUs) or energy-efficient chips for edge AI.

    The growth trilemma: Cost, energy and talent

    Despite the optimistic outlook, the European market faces three fundamental barriers that create a strategic trilemma for decision-makers.

    Cost and availability: The price of a single high-end accelerator, such as the Nvidia H100, is up to US$40,000, making building your own AI infrastructure prohibitive for most companies. Additionally, global supply chains are vulnerable to disruption and export controls, which threatens project continuity.

    Energy and ESG: Data centres dedicated to AI consume four to five times more energy than traditional ones. Data centre energy consumption in Europe is forecast to almost triple by 2030. This is at odds with the EU’s ambitious sustainability goals, such as the Energy Efficiency Directive, which imposes an obligation to reduce energy consumption.

    Talent: Europe is facing a critical shortage of AI and HPC professionals. The skills gap is slowing down innovation and preventing companies from effectively using even the infrastructure they already have, empowering global cloud providers.

    Future trends: From possession to access, from monolith to module

    Looking ahead to 2030, the market will be shaped by three key trends:

    • The dominance of the ‘Compute-as-a-Service’ model: Due to the aforementioned trilemma, most companies will not buy accelerators, but rent access to them. This model, pursued by both public ‘AI Factories’ and commercial cloud providers, transforms huge capital expenditure (CAPEX) into predictable operating costs (OPEX).
    • Software battle: The long-term structure of the market will depend on the success of open standards, such as ROCm and oneAPI, in breaking the dominance of CUDA. Avoiding dependence on a single vendor is a powerful motivator for the industry as a whole.

    • New hardware architectures: To overcome physical limitations, the industry is moving towards chiplets – smaller, specialised silicon dies combined into a single package. This allows for greater modularity and lower costs. In the longer term, the revolution could come from photonic computing, which uses light instead of electrons and promises orders-of-magnitude gains in throughput and energy efficiency.
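    The CAPEX-to-OPEX shift behind the ‘Compute-as-a-Service’ trend can be illustrated with a rough break-even estimate. Only the ~US$40,000 card price comes from this article; the hourly rental rate and utilisation below are hypothetical assumptions:

```python
# The rental rate and utilisation are assumptions for illustration;
# only the ~$40,000 card price is taken from the article.
def breakeven_hours(card_price_usd, rent_usd_per_hour):
    """Hours of rented GPU use that cost as much as buying the card outright
    (ignoring power, hosting and depreciation on the owned card)."""
    return card_price_usd / rent_usd_per_hour

hours = breakeven_hours(40_000, 2.50)          # 16,000 hours of rental
years_at_half_load = hours / (8_760 * 0.5)     # ~3.7 years at 50% utilisation
```

    Under these assumptions, ownership only pays off after several years of sustained utilisation – which is why most companies are expected to rent access rather than buy hardware.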

    Strategic lessons for technology leaders

    The European AI accelerator market is an arena where global technology competition meets unique political and regulatory ambitions. For technology and innovation directors, this means navigating a complex ecosystem.

    The key strategic question is shifting from “which accelerator to buy?” to “how to strategically access computing power?”. The answer requires balancing performance, cost, sovereignty and sustainability. Success in the GenAI era will not depend on simply having the latest hardware, but on the ability to intelligently use both public initiatives and private innovation to build a sustainable competitive advantage in a unique European market.

  • Apple a lifesaver for Intel? Unexpected alliance could change the chip market

    Apple a lifesaver for Intel? Unexpected alliance could change the chip market

    As part of a broad strategy to return to the top, Intel is in early talks with Apple about a potential investment and closer collaboration. For Intel, struggling in the AI market, financial and technological support from the Cupertino giant would send a powerful signal to the market.

    For Apple, it would be an opportunity to diversify its supply chain in geopolitically unstable times.

    The initiative is the latest element of CEO Lip-Bu Tan’s ambitious plan to reclaim Intel’s former glory. The company, once synonymous with innovation in Silicon Valley, has in recent years been overshadowed by competitors such as Nvidia and AMD, particularly in the booming artificial intelligence segment.

    To fund its costly transformation and build new factories in the US, Intel is actively seeking strategic partners. The talks with Apple come just days after Nvidia announced a $5 billion investment in exchange for a roughly 4% stake.

    Earlier, the company also secured $8.9 billion from the US federal government (in exchange for a roughly 10% stake) and $2 billion from SoftBank Group. These cash injections have already improved investor sentiment, which has translated into a more than 40% increase in share value since mid-August.

    Why would Apple, which abandoned Intel’s processors in favour of its own Apple Silicon chips in 2020, now return to the negotiating table? The answer lies in strategy and risk management.

    Firstly, diversification of the supply chain. Apple today is heavily dependent on Taiwanese manufacturer TSMC. A potential partnership with Intel would allow the company to diversify production of key components and hedge against escalating geopolitical tensions in the Taiwan region.

    Secondly, the relationship with the US administration. The investment in a key US chipmaker is part of Apple’s commitment to increasing domestic investment, which could further strengthen the company’s position in Washington.

    Although the talks are at an early stage and there is no guarantee of success, the very fact that they are taking place is significant. For Intel, gaining Apple as a customer for its foundry business would be the ultimate validation of its chosen IDM 2.0 strategy.

    This would be a much bigger success than the deal with Nvidia, which, while including joint chip development, does not involve manufacturing its computing chips in Intel’s factories.

    Intel’s future depends on its ability to attract external customers to its factories. A potential alliance with a former key partner could prove to be a decisive step in this game.

  • Nvidia invests in Intel – the king of revolution shakes hands with a sinking rival

    Nvidia invests in Intel – the king of revolution shakes hands with a sinking rival

    In Silicon Valley, there are alliances that seem natural, and those that shake the foundations of the entire industry. The collaboration between Nvidia and Intel announced yesterday undoubtedly falls into the latter category.

    This is no mere business deal; it is a strategic pact made by two former fierce rivals. We are witnessing a historic moment in which the leader of the AI revolution (Nvidia) shakes hands with the giant that almost slept through this revolution (Intel).

    The question we must ask ourselves goes far beyond corporate press releases: are we witnessing the birth of a synergy that will drive innovation, or rather the creation of a powerful duopoly that will cement the market for years and marginalise competition?

    Act of desperation or masterful chess move?

    To understand the importance of this alliance, it is necessary to look at the position from which each player is starting. Intel, once the undisputed king of silicon, has been struggling for years. Problems with the transition to smaller process nodes, delays and growing competition have caused the company to lose its manufacturing leadership to TSMC and Samsung.

    At a time of rapid growth in GPU-driven artificial intelligence, Intel’s dominance in the CPU segment has proved insufficient. From this perspective, the pact with Nvidia looks like a desperate attempt to get back into the highest stakes game.

    This is an acknowledgement that without AI leadership technology, Intel is unable to compete on the all-important innovation front alone.

    For Nvidia, on the other hand, this move is pure, calculating strategy. Jensen Huang’s company absolutely dominates the data centre and AI accelerator segments. But its next goal is to conquer a market where Intel still has hegemony: personal computers.

    By natively integrating its graphics chips (in the form of RTX chiplets) with Intel processors, Nvidia gains access to the vast x86 ecosystem. It’s a brilliant move that allows it to enter the AI PC segment ‘through doors and windows’, bypassing the need to build everything from scratch.

    Nvidia is not just buying Intel’s shares for $5bn; it is buying the decades of experience, customer base and distribution channels of its former rival.

    What does AMD say about this?

    Every great alliance creates not only winners but also losers. In this case, the company with the most to lose is obvious: AMD. Under Lisa Su’s leadership, AMD has done the near impossible, becoming a viable alternative to both Intel in the processor market (Ryzen series) and Nvidia in the graphics card market (Radeon series).

    The company deftly manoeuvred between the two giants, taking market share away from them.

    Now, however, AMD is facing a nightmare scenario – a battle against a combined front. Imagine laptops and workstations where Intel’s processor and Nvidia’s graphics chip are integrated at silicon level.

    This synergy can offer performance and power efficiency that standalone AMD products will find extremely difficult to compete with. It’s no longer a battle on two separate fronts; it’s a clash with an emerging technological behemoth that controls key elements of the PC platform.

    From competition to a new order

    In the IT industry, there is talk of the phenomenon of ‘coopetition’ – cooperation between competitors in specific areas. However, the Nvidia and Intel deal appears to be something much deeper. This is not a temporary project, but the foundation for a new market order.

    The aim is to create a hardware platform that is so integrated and optimised that it becomes the de facto standard for anyone serious about artificial intelligence on PCs and servers.

    The long-term consequences could be devastating for market diversity. If the Nvidia-Intel duo dominates the AI PC segment, software manufacturers will begin to optimise their applications specifically for this architecture, further marginalising alternatives.

    We will be in a situation where real choice will be limited and innovations outside this ecosystem may find it extremely difficult to break through to a mass audience.

    A golden cage for the consumer?

    Undoubtedly, in the short term, this collaboration will produce exciting products. Computers will become more powerful and AI-based functions more accessible. But in the long term, we risk entering a ‘golden cage’ – an ecosystem so perfect and integrated that we will not want or be able to leave it.

    History teaches us that when competition weakens, innovation suffers and prices rise.

    Nvidia’s surprising move is not only great strategically. It shows that growth and development is being undertaken in IT at all costs, and that the drive to grab as much of the market as possible is a key priority. The move will save Intel from the worst, the question is: at what cost?

    The VMware and Broadcom merger has shown that the introduction of a new order can be painful for markets. And the growing concentrations of IT power are, occasional show trials aside, not effectively constrained by state bodies. On the contrary: today governments treat technology monopolists as an element of geopolitical advantage, which, in the age of the digital arms race, is logical, albeit short-sighted and potentially damaging in the long term.

  • Intel has sold Altera and is lowering forecasts. The company’s shares are rising

    Intel has sold Altera and is lowering forecasts. The company’s shares are rising

    Intel is signalling a turnaround in financial discipline to the market by lowering its forecast for adjusted operating expenses for 2025 to $16.8 billion. The revision, although small, was positively received by investors, with the company’s shares gaining nearly 4%. The change is a direct result of the finalisation of the sale of the majority stake in Altera.

    Discipline after years of investment

    The decision to cut projected spending is part of the new strategy of a company undergoing restructuring under CEO Lip-Bu Tan. The company is still feeling the effects of multi-billion dollar investments in capacity expansion and the IDM 2.0 strategy, conceived to compete with the Asian giants in contract manufacturing.

    These ambitious plans have put a heavy strain on the company’s balance sheet, leading to significant losses in recent years.

    The new management announces an end to ‘blank cheques’ and a tightening of cost discipline. One manifestation is this year’s announced staff reduction, which leaves headcount more than a fifth lower than last year.

    Altera off balance sheet

    A key part of the optimisation is the deconsolidation of the programmable chip (FPGA) business, Altera. In September, private equity fund Silver Lake finalised the acquisition of a 51% stake in the company.

    The deal, which valued Altera at $8.75 billion, allowed Intel to take the operating costs generated by this segment off its balance sheet. It is worth recalling that Intel paid almost $17 billion for Altera in 2015, which shows how the valuation of this business has changed.

    In the first half of 2025, Altera, still part of Intel, generated $816 million in revenue with a gross margin of 55% and operating expenses of $356 million.

    Maintaining a minority stake allows Intel to benefit from Altera’s potential growth while easing the burden on its own finances. The operating cost target for 2026 remains unchanged at $16 billion.

  • AI accelerator market: NVIDIA, AMD, Intel – the battle for supremacy

    AI accelerator market: NVIDIA, AMD, Intel – the battle for supremacy

    At North Carolina State University, robotic arms precisely mix chemicals while streams of data flow through the system in real time. This ‘self-driving laboratory’, an AI-powered platform, discovers new materials for clean energy and electronics not in years, but days.

    Collecting data 10 times faster than traditional methods, it observes chemical reactions like a full-length film rather than a single snapshot. This is not science fiction; it is the new reality of scientific discovery.

    This incredible leap is being driven by a new kind of computing engine: specialised AI accelerator chips. These are the ‘silicon brains’ of the revolution. Moore’s law, the old paradigm of doubling computing power in general-purpose systems, has given way to a new law of exponential progress, driven by massive parallel processing.

    The crux of the story, however, is more complex. While AI algorithms are the software of a new scientific era, the physical hardware – the AI chips – has become the fundamental enabler of progress and, paradoxically, also its biggest bottleneck.

    The ability to discover a new life-saving drug or design a more efficient solar cell is today inextricably linked to a hyper-competitive, multi-billion dollar corporate arms race and a fragile geopolitical landscape in which access to these chips is a tool of global power.

    Anatomy of a boom: who is building silicon brains?

    The boom in generative artificial intelligence has created an insatiable demand for computing power. It’s not just chatbots, but foundational models that underpin a new wave of scientific research. This demand has transformed a niche market into a global battlefield for dominance.

    Reigning champion: NVIDIA

    NVIDIA has established itself as a key architect of the AI revolution, as evidenced by its stunning financial results. The data centre division, the heart of the company’s AI business, reported revenues of $41.1bn in a single quarter, up 56% year-on-year.

    This dominance is built on successive generations of powerful architectures such as Hopper and now Blackwell, which are core hardware for technology giants such as Microsoft, Meta and OpenAI.

    An energetic contender: AMD

    AMD is positioning itself not as a distant number two but as a serious, fast-growing competitor. The company reported record data centre revenue of $3.5bn in Q3 2024, a massive 122% year-on-year increase, driven by strong adoption of its Instinct series GPU accelerators.

    Significantly, major cloud service providers and companies such as Microsoft and Meta are actively deploying AMD’s MI300X accelerators, signalling a desire for a viable alternative to NVIDIA. The company forecasts that its data centre GPU revenue will exceed $5bn in 2024.

    The gambit of the historical giant: Intel

    Intel’s situation presents a strategic challenge. Although the company claims its Gaudi 3 accelerators offer a better price/performance ratio than NVIDIA’s H100, it is struggling to gain market share.

    Intel missed its $500m revenue target for Gaudi in 2024, citing slower-than-expected adoption due to difficulties transitioning between product generations and, crucially, problems with the software’s ease of use.

    Analysis of this data reveals deeper trends. Firstly, the AI hardware market is not just a race for components, but a war of platforms. Intel’s difficulties with software point to the real battlefield: the ecosystem. NVIDIA’s CUDA platform has more than a decade’s head start, creating a deep ‘moat’ of developer tools, libraries and expertise.

    Competitors are not just selling silicon; they need to convince the whole world of science and development to learn a new programming language. Secondly, the AI boom is leading to vertical integration of the data centre.

    Not only does NVIDIA dominate the GPU market; following its 2020 acquisition of networking company Mellanox, it has also become a leader in AI data-centre networking, recording 7.5x year-on-year sales growth in Ethernet switches.

    NVIDIA is no longer just selling chips; it is selling a complete, optimised ‘AI factory’ design, creating an even stronger lock-in effect.

    From lab to reality: scientific breakthroughs powered by silicon

    This unprecedented computing power is fueling a revolution in the way we do research, leading to breakthroughs that seemed impossible just a few years ago.

    The medicine of tomorrow

    The traditional drug discovery process, which takes 10 to 15 years, is being dramatically shortened. DeepMind CEO Demis Hassabis predicts that AI will reduce this time to “a matter of months”.

    Isomorphic Labs, an Alphabet company spun out of DeepMind, is using AI to model complex biological systems and predict drug-protein interactions. Researchers at Virginia Tech have developed an AI tool called ProRNA3D-single that builds 3D models of protein-RNA interactions – key to understanding viruses and neurological diseases such as Alzheimer’s.

    Moreover, a new tool from Harvard, PDGrapher, goes beyond the ‘one target, one drug’ model. It uses a graph neural network to map the entire complex system of a diseased cell and predicts combinations of therapies that can restore it to health.

    High-resolution climate

    In the past, accurate climate modelling required a supercomputer. Today, AI models such as Google’s NeuralGCM can run on a single laptop. This model, trained on decades of weather data, helped predict the arrival of the monsoon in India months in advance, providing key forecasts to 38 million farmers.

    A new AI model from the University of Washington is able to simulate 1,000 years of Earth’s climate in just one day on a single processor – a task that would take a supercomputer 90 days.
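    The speedup claimed above is simple arithmetic; a quick sketch using only the figures quoted in this paragraph (1,000 model-years in one day versus 90 days on a supercomputer) makes the throughput gap concrete:

    ```python
    # Back-of-envelope comparison based on the figures quoted above:
    # an AI emulator simulates 1,000 model-years in 1 day on a single
    # processor, versus ~90 days for the same span on a supercomputer.
    SIM_YEARS = 1_000

    ai_days = 1
    supercomputer_days = 90

    speedup = supercomputer_days / ai_days       # wall-clock speedup
    ai_throughput = SIM_YEARS / ai_days          # model-years per day

    print(f"Wall-clock speedup: {speedup:.0f}x")
    print(f"AI emulator throughput: {ai_throughput:.0f} model-years/day")
    ```

    At this rate, a millennium-scale ensemble that would monopolise a supercomputer for a quarter of a year fits into an overnight run.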

    Companies like Google DeepMind (WeatherNext), NVIDIA (Earth-2) and universities like Cambridge (Aardvark Weather) are building fully AI-driven systems that are faster, more efficient and often more accurate than traditional models.

    Alchemy of the 21st century

    As mentioned at the outset, AI is creating autonomous labs that accelerate materials discovery by a factor of ten or more. The paradigm shifts from searching existing materials to generating entirely new ones.

    AI models, such as MatterGen from Microsoft, can design new inorganic materials with desired properties from scratch. This ability to ‘reverse engineer’, where scientists identify a need and AI proposes a solution, has been the holy grail of materials science.

    These examples illustrate a fundamental change in the scientific method itself. The computer has ceased to be merely a tool for analysis; it has become an active participant in the generation of hypotheses. The role of the scientist is evolving into a curator of powerful generative systems.

    This accelerates the discovery cycle exponentially and allows scientists to explore a much larger ‘problem space’ than was ever possible for humans.

    Geopolitical storm and a new division of the world

    As the importance of these silicon brains grows, they are becoming the most valuable strategic resource of the 21st century – the new oil, crucial for economic competitiveness and scientific leadership.

    US strategy: “small yard, high fence”

    The US has implemented a ‘small yard, high fence’ strategy, introducing export controls aimed at slowing China’s ability to develop advanced AI. These restrictions cover not only the chips themselves (such as NVIDIA’s H100) but also the equipment required to manufacture them (from companies such as the Dutch firm ASML).

    This hit the Chinese semiconductor industry in the short term, causing equipment shortages and ‘crippling’ its production capacity.

    China’s determined response

    China’s response has been multi-pronged: massive investment in its domestic semiconductor industry and the use of its own economic leverage by restricting exports of key rare earth elements. The case study is Huawei.

    Despite being crippled by sanctions, the company has developed its own Ascend line of AI chips (910B/C/D), which are now seen as a viable alternative to NVIDIA products in China.

    In response, the US government has toughened its stance, declaring that the use of these chips anywhere in the world violates US export controls, escalating the technological divide.

    A study by Oxford University reveals a harsh reality: advanced GPUs are heavily concentrated in just a few countries, mainly in the US and China. The US leads the way in access to state-of-the-art chips, while much of the world is in ‘computing deserts’.

    This situation leads to unintended consequences. US export controls, designed to slow China down, have become an ‘inadvertent accelerator of innovation’ for China, forcing Beijing to build a completely independent technology stack.

    A decade from now, the world may have two completely separate, incompatible AI stacks, fundamentally dividing global research.

    The cloud as the great equaliser?

    There is a powerful counter-argument: cloud computing democratises access to elite AI. Platforms such as Amazon Web Services (AWS), Microsoft Azure and Google Cloud offer AI-as-a-Service (AIaaS), allowing a university or startup to rent the same powerful GPUs that OpenAI uses.

    The cloud giants offer rich ecosystems. AWS provides services such as SageMaker for building models and Bedrock for access to leading foundation models. Google Cloud promotes democratisation with tools such as Vertex AI, designed for minimal complexity.

    Microsoft Azure is tightly integrating AI into its ecosystem through Azure AI Foundry, offering access to more than 1,700 models and running dedicated ‘AI for Science’ research labs.

    However, the promise of access must be set against the harsh reality of cost. Training a state-of-the-art model is prohibitively expensive, with estimates as high as $78 million for GPT-4 and $191 million for Gemini Ultra. This leads to a ‘two-tier democracy’ in AI research.

    On the one hand, any researcher with a grant can access world-class AI tools. This is a democratisation of application. On the other hand, the ability to train a new large-scale foundation model from scratch remains the exclusive domain of a handful of actors: the cloud providers themselves and their key partners.

    This is the centralisation of creation. The cloud ‘democratises’ AI in the same way that a public library democratises access to books. Anyone can read them, but only a few have the resources to write and publish them.
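    The scale of this two-tier split can be sketched with rough arithmetic. The hourly GPU rate below is a purely hypothetical assumption for illustration (not a figure from this article); only the $78 million GPT-4 training estimate comes from the text above:

    ```python
    # Illustrative only: the hourly rate is a hypothetical assumption,
    # not a figure from the article.
    RATE_PER_GPU_HOUR = 2.0          # assumed cloud price, USD/GPU-hour

    # "Democratisation of application": a lab renting 8 GPUs for one week.
    lab_cost = 8 * 24 * 7 * RATE_PER_GPU_HOUR

    # "Centralisation of creation": the quoted ~$78M GPT-4 training
    # estimate, expressed as GPU-hours at the same hypothetical rate.
    frontier_budget = 78_000_000
    frontier_gpu_hours = frontier_budget / RATE_PER_GPU_HOUR

    print(f"One-week, 8-GPU rental: ${lab_cost:,.0f}")
    print(f"$78M at the same rate: {frontier_gpu_hours:,.0f} GPU-hours")
    print(f"Budget ratio: {frontier_budget / lab_cost:,.0f}x")
    ```

    Even under these crude assumptions, the frontier-training budget exceeds the small lab’s weekly spend by four orders of magnitude – the library card versus the printing press.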

    A future written in silicon

    The breathtaking pace of scientific discovery in medicine, climatology and materials science is a direct consequence of the massive industrial and geopolitical mobilisation around a single technology: the AI accelerator.

    Progress has become fragile and deeply interdependent. A scientific breakthrough is no longer just the product of a brilliant mind. It now also depends on the quarterly financial reports of NVIDIA and AMD, the trade policies enacted in Washington and Beijing, the stability of the supply chain passing through Taiwan and the pricing models of AWS, Google and Microsoft.

    We have entered an era where the future is literally written in silicon. The great challenges of our time – curing disease, fighting climate change, creating a sustainable future – will be solved with these new tools.

    But who will be able to wield them, and for what purpose, remains the most important and unresolved question of the 21st century. The next great scientific revolution will be televised live, but the rights to broadcast it are currently being negotiated in the boardrooms of corporations and the corridors of global power.