Category: Column

Our columnists provide opinionated, authoritative commentary on major events and trends in the IT industry and sales channel. This is the place for critical insights and industry analysis.

  • Why PCHE technology is key to the next stage of artificial intelligence development

    If it seems like the semiconductor market is back in the spotlight, that’s because it really is. ASML, the world’s leading supplier of photolithography systems, has seen its share value rise by around 97% over the last six months, reflecting renewed investment in chip manufacturing. Behind the headlines, however, lies a less high-profile and perhaps equally important issue: managing the heat generated both during chip production and by the AI equipment that depends on those chips, explains Ben Kitson, director of business development at chemical etching specialist Precision Micro.

    The current cycle is atypical. Technology giants are pouring huge resources into AI data centres, generating unprecedented demand for high-performance hardware. What’s more, much of this computing hardware has already been contracted, according to Simply Wall St.

    This combination poses a real challenge for infrastructure planning, as AI system operators face high power density and unprecedented cooling requirements in their data centres.

    Traditional data centres were designed for racks with power consumption of 5-10 kW, but AI clusters now consume 30-50 kW per rack. Furthermore, advanced GPU and accelerator platforms are now reaching 100-120 kW per rack, meaning that air cooling alone is no longer sufficient.
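
    To put those figures in perspective, a back-of-envelope sensible-heat calculation – using standard air properties and an assumed 15 K supply-to-return temperature rise, not figures from the article – shows how quickly the required airflow becomes impractical:

    ```python
    # Rough check of how much air a rack needs to carry away its heat load,
    # using the sensible-heat relation Q = m_dot * cp * dT.
    # All figures below are illustrative assumptions, not from the article.

    AIR_DENSITY = 1.2   # kg/m^3, air at roughly room temperature
    AIR_CP = 1005       # J/(kg*K), specific heat capacity of air
    DELTA_T = 15        # K, assumed supply-to-return temperature rise

    def airflow_for_rack(power_kw: float) -> float:
        """Volumetric airflow (m^3/s) needed to remove power_kw of heat."""
        mass_flow = power_kw * 1000 / (AIR_CP * DELTA_T)  # kg/s
        return mass_flow / AIR_DENSITY                    # m^3/s

    for rack_kw in (10, 50, 120):
        flow = airflow_for_rack(rack_kw)
        # 1 m^3/s is roughly 2,119 cubic feet per minute (CFM)
        print(f"{rack_kw:>4} kW rack -> {flow:4.1f} m^3/s (~{flow * 2119:,.0f} CFM)")
    ```

    At around 120 kW per rack the calculation gives roughly 6–7 cubic metres of air per second for a single rack, which is why liquid cooling takes over at these densities.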

    Thermal management at the forefront

    Thermal constraints are finally starting to attract attention. In May 2025, semiconductor giant Nvidia announced that hyperscale operators are installing tens of thousands of its latest GPUs every week, and the pace of deployment is set to accelerate further with the introduction of the ‘Blackwell Ultra’ platform.

    According to the company’s public roadmap, its next ‘Rubin Ultra’ architecture will allow more than 500 GPUs to be housed in a single server rack drawing up to 600 kW, highlighting the scale of the cooling challenge now facing artificial intelligence infrastructure.

    Across the AI infrastructure sector, thermal stability has become a key constraint not only in chip design, but also in the infrastructure required to power and cool high-density computing environments.

    High-performance liquid cooling systems and microchannel heat exchangers have ceased to be niche solutions and have become essential components. The same engineering principles – precise control of fluid flow, maximisation of heat transfer and production of compact components with tight tolerances – apply to many applications today.

    The engineering expertise gained in high-precision semiconductor environments is now being applied to printed circuit heat exchanger (PCHE) technology for AI data centres, which is the interface between electronics manufacturing and energy infrastructure.

    Why PCHE systems matter

    PCHE systems are not just a more advanced version of conventional designs such as shell-and-tube or plate-and-frame heat exchangers. They are smaller, lighter and more efficient, making them ideal for space-constrained and high-density installations.

    In data centres, this translates into a higher number of racks per square metre without compromising reliability, while at the same time reducing the energy required to cool the computing equipment.
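
    To get an intuition for why microchannel geometries pack so much heat transfer surface into a small envelope, here is a rough geometry comparison. The dimensions are illustrative assumptions, not Precision Micro specifications:

    ```python
    # Heat transfer surface area per unit of flow volume: a semicircular
    # etched microchannel versus a conventional round tube.
    # Dimensions are illustrative only.
    import math

    def tube_area_per_volume(diameter_m: float) -> float:
        """Internal surface area / flow volume for a round tube = 4 / D."""
        return 4 / diameter_m

    def semicircular_channel_area_per_volume(diameter_m: float) -> float:
        """Wetted perimeter / cross-sectional area for a semicircular channel."""
        r = diameter_m / 2
        perimeter = math.pi * r + 2 * r        # curved arc plus flat face
        area = math.pi * r ** 2 / 2
        return perimeter / area

    tube = tube_area_per_volume(0.020)                      # 20 mm tube
    channel = semicircular_channel_area_per_volume(0.002)   # 2 mm etched channel
    print(f"20 mm tube:        {tube:,.0f} m^2 of surface per m^3 of flow")
    print(f"2 mm microchannel: {channel:,.0f} m^2 of surface per m^3 of flow")
    print(f"ratio: ~{channel / tube:.0f}x more surface in the same flow volume")
    ```

    That gap in surface-to-volume ratio is the basic reason a PCHE can match the duty of a much larger shell-and-tube unit.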

    Energy efficiency is another factor, as AI workloads are predicted to drive a significant increase in data centre electricity demand. Goldman Sachs forecasts growth of as much as 165% by 2030, meaning that every watt of energy used for cooling counts.

    Compact, high-performance PCHEs not only save installation space, but also help control energy costs and improve power usage effectiveness (PUE), making them a key component of high-density AI infrastructure in hyperscale environments.
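
    PUE itself is a simple ratio – total facility power divided by the power delivered to the IT equipment – so every kilowatt saved on cooling moves it closer to the ideal of 1.0. A minimal sketch with invented loads:

    ```python
    # Power usage effectiveness (PUE) = total facility power / IT equipment power.
    # The loads below are invented, purely to show how cooling overhead moves the ratio.

    def pue(it_kw: float, cooling_kw: float, other_overhead_kw: float) -> float:
        """PUE for a facility; 1.0 would mean zero overhead."""
        return (it_kw + cooling_kw + other_overhead_kw) / it_kw

    it_load = 100 * 50   # a hall of 100 racks at 50 kW each = 5 MW of IT load
    print(f"Air-cooled estimate:    PUE = {pue(it_load, 2000, 500):.2f}")
    print(f"Liquid-cooled estimate: PUE = {pue(it_load, 750, 500):.2f}")
    ```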

    Scaling up chemical etching

    The very qualities that make PCHEs so effective – microchannels, large heat transfer area and tight tolerances – simultaneously make them difficult to manufacture. Conventional machining allows prototyping, but is slow, causes burrs and is not cost-effective for volume production.

    Chemical etching, on the other hand, eliminates these problems by creating all the channels simultaneously across the entire surface of the plate. This produces precise, stress-free structures; the etched plates are then stacked and diffusion-bonded to form the finished heat exchanger core.

    Chemical etching company Precision Micro has been producing PCHE plates since the technology was introduced to the market in the 1990s. It has a specialist 4,100 sq m facility capable of processing thousands of plates – each up to 1.5 metres long and up to 2 mm thick – every week. This enables batch production of etched plates and makes the facility one of the largest sheet-etching centres of its kind in the world.

    Scale matters because producing thousands of plates a week requires tightly controlled chemical processes and rigorous quality control. Few suppliers in the world have the expertise, production capacity and process control needed to mass-produce etched PCHE plates.

    Pressure on the supply chain

    Producing PCHE plates in high volumes requires significant capital investment and advanced process technology. Although new production capacity is emerging in Asian markets, many OEMs in Europe and North America continue to emphasise reliability, process repeatability and quality as key criteria when sourcing precision components.

    Working with established regional partners can reduce logistical complexity, improve intellectual property protection and ensure consistent quality, especially as supply chains increasingly look to local suppliers for core competencies.

    Etched flow plates and high-performance heat exchangers are an essential, but often invisible, part of the AI ecosystem. Through precise temperature control, they help data centres maintain high-density computing racks without the risk of overheating and enable reliable and efficient scalability of AI infrastructure.

    This is the hidden reality behind the renewed increase in investment in chip manufacturing. Innovation is not driven solely by smaller transistors, new node geometries or more efficient GPUs; it also depends on the physical infrastructure that enables these technologies to operate reliably at industrial scale.

    PCHEs may not attract as much attention as processors or artificial intelligence models, but they underpin the performance, efficiency and scalability of both. Where every watt of energy and every fraction of a degree of temperature counts, precision thermal hardware is quietly enabling the progress of one of the fastest growing technology cycles of the last decade.

    Source: Precision Micro

  • From the Big Bang to the speed of light: the AI revolution is underway

    In 2023, we witnessed the Big Bang of technology – a year in which artificial intelligence ushered in a new era of innovation and transformation. In 2025, generative AI went mainstream, and agent-based AI took the stage. Most importantly, real returns on investment began to emerge for large companies such as Dell Technologies.

    In 2026, the story of artificial intelligence is accelerating. AI will redesign the entire structure of businesses and industries. It will drive new ways of doing things, building and innovating at a scale and pace previously unimaginable.

    Understanding these changes is essential, as those who invest today in a robust, flexible technology base and benefit from a network of partner ecosystems will be ready to manage the rapid changes to come.

    1. Time to act: principles governing a dynamic ecosystem

    With the acceleration of artificial intelligence comes a degree of volatility. While we anticipate that the governance framework will eventually stabilise the ecosystem, today’s reality is a call to action.

    Governance is currently the biggest source of delay, and it is a critical problem on which little progress is being made. The industry has rushed to bring valuable artificial intelligence tools such as chatbots and agents into production, but we have done so without sufficient governance.

    This is not only risky, but unsustainable. By next year, robust frameworks and private environments will be needed to ensure stability and control. Running models locally – on an organisation’s own servers or in controlled AI factories – will become the norm, providing a stable foundation and insulating organisations from external disruption.

    But this is more than a forecast. It is an urgent appeal. We need to focus more on governance. Without this, we will end up with uncertainty that will slow down the implementation of practical and valuable artificial intelligence for businesses.

    Our concrete demand of the public and private sectors is to create governance rules for the enterprise market in collaboration with the real players in that market – enterprises and business technology providers.

    We cannot assume that managing public AI or AGI chatbots is the same as helping businesses shape the actual application of artificial intelligence in their companies and processes.

    Governance is not about slowing down innovation. It is about building a protective framework that allows us all to accelerate in a safe and sustainable way.

    2. Data management: the real foundation of innovation in artificial intelligence

    The next big leap in artificial intelligence will not just come from more powerful algorithms. It will come from the way we manage, enrich and use our data. As artificial intelligence systems become more complex, the quality and availability of the data they use is paramount.

    In 2026, AI-based data management and storage will become the undisputed foundation of all AI innovations.

    AI infrastructure is different from classic IT systems. It focuses on accelerated computing, advanced networking adapted to AI, new user interfaces and, most importantly, a new layer of knowledge from data that drives its results.

    Purpose-built AI data platforms, designed to integrate disparate data sources, protect new artefacts and provide the efficient storage needed to support them, will become essential. Partner ecosystems can help unlock the potential of these purpose-built platforms, with partners using their expertise to integrate and optimise data management solutions for enterprise AI.

    The ability to feed clean, structured and relevant data into artificial intelligence models effectively is crucial. However, as we enter the era of agent-based AI, this data will no longer be used solely to train large models. Instead, it will be a dynamic resource at inference time, enabling the generation of evolving knowledge and intelligence in real time. This foundational layer of data is the starting point for everything that comes next.
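
    In practice, ‘data as a dynamic resource at inference time’ often means a retrieval step that pulls relevant enterprise records into the model’s context before it answers. The sketch below is a minimal, hypothetical illustration: the embed() function is a stand-in for a real embedding model, and the records and query are invented.

    ```python
    # Minimal sketch: use enterprise data at inference time rather than training
    # time by retrieving the most relevant records and handing them to the model
    # as context. embed() is a placeholder; real systems call an embedding model.
    import math

    def embed(text: str) -> list[float]:
        """Toy embedding based on character codes, for illustration only."""
        vec = [0.0] * 64
        for i, ch in enumerate(text.lower()):
            vec[i % 64] += ord(ch) / 1000.0
        norm = math.sqrt(sum(v * v for v in vec)) or 1.0
        return [v / norm for v in vec]

    def cosine(a: list[float], b: list[float]) -> float:
        return sum(x * y for x, y in zip(a, b))

    records = [
        "Line 3 preventive maintenance is scheduled for the Friday night shift.",
        "Supplier X has flagged a two-week delay on controller boards.",
        "Q3 energy audit: compressor room exceeds its consumption target by 12%.",
    ]
    index = [(text, embed(text)) for text in records]

    def retrieve(query: str, k: int = 2) -> list[str]:
        """Return the k stored records most similar to the query."""
        q = embed(query)
        ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

    context = retrieve("Why might Friday's production schedule slip?")
    prompt = "Answer using this context:\n" + "\n".join(context)
    print(prompt)   # this prompt would then be passed to the model
    ```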

    3. Agent AI: the new business continuity manager

    What comes next is agent-based artificial intelligence: an evolution that transforms AI from a helpful assistant into an integral manager of long-term, complex processes.

    In areas such as manufacturing and logistics, artificial intelligence agents will not just assist individual workers; they will help coordinate their activities. Using rich, dynamic data streams, they will ensure continuity between shifts, optimise workflows in real time and create new levels of operational efficiency.

    Imagine an artificial intelligence agent scaling the capabilities of process managers on the shop floor, adjusting production schedules based on supply chain disruptions or guiding a new employee through a complex task. By positioning AI agents as intermediaries between a team’s goals and its employees, we are elevating team coordination across all sectors to unprecedented levels.

    These intelligent agents will become the nervous system of modern operations, ensuring resilience and progress. Like any other AI capability, they rely on enterprise data to create a unique store of knowledge and intelligence that must be properly stored and protected.

    4. Artificial intelligence factories redefine resilience and disaster recovery

    The more deeply AI is integrated into a company’s core functions, the more non-negotiable business continuity becomes.

    Artificial intelligence infrastructure will evolve to prioritise operational resilience, redefining the meaning of disaster recovery in an AI-driven world. The focus is not just on backing up systems, but on ensuring AI functionality, even if the underlying systems go offline. This includes protecting vectorised data and other unique artefacts, so that system intelligence can survive any disruption.
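
    In concrete terms, protecting vectorised data means treating the embedding index itself as an artefact to be backed up into an isolated location and restored after a disruption. A minimal sketch, with invented records and a local directory standing in for a real recovery vault:

    ```python
    # Toy illustration: persist a vectorised index so the 'knowledge' can be
    # rebuilt even if the system that produced it goes offline.
    # Records, vectors and the vault path are all invented for illustration.
    import json
    from pathlib import Path

    index = [
        {"text": "Pump P-101 vibration exceeded threshold on 12 May.",
         "vector": [0.12, -0.33, 0.97]},
        {"text": "Firmware 4.2 rollout completed across all edge gateways.",
         "vector": [0.45, 0.08, -0.61]},
    ]

    vault = Path("recovery_vault")            # stand-in for an isolated vault
    vault.mkdir(exist_ok=True)
    backup_file = vault / "embedding_index.json"

    backup_file.write_text(json.dumps(index))        # back up the artefact

    restored = json.loads(backup_file.read_text())   # restore after a disruption
    assert restored == index
    print(f"Restored {len(restored)} vectorised records from {backup_file}")
    ```

    In production the same idea applies to much larger indexes, with access-controlled vaults and immutable storage in place of a local folder.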

    Achieving this requires innovation across the AI value chain, from data protection and cyber security companies to key AI technology providers. Collaborative ecosystems include governments, partners and large-scale AI innovators. They must work together to build resilient factories that bring together the tools and expertise needed to ensure continuity and secure critical functions in hybrid cloud environments.

    5. Sovereign artificial intelligence accelerates development of national enterprise infrastructure

    Artificial intelligence is central to national interests, which is why we are seeing the rapid development of sovereign artificial intelligence ecosystems. Countries are no longer just consumers of AI technology, they are actively building their own frameworks to drive local innovation and maintain digital autonomy.

    This is changing the way artificial intelligence infrastructure is planned, with computing, data storage and management playing a key role in protecting and locating sensitive information.

    Businesses will increasingly adapt to this framework, scaling their operations within regional boundaries. By storing data locally, governments can shape public services such as healthcare, and businesses can use national infrastructure while aligning business objectives with national industrial policy.

    This creates innovations with a direct impact on citizens and economies, and represents a fundamental shift that moves artificial intelligence from a global concept to a concrete, local reality.

    Setting the course for 2026

    In 2026, the artificial intelligence revolution is not slowing down, but accelerating. What started with the Big Bang has reached the speed of light, and leading organisations are evolving and adapting to change just as fast.

    To succeed, you don’t need to chase every breakthrough. It’s better to build an infrastructure that can keep up with these changes: resilient AI factories, sovereign frameworks, agent systems that manage complex operations and collaborative ecosystems that turn innovation into real business impact. The tools and information are available. It is the readiness to act that already sets leaders apart from the rest.

    Leadership and concrete action will determine who reaps the real rewards. The future is rushing by at the speed of light. The question is: are we ready?

    By John Roese, Global Chief Technology Officer and Chief AI Officer at Dell Technologies

  • Pragmatism versus hype: How ‘agent washing’ and hallucinations brought AI down to earth

    After two years of fascination with generative AI, the technology industry is entering the reality-check stage. Enthusiasm is colliding with hard reality, and statistics pointing to low levels of AI adoption in many economies are bringing us back down to earth.

    This year’s Dell Technologies Forum in Warsaw was a good example of this. As Dariusz Piotrowski aptly summarised it, the key dogma nowadays is: ‘AI follows the data, not the other way around’. It is no longer the algorithms that are the bottleneck. The real challenge for business is access to clean, secure and well-structured data. The discussion has definitely moved from the lab to the operational back office.

    AI follows the data

    We have long lived under the belief that the key to the revolution is an ever more perfect algorithm. That myth is now collapsing. Internal case studies at major technology companies show that implementing an internal AI tool is usually not a problem of the model itself, but of months of painstaking work to… organise and provide access to distributed data.

    This raises an immediate consequence: computing power must move to where the data originates. Instead of sending terabytes of information to a central cloud, AI must start operating ‘at the edge’ (Edge AI).

    The most visible manifestation of this trend is the birth of the AI PC era. With dedicated processors (NPUs), PCs are expected to handle AI tasks locally. This is not a marketing gimmick, but a fundamental change in architecture, and it is all about security and privacy: key data no longer needs to leave the desk to be processed.

    Of course, this puzzle won’t work without hard foundations. Since data is so critical, the cyber security landscape is changing. The number one target of attack is no longer production systems, but backups. This is why ‘digital bunkers’ (restore vaults) – guaranteeing access to ‘uncontaminated’ data – are becoming the absolute foundation of any serious AI strategy.

    Pragmatism versus ‘agent washing’

    In this red-hot market, how do you distinguish real value from marketing illusion? After the wave of fascination with GenAI, the industry’s new ‘holy grail’ is ‘AI agents’. However, we must beware of ‘agent washing’ – the packaging of old algorithms in a shiny new box with a trendy label.

    Business is beginning to understand that the chaotic ‘bottom-up’ approach leads nowhere. As Said Akar of Dell Technologies frankly admitted, the company initially put together ‘1,800 use cases’ of AI, which could have become a simple path to paralysis. Therefore, the strategy was changed to a hard ‘top-down’ approach: finding a real business problem, defining a measurable return on investment (ROI) and only then selecting tools.

    This leads directly to a global trend: a shift away from the pursuit of a single, giant overall model (AGI) to ‘Narrow AI’. This trend combines with the growing need for digital sovereignty. States and key sectors (such as finance or administration) cannot afford to be dependent on a few global providers. Hence the growing trend of investing in local models that allow for greater control.

    Hype versus hallucination

    When the dust settles, it turns out that the great technological race is no longer just about making models know more. It’s about making them… make up less often. The biggest technical and business problem remains hallucinations.

    The dominant – and only viable – operating model is becoming ‘human-in-the-loop’, i.e. keeping a human in the decision process. In regulated industries, no one in their right mind will allow a machine to ‘pull the lever’ on its own. As mBank’s Agnieszka Słomka-Gołębiowska aptly pointed out, financial institutions are in the ‘business of trust’, and the biggest risk of AI is ‘bias’, which cannot be fully controlled within the model itself.

    Artificial intelligence is set to become a powerful collaborator that takes over the thankless tasks. But the final, strategic decision rests with humans. The real revolution is pragmatic, happens ‘at the edge’ and is meant to help with work, not take it away.

  • Microsoft is playing for a long position in AI. Opening up to Grok marks the beginning of the end of the OpenAI monopoly

    At the Build 2025 conference, Microsoft announced that Grok, the language model developed by Elon Musk’s start-up xAI, would be made available on its Azure cloud platform. While this may seem like just another step in the expansion of its AI offering, it actually signals a significant change of course. Microsoft is betting on openness towards a variety of artificial intelligence providers – including those that may compete with its strategic partners, such as OpenAI.

    The move opens up new opportunities for Azure customers, but also raises questions about the future of Microsoft’s entire cloud ecosystem. Is openness an asset at a time of dominance by a few big AI players, or a strategic risk?

    From Copilot to Grok: Microsoft seeks balance

    Over the past few years, Microsoft has been building its image as a leader in the field of generative AI, largely based on its close collaboration with OpenAI. GPT-4 models drive a number of the company’s products, from Microsoft 365 to developer tools. In this context, the arrival of Grok in Azure is a signal that the company does not want to be held hostage to a single vendor.

    xAI, founded by Elon Musk, presents Grok as an alternative to what it regards as the overly constrained models of other companies. The model itself has gained notoriety for, among other things, its integration with X (formerly Twitter), but its arrival on Azure is more than just another integration. Microsoft is signalling that it does not want to be associated with just one approach to AI – and that the Azure platform is intended to be a space for multiple perspectives.

    Diversity as an advantage … and a challenge

    From the point of view of business customers, this is good news. Different AI models offer different strengths, and being able to choose can bring real benefits – a better fit for a given industry, domain language, operational cost profile or data processing policy. Companies increasingly want options: not just ‘GPT or nothing’, but, for example, Grok for fast social media processing, Mistral for offline work and Claude for document analysis.

    However, openness is not free. Managing multiple models in parallel on a single cloud infrastructure generates complexity – especially in terms of security, visibility and regulatory compliance. What is flexibility for some may be the beginning of chaos for others.

    Ecosystem under pressure

    Microsoft promotes GPT-based Copilots on the one hand, while making competing solutions – such as Grok – available on the other. This dual-track approach can create tensions with both partners and end customers. What will happen to integrators and providers of OpenAI-only solutions? Will they be forced to adapt to the ‘new pluralism’, or will they start looking for more closed environments?

    From an end-user perspective, this can also lead to a fragmented experience. When different tools work with different AI models, there is a question of consistency of results, data security and control over the flow of information.

    Security: a new front line

    The biggest challenge, however, relates to security. Every new model in the Azure ecosystem is a new attack vector – not necessarily due to maliciousness on the part of the developers, but through lack of standardisation, configuration imperfections and limited transparency.

    The multi-model AI environment in the cloud means that it is not always clear who is processing the data, how and for what purpose. The line between legitimate and covert use of AI is becoming increasingly difficult to grasp. Companies that don’t have the right tools to inspect, audit and detect anomalies may not even know that their data has ended up in a model they never validated.

    This is forcing organisations to redefine their security strategy. Traditional approaches – such as firewalls or simple DLP systems – are no longer sufficient. What is needed are zero-trust architectures, advanced behavioural analysis mechanisms and least privilege policies that cover not only people but also machines.
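
    One concrete form of ‘least privilege that covers machines as well as people’ is a deny-by-default policy in which model endpoints are identities in their own right. The sketch below is purely illustrative: the identities, datasets and rules are invented, and a real deployment would rely on the cloud provider’s IAM and policy engine rather than application code.

    ```python
    # Deny-by-default access check that treats AI model endpoints as principals.
    # All identities, datasets and rules are invented for illustration.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Principal:
        name: str
        kind: str   # "human" or "machine"

    # Allow-list: (principal name, dataset) -> permitted actions
    POLICY = {
        ("grok-endpoint", "marketing_posts"): {"read"},
        ("copilot-endpoint", "contracts"): {"read"},
        ("data-engineer-anna", "contracts"): {"read", "write"},
    }

    def is_allowed(principal: Principal, dataset: str, action: str) -> bool:
        """Grant only what the policy explicitly lists; everything else is denied."""
        return action in POLICY.get((principal.name, dataset), set())

    grok = Principal("grok-endpoint", "machine")
    print(is_allowed(grok, "marketing_posts", "read"))   # True
    print(is_allowed(grok, "contracts", "read"))         # False: never granted
    ```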

    Will Microsoft become a ‘marketplace’ for AI?

    The opening to Grok may be the harbinger of a wider trend – Microsoft may be looking to make Azure something like an ‘App Store’ for AI models. The customer chooses which model they want to use and Microsoft provides the infrastructure, access and integration.

    On the one hand, it’s an interesting business model – Microsoft doesn’t need to invest in its own LLMs as much if it creates an open platform with models from other companies. On the other – it requires strong quality, security and compliance controls, without which such a platform will quickly turn into a minefield.

    The question is: will users trust a platform that gives freedom of choice but shifts some responsibility to the customer?

    Openness is the future – but it requires maturity

    Opening up Azure to alternative AI models is a logical step towards the democratisation of artificial intelligence. Microsoft wants its cloud to be a place where any model can be used, tailored to specific needs.

    But the greater the diversity, the greater the need for order. Companies must not only choose the best models, but also understand how these models work, what data they process and what risks they pose. Without this, openness will turn into uncontrolled exposure.

    Microsoft is playing many instruments at once these days. The question is whether it will manage to hold the tune – or whether the performance will descend into noise.