Category: Opinions

Opinions is a space for experts, industry leaders and editors to comment on the most important developments and trends in the IT sector and sales channel. We publish columns, analysis and interviews on technology, the market, regulation and the strategies of vendors, integrators and distributors.

  • Why PCHEs are key to the next stage of artificial intelligence development

    Why PCHEs are key to the next stage of artificial intelligence development

    If it seems like the semiconductor market is back in the spotlight, that’s because it really is. ASML, the world’s leading supplier of photolithography systems, has seen its share price rise by around 97% over the last six months, reflecting renewed investment in chip manufacturing. Behind the headlines, however, is a less high-profile but perhaps equally important issue: managing the heat generated both during chip production and by the AI equipment that depends on those chips, explains Ben Kitson, director of business development at chemical etching specialist Precision Micro.

    The current cycle is atypical. Technology giants are pouring huge resources into AI data centres, generating unprecedented demand for high-performance hardware. What’s more, much of this computing hardware has already been contracted, according to Simply Wall St.

    This combination poses a real challenge for infrastructure planning, as AI system operators face high power density and unprecedented cooling requirements in their data centres.

    Traditional data centres were designed for racks with power consumption of 5-10 kW, but AI clusters now consume 30-50 kW per rack. Furthermore, advanced GPU and accelerator platforms are now reaching 100-120 kW per rack, meaning that air cooling alone is no longer sufficient.
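
    To put these densities in context, here is a rough, illustrative heat-balance sketch in Python. The air properties and the 12 K supply-to-return temperature rise are assumptions (not from the article), used only to show the order of magnitude of airflow that air cooling would have to deliver per rack.

    ```python
    # Rough, illustrative estimate of the airflow needed to remove rack heat with air only.
    # Assumed values (not from the article): air density ~1.2 kg/m^3,
    # specific heat ~1005 J/(kg*K), and a 12 K supply-to-return temperature rise.
    AIR_DENSITY = 1.2   # kg/m^3
    AIR_CP = 1005.0     # J/(kg*K)
    DELTA_T = 12.0      # K

    def airflow_m3_per_s(rack_power_kw: float) -> float:
        """Volumetric airflow required to carry away the rack's heat load."""
        mass_flow = rack_power_kw * 1000.0 / (AIR_CP * DELTA_T)  # kg/s
        return mass_flow / AIR_DENSITY                           # m^3/s

    for power in (10, 50, 120):  # kW per rack: legacy, AI cluster, GPU platform
        flow = airflow_m3_per_s(power)
        print(f"{power:>4} kW rack -> {flow:5.2f} m^3/s of air (~{flow * 2119:,.0f} CFM)")
    ```

    At 100+ kW per rack the implied airflow runs to many cubic metres per second, which is why the article points to liquid cooling and high-performance heat exchangers instead.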

    Thermal management at the forefront

    Thermal constraints are finally starting to attract attention. In May 2025, semiconductor giant Nvidia announced that hyperscale operators are installing tens of thousands of its latest GPUs every week, and the pace of deployment is set to accelerate further with the introduction of the ‘Blackwell Ultra’ platform.

    According to the company’s public roadmap, its next ‘Rubin Ultra’ architecture will allow more than 500 GPUs to be housed in a single server rack drawing up to 600 kW, highlighting the scale of the cooling challenges currently facing artificial intelligence infrastructure.

    Across the AI infrastructure sector, thermal stability has become a key constraint not only in chip design, but also in the infrastructure required to power and cool high-density computing environments.

    High-performance liquid cooling systems and microchannel heat exchangers have ceased to be niche solutions and have become essential components. The same engineering principles – precise control of fluid flow, maximisation of heat transfer and production of compact components with tight tolerances – apply to many applications today.

    The engineering expertise gained in high-precision semiconductor environments is now being applied to printed circuit heat exchanger (PCHE) technology for AI data centres, which sits at the interface between electronics manufacturing and energy infrastructure.

    Why PCHE systems matter

    PCHE systems are not just a more advanced version of conventional designs such as shell-and-tube or plate-and-frame heat exchangers. They are smaller, lighter and more efficient, making them ideal for space-constrained and high-density installations.

    In data centres, this translates into a higher number of racks per square metre without compromising reliability, while at the same time reducing the energy required to cool the computing equipment.

    Energy efficiency is another factor, as AI workloads are predicted to cause a significant increase in global electricity demand. Goldman Sachs forecasts an increase of up to 165% by 2030, meaning that every watt of energy used for cooling counts.

    Compact, high-performance PCHEs not only save installation space, but also help control energy costs and improve power usage effectiveness (PUE), making them a key component of high-density AI infrastructure in hyperscale environments.
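
    PUE itself is a simple ratio – total facility energy divided by the energy consumed by the IT equipment – so a quick sketch with purely hypothetical figures shows how reducing cooling overhead improves it:

    ```python
    # Power Usage Effectiveness (PUE) = total facility energy / IT equipment energy.
    # The figures below are hypothetical, only to illustrate the direction of the effect.
    def pue(it_mwh: float, cooling_mwh: float, other_overhead_mwh: float) -> float:
        return (it_mwh + cooling_mwh + other_overhead_mwh) / it_mwh

    it_load = 1000.0  # MWh drawn by the IT equipment over some period
    print(pue(it_load, cooling_mwh=450.0, other_overhead_mwh=100.0))  # 1.55, less efficient cooling
    print(pue(it_load, cooling_mwh=200.0, other_overhead_mwh=100.0))  # 1.30, more efficient cooling
    ```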

    Scaling up chemical etching

    The very qualities that make PCHEs so effective – microchannels, large heat transfer area and tight tolerances – simultaneously make them difficult to manufacture. Conventional machining allows prototyping, but is slow, causes burrs and is not cost-effective for volume production.

    Chemical etching, on the other hand, eliminates these problems by creating all the channels simultaneously across the entire surface of the plate. This produces precise, stress-free structures; the etched plates are then stacked and diffusion-bonded to form the finished heat exchanger core.

    Chemical etching company Precision Micro has been producing PCHE plates since the technology was introduced to the market in the 1990s. Its specialist 4,100 sq m facility can process thousands of plates, up to 1.5 metres long and up to 2 mm thick, each week. This enables batch production of etched plates and makes the facility one of the largest sheet etching centres of its kind in the world.

    Scaling production to thousands of plates per week requires tightly controlled chemical processes and rigorous quality control. Few suppliers in the world have the expertise, production capacity and process control needed to mass-produce etched PCHE plates.

    Pressure on the supply chain

    Producing PCHE boards in high volumes requires significant capital investment and advanced technological processes. Although new production capacity is emerging in Asian markets, many OEMs in Europe and North America continue to emphasise reliability, process repeatability and quality as key criteria when sourcing precision components.

    Working with established regional partners can reduce logistical complexity, improve intellectual property protection and ensure consistent quality, especially as supply chains increasingly look to source core capabilities locally.

    Etched flow plates and high-performance heat exchangers are an essential, but often invisible, part of the AI ecosystem. Through precise temperature control, they help data centres maintain high-density computing racks without the risk of overheating and enable reliable and efficient scalability of AI infrastructure.

    This is the hidden reality behind the renewed increase in investment in chip manufacturing. Innovation is not driven solely by smaller transistors, new node geometries or more efficient GPUs. It also depends on the physical infrastructure that enables these technologies to operate reliably at industrial scale.

    PCHEs may not attract as much attention as chips or artificial intelligence models, but they underpin the performance, efficiency and scalability of both. Where every watt of energy and every fraction of a degree counts, precision thermal hardware is quietly enabling the progress of one of the fastest-growing technology cycles of the last decade.

    Source: Precision Micro

  • Between stabilisation and uncertainty. Polish business in the shadow of global tensions and the MPC decision

    Between stabilisation and uncertainty. Polish business in the shadow of global tensions and the MPC decision

    Polish business today operates in a reality best described as stabilisation combined with uncertainty. On the one hand, the Monetary Policy Council’s decision to maintain interest rates provides predictability in the cost of money; on the other, growing geopolitical tensions and a volatile economic environment increase the risk of doing business. In this puzzle, liquidity support tools play an important role. This is confirmed by data from the Polish Factors Association: in the first quarter of 2026, factoring companies financed receivables worth approximately PLN 131 billion, an increase of nearly 10% year-on-year, and the service is already used by around 27,000 enterprises.

    The Monetary Policy Council’s decision to keep interest rates unchanged is part of a broader picture of caution currently dominating economic and financial policy. In an environment of heightened geopolitical uncertainty, related among other things to tensions around the Persian Gulf and the state of relations between Iran and the United States, central banks are increasingly opting for a wait-and-see strategy rather than quick reactions. This is also the direction signalled by the National Bank of Poland, which emphasises the importance of incoming data and the volatility of the external environment.

    Risk of sudden changes

    For Polish business, this means operating in an environment where key economic parameters remain relatively stable, but at the same time are subject to significant risks of sudden changes. Of particular importance here is the energy commodity market, whose sensitivity to events in the Middle East remains high. Possible disruptions in the supply of oil or gas can quickly translate into operating costs for companies, affecting both production prices and inflation levels. From this point of view, the decision to hold off on further interest rate changes can be read as an attempt to maintain a balance between controlling inflation and supporting economic activity.

    In such an environment, the increasing use of factoring is no coincidence. Data from the Polish Factors Association shows that companies are increasingly treating it not just as a source of financing, but as an element of risk and liquidity management. More than 7 million financed invoices in the first quarter of the year is a clear sign that companies are actively shortening their cash turnover cycle and protecting themselves against payment delays.

    At the same time, the stabilisation of rates does not mean a return to the predictability familiar from earlier years. Companies operating in Poland still have to take into account scenarios in which external factors change business conditions in a short period of time. This applies not only to financing costs, but also to exchange rates, availability of capital or liquidity in supply chains. With global tensions likely to affect transport and raw material prices, the importance of flexible financial management and ongoing cash flow control is growing.

    Working capital requirements

    It is no coincidence, then, that the dynamic growth of factoring is concentrated in the manufacturing and distribution sectors, where an increase in sales means a greater need for working capital at the same time. The ability to immediately release funds from issued invoices allows companies to settle their own obligations on time and to safely develop their business, even in conditions of increased volatility.

    The scenario of further monetary easing seems to have been pushed back, and any decisions will depend on the path of inflation. This means that companies should not count on a rapid fall in the cost of money to improve their financial situation. What matters more is the ability to adapt to persistent uncertainty and to hedge risks arising from global dependencies, including through instruments such as factoring.

    More broadly, the current situation shows how strongly the Polish economy is linked to processes outside Europe. Even limited tensions in key regions for global trade and energy can affect the decisions of domestic institutions and the condition of companies. In this context, the importance of tools and strategies that allow companies to maintain operational stability despite a volatile environment and build resilience to external shocks is growing.

  • Wojciech Janusz, Dell Technologies: 2026 is the time to settle the effects of AI, not to buy promises

    Wojciech Janusz, Dell Technologies: 2026 is the time to settle the effects of AI, not to buy promises

    Artificial intelligence is ceasing to be merely a conversational tool and is becoming a technology that actually does parts of our jobs and closes business processes. In an interview with Wojciech Janusz, EMEA Data Science & AI Horizontal Lead at Dell Technologies, we discuss how to invest wisely in AI infrastructure, cut costs and measure the real return on deployments.

    Klaudia Ciesielska, Brandsit: Over the past year, the market has been wowed by Generative AI, and now Dell is starting to talk about Agentic AI – autonomous agents performing tasks. However, many Polish companies are still at the stage of testing simple chatbots. Aren’t you running ahead with the technology too quickly? Why do you think this is the moment to invest in infrastructure for autonomous agents, when companies often have yet to see a return on investment in simpler GenAI models?

    Wojciech Janusz, Dell Technologies: Agentic AI is not a new technology. Rather, it is a natural transition from chatbots to agents that can perform specific actions for us. While large language models allow us to unleash the knowledge we have in the company, the real revolution starts when we turn knowledge into skills and actions.

    I get the feeling that we’re all a little over-saturated with chatbots. At the end of the day, we don’t want to read the advice of a wise assistant who will tell us what to do, but someone who will do that thing for us – or at least relieve us of most of the task.

    “Agentic AI is not a new technology. Rather, it’s a natural transition from chatbots to agents that can perform specific actions for us.”

    We always consider the implementation of new technology in three aspects: people, processes and technology. Unfortunately, in the last two years we have too often focused on technology instead of the first two categories. AI agents are a way to bring it all together. It’s about integration with processes, it’s about human-machine collaboration and leveraging existing technology.

    To answer the question: this is a very good moment, because only when AI starts to perform specific tasks for us will we be able to measure the actual returns, quantify the efficiency gains and better plan the next steps and implementations.

    K.C.: There is a lot of talk about Sovereign AI and local models, but where does the point of profitability lie? At what scale of operations does it realistically pay for a Polish company to withdraw data from a hyperscaler and invest in its own AI Factory? Is this a solution only for corporates or a viable financial alternative for the SME sector?

    W.J.: The break-even point lies much lower than we think. Few people realise how big a technological leap we have made in the last two years. That goes for both the hardware and the AI models themselves.

    Firstly, the AI market has split. ‘Open’ models have emerged, giving us the opportunity to download and run them on our hardware in a secure controlled environment, but also to further customise them to fit our use case even better.

    Simply downloading a model and running it won’t achieve much if it doesn’t meet expectations – and here too we have seen a big leap. Open models are catching up with the best closed cloud models in terms of capability and correctness. Of course, a model 1,000 times smaller will not have the same capabilities as one in the cloud. But that is not the point: instead of universal models that can speak every language and solve every problem regardless of domain, we can use specialised but smaller models focused on specific activities. This gives us more flexibility and control over what is happening.

    Instead of a single ‘universal genius’, we choose a team of expert AIs working together in a controlled and efficient manner, calling on the necessary resources as required to solve a specific problem.

    Such models with a developed ability to reason and break down problems into smaller tasks form the basis of AI agents.

    High computing power and energy costs have also driven the emergence of models optimised to run on simpler hardware. The biggest contributions here come from new architectures such as Mixture of Experts (MoE), new training methods including reinforcement learning, and advanced ways of optimising the model itself.

    The final element is the development of hardware platforms. Here too, new developments are emerging. We have a whole new category of hardware designed to use AI rather than train it.

    It is estimated that the cost of running a model, per token, is falling by a factor of ten each year, and since the advent of GPT-3.5 this trend has so far held.

    Tasks that only two or three years ago required powerful servers are today easily performed on an AI PC; the Dell Pro Max with GB10, for example, lets you work comfortably with models of up to 200 billion parameters.

    Of course, the appetite is growing and the list of tasks we want to do with AI is growing too, but it is becoming increasingly clear that the technology is no longer blocking us. The main question now is what we actually want to do with AI, not how to run it on our infrastructure.

    K.C.: Poland has some of the highest energy prices in Europe and AI servers are extremely power hungry. Does the implementation of efficient AI solutions in Polish conditions force companies to overhaul their server rooms and switch to liquid cooling? Is it not the case that the main barrier to AI adoption in our region will not be the price of the server itself, but precisely the cost of electricity and the need to upgrade the cooling infrastructure?

    W.J.: It will depend on the scale. Earlier we talked about changes in AI technology itself. We have new models, new uses, but also new architectures to enable AI to run on even modest resources.

    “On a large scale, there will be no escaping energy costs and changes to the Data Centre infrastructure, but I am optimistic.”

    My impression is that quite a few companies assume a massive cost of entry. Meanwhile, we can start AI projects with single applications. In any case, this is a very sensible and recommended approach: limit yourself to a few use cases, well grounded in the realities of the company, with a clear budget and projected profit, and, most importantly, close together in terms of the technology and integration needed. This means we can start modestly, with single devices such as the Dell Pro Max with GB10 mentioned earlier, without a huge revolution in the data centre. When these first projects succeed, they become the basis for further steps while providing a solid foundation.

    Start Small, Think Big, Scale Fast. This is the basis of our AI strategy.

    Of course, on a large scale there will be no escaping energy costs and changes to the Data Centre infrastructure, but I am optimistic. I think for most companies it will be a gradual evolution rather than a revolution requiring drastic changes.

    Investments can also yield very rewarding results. One new Dell PowerEdge server can replace up to seven older servers, and this translates into a reduction in energy costs of up to 65-80 per cent. Dell customer Wirth Research has reduced energy consumption in HPC environments by up to 70 per cent at its Verne Global data centres thanks to liquid-cooled PowerEdge servers.

    K.C.: The great hardware replacement is underway, but does it make economic sense to buy PCs with NPUs (AI PCs) today when there are still few business applications that make real use of this chip? Aren’t companies today paying a ‘novelty tax’ for hardware whose potential will only be realised in 2-3 years, i.e. at the end of its life cycle?

    W.J.: We are seeing a lot of interest in AI PCs among business customers, with organisations looking to expand the AI capabilities they use locally.

    Every computer we presented at CES 2026 is a computer with an AI processor and an NPU. This chip is not just for new applications yet to be developed – it is actively used during the user’s day-to-day work, providing benefits such as extended battery life – up to 27 hours of video streaming in the case of the XPS 14.

    K.C.: Finally, a request for an honest forecast. Looking to 2026 and your experience of working with companies: in which area will Polish business (regardless of the industry) “overspend” with investments – spend too much money without a quick return, and which area will they drastically underestimate, which may negatively affect the performance of companies?

    W.J.: I think in 2026 companies will be preparing more thoroughly for AI projects. We no longer want to have AI for the sake of having an AI project. There will be more cost-effectiveness analysis and looking for those applications that actually bring real benefits. We will also focus on the efficiency of using AI, not just the cost of purchase.

    We have new metrics and tools to better choose the right approach to AI.

    “A model with 8 billion parameters that achieves 80% in a benchmark is considered far more impressive (and efficient) than one that achieves 82% but requires 70 billion parameters.”

    Until recently, we focused only on maximum quality and speed.

    Currently, we are increasingly looking for a reasonable compromise between quality and efficiency. An example is the Pareto frontier approach: instead of looking only at the top of the leaderboard, we look for models on the ‘Pareto front’, i.e. those that offer the best ratio of quality (e.g. MMLU score) to model size (number of parameters) or inference cost. A model that achieves 80% in a benchmark with 8 billion parameters is considered far more impressive (and efficient) than one that achieves 82% but requires 70 billion parameters.
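
    As an illustration of that idea, the short sketch below picks out the models on such a Pareto front – those for which no other model is both smaller and at least as accurate. The model names, sizes and scores are invented for the example.

    ```python
    # Illustrative sketch: find models on the quality-vs-size Pareto front.
    # Names, parameter counts and benchmark scores are invented.
    models = [
        {"name": "model-A", "params_b": 8,  "score": 80.0},
        {"name": "model-B", "params_b": 70, "score": 82.0},
        {"name": "model-C", "params_b": 14, "score": 81.0},
        {"name": "model-D", "params_b": 30, "score": 79.0},
    ]

    def pareto_front(candidates):
        """Keep a model only if no other model is smaller-or-equal AND scores at least as well,
        while being strictly better on at least one of the two axes."""
        front = []
        for m in candidates:
            dominated = any(
                other["params_b"] <= m["params_b"]
                and other["score"] >= m["score"]
                and (other["params_b"] < m["params_b"] or other["score"] > m["score"])
                for other in candidates
            )
            if not dominated:
                front.append(m)
        return sorted(front, key=lambda m: m["params_b"])

    for m in pareto_front(models):
        print(f'{m["name"]}: {m["params_b"]}B params, {m["score"]}% score')
    ```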

    Another example is a metric showing the real cost of an AI decision or action – tokens per decision or tokens per action. A more efficient model will make an accurate decision using a few hundred reasoning tokens, while a weaker one may need several times as many. Choosing the former significantly reduces TCO and allows for a faster return on investment.

    A final but very effective way of showing which way we are heading is the Cost per Resolved Task (or Cost per Resolution) metric: how much it realistically costs us to perform a specific activity using AI or, more commonly, an AI Agent.
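
    Both of these metrics are easy to derive from usage logs; a minimal sketch with invented numbers and field names might look like this:

    ```python
    # Minimal sketch of the efficiency metrics described above; all numbers are invented.
    def tokens_per_decision(total_reasoning_tokens: int, decisions: int) -> float:
        """Average reasoning tokens spent per completed decision or action."""
        return total_reasoning_tokens / decisions

    def cost_per_resolved_task(total_cost_usd: float, resolved_tasks: int) -> float:
        """Total spend divided by the number of tasks the AI (or agent) actually resolved."""
        return total_cost_usd / resolved_tasks

    # A hypothetical month of agent activity:
    print(tokens_per_decision(total_reasoning_tokens=3_200_000, decisions=10_000))  # 320.0 tokens
    print(cost_per_resolved_task(total_cost_usd=1_450.0, resolved_tasks=9_200))     # ~0.158 USD
    ```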

    In my opinion, 2026 will be the year of prudent AI projects: well founded, grounded in reality and backed up by numbers.

  • In 2026, the lack of an ESG strategy is a real financial risk – Przemysław Brzywcy, Polenergia Fotowoltaika

    In 2026, the lack of an ESG strategy is a real financial risk – Przemysław Brzywcy, Polenergia Fotowoltaika

    While energy exchange quotations may give an illusory sense of stability, the reality for entrepreneurs is shaped by spending on grid upgrades and the stringent requirements of Western contractors. In this new landscape, photovoltaics and energy storage are becoming critical tools for optimising the bottom line.

    We talk to Przemysław Brzywcy, CEO of Polenergia Fotowoltaika, about whether Polish companies are ready to ‘move energy over time’, why the lack of a low-carbon strategy may cut off access to capital and how to realistically secure a budget in 2026.

    Brandsit: In today’s market reality, is the transition to green energy still mainly a matter of image-building for a company, or is it already hard cost optimisation that pays for itself in the financial results?

    Przemysław Brzywcy: Just looking at the quotations on the Polish Power Exchange over the last two or three years, one might get the impression that energy prices have fallen and the topic is no longer pressing. This picture, however, is deceptive.

    In parallel to the price of energy itself, distribution charges are rising steadily, and far faster than inflation. This is due to the need for huge investments in the modernisation and expansion of the grid, which are necessary in order for the system to accommodate increasing amounts of renewable sources. These outlays are then naturally passed on to the grid users.

    In addition, large-scale RES projects under ERO auctions and contracts for difference are coming into the system. When these installations come on stream, the costs of operating the system will also be spread across all consumers.

    As a result, companies do not just pay for the ‘price of energy from the exchange’, but for the whole system. Therefore, investments in photovoltaics and energy storage cease to be an image element and become a very rational tool for cost optimisation, which can be seen in the financial results in real terms.

    Brandsit: Combining energy storage with dynamic tariffs sounds promising, but requires a change in thinking about power consumption. Are Polish companies technologically ready to automatically control their energy consumption depending on the instantaneous price of energy, and how do storages help in this?

    P.B.: This does indeed require a change in approach, but technologically Polish companies are increasingly better prepared for this. Many plants already have energy monitoring and management systems in place, and their integration with energy storage and algorithms that react to market prices is not a technological barrier today, but rather a matter of a business decision.

    In my opinion, this is one of the most underestimated directions for optimising energy costs. Companies very often have an unstable demand for power, which generates additional costs, overrun charges and the risk of price fluctuations. Energy storage makes it possible to compensate for this instability by stabilising the consumption profile.

    “Integration with energy storage and market price-responsive algorithms is not a technological barrier today, but rather a business decision issue.”

    Tariffs linked to the energy market, on the other hand, offer the opportunity to buy electricity at times of very low or even negative energy prices and to reduce consumption when prices are highest. You could say that we ‘transfer’ energy over time.

    This allows the company to simultaneously take advantage of market opportunities and hedge against price spikes. In practice, it is a solution that, when properly designed, can significantly improve the economics of a company’s overall energy consumption and increase its cost predictability.
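
    A deliberately simplified sketch shows how this ‘transfer over time’ works: a storage unit charges in the cheapest hours and covers consumption in the most expensive ones. The hourly prices, capacity and efficiency below are invented, and real dispatch optimisation is considerably more involved.

    ```python
    # Illustrative storage arbitrage under a dynamic tariff; all figures are invented.
    prices_pln_per_mwh = [310, 180, 90, -20, 50, 220, 480, 650, 700, 520, 400, 350]

    CAPACITY_MWH = 2.0            # usable storage capacity
    ROUND_TRIP_EFFICIENCY = 0.9   # fraction of charged energy recovered on discharge

    def arbitrage_saving(prices, capacity_mwh, efficiency):
        """Charge in the cheapest hours, discharge the same energy in the dearest ones.
        Simplification: 1 MWh charged or discharged per hour."""
        n_blocks = int(capacity_mwh)
        buy_cost = sum(sorted(prices)[:n_blocks])                             # cost of charging
        avoided = sum(sorted(prices, reverse=True)[:n_blocks]) * efficiency   # peak purchases avoided
        return avoided - buy_cost

    saving = arbitrage_saving(prices_pln_per_mwh, CAPACITY_MWH, ROUND_TRIP_EFFICIENCY)
    print(f"Daily saving: {saving:.0f} PLN")
    ```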

    Brandsit: Are you observing a trend where Polish companies are having to switch to green energy not by their own choice, but under pressure from Western contractors who require suppliers to report a zero carbon footprint?

    P.B.: Yes, this is a very clear and widespread trend that we are seeing with our customers, especially those with export operations in Western European markets. You can see it strongly in the food industry, but also in the whole supply chain linked to the automotive sector.

    Increasingly, Polish companies involved in these supply chains have to report their carbon footprint and demonstrate the share of energy from renewable sources. In many industries, this is no longer an advantage, but a condition for keeping the contract.

    At the same time, entrepreneurs are seeing more and more clearly that this is not just a response to ESG requirements or contractor expectations. Properly designed RES-based solutions simply pay off. Green energy is ceasing to be an image element and is becoming a source of real cost and competitive advantage.

    That is why companies today combine two aspects. On the one hand, they are building their credibility with their foreign partners, and on the other, they are making an investment that is supported by a very concrete business case.

    Brandsit: To what extent does the lack of an implemented ESG strategy and the use of conventional energy sources make it difficult for companies today to obtain cheap investment credit or attract investors?

    P.B.: To a very large extent. Financial institutions and investment funds are increasingly evaluating companies not only through the prism of financial performance, but also through how they manage environmental and energy risks. The lack of an ESG strategy, including the lack of energy transition measures, is having a real impact on financing conditions today.

    “The lack of an ESG strategy, including the lack of energy transition activities, is having a real impact on financing conditions today.”

    It is also of paramount importance for exports. Foreign contractors pay attention to how goods are produced and what energy is used in the manufacturing process. This ceases to be an element of image and becomes an element of assessing business credibility.

    Brandsit: What one key step would you advise company managements who want to not only meet the new regulatory requirements in 2026, but above all to realistically protect their budgets against rising energy costs?

    P.B.: First of all, ask for a solid business case. Boards should talk to technology providers in a very simple way – ‘how will this investment pay off in my organisation’. Whether we are talking about solar PV, energy storage or a combination of both.

    It is worth clearly defining acceptable criteria for the rate of return and evaluating these solutions from this angle. Photovoltaics or energy storage are not gadgets or fashion items today. They are real, quantifiable tools that allow a company to stabilise its energy costs and improve its bottom line.

  • From hardware supplier to digital environment architect. Rafał Szarzyński on the “One Sharp” revolution

    From hardware supplier to digital environment architect. Rafał Szarzyński on the “One Sharp” revolution

    Klaudia Ciesielska, Brandsit: Sharp is a brand with over 100 years of history of innovation. What made you decide just now to bring together three previously separate worlds – printing, visualisation and IT services – in such a fundamental way? What was the key impetus for this integration?

    Rafał Szarzyński, Sharp: A key factor has been the change in the way we work. Companies today operate in an environment where it is not just the hardware that matters, but the entire digital architecture – secure, flexible and intuitive. Our customers want a partner that understands their processes and can support them, not just supply devices. That’s why we created the ‘Sharp Digital Experience’ concept, which brings together print, visualisation and IT services into one seamless ecosystem.

    This is a really well thought-out change – we have been preparing for it for years. We have invested in developing our IT competencies, acquiring companies in the UK, France and Switzerland, and in November we completed the final stage of our merger with Sharp/NEC. Today, we have more than 500 IT professionals in Europe and state-of-the-art support platforms that allow us to design work environments that meet the challenges of digital transformation. This makes Sharp a digital world company that makes a real difference to the way customers work.

    K.C.: Joe Tomota announces a move away from a transactional model to long-term strategic partnerships. Given that in Poland Sharp is mainly associated with reliable hardware – how does this change redefine your model of cooperation with the business? Does ‘One Sharp’ represent a shift from being a technology provider to being an advisor responsible for architecting and optimising the digital working environment?

    R.S.: This is a fundamental change in the way we look at customer relationships. Until now, the market has often been based on simple transactions – purchase the device, install, end of process. Today, companies expect something very different: a partner who understands their business objectives and can design the working environment to support efficiency, security and growth.

    “One Sharp” is the answer to this need – it is a philosophy in which technology is just a tool and the real value is in consulting and building strategies together.

    An example? Increasingly, we are talking to customers not about which screen or printer to choose, but how to integrate communication in a hybrid team, how to secure data in the cloud, or how to optimise document processes. Our role is not just to deliver hardware, but to create a cohesive ecosystem that addresses real business challenges. This is the essence of ‘One Sharp’ – partnerships that give you an edge in the digital world.

    “One Sharp (…) is a philosophy in which technology is just a tool, and the real value is in consultancy and joint strategy building.”

    K.C.: CIOs are increasingly asking not ‘if’ but ‘how’ to ensure security. With the integration of cyber security competences into Sharp’s structures: can devices such as printers or screens become elements of an organisation’s first line of defence? What does such a security model look like in practice within the ‘One Sharp’ ecosystem?

    R.S.: Definitely yes. Today’s working environment is distributed, and any device connected to the network can be a potential access point. That’s why, at One Sharp, we treat security as an integral part of the design of the entire ecosystem, not an add-on. Our devices – from printers to displays – are equipped with data protection mechanisms, encryption, access control and integration with identity management systems. This makes them an active part of your security strategy, not just passive hardware.

    In practice, this means that documents are stored and transmitted securely, access to devices is controlled and communication in meeting rooms takes place in an encrypted environment. Additionally, with our IT services and management platforms, we can monitor and respond to threats in real time. This approach gives the CIO the confidence that every piece of infrastructure – even the printer – is supporting the organisation’s protection, not undermining it.

    “Our devices – from printers to displays (…) are becoming an active part of the security strategy, not just passive hardware.”

    K.C.: ITpoint and Apsia brought agile software and services expertise to Sharp. How does the combination of hard hardware engineering with cloud and IT know-how change the final value perceived by the customer? Can the Polish market expect new hybrid services combining these worlds?

    R.S.: This combination opens a whole new chapter in the way we support customers. Until now, technology has often been seen as a set of separate elements – devices, applications, infrastructure. Today, we integrate these areas into a single ecosystem where hardware and software work together seamlessly and securely. With the expertise brought by ITpoint and Apsia, we can design solutions that not only work, but realistically simplify processes, automate tasks and increase productivity.

    In the Polish market, this means access to hybrid services that combine our expertise in hardware engineering with modern cloud platforms. Examples include cloud-based document management solutions, integration of audiovisual systems with collaboration tools or IT services supporting security and business continuity. Customers gain not just a product, but a complete service – from consultancy to implementation to ongoing support. This is the true value of ‘One Sharp’.

    “Customers get not just a product, but a complete service – from consultancy to implementation to ongoing support. This is the true value of ‘One Sharp’.”

    K.C.: Today’s IT departments are facing a huge fragmentation of suppliers and solutions. Is the ‘One Sharp’ strategy a response to the trend towards consolidation of services (vendor consolidation)? Apart from the convenience of a ‘single invoice’, what tangible benefits does a company gain by entrusting print, visualisation and IT to a single partner instead of three different entities?

    R.S.: ‘One Sharp’ is a response to the growing need for simplification and integration. Fragmentation of suppliers means not only greater management complexity, but also higher risks – different security standards, inconsistencies in processes and difficulties in scaling solutions. Consolidating services under a single partner offers more than convenience – it’s all about technological and strategic consistency.

    This gives the enterprise uniform security standards, faster deployments and the ability to centrally manage the entire working environment. Instead of three different integrations, we have a single ecosystem in which print, visualisation and IT work together seamlessly. This translates into lower operating costs, better control over data and greater flexibility to respond to change. In practice, this means fewer risk points, simpler processes and greater predictability – and this is the value that CIOs are looking for today.

    K.C.: Digital transformation is not only about processes, but above all about people. How does the integration of IT services and modern visual tools affect the so-called Employee Experience? In Sharp’s vision, can a modern, integrated office be an argument in the battle for talent and a way to increase the efficiency of teams working in a hybrid model?

    R.S.: Definitely yes. Technology only makes sense if it supports people in their daily work. That’s why the idea behind ‘One Sharp’ is to look at the working environment as a holistic experience that influences comfort, efficiency and organisational culture. Integrated solutions – from secure collaboration platforms to intuitive audiovisual systems – make meetings simpler, communication smoother and access to information immediate. This translates into a real sense of control and convenience for employees.

    In a hybrid model, this is crucial. An employee who can easily connect with his or her team, share documents or give a presentation in a modern conference room feels part of the organisation regardless of where they work. Such an environment is today an argument in the battle for talent – it shows that the company is investing in tools that support creativity and collaboration. As a result, not only satisfaction but also the effectiveness of teams increases. This is our vision: technology that serves people, not the other way around.

    “This is our vision: technology that serves people, not the other way around.”

    K.C.: Poland is a fast and demanding market. How will the ‘One Sharp’ strategy be implemented locally? Can partners and customers in Poland expect new billing models and consultancy services to carry out a turnkey office transformation?

    R.S.: Yes, Poland is a very important market for us and the implementation of ‘One Sharp’ will be complete here. We are developing local IT services in order to be able to offer customers comprehensive projects – from needs analysis to design to implementation and maintenance. We want the office transformation to be a simple turnkey process. When it comes to billing, we are introducing subscription models and ‘as-a-service’ services that make cost planning easier and give flexibility. This is a trend that meets the needs of Polish companies – predictability, simplicity and real value.

    K.C.: Hybrid working, automation and increasing cost pressures – which business challenges do you think will dominate in the next 2-3 years? How is the ‘new’ Sharp prepared to help business leaders meet them?

    R.S.: The coming years will be dominated by three trends: the consolidation of hybrid working, process automation and cost optimisation under economic pressure. Companies will be looking for ways to increase efficiency without compromising on safety and quality. This means that technology must not only be innovative, but also scalable and cost predictable.

    The ‘new’ Sharp is prepared for these challenges with its ‘One Sharp’ strategy, which integrates printing, visualisation and IT services into a single ecosystem. We offer solutions that support workflow automation, secure cloud collaboration and intuitive communication tools for distributed teams. Additionally, we are developing subscription models and ‘as-a-service’ services that allow companies to better control spend and flexibly scale technology. Our goal is for business leaders to be able to look to the future with the confidence that their working environment is ready for change – no matter how fast it happens.


    This material was produced in collaboration with Sharp Poland.

  • From the Big Bang to the speed of light: the AI revolution is underway

    From the Big Bang to the speed of light: the AI revolution is underway

    In 2023, we witnessed the Big Bang of technology – a year in which artificial intelligence ushered in a new era of innovation and transformation. In 2025, generative AI went mainstream, and agent-based AI took the stage. Most importantly, real returns on investment began to emerge for large companies such as Dell Technologies.

    In 2026, the story of artificial intelligence is accelerating. AI will redesign the entire structure of businesses and industries. It will drive new ways of doing things, building and innovating at a scale and pace previously unimaginable.

    Understanding these changes is essential, as those who invest today in a robust, flexible technology base and benefit from a network of partner ecosystems will be ready to manage the rapid changes to come.

    1. Time to act: principles governing a dynamic ecosystem

    With the acceleration of artificial intelligence comes a degree of volatility. While we anticipate that the governance framework will eventually stabilise the ecosystem, today’s reality is a call to action.

    Governance is currently the biggest source of delay, and it is a critical problem on which little progress is being made. The industry has rushed to bring valuable artificial intelligence tools such as chatbots and agents into production, but we have done so without sufficient governance.

    This is not only risky, but unsustainable. By next year, robust frameworks and private environments are needed to ensure stability and control. Running models locally, on their own servers or in controlled AI factories, will become the norm to provide a stable foundation and insulate organisations from external disruption.

    But this is more than a forecast. It is an urgent appeal. We need to focus more on governance. Without this, we will end up with uncertainty that will slow down the implementation of practical and valuable artificial intelligence for businesses.

    Our concrete demand to the public and private sector is to create rules for the governance of the enterprise market in collaboration with the real players in this market – enterprises and business technology providers.

    We cannot assume that managing public AI or AGI chatbots is the same as helping businesses shape the actual application of artificial intelligence in their companies and processes.

    Governance is not about slowing down innovation. It is about building a protective framework that allows us all to accelerate in a safe and sustainable way.

    2. Data management: the real foundation of innovation in artificial intelligence

    The next big leap in artificial intelligence will not just come from more powerful algorithms. It will come from the way we manage, enrich and use our data. As artificial intelligence systems become more complex, the quality and availability of the data they use is paramount.

    In 2026, AI-based data management and storage will become the undisputed foundation of all AI innovations.

    AI infrastructure is different from classic IT systems. It focuses on accelerated computing, advanced networking adapted to AI, new user interfaces and, most importantly, a new layer of knowledge from data that drives its results.

    Purpose-built AI data platforms, designed to integrate disparate data sources, protect new artefacts and provide the efficient storage needed to support them, will become essential. Partner ecosystems can help unlock the potential of these purpose-built platforms, with partners using their expertise to integrate and optimise data management solutions for enterprise AI.

    The ability to effectively feed clean, structured and relevant data into artificial intelligence models is crucial. However, as we enter the era of agent-based AI, this data will no longer be used solely to train large models. Instead, it will become a dynamic resource during inference, enabling the generation of evolving knowledge and intelligence in real time. This foundational layer of data is the starting point for everything that comes next.

    3. Agent AI: the new business continuity manager

    What is coming is agent-based artificial intelligence: an evolution that transforms artificial intelligence from a helpful assistant into an integral manager of long-term, complex processes.

    In areas such as manufacturing and logistics, artificial intelligence agents will not just assist workers, they will assist in coordinating their activities. Using rich, dynamic data streams, they will ensure continuity between shifts, optimise real-time workflows and create new levels of operational efficiency.

    Imagine an artificial intelligence agent scaling the capabilities of process managers on the shop floor, adjusting production schedules based on supply chain disruptions or guiding a new employee through a complex task. By positioning AI agents as intermediaries between a team’s goals and its employees, we are elevating team coordination across all sectors to unprecedented levels.

    These intelligent agents will become the nervous system of modern operations, ensuring resilience and progress. Like any other AI capability, they rely on enterprise data to create a unique store of knowledge and intelligence that must be properly stored and protected.

    4. Artificial intelligence factories redefine resilience and disaster recovery

    The more AI integrates with a company’s core functions, the more business continuity becomes non-negotiable.

    Artificial intelligence infrastructure will evolve to prioritise operational resilience, redefining the meaning of disaster recovery in an AI-driven world. The focus is not just on backing up systems, but on ensuring AI functionality, even if the underlying systems go offline. This includes protecting vectorised data and other unique artefacts, so that system intelligence can survive any disruption.

    Achieving this requires innovation across the AI value chain, from data protection and cyber security companies to key AI technology providers. Collaborative ecosystems include governments, partners and large-scale AI innovators. They must work together to build resilient factories that bring together the tools and expertise needed to ensure continuity and secure critical functions in hybrid cloud environments.

    5. Sovereign artificial intelligence accelerates development of national enterprise infrastructure

    Artificial intelligence is central to national interests, which is why we are seeing the rapid development of sovereign artificial intelligence ecosystems. Countries are no longer just consumers of AI technology, they are actively building their own frameworks to drive local innovation and maintain digital autonomy.

    This is changing the way artificial intelligence infrastructure is planned, with computing, data storage and management playing a key role in protecting and locating sensitive information.

    Businesses will increasingly adapt to this framework, scaling their operations within regional boundaries. By storing data locally, governments can shape public services such as healthcare, and businesses can use national infrastructure while aligning business objectives with national industrial policy.

    This creates innovations with a direct impact on citizens and economies, and represents a fundamental shift that moves artificial intelligence from a global concept to a concrete, local reality.

    Setting the course for 2026

    In 2026, the artificial intelligence revolution is not slowing down, but accelerating. What started with the Big Bang has reached the speed of light, and leading organisations are evolving and adapting to change just as fast.

    To succeed, you don’t need to chase every breakthrough. It’s better to build an infrastructure that can keep up with these changes: resilient AI factories, sovereign frameworks, agent systems that manage complex operations and collaborative ecosystems that turn innovation into real business impact. The tools and information are available. It is the readiness to act that already sets leaders apart from the rest.

    Leadership and concrete action will determine who reaps the real rewards. The future is rushing by at the speed of light. The question is: are we ready?

    By John Roese, Global Chief Technology Officer and Chief AI Officer at Dell Technologies

  • Repositioning the Sharp brand and building lasting partnerships

    Repositioning the Sharp brand and building lasting partnerships

    This change is particularly evident in meeting rooms. Workplaces, public spaces, transport hubs and catering facilities are all looking for new ways to remain modern, viable and connected. There has been a need for a new look at what ‘business as usual’ means, opening the door to new ideas and opportunities for growth.

    Remote working, cyber vulnerabilities, sustainability requirements and growing expectations of audio-visual infotainment systems mean that companies are demanding more than IT tools and hardware. They need trusted and customer-focused technology partners like Sharp to help them understand the new challenges. As companies look to the future, they are often moving away from a simple transactional model and instead looking for deeper, more meaningful business relationships that drive sustainable success and progress.

    Technological challenges are no longer one-dimensional

    We know this because at Sharp we operate on the front line with our customers and partners. On the one hand, they tell us that collaboration and flexibility require seamless digital tools. On the other hand, security, compliance, economic efficiency and environmental responsibility are described as an absolute priority. Finding the right balance between these two aspects, while at the same time reducing the number of suppliers, is a challenge in the quest for a real return on investment.

    A bold new promise, built on over 100 years of trust

    This challenge is our vocation. Founded in 1912, Sharp has a rich history of technological innovation. We know that we have what it takes to be a trusted technology partner for our customers who want to introduce new ideas. These qualities, in fact, perfectly reflect our corporate values of ‘Sincerity and Creativity’. Whether it’s a smarter, safer multifunction printer or groundbreaking (and sustainable) display technologies such as our e-Paper, which we unveiled at ISE 2024, the Sharp name has long been synonymous with quality and reliability.

    We have shown that Sharp is not just focused on solving single problems, but on offering companies comprehensive tools – from intelligent workflows to immersive displays to support collaboration in hybrid teams.

    This multi-faceted reality has emboldened us to redefine what we stand for. The world around us is changing rapidly, and we are not going to wait for it to stabilise. We call this the ‘Sharp Digital Experience’, a clear endorsement of the direction we are heading in at Sharp Europe.

    Sharp Europe: A new identity for a dynamic future

    This new direction signals the beginning of an exciting era at Sharp Europe. The world of technology is evolving faster than ever before, and our brand will now reflect the energy, creativity and forward-thinking spirit that drives us.

    We are excited to present a new brand identity that will help customers and partners use our business solutions with confidence. With a clearer ‘one Sharp’ proposition and consistent visual resources, our products and services form a unified identity. Customers can now expect an even stronger partnership with Sharp. One that focuses on their changing needs and is reinforced with new energy to help them succeed in a dynamic marketplace.

    An example of this unification of services is our design theme ‘The Pulse’, which symbolises how our company connects people to technology in a seamless way. Made up of interconnected elements, it constantly adapts and evolves to showcase our wide range of products and services – from IT services to multifunctional devices to innovative display solutions. ‘The Pulse’ also illustrates the communication and proximity we have with our customers, symbolising digital conversations through our wide range of technology devices and digital interfaces.

    Visual features aside, our repositioning is not just a new look – it’s a bold statement about who we are and where we’re going. Sharp has very strong connections with people and businesses around the world, and this strategic shift signals our continued investment in innovation and commitment to helping organisations become more efficient and open up new opportunities, no matter what their size.

    From devices to environments

    What can our customers and partners expect from the ‘Sharp Digital Experience’ as we strengthen our trusted partnerships and communicate our brand positioning?

    Imagine the scene. Clients enter the conference room, their attention is caught by a Sharp dvLED screen from the doorway. After a moment, they see that their presentation is ready to go, securely stored in the cloud and accessible via the Sharp Synappx platform. During the meeting, ideas are exchanged not only between those in the meeting room, but also among those participating remotely from 10 other countries around the world. On the Sharp large-format display, everyone can see everything clearly. Discussions run smoothly, decisions are made and supporting documents are printed securely and in top quality with the latest Sharp MFP.

    This is the future we intend to achieve. A world based on intuitive experiences that connect seamlessly, allowing businesses to focus on what they do best. This concept also guides our new visual communications, where ‘The Pulse’ permeates our imagery to show how technologies not only connect and talk to each other, but also interact seamlessly with people.

    Innovation with a message

    By combining its IT services, monitors, printing devices and workplace solutions, Sharp Europe is creating a communication platform that fuses ideas with actions, people with technology, and today’s challenges with tomorrow’s opportunities. The result? Companies that are more resilient. Teams that are more empowered. Workplaces that are more inspiring. And that’s what the future of workplaces and public spaces is all about. Not just faster networks or smarter screens, but better experiences for people.

    Technology alone will not build the future of work. It will be built on how we use the technology. How we design, imagine and integrate it to serve human potential. By combining all its strengths, Sharp Europe is not only developing as a brand. It is becoming something that companies can truly believe in. A trusted partner that supports both our existing customers and those we will engage through our new identity, so that they can navigate emerging complexities with confidence, creativity and honesty.

    The world is changing fast, but with the right technology partner, the future doesn’t have to be daunting. It can actually be inspiring. That is what Sharp Europe wants to prove with expert advice, tailor-made solutions and secure technology.

  • From Apple to Alphabet: Warren Buffett’s late turn to AI infrastructure

    From Apple to Alphabet: Warren Buffett’s late turn to AI infrastructure

    Ruben Dalfovo, Investment Strategist at Saxo, writes in an analysis that for years Warren Buffett’s history with Google was a cautionary tale. He openly admitted that not buying its shares was a serious mistake, even though he saw the company turning internet search into a tool to monetise advertising. Now, just months before handing over the helm to Greg Abel, Berkshire Hathaway has quietly bought a stake worth billions of dollars in Google’s parent company, Alphabet.

    Alphabet’s Class C shares closed the 17 November session at $285.60, up 3.11% on the day, after reports of the new stake helped push the stock to record levels. The stock is up around 50% since the start of 2025 and is the best performer among the so-called Magnificent Seven this year. When an investor known for avoiding fashionable trends buys shares in a market leader whose price is approaching historic highs, the obvious question arises: what does he see that the rest of us might not?

    From Apple to Alphabet

    Berkshire has been a net seller of equities for twelve consecutive quarters, including the last. In that time, it has sold about $12.5bn of securities while buying about $6.4bn of shares, allowing its cash holdings to grow to a record $381.7bn. This is not the behaviour of a man who thinks everything is cheap. However, there has been a major reshuffling within this overall reduction.

    In the past quarter, Berkshire reduced its stake in Apple by around 15% and its position in Bank of America by around 6%. Alphabet, meanwhile, emerges as a new member of the top ten club. Regulatory documents and portfolio statements show that the holding now ranks roughly tenth in Berkshire’s equity portfolio, behind such classics as American Express, Coca-Cola and Chevron. A technology position of this size is a rarity for a conglomerate that for decades has held shares in railways, insurers and consumer staples companies rather than fast-growing software giants.

    What has changed, however, is not so much Buffett’s principles as the companies themselves. Apple, which he has always described as a consumer brand, now operates in a world where hardware upgrades rely heavily on AI-based features. At the same time, Alphabet is looking less and less like a speculative technology company and more like a sprawling infrastructure for the digital economy, with advertising and cloud revenues that are surprisingly stable for something built from lines of code.


    Alphabet as AI infrastructure, not a gadget story

    Alphabet is at the point where its artificial intelligence ambitions meet traditional cash generation. In the third quarter of 2025, the company reported around $102bn in revenue, above forecasts, and profits also beat expectations. The main driver of growth has been Google Cloud, which has evolved from a ‘nice-to-have’ into a business driver as companies developing AI rent its computing power.

    In addition, Gemini models and AI-enhanced search reach hundreds of millions of users today. These tools run on a global network of data centres, proprietary chips and fibre optics, the expansion of which will consume more than $90 billion in capital expenditure this year. Put simply: Alphabet wants to provide the ‘picks and shovels’ for the gold rush around AI.

    The partnership with Anthropic adds another dimension. Google has invested billions in the startup and has entered into a major chip supply and cloud services agreement, which should direct future computing workloads to Google Cloud. Berkshire’s stake gives the company indirect exposure to this ecosystem: every Anthropic query run on Google’s infrastructure strengthens Alphabet’s position as an AI infrastructure provider.

    The key point is that this expansion is based on a strong balance sheet. Alphabet is valued at around 25 times expected earnings, cheaper than some other megacaps, and continues to generate solid free cash flow from the search engine and from YouTube. This cash can fund data centres and still support share buybacks, which suits investors who prefer ‘great companies at fair prices’.

    What does the ‘vote of confidence’ from Buffett really mean?

    Buffett’s purchase of Alphabet is more than a simple seal of approval. It’s a concrete thesis about where AI profits will be concentrated. Alphabet makes its money from search ads, YouTube, maps, the app shop and the cloud. AI is not a separate product here. It is an enhancement that can increase user engagement and monetisation opportunities in existing businesses.

    It is also a clear shift towards infrastructure rather than devices. Apple is betting on on-device intelligence, but is yet to refine its AI business model. Alphabet is already monetising AI through cloud contracts, advertising tools and office software. Reducing its position in Apple while increasing its stake in Alphabet suggests that Berkshire sees more of AI’s future value in data centres and platforms than in handset replacement cycles.

    Finally, it is important to remember that ‘technology’ is not a single category. Alphabet may share the index with fast-growing artificial intelligence start-ups, but its competitive advantage, cash-generating ability and diversified revenues place it closer to companies that have consistently multiplied capital over the years, exactly the kind of companies Buffett has always favoured.

    Risks that even Berkshire cannot ignore

    The risks are real. Alphabet could over-invest in AI computing capacity if customers slow down projects or competitors snatch up big contracts. Google Cloud’s growth rate, margins and long-term investment guidance are worth watching. The second threat remains regulation. Tougher antitrust or privacy laws in the US and Europe could hit search and advertising profitability or force changes in how data is used.

    AI strategy execution will also be key. Alphabet stumbled at the start of the race for generative AI and is still playing catch-up, trying to regain the initiative with Gemini and other models. If users or corporate customers prefer the tools of the competition, all this spending on chips and data centres could result in lower-than-expected returns, even despite Buffett’s presence in the shareholding.

    Coming full circle: what Buffett’s bet on Alphabet really teaches

    Buffett has said for years that not buying Google shares was one of his big mistakes. Alphabet was up for grabs, with search engine profits and solid cash flow, while he remained on the sidelines. By buying the shares, now that he is preparing to hand over as CEO, he is doing more than just a ‘neat final move’. It’s a discreet signal of what he thinks sustainable value in AI will look like.

    For ordinary investors, the conclusion is not ‘buy what Buffett is buying’. The point is that even the most traditional value investor is happy to have exposure to AI – as long as it is built into a diversified platform generating strong cash flows that can be understood and rationally justified. The market will continue to argue whether Alphabet’s price is too high, too low or just right. A better question is one that Buffett has been asking for seven decades: which companies can you hold in good times and bad because you understand how they make money and why they can survive?

    – Artificial intelligence remains one of the fastest growing market segments, while being distinguished by high volatility, risk of overheating and intense competition. The sector often reacts with rapid valuation movements, and the dynamics of innovation mean that market positions of leaders can change rapidly. Investing in stable assets outside the AI sector helps to keep a portfolio balanced, even when sentiment towards innovation declines. Therefore, in the long term, it is important to ensure portfolio diversification, e.g. by combining a bold approach to new technologies with sound and disciplined risk management, says Aleksander Mrózek, CEE key account relationship manager at Saxo Bank.

  • What’s next for AI? Ministry of Digitalisation announces an updated version of the Policy for the Development of Artificial Intelligence in Poland

    What’s next for AI? Ministry of Digitalisation announces an updated version of the Policy for the Development of Artificial Intelligence in Poland

    The document envisages, among other things, the implementation of AI in public administration (AI HUB Poland), the creation of dedicated ‘Sector Deployment Maps’ and support for small and medium-sized enterprises. The tasks described in the AI Policy are ultimately intended to contribute to its stated goals: Poland as the heart of the AI continent, thanks to deployments in key sectors of the economy and an efficient state that uses AI solutions.

    AI HUB Poland

    The AI Policy envisages the launch of the AI HUB Poland portal, which is intended to be a tool supporting the effective management, development and implementation of AI technologies in the public sector. The aim of the platform is to create an integrated environment that will improve the use of artificial intelligence in public services and in key areas of state functioning.

    Among other things, the project envisages the rapid adoption of AI-based innovations, the upskilling of administrative staff, the harmonisation of data to build artificial intelligence models and the creation of national large language models.

    AI HUB Poland is a joint initiative of experts from the Central Informatics Centre, NASK and partners to support the country’s digital development and strengthen Poland’s international competitiveness. The platform’s activities include the launch of a central system for managing AI projects, building a repository of open automation solutions, sharing best practices and implementation support for smaller administrative units.

    Sector deployment maps

    The Ministry of Digitalisation notes that artificial intelligence is becoming one of the most important drivers of economic transformation. Its use can significantly accelerate the development of innovation, increase the competitiveness of Polish companies and improve the quality of life of society. In order to fully exploit this potential, it is necessary to focus efforts on those projects and sectors that can bring Poland the greatest economic and social benefits.

    The Polish Economic Institute identifies three approaches that help identify priority areas for AI development: analysis of key industries in the economy, assessment of the so-called ‘technology stack’ and identification of ‘grand challenges’ – complex problems where AI can play a particularly important role.

    Based on these analyses and the recommendations of the AI Working Group, the sectors with the greatest potential for AI deployments have been identified. These are: energy, e-commerce, dual-use products, cyber security, BioMedTech, financial services and transport and logistics. It is in these areas that AI can generate the most value – from optimising energy consumption, to faster drug discovery, to autonomous mobility and advanced cyber defence systems.

    The directions set are also in line with the European Commission’s focus on the development of secure, interoperable and high-quality AI systems in strategic sectors across the EU.

    In order to successfully implement artificial intelligence, cross-sector collaboration, a robust data infrastructure, competent staff and consistent regulation are needed. Therefore, dedicated Sector Deployment Maps will be created for each of the key sectors. These will include an analysis of the industry’s needs, key business areas for AI applications, data sharing rules and support mechanisms – so that Polish companies can fully exploit the potential of the breakthrough technology.

    Support for small and medium-sized enterprises

    Small and medium-sized enterprises play a key role in the development of AI in Europe. Thanks to their flexibility, ability to experiment quickly and innovative approach, it is SMEs – and especially startups – that are often the first to implement new technologies and bring breakthroughs to market. At the same time, they face barriers such as limited resources, more difficult access to data and the need to meet ethical and regulatory requirements.

    In order to accelerate the development of AI in this sector, support including funding, computing infrastructure and the ability to test solutions in secure environments is essential. Incubators, accelerators and knowledge-sharing platforms play an important role in helping companies commercialise innovations faster and build technological competence.

    In Poland, the infrastructure being developed – including AI Factories – is to allow entrepreneurs to benefit from technology and regulatory advice, computing power and testing environments. This is complemented by the PFR’s ‘Digital Crate for Companies’ programme, which helps to confirm technology readiness and gain support for AI Act compliance.

    One of the most important elements of the support policy for SMEs are the regulatory AI sandboxes, which – according to EU regulations – must be completely free of charge for them. They allow solutions to be tested under controlled conditions and reduce the risks associated with market entry. Specialised sectoral sandboxes will be established in Poland and their integration with AI Factories will provide access to data and infrastructure.

    In order for Polish companies to realise the full potential of AI, it will be crucial to raise awareness of the benefits of its implementation, provide practical advice and launch dedicated programmes and competitions to support AI projects. In the long term, this will translate into an increase in the competitiveness of SMEs, the development of innovation and the strengthening of Poland’s position in the European technological ecosystem.

    Summary

    The updated AI Policy responds to the challenges posed by the dynamic development of AI technologies. The document sets out directions for action, integrating the needs of government, business, science and society. The Ministry of Digitalisation announces further work on the implementation of the AI Policy.

  • UX writing is not fukuwarai

    UX writing is not fukuwarai

    Fukuwarai is a traditional Japanese game in which a child – blindfolded – tries to arrange eyes, eyebrows, nose and mouth on the outline of a face. The effect can be grotesque: the mouth lands on the forehead, the eyebrow is tucked into the eye. Just like when a content designer is handed a ready-made mock-up with the request: take a look.

    Getting microcopy right in a digital service is no accident. It’s the result of understanding the context: who is reading, when and why. Without this, even the best text may not work – or worse, may work the other way around.

    Writing messages and prompts without knowing the product, the user’s needs and the dynamics of the screen on which they are to appear is like answering a question you don’t know or understand. At the Polish Baccalaureate it may be a necessity, but when working on government applications it is a strategy doomed to failure.

    Lack of context leads astray

    Sometimes a content person is instructed to edit messages in a text file or on Confluence. Yes, he or she may take on the challenge: simplify everything according to the plain language standard, correct language errors, ensure consistency of vocabulary and spelling. But when the fruit of this labour hits the screens, the application begins to speak neatly, but not necessarily to the point. The messages do not respond to the real problems of users; they deliver a monologue focused on procedure and functions.

    It also happens that the team recognises the existence of a content designer primarily when it is necessary to explain how an interface or procedure works. Preferably in a tooltip, as this is an element that is easy to add without writing complex code. Tooltips then become a cure-all for complicated legislation, or even a placeholder for the legal basis, which is an expression of ‘utmost care’ that the application works according to the law. They are also sometimes a last resort when the team is unable to prepare a simpler path in time.

    However, the user’s needs get lost in the maze of tooltips. The path to the destination turns into an information jungle in which the only way to survive becomes to report to the Helpdesk.

    The project is already here, only the texts are missing

    When I see a beautiful, colourful design with ill-conceived communication, I am reminded of a neighbour who loved cars. He was constantly changing them, spending Saturdays at the car wash. He used these cars to drive his three beloved children to school.

    One morning, as he stopped outside the school and waited for the toddlers to slam the door on their way out, he was surprised by the silence. Not a single toddler squealed. – What is it? Get out of the car! I’m in a hurry to get to work, he shouted nervously and looked at the back seat. It was empty. In his haste, he had set off and forgotten the children.

    A beautiful interface without thoughtful content is like my neighbour’s shiny, empty car outside the school. It fails to deliver what is most important – meaning.

    As Marcin Wicha, who died this year, used to say – a designer thinks and designs with language. A design that is created without a content designer is a design that does not know how to talk to people. It doesn’t listen, so it doesn’t understand. Instead of a relationship based on trust, credibility and exchange, it builds a toxic relationship resulting from compulsion dictated by law.

    UX writing is lifting the curse

    The content designer should understand what every person using the digital service will need to understand. When he or she asks the team questions about comprehensibility and usability when working on screens, he or she may be met with answers such as: “users will definitely understand it”, “they know it” or “scientists are intelligent people – they don’t need it explained to them”.

    It is as if someone had put a curse on the team. Steven Pinker, in The Sense of Style, called it the curse of knowledge: the difficulty of imagining that someone else doesn’t know what we know. As a result, we create content that appears unreadable or incomprehensible to those outside the team – even though it seems obvious to us.

    Therefore, users are the most valuable source of information for those designing state applications and systems. At the Information Processing Centre – National Research Institute, we get to know their opinions and impressions not only during usability studies conducted by a professional research team. We also read the reports that our Helpdesk receives. In these submissions, nobody complains about colours, gradients or sad backgrounds. Most of the comments are about specific expectations of how the app should work. Users want the app to be reliable, simple, understandable, unambiguous. And ideally – to do the work for them.

    During usability studies, we find that people using the application get lost because they are using the studied function for the first time or use it very rarely. Sometimes they do not understand the procedure because the language of law or IT jargon is not their natural language. And sometimes they are simply not focused – on their minds they have a new publication, a problem with obtaining research funding, trouble with students resistant to learning, a loan to repay or a sick child. In short: they have more important matters to untangle than the workings of the National Node.

    What is clear to the team is not always clear to the users. The art of content design is the art of asking questions and removing the curse of knowledge. It is the ability to work with design and production teams in such a way as to take into account the perspective of the person who needs to use a digital service but – for a variety of reasons – sees it very differently from us, its creators.

    Empathy as Anne of Green Gables

    It’s Sunday morning. You want to buy fresh buns for the family breakfast. You reach the bakery, and there you hear a waspish: – The action to bake the buns has failed. Try again later. You wait a while; maybe they will bake some more. But a quarter of an hour later, you hear exactly the same thing. And again half an hour after that. And you have cinema tickets for ten o’clock. What do you feel?

    This is exactly what a person feels when stuck on an incomprehensible app screen. However, when it comes to state services, the consequences can be less pleasant than a failed morning and a disappointed family.

    When writing the story about the buns, I referred to your empathy – the ability to recognise and empathise with the emotions of others. Empathy is a key word that often comes up in the context of user experience design, including your communication with a digital product. But is empathy enough to design useful content and interfaces?

    In an IT world dominated by scientific minds, empathy takes the form of Anne of Green Gables prancing through a meadow from the synopsis by Wojciech Materna and Tomasz Mann. A too careless approach to designing reliable and user-needed content can result in it being reduced to proof-reading and editing of messages moments before implementation. Then the value of such work becomes negligible for the usefulness of the digital service and disproportionate to the effort put in.

    Scientists are people too

    In order to discover what the application is supposed to say and when, you need not only knowledge of the language and its rich nature, but – above all – knowledge of the design workshop, the facts and the data. An empathy map is certainly useful, but it is not enough. And what do the data say? – 31 per cent of academics experience high levels of symptoms of affective disorders, depression and anxiety disorders, alerts the Ministry of Science and Higher Education. This is a cognitively overloaded group that needs to translate complex official procedures into minimalist, concrete interfaces.

    The scientists for whom we design digital services are exceptional people. Their talents, minds and energy should be used elsewhere – not in overcoming digital obstacles. Busy, distracted, diverse, overloaded, they need a helpful word at the interface. One that gives them a sense of control – in a world where so much else eludes it.

    The content should appear when the user makes a decision

    – Most teams I’ve worked with don’t even spend a minute on the content and interaction planning phase, jumping straight into Figma or Sketch. Then it turns out that holes appear in the flow that were not noticed in the earliest stages of the work, writes Wojtek Kutyła in Web Accessibility. An Introduction to Digital Accessibility.

    Such a statement in a book about accessibility – still associated with a technical list of requirements for developers – is a sign that we are moving in the right direction. It is still a long walk, but at least with a good view.

    So how do you identify where simple and helpful content should appear? There are many ways. – Every decision point is a potential content point, suggests Sarah Winters in Content Design, the bible of government content designers. A mapped-out user flow – a diagram that illustrates all the steps and decisions of a person using a digital service – will also help here. User journey mapping is also useful, i.e. a representation of the entire service experience, including the experience outside the interface. It makes it possible to discover not only the moments of the most difficult decisions, but also the emotions, needs and problems that accompany the handling of the case. Analysis of web analytics data or of the aforementioned user requests will also prove valuable.

    Artificial intelligence and UX writing

    If I were to write a prompt from which generative artificial intelligence could create a symbolic portrait of the ideal content designer, I would write: eyebrows raised in curiosity, eyes searching for answers, mouth ready to talk, pencil bitten out of stubbornness. In the background, a whiteboard inscribed with UX laws, heuristics, accessibility guidelines, key issues from the fields of information processing, psychology, cognitive science, the basics of UI (user interface) design and the handling of Figma – and finally, the principles of plain language and effective writing.

    A dictionary of correct Polish and a spelling dictionary should fit in the frame. And since language is alive and changing, the dictionary-backed arm of the ideal should be adorned with tattoos of links leading to language tutorials and podcasts.

    A key skill of the ideal content designer is to tactfully enter a team of experts and specialists heavily focused on databases, legislation, code, pixels, colours, icons and resolutions. It is therefore useful to master the arts of persuasion, negotiation and eristics. This is why the ideal from the portrait should hold a fan of soft skills – needed to cool down the heated pre-implementation atmosphere and draw the team’s attention to the language of the application. Even when everyone is already protesting against amendments, because it is no accident that the word ‘deadline’ contains the word ‘dead’.

    I would add that the fan should be flexible. Always sized to suit different configurations of design and production teams.

    Will the machine cope with such a prompt? Quite possibly. But the effect would be grotesque, just like in fukuwarai.

  • Pragmatism versus hype: How ‘agent washing’ and hallucinations brought AI down to earth

    Pragmatism versus hype: How ‘agent washing’ and hallucinations brought AI down to earth

    The technology industry, after two years of fascination with generative AI, is entering the reality-check stage. Enthusiasm is colliding with hard reality. Statistics indicating low levels of AI adoption in many economies are bringing us down to earth.

    This year’s Dell Technologies Forum in Warsaw was a good example of this. As Dariusz Piotrowski aptly summarised it, the key dogma nowadays is: ‘AI follows the data, not the other way around’. It is no longer the algorithms that are the bottleneck. The real challenge for business is access to clean, secure and well-structured data. The discussion has definitely moved from the lab to the operational back office.

    AI follows the data

    We have been living under the belief that the key to the revolution is an ever more perfect algorithm. That myth is now collapsing. Internal case studies from major technology companies show that implementing an internal AI tool is often not a problem of the model itself, but of months of painstaking work on… organising and providing access to distributed data.

    This raises an immediate consequence: computing power must move to where the data originates. Instead of sending terabytes of information to a central cloud, AI must start operating ‘at the edge’ (Edge AI).

    The most visible manifestation of this trend is the birth of the AI PC era. With dedicated processors (NPUs), PCs are expected to handle AI tasks locally. This is not a marketing gimmick, but a fundamental change in architecture. It’s all about security and privacy – key data no longer needs to leave the desk to be processed. Of course, this puzzle won’t work without hard foundations. Since data is so critical, the cyber security landscape is changing. The number one target of attack is no longer production systems, but backups. This is why the concepts of ‘digital bunkers’ (restore vaults) – guaranteeing access to ‘uncontaminated’ data – are becoming the absolute foundation of any serious AI strategy.

    Pragmatism versus “Agent Washing”

    In this red-hot market, how do you distinguish real value from marketing illusion? After the wave of fascination with ‘GenAI’, the new ‘holy grail’ of the industry is becoming ‘AI Agents’. However, we must beware of the phenomenon of “Agent Washing” – the packaging of old algorithms into a shiny new box with a trendy label.

    Business is beginning to understand that the chaotic ‘bottom-up’ approach leads nowhere. As Said Akar of Dell Technologies frankly admitted, the company initially put together ‘1,800 use cases’ of AI, which could have become a simple path to paralysis. Therefore, the strategy was changed to a hard ‘top-down’ approach: finding a real business problem, defining a measurable return on investment (ROI) and only then selecting tools.

    This leads directly to a global trend: a shift away from the pursuit of a single, giant overall model (AGI) to ‘Narrow AI’. This trend combines with the growing need for digital sovereignty. States and key sectors (such as finance or administration) cannot afford to be dependent on a few global providers. Hence the growing trend of investing in local models that allow for greater control.

    Hype versus hallucination

    When the dust settles, it turns out that the great technological race is no longer just about making models know more. It’s about making them… make up less often. The biggest technical and business problem remains hallucinations.

    The dominant and only viable business model is becoming ‘human-in-the-loop’, i.e. the human at the centre of the process. In regulated industries, no one in their right mind will allow a machine to ‘pull the lever’ on its own. As mBank’s Agnieszka Słomka-Gołębiowska aptly pointed out, financial institutions are in the ‘business of trust’ and the biggest risk of AI is ‘bias’, which cannot be fully controlled in the model itself.

    Artificial intelligence is set to become a powerful collaborator that takes over the ‘thankless tasks’. But the final, strategic decision is up to humans. The real revolution is pragmatic, happens ‘at the edge’ and is meant to help with work, not take it away.

  • AI bubble? WEF chief warns, tech market is slowing down

    AI bubble? WEF chief warns, tech market is slowing down

    Amid sharp falls in global tech stocks, the head of the World Economic Forum (WEF) is toning down the mood. Borge Brende, during a visit to São Paulo, pointed to three potential, related bubbles on the horizon: cryptocurrencies, sovereign debt and, crucially for the IT sector, artificial intelligence.

    Brende’s comments came at a time when markets are correcting from record highs and the valuations of many companies, especially those related to AI, look stretched. While analysts and brokers recommend caution rather than panic, the warning from Davos is significant. Brende highlighted that current government debt is the highest since 1945, significantly narrowing the room for manoeuvre in the event of a potential crisis.

    Artificial intelligence itself, which has fuelled market optimism and expectations of an economic revolution over the past months, is seen by the WEF chief in two ways. On the one hand, it offers the promise of unprecedented productivity gains. On the other, it poses a real risk to many white-collar jobs.

    Brende used a suggestive comparison to the emergence of a new ‘rust belt’ in city centres, where office work could be massively replaced by algorithms. In doing so, he pointed to recent announcements of job cuts at global corporations such as Amazon and Nestlé.

    However, the WEF president pointed out that in the long term, technological change has historically led to productivity growth. It is this, he concluded, that is the only sustainable driver of wealth growth and real wages in society.

    At least two key lessons flow from this. First, strategic caution and a deep review of AI-based investment valuations are needed, separating real potential for productivity growth from market speculation. Second, Brende’s warning of a new ‘rust belt’ for white-collar workers sends a direct message to boards: companies need to plan now for a deep workforce transformation, investing in reskilling and adapting operating models. In an unstable macroeconomic environment marked by record debt, relying solely on technological optimism is becoming highly risky.

  • Intel celebrates the success of accountants, not engineers

    Intel celebrates the success of accountants, not engineers

    Euphoria flooded the market after Intel’s latest results. Shares up around 90% in 2025, quarterly earnings per share of 23 cents (instead of the expected 1 cent) and gross margins of 40% look like the return of the king. Investors, buoyed by the AI-PC hype and a fresh injection of $15 billion, are opening the champagne.

    However, we must ask the question: what are we really celebrating?

    The answer is simple: we celebrate the success of accountants, not engineers. Intel’s recent results are not the fruit of regained technological dominance, but the result of “drastic cost-cutting measures” introduced by its new CEO, Lip-Bu Tan. It is an illusion of success that masks a strategic retreat and a tacit admission of defeat in the crucial race for the future of chip manufacturing.

    A financial miracle

    Let’s look at where this impressive profit came from. It doesn’t come from revolutionary new products beating Nvidia in the AI market. It comes from cuts.

    Firstly, Intel is ending the year with a workforce that is more than a fifth (more than 20%) smaller than last year. Second, the company is aggressively selling off assets – including a 51% stake in Altera, a company acquired in 2015 for $16.7 billion.

    It’s a cold financial calculation. The new CEO, Tan, is doing exactly what he was hired for: putting out the fire his predecessor left behind.

    Let’s remember that Pat Gelsinger’s ambitious plans to turn Intel into a TSMC-like contract manufacturer led the company to its first annual loss since 1986. Tan has radically scaled back these costly ambitions. Current profits are therefore not growth, but stopping the haemorrhage.

    Rescue, not reward

    Giant investments were also in the spotlight: $5 billion from Nvidia, $2 billion from SoftBank and an unprecedented $8.9 billion from the US government in exchange for a 10% stake.

    Let’s not be fooled. This is not a reward for a market leader. It is a rescue for a company that is trying to maintain its dominance and whose repeated attempts to break into the AI chip market have failed.

    The investment by Nvidia – the rival that dethroned Intel in the AI segment – is a strategic bet, not an act of faith. It is an attempt to secure access to manufacturing capacity in the West and to influence the development of CPUs, which (ironically) are essential in AI servers to support… Nvidia GPUs.

    Even more telling is the intervention of the US government. The 10% takeover, which came after President Donald Trump called for Tan’s resignation over his links to China, is not a market move. It is a geopolitical intervention. Intel has become a national security asset; a company too important to fail but too weak to win on its own.

    Time bomb: the truth about the 18A process

    While Wall Street analysts were getting excited about gross margins, the key message came from Intel’s CFO himself, Dave Zinsner.

    When asked about the foundation of Intel’s future competitiveness – the 18A manufacturing process – Zinsner openly admitted that the process would not give Intel the level of margins it currently needs.

    That already sounds bad. To make matters worse, moments later Zinsner said the process would not be ready at a level “acceptable to the industry” until 2027.

    This is the true picture of Intel, hidden behind a facade of good quarterly results. The 18A process is not just another project. It was supposed to be the answer to TSMC and Samsung’s dominance. It is the technology that was supposed to return Intel to the throne of manufacturing leadership. The admission that it won’t be ready until 2027 is a disaster. In the semiconductor industry, that’s an eternity. It means that for the next 2-3 years Intel will be technologically far behind.

    Retreat from dominance

    So what will Intel do if it cannot compete with TSMC? CEO Tan has a new vision: to create a ‘central engineering group’ that will offer specially designed chips to external customers such as Google or Amazon.

    To put it bluntly: Intel is giving up the battle for global dominance in mass production. Instead, it will try to become a niche supplier of expensive, custom solutions, competing with the likes of Broadcom and Marvell Technologies. This is a radical but, it seems, necessary lowering of ambition.

    Stable patient, not healthy leader

    Intel 3.0, led by Lip-Bu Tan, is a financially more stable company than the shaky giant under Pat Gelsinger. A $15 billion injection and brutal cost-cutting bought the company time.

    But the investors who are buying shares today are not buying a technology leader. They are buying a company that has just admitted that its key manufacturing technology is years behind schedule. They are buying a company that is selling off assets and giving up the battle for the throne to become a premium service provider to other giants.

    Ironically, the current ‘high-end problem’ mentioned by CFO Zinsner – i.e. demand outstripping supply – is largely due to data centres having to upgrade CPUs (Intel’s) to keep up with advanced AI chips (Nvidia’s).

    Intel is no longer leading the AI revolution. It has become an indispensable but nonetheless secondary parts supplier for it. This is not the return of the king. It is the beginning of life as a strategic asset, kept alive by rivals and the government, whose main goal is no longer domination but survival.

  • Cloud exposed. AWS failure is a lesson in economics, not technology

    Cloud exposed. AWS failure is a lesson in economics, not technology

    Monday’s paralysis, which laid low global giants – from Zoom to Slack to Fortnite – was a blow to the IT industry. In the age of the ‘decentralised’, infinitely scalable cloud, how is it possible for a problem in a single physical building in Virginia (the famous AWS US-EAST-1 region) to shut down half the internet? After all, we moved away from our basement server rooms, bursting at the seams, precisely to avoid such scenarios.

    And yet, we were back to square one. Looking at the red dashboards, we have come to face a brutal truth: this failure is not an argument against the cloud as a concept. It is a painful lesson in economics and a validation of the compromises the industry accepts every day, choosing convenience and low cost, confusing flexibility with immortality.

    The reality of centralisation

    For years we have been sold the promise of the public cloud. You only pay for what you use. You scale your infrastructure in minutes, not months. You don’t worry about power, cooling or replacing drives. This is the undisputed revolution that has enabled startups like Snapchat and Canva.

    However, this revolution comes at a price: dependency. Today’s internet, contrary to idealistic visions, is not a decentralised utopia. It is an oligopoly in which three companies – Amazon Web Services, Microsoft Azure and Google Cloud – hold the keys to almost the entire digital economy.

    The failure in the US-EAST-1 region is perfect proof of this. It is AWS’ oldest, largest and often default region. Over the years, it has become a technological centre of gravity, hosting key services on which Amazon’s other global functions depend. In this way, even within the single-provider ecosystem, we have created a single point of failure (SPOF) for services that consider themselves ‘global’. When this one block of dominoes fell, it pulled the rest with it.

    The myth of “sufficient backup”

    By now, many IT managers and CFOs, seeing the scale of the problem, have probably breathed a sigh of relief: “at least we have backups”. This is a fundamental misunderstanding of the problem that needs to be clearly corrected.

    Backup protects against data loss. It does not protect against service failure.

    Monday’s incident, which originated from bugs in the DynamoDB database API, was not (in all likelihood) about data loss. Snapchat user data or Duolingo game progress was almost certainly safe, replicated and secured in Amazon’s data centres.

    The problem was that the API – the ‘door’ that allows the application to access this data – was not working.

    Having a backup in such a situation is useless. It is like having a perfect copy of the key to a safe that is locked in a burning building that cannot be accessed. You can have hundreds of backups, but if the entire computing, network and service platform fails, your data is simply inaccessible until the failure is fixed.

    “Digital twin” – the holy grail of reliability

    So what is the real safeguard against such a situation? There is one answer, albeit an extremely complicated one: architectural redundancy at supplier level.

    The holy grail of resilience is multi-cloud architecture. We’re not talking about using one service from Google and another from Amazon. We’re talking about having a full ‘digital twin’ of our entire application with another, competing cloud provider.

    Imagine an ideal scenario: our service runs simultaneously on AWS infrastructure and in parallel on Microsoft Azure. Special systems (e.g. DNS) monitor the status of both platforms. When the US-EAST-1 region in AWS starts reporting errors, all user traffic is automatically redirected to the twin infrastructure in Azure within seconds. The end user doesn’t notice anything, except perhaps a temporary slowdown.
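
    To make that mechanism more tangible, here is a minimal sketch – in Python, with assumed endpoint URLs, thresholds and a placeholder route_traffic_to() hook – of how health-check driven failover between two deployments could look. It is an illustration of the idea only, not how AWS, Azure or any DNS service actually implements it; production setups rely on managed health checks, weighted DNS records or global load balancers.

    ```python
    # Minimal, illustrative sketch of health-check driven failover between two
    # cloud deployments. The endpoints, thresholds and route_traffic_to() hook
    # are assumptions made for this example, not any provider's actual mechanism.
    import time
    import urllib.error
    import urllib.request

    DEPLOYMENTS = {
        "aws-us-east-1": "https://aws.example.com/healthz",   # hypothetical health endpoints
        "azure-eastus": "https://azure.example.com/healthz",
    }
    FAILURES_BEFORE_SWITCH = 3            # consecutive failed probes before traffic moves
    failures = {name: 0 for name in DEPLOYMENTS}
    active = "aws-us-east-1"              # deployment currently receiving traffic


    def is_healthy(url: str) -> bool:
        """Return True if the health endpoint answers HTTP 200 within the timeout."""
        try:
            with urllib.request.urlopen(url, timeout=3) as resp:
                return resp.status == 200
        except (urllib.error.URLError, TimeoutError):
            return False


    def route_traffic_to(target: str) -> None:
        """Placeholder: a real setup would update weighted DNS records or a
        global load balancer via the provider's API at this point."""
        print(f"redirecting user traffic to {target}")


    while True:
        for name, url in DEPLOYMENTS.items():
            failures[name] = 0 if is_healthy(url) else failures[name] + 1

        if failures[active] >= FAILURES_BEFORE_SWITCH:
            healthy = [n for n in DEPLOYMENTS if n != active and failures[n] == 0]
            if healthy:
                active = healthy[0]
                route_traffic_to(active)

        time.sleep(30)                    # real systems probe far more aggressively
    ```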

    Sounds ideal. So why does almost nobody do it?

    The brutal economics of reliability

    The answer is trivial and brutally honest: money. Implementing a true multi-cloud architecture is simply not cost-effective for 99% of companies worldwide, including many listed giants.

    We’re not talking about doubling your monthly cloud bill. We are talking about multiplying it, and the biggest costs are hidden.

    1. Technological costs (complexity): It is not possible to simply copy an application from AWS to Azure. Each provider has its own unique services and different APIs. A DynamoDB (AWS) database is not the same as Cosmos DB (Azure) or Spanner (Google). Maintaining application logic that can run seamlessly on two different technology foundations is a mammoth engineering challenge (a minimal sketch of such an abstraction follows this list).

    2. Operational costs (people): This architecture requires having dual, highly specialised engineering teams. You need AWS experts and Azure experts. In an era of IT talent shortages, this is a luxury that only a few can afford.

    3. Data synchronisation costs: This is the most difficult element. How do you ensure that user data (e.g. a new bank transaction or an item won in a game) is consistent between databases in Virginia (AWS) and Texas (Azure) in the same millisecond? The data transfer costs and logic complexity of such replication are astronomical.
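
    To illustrate what point 1 means in practice, below is a minimal Python sketch of the kind of provider-neutral storage layer a multi-cloud application needs: the business logic codes against one interface, and adapters hide the provider-specific APIs behind it. The KeyValueStore interface, the InMemoryStore stand-in and record_transaction() are names invented for this example; a real deployment would put a DynamoDB adapter and a Cosmos DB adapter behind the same interface, which is exactly where the engineering effort goes.

    ```python
    # Illustrative sketch of a provider-neutral storage abstraction. The interface,
    # the in-memory stand-in and record_transaction() are hypothetical names for
    # this example; real adapters would wrap the DynamoDB and Cosmos DB SDKs.
    from abc import ABC, abstractmethod
    from typing import Any, Dict, Optional


    class KeyValueStore(ABC):
        """The single contract the application codes against, whatever the cloud."""

        @abstractmethod
        def put(self, key: str, item: Dict[str, Any]) -> None: ...

        @abstractmethod
        def get(self, key: str) -> Optional[Dict[str, Any]]: ...


    class InMemoryStore(KeyValueStore):
        """Stand-in adapter used here so the sketch runs anywhere; a multi-cloud
        deployment would ship one adapter per provider behind the same interface."""

        def __init__(self) -> None:
            self._data: Dict[str, Dict[str, Any]] = {}

        def put(self, key: str, item: Dict[str, Any]) -> None:
            self._data[key] = dict(item)

        def get(self, key: str) -> Optional[Dict[str, Any]]:
            return self._data.get(key)


    def record_transaction(store: KeyValueStore, tx_id: str, amount: int) -> None:
        # Business logic stays identical regardless of which cloud backs the store.
        store.put(tx_id, {"amount": amount, "status": "confirmed"})


    store = InMemoryStore()
    record_transaction(store, "tx-123", 250)
    print(store.get("tx-123"))
    ```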

    And here we come to the bottom line. Business knows how to calculate. Companies like Zoom, Duolingo and Roblox have consciously made a risk calculation. The cost and reputational damage associated with a few hours of downtime once every year or two is acceptable – and lower than the constant, gigantic cost of maintaining true multi-cloud redundancy.

    A lesson we must learn

    The failure, then, is not a failure of AWS engineers or evidence of the weakness of the cloud. It is a failure of our illusions about it.

    The cloud is a tool. It can be used to build a low-cost, flexible infrastructure that is nevertheless fundamentally dependent on a single provider. Or it can be used to build an extremely expensive, complex and truly fault-tolerant fortress, running across multiple providers.

    Choosing both at the same time – cheap, simple and 100% reliable – is a privilege that almost no one can afford.

    The Virginia incident forced the entire industry to answer the question: how much are we really willing to pay for 100% availability? It turns out to be much less than we like to claim at industry conferences.

  • Data sovereignty is no longer an empty buzzword, today it has a real impact on purchasing decisions – Tomasz Sobol, OVHcloud

    Data sovereignty is no longer an empty buzzword, today it has a real impact on purchasing decisions – Tomasz Sobol, OVHcloud

    Klaudia Ciesielska, Brandsit: Data sovereignty is increasingly emerging in the context of the digital strategies of companies and public institutions. How does OVHcloud define this concept and what are its key pillars from the cloud provider’s perspective?

    Tomasz Sobol, OVHcloud: For OVHcloud, data sovereignty has from the outset meant an organisation’s control over where and how its data is stored, processed and protected – both technologically and legally. This concept rests on three pillars: transparency, compliance with local regulations and technological independence. Equally important is the clarity of the offering: the customer needs to know exactly where his or her data is located, what security mechanisms are in place and whether the service meets regulatory requirements.

    K.C.: Which regulations (e.g. GDPR, DORA, NIS2, Gaia-X) are most influencing customer expectations in the area of data sovereignty today?

    T.S.: Customer expectations in the area of data sovereignty are today shaped by a number of regulations that set new standards for digital responsibility. GDPR is still the foundation, providing a framework for transparency and control over the processing of personal data. NIS2 extends cyber security obligations to key sectors of the economy, and DORA strengthens the digital resilience of financial institutions and their technology providers. All these regulations raise technology requirements, but also redefine the customer-cloud provider relationship. Today, it is not only compliance that counts, but also proactive support in implementation.

    “Today, it is not only compliance that counts, but also active support in implementation.”

    K.C.: With the public cloud, the question of data location and jurisdiction is key. How does OVHcloud give customers control over where and how their information is stored?

    T.S.: Full control of the data comes from our approach to managing it. Firstly, we implement an integrated operating model where we oversee the entire chain of operational activities – we design and build the servers ourselves, as well as managing our own data centres (44 locations on 4 continents) and fibre network. Secondly, we provide our customers with a choice of regions and availability zones within the EU, such as Poland, France or Germany. This approach guarantees control of data in accordance with the European legal system and specific regulations. All of this translates into building autonomy and operational security, which in effect provides customers with full control over their own data and how it is processed.

    K.C.: In recent years we have seen a growing interest in the European cloud as an alternative to American and Asian providers. How does OVHcloud see its role in building Europe’s digital independence?

    T.S.: We are seeing a clear shift in the approach to the cloud, with more and more organisations driven not only by technical parameters but also by the need for strategic sovereignty. OVHcloud is responding to these expectations by offering solutions that comply with European regulations and protect data from the influence of jurisdictions outside the continent. We are also constantly strengthening our infrastructure, currently operating 44 of our own data centres and 16 local zones. We are developing products so that these meet the needs of government and regulated sector customers. Examples include SecNumCloud – a private government cloud – and On-Prem Cloud Platform – allowing you to build a private cloud on dedicated OVHcloud hardware with the ability to connect it to global resources. Our aim is to provide proximity of service, compliance with local laws and real support in building a sovereign cloud for European customers.

    “More and more organisations are driven not only by technical parameters, but also by the need for strategic sovereignty.”

    K.C.: Is data sovereignty a topic that really influences customers’ purchasing decisions, or is it more a part of companies’ image strategy?

    T.S.: Data sovereignty is no longer an empty buzzword – today it really influences purchasing decisions, especially in the area of AI projects and digital transformation. It means control and influence over strategic resources, which is why companies are increasingly choosing cloud providers not only for the technology, but also for compliance with local laws, transparency of offerings, the ability to decide on the location of data and a guarantee of portability and easy exit from the contract or architecture (the so-called exit-friendly approach), as required by the Data Act. This is in response to increasing regulatory requirements and the need to build digital resilience. In practice, this means that data sovereignty is becoming a strategic pillar for responsible growth and competitiveness – a condition for scaling business without compromising on security and compliance.

    K.C.: What trends in regulation, technology or user awareness are most likely to change attitudes towards data sovereignty in the cloud in the next 2-3 years?

    T.S.: Three key trends will shape the approach to data sovereignty in the cloud over the next 2-3 years. Regulation will introduce more stringent requirements for cyber security, digital resilience and the transparency of AI systems. Technologies – including local data zones, end-to-end encryption, tools for advanced identity management or AI-based threat monitoring – will become a natural part of cloud providers’ offerings. In parallel, user awareness will grow, as we have seen especially since the beginning of the year.

  • A new iPhone and a cool market reaction

    A new iPhone and a cool market reaction

    Apple has unveiled the thinnest smartphone in its history – the iPhone Air. It is the biggest change to the company’s portfolio in eight years and a direct response to accusations of a lack of innovation. The new model is designed to reinvigorate sales and prove that the Cupertino-based company can still dictate the market in terms of design and engineering.

    The new line of smartphones headed by the Air model represents a clear shift in Apple’s product strategy. Encased in a titanium frame measuring just 5.6mm thick, the device is slimmer than its main competitor, the Samsung S25 Edge.

    The minimalist design, reminiscent of the iconic MacBook Air, forced engineers to significantly miniaturise components to maintain the claimed all-day battery life.

    Priced in the middle of a new product offering, the iPhone Air is expected to drive hardware upgrades by users. Analysts, initially sceptical, admitted after the launch that the refreshed look and capabilities of the device could be effective in attracting customers.

    The strategic price positioning, $100 lower than Samsung’s flagship on its debut day, is expected to further strengthen its market position, especially during the crucial festive season.

    However, the market reaction has been chilly. Apple’s shares have taken a tumble, and investors remain unconvinced about the company’s pricing strategy and its ability to compete with rivals in the field of artificial intelligence.

    While the iPhone Air debuts the new A19 Pro chip, optimised for AI tasks, the presentation itself did not provide much evidence of how Apple intends to close the gap with leaders such as Google.

    The iPhone Air is therefore, on the one hand, an impressive engineering achievement and a return to the company’s roots, where design played a key role.

    On the other hand, it represents a risky game in which success depends not only on the slim casing but, above all, on whether declarations of battery performance and AI breakthroughs are confirmed in reality. The coming months will show whether design will be enough to dominate the market again.

  • Technology giants’ slip-ups. What can we learn from the biggest failures in the IT world?

    Technology giants’ slip-ups. What can we learn from the biggest failures in the IT world?

    In Silicon Valley, it is said that failure is not a shame, but a badge of honour; proof that one had the courage to take risks. It’s a convenient narrative, but there is a grain of truth behind it. Even the biggest players with almost unlimited budgets – Google, Microsoft or Samsung – have had their share of spectacular stumbles.

    Products that were supposed to revolutionise the market now rest in the technological graveyard. However, let’s forget about malice. Let’s look at these stories to extract universal and timeless business lessons.

    Google Glass – When technology overtakes society

    Do you remember 2012? Google presented the future to the world, and it came in the form of smart glasses. The Glass project, with its futuristic interface projected directly in front of the eye, created a wave of excitement.

    The journalists and developers who joined the $1,500 Explorer programme felt they were touching tomorrow. They could shoot video, take photos and navigate, looking at the world through the lens of data.

    However, the spell faded as quickly as it had appeared. Glass users became infamously known as ‘Glassholes’ because those around them felt they were under constant surveillance. Is my interlocutor recording me? Is he or she just taking my picture?

    The lack of a clear answer to these questions spawned an insurmountable barrier. What’s more, beyond the ‘wow’ effect, no one really knew what the device was to be used for on a day-to-day basis.

    It was a solution in search of a problem – expensive, weird-looking and socially troublesome.

    Business moral: Innovation must be socially acceptable. The most advanced technology will fail if it ignores the cultural context, social norms and real user needs.

    The lesson with Google Glass is simple: it is not enough to ask “can we build this?”, the key question is “should we and does anyone need it?”.

    Windows Phone – Building a great product in a market vacuum

    Microsoft was late to the smartphone revolution, but when it finally got into the game, it did so with aplomb. Windows Phone was a system that delighted critics. Its tile-based interface, known as Metro UI, was fresh, elegant and ran incredibly smoothly even on weaker devices.

    To pose a real challenge to the Apple and Google duopoly, the Redmond giant even took over Nokia’s legendary mobile division. It had a great system and excellent hardware on its hands. What could go wrong?

    Everything around it. The Windows Phone debacle is a textbook example of the problem known as the ‘app gap’. Users didn’t want a system that didn’t have Snapchat, the latest games or their banking apps.

    Developers, in turn, did not want to develop software for a platform with a marginal market share. This vicious circle proved deadly. Microsoft built a beautiful and capable car, but forgot about roads, petrol stations and garages.

    Business moral: The product itself, even the best, is not enough. In today’s world, the king is the ecosystem. Users don’t buy the device or system itself – they buy access to millions of apps, services and communities. Without the support of third-party developers and a strong network effect, even the biggest player is doomed to fail.

    Samsung Galaxy Note 7 – When haste leads to spontaneous combustion

    In the second half of 2016, Samsung was riding a wave of success. The Galaxy Note 7 was to be the masterpiece crowning its dominance of the Android market and the ultimate ‘iPhone killer’. The device received rave reviews for its symmetrical design, phenomenal screen and the best camera on the market. Sales took off. And then the phones started to catch fire.

    Reports of exploding batteries, initially treated as isolated incidents, quickly turned into a global crisis. It turned out that, in the pursuit of the thinnest possible chassis and the desire to get ahead of Apple’s launch, engineers had packed the battery cells too aggressively, leaving them no room to expand during normal operation.

    Faulty design combined with insufficient quality assurance (QA) testing created a ticking bomb. A global recall and bans on bringing the product on board aircraft became an image nightmare.

    Business moral: Never sacrifice quality and safety on the altar of time-to-market. Foundations are more important than fireworks. One critical mistake can not only destroy a brilliant product, but also cost a company billions of dollars and – more painful still – years of rebuilding customer trust.

    Golden lessons from the technology graveyard

    The stories of Google Glass, Windows Phone and Galaxy Note 7 are more than curiosities – they are case studies illustrating key dynamics governing the technology market. The Google Glass story shows how even the most advanced technology can fail if it ignores societal needs and norms.

    The case of Windows Phone, on the other hand, proves that in today’s world an isolated product, even a technically polished one, stands little chance against the power of a vibrant ecosystem.

    Finally, the Galaxy Note 7 fiasco is a clear example that rushing and compromising on quality leads to the loss of the most valuable capital – customer trust.

    These spectacular failures are not a sign of weakness, but a natural part of the innovation process. The ability to learn from them and adapt is what ultimately produces more mature and successful products.

  • Hybrid threat: How drones over Poland translate into cyber risk

    Hybrid threat: How drones over Poland translate into cyber risk

    The night of 9-10 September 2025 will go down in history as the moment when the war across our eastern border ceased to be a mere media story and became a tangible threat. The appearance of Russian drones over Poland, and their downing by the Polish armed forces, is an unprecedented event.

    However, anyone who views this incident solely in military terms is making a strategic mistake. For the violation of airspace was a high-profile prologue to the silent offensive that is about to begin in Polish cyberspace.

    Drones over Poland and the anatomy of Russian cyber-aggression: how does the Kremlin machine work?

    To understand what lies ahead, we must first grasp the adversary’s philosophy of operation. For years, Russia has perfected a doctrine of hybrid warfare in which missiles, bytes and disinformation form a single, integrated arsenal.

    The aim is no longer just to conquer territory, but to paralyse the state from within – breaking its economy, destroying trust in its institutions and dividing its society.

    In this strategy, cyber attacks play a key role, with specialised secret service units acting with finesse and brutality.

    These operations are headed by two main actors whose code names should be familiar to any security professional:

    • GRU (APT28/Fancy Bear): This is the digital equivalent of the Spetsnaz. These units, subordinate to military intelligence, specialise in high-profile, destructive and sabotage operations. Their goal is chaos. They are behind the attacks on Ukraine’s power grid, the hacking of electoral systems and the devastating wiper malware attacks that irretrievably erase data. If something is to be destroyed, switched off or paralysed, the GRU steps in.
    • SVR (APT29/Cozy Bear): This is the aristocracy of Russian digital intelligence. It operates more quietly and subtly, and its operations are characterised by extreme patience. The Foreign Intelligence Service focuses on long-term espionage. It is responsible for the notorious attack on the SolarWinds software supply chain, which for months gave it access to the networks of thousands of companies and government agencies around the world. Its focus is on information, strategic advantage and quietly planting ‘digital sleeper agents’ in key enemy systems.

    Significantly, Russian services are blurring the line between state operations and common cybercrime.

    Ransomware groups such as Conti or LockBit often receive tacit permission from the Kremlin to operate in exchange for fulfilling ‘orders’ hitting Western targets – hospitals, corporations or local governments. This allows them to wreak havoc at the hands of seemingly independent criminals and further complicates the attribution of attacks.

    Scenarios for Poland: predicted attack vectors

    In the context of recent events, Poland is becoming a high-priority target. We can expect to be hit from several directions simultaneously.

    Scenario 1: Impact on critical infrastructure (ICS/SCADA)

    This is the most dangerous scenario. Industrial control systems on which the functioning of the state depends will be targeted. Attacks could target:

    • Energy sector: Attempts to take control of transformer substations in order to trigger regional or even national blackouts.
    • Transport and logistics: Paralysis of rail traffic management systems, which would have a direct impact on support shipments to Ukraine, but also on the national economy.
    • Water supply and treatment plants: Manipulation of control systems could lead to interruptions in water supply or, in extreme cases, to water contamination.

    Scenario 2: Administrative paralysis and data theft

    Key institutions of the state will become the main target of espionage operations (conducted by the SVR). Massive spear-phishing campaigns should be expected, precisely targeting officials and military officers from the Ministry of Defence, the Ministry of Foreign Affairs or the Ministry of Digitalisation.

    The aim will not only be to steal security data and defence plans, but also to take control of accounts that can be used for further escalation or disinformation operations.

    Scenario 3: Information warfare and social chaos

    This attack is already underway, but it will now enter a new, intense phase. Its aim is to destroy the social fabric. We can expect:

    • DDoS attacks on major news portals and banking services to give the impression that the state is losing control.
    • Defacement (content substitution) of government websites to publish false messages and sow panic.
    • Massive disinformation campaigns on social media, run by troll farms and bots. Narratives will focus on undermining the effectiveness of the Polish army (‘they didn’t shoot everything down’), accusing the government of ‘provoking Russia’ and stoking anti-Ukrainian sentiment.

    Why is increased activity inevitable?

    These predictions are not mere speculation. They stem directly from an analysis of Russian war doctrine and the logic of the current situation.

    1. Asymmetric retaliation. Russia cannot afford an open armed conflict with a NATO country. The downing of its drones was a slap in the face that cannot go unanswered. Cyberspace is the ideal theatre for retaliation – it allows painful blows to be struck against the economy and infrastructure while avoiding crossing the threshold of open war.
    2. Phase two of the operation. The drone incursion was designed not only to strike Ukraine, but also to test the response time and procedures of the Polish defence. Now phase two begins: creating internal chaos in a country that is a key logistical hub for Ukraine and a pillar of NATO’s eastern flank. A Poland weakened and preoccupied with its own problems is a strategic target for the Kremlin.
    3. Testing the alliance. Russia wants to test in practice how Article 5 solidarity mechanisms work, not only in the military dimension but also in the cyber dimension. A massive attack on Poland would be a test of response procedures and cooperation within NATO.

    The front runs through every server room today

    We must abandon the illusion that cyber security is a technical problem locked up in IT departments. Today, it is the foundation of national security, with every administrator, developer and manager becoming a defender on the digital front line.

    The time of reactive firefighting is irrevocably over. A paradigm shift towards proactive defence and resilience building is required.

    It is worth emphasising at this point: the purpose of this analysis is not to sow panic, but to build strategic awareness and resilience. It is sound knowledge and cool risk assessment, not fear, that provide the basis for effective preparation for scenarios that could materialise at any time.

    For the IT industry, this means immediate action is required:

    • The implementation of the ‘Zero Trust’ architecture: The principle of “never trust, always verify” must become standard in every corporate and government network.
    • Proactive Threat Hunting: Security teams need to actively hunt for signs of intruders on their networks, rather than passively waiting for alerts from SIEM systems (a minimal illustration follows this list).
    • Audit and Testing of Incident Response Plans (IRPs): Having a plan on paper is not enough. It needs to be tested regularly through simulations so that when a crisis occurs, everyone knows what to do.
    • Building Public Resilience: The IT sector has a huge role to play in educating employees and the general public on how to recognise disinformation and phishing.
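
    What does proactive threat hunting look like in practice? The sketch below is purely illustrative: the log format, field names and thresholds are assumptions made for the example, not a reference to any specific SIEM or EDR product. It flags two simple anomalies that hunters often start with – a first-ever login from a previously unseen country, and a privileged logon outside business hours.

    ```python
    # Illustrative threat-hunting sketch over hypothetical authentication events.
    # In practice the same logic would run over SIEM/EDR exports, not a hard-coded list.
    from collections import defaultdict
    from datetime import datetime

    events = [
        {"user": "j.kowalski", "country": "PL", "admin": False, "ts": "2025-09-12T09:14:00"},
        {"user": "j.kowalski", "country": "PL", "admin": False, "ts": "2025-09-12T11:02:00"},
        {"user": "j.kowalski", "country": "RU", "admin": True,  "ts": "2025-09-13T03:41:00"},
    ]

    def hunt(events):
        seen_countries = defaultdict(set)
        findings = []
        for e in sorted(events, key=lambda ev: ev["ts"]):
            user, country = e["user"], e["country"]
            hour = datetime.fromisoformat(e["ts"]).hour
            # New-country heuristic: login from a country this account has never used before
            if seen_countries[user] and country not in seen_countries[user]:
                findings.append(f"{user}: first login from {country} at {e['ts']}")
            seen_countries[user].add(country)
            # Off-hours privilege heuristic: admin logon between 22:00 and 06:00
            if e["admin"] and (hour >= 22 or hour < 6):
                findings.append(f"{user}: off-hours admin logon at {e['ts']}")
        return findings

    for finding in hunt(events):
        print(finding)
    ```

    Such heuristics do not replace analysts; they generate leads that a security team then investigates and either dismisses or escalates according to its incident response plan.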

    The red sky over eastern Poland was a test of our military procedures. The upcoming digital offensive will be a test of the resilience of our entire state and society. This is not a time for fear, but for the consolidation of forces – for cooperation between the private sector and public administration, for sharing knowledge about threats and for building a digital shield that neither massive DDoS attacks nor precision spying operations can break. History teaches that Poland’s greatest strength in the face of threats has always been its ability to mobilise and adapt. Today, this mobilisation must take place in our networks, server rooms and minds.

  • Artificial intelligence systems and copyright

    Artificial intelligence systems and copyright

    Artificial intelligence (AI) systems can be divided into two basic groups, which differ both technologically and legally:

    • traditional artificial intelligence (AI),
    • generative artificial intelligence (Generative AI).

    The above breakdown is important for a proper understanding of the obligations involved, the potential risks and the regulations that apply to them, especially in terms of copyright.

    Traditional AI systems operate on the basis of clearly defined algorithms and rules that process data, detect patterns and make decisions. In this case, copyright is primarily concerned with the input data and the effects of human work on the system, while the algorithms and models themselves are not protected as works. Usually, serious copyright issues do not arise here because AI does not generate new autonomous works.

    Generative AI is a different matter. It creates entirely new content – text, images, sounds or code – based on patterns learned from huge data sets, which often contain copyrighted material. This raises a number of legal issues, such as:

    • Use of protected material to train AI models: In the context of the development of generative AI, a key issue is how AI models are trained on huge datasets, which often contain copyrighted material such as texts, images, music or source code. Under the EU’s Artificial Intelligence Regulation (AI Act), which aims to regulate the use of AI in the European Union, providers of generative models are required to respect copyright owners’ objections to the use of their works in the training process. This procedure is called ‘opt-out’ and allows creators or rights owners to object to the use of their content for training AI systems.
    • Copyright protection of generated works: AI creations are usually not considered copyrighted works because they are not created by humans, which means that they often end up in the public domain. At the same time, however, there is a major problem with the fact that AI uses huge datasets when generating content, which often contain copyrighted material. If the generated work contains elements that approximate or even copy protected works, third-party copyright infringement may occur. Such infringements may incur legal liability on the part of users or AI providers, even if they were not fully aware of the use of protected material.
    • Lack of a clear limit of protection: In the case of more complex creations that result from multiple human-AI interactions (e.g. adapting prompts), the question of granting legal protection is unclear and requires further regulation. In addition, current regulations, such as the EU’s 2019 Digital Single Market Copyright Directive (CDSM) and the 2024 AI Act Regulation, introduce some mechanisms to protect creators’ rights, but leave a lot of ambiguity in interpretation. These provisions do not make it clear how to treat copyright in the context of AI-generated works, which causes difficulties for both creators and AI technology developers.

    The division of AI systems into traditional and generative AI systems carries important legal implications. Traditional AI is primarily subject to standard regulations regarding data processing and liability for decisions made. Generative AI, on the other hand, which creates new content, poses additional legal challenges, especially in the areas of copyright, confidential data protection and contractual compliance. In addition, it is subject to increasingly detailed and extensive regulations.

    Copyright law vis-à-vis artificial intelligence faces significant challenges. Currently, works created autonomously by AI are not protected, and the use of protected materials to train models requires consideration of the owners’ rights. The future of copyright regulation will depend on further legislative work and case law, which will need to define more precisely the rules for the use of AI in creation and the protection of the rights of creators and users.

    As such, both developers and users of these systems should carefully read the applicable regulations and consciously assess the risks associated with their use.


    Author: r.pr. Damian Lipiński, GFP_Legal | Grzelczak Fogel i Partnerzy | Wrocław Law Firm

  • Cyber security is not a sprint. Companies need to stop putting out fires and start planning

    Cyber security is not a sprint. Companies need to stop putting out fires and start planning

    The increasing number of cyber attacks, new regulatory obligations and limited human resources make cyber security one of the key challenges for Polish companies – regardless of their size or industry. Dawid Zięcina, Technical Department Director at DAGMA Bezpieczeństwo IT, discusses what threats dominate today’s business environment, to what extent SOC-as-a-Service is becoming a viable alternative and what mistakes and organisational barriers companies most often face when building security systems.

    Klaudia Ciesielska, Brandsit: What cyber threats are currently dominating the Polish corporate environment? Are you really seeing an increase in advanced attacks (APTs), or are phishing incidents and malware still prevalent?

    Dawid Zięcina, Dagma IT Security: Polish companies are still exposed to the same, well-known types of cyber attacks. The most common threat remains classic phishing based on fake websites. Although the number of phishing campaigns is slightly lower than in previous years, it is still the technique most commonly used by cybercriminals.

    In the case of malicious software (malware), our observations are consistent with data from industry reports – the scale of its use is growing, with data theft being the main target.

    It is also worth noting the increasing activity of APT (Advanced Persistent Threats) groups, which is closely linked to the current geopolitical situation. These are usually groups linked to foreign states, operating for intelligence and disinformation purposes. Importantly, their activities are increasingly extending beyond the public sector or large state-owned companies – smaller companies in the supply chain are also becoming victims of attacks. Individuals associated with employees or owners of these companies are also sometimes targeted.

    “Polish companies are still exposed to the same, well-known types of cyber attacks.”

    Brandsit: The NIS2 Directive and the amendment to the KSC Act introduce significant obligations in the area of cyber security. What challenges do companies most often face when trying to implement compliance with these regulations?

    D.Z.: Currently, the biggest challenge for Polish companies in implementing NIS2 compliance is the lack of an unambiguous, officially adopted Polish interpretation of the national regulations to be included in the amended Act on the National Cyber Security System (KSC). Although most of the guidelines contained in NIS2 have a relatively clear interpretation, it is the implementation details in the Act that may, in practice, determine the direction of change in the area of cyber security. For this reason, many organisations are adopting a wait-and-see attitude.

    Despite the lack of a final law, we have seen a significant increase in interest in services supporting the implementation of information security management systems (compliant with ISO/IEC 27001) and business continuity systems (compliant with ISO 22301). This is a good direction that allows organisations to prepare in a systemic way for the upcoming requirements and to plan specific actions.

    A common problem is a lack of awareness of how much in-depth analysis of one’s own operations these processes require, and how much time and how many resources need to be devoted to effectively implementing solutions that increase the organisation’s cyber resilience – particularly against the risk of downtime caused, for example, by a cyber attack.

    Brandsit: Is security outsourcing – e.g. in the form of SOC-as-a-Service – becoming a viable alternative for companies without in-house security teams?

    D.Z.: Managed cyber-security services are gaining popularity not only among companies that do not have their own teams of specialists, but also as a support for existing security departments. With services such as SOC-as-a-Service, the customer gets access to an efficient, highly specialised team, ready to operate in the customer’s environment within a short time of the service launch.

    Importantly, the client gains a wide range of the competences needed to handle security at various stages – without having to employ narrowly specialised experts regardless of whether an incident occurs and, if so, of what type.

    Maintaining and managing such extensive teams internally would require significant human and financial resources – in an outsourcing model, this responsibility shifts to the service provider, making this solution particularly attractive in terms of flexibility and cost-effectiveness.

    Brandsit: What strategic mistakes do companies most often make when building an IT security management system?

    D.Z.: The most common mistakes companies make during the implementation phase of security systems are the lack of a transformation plan based on a sound risk analysis, a piecemeal approach to the problems identified and underestimation of the human, time and financial resources required for successful implementation.

    “Cyber security is an ongoing process that has no endpoint and requires creating the right environment for growth.”

    Very often organisations approach the process as a sprint, assuming that once the goal is reached quickly, the project will be completed. Meanwhile, cyber security is an ongoing process that has no endpoint and requires the right environment to be created for development.

    Such an environment can be built by, among other things, implementing an information security management system and a business continuity system – even if the organisation does not plan to formally certify compliance with the chosen standard.

    Brandsit: Are you seeing a change in the approach of boards and a shift in budgets towards cyber security, or is it still treated as a duty rather than a real business need?

    D.Z.: In companies where experienced professionals are responsible for cyber security, boards demonstrate a high level of understanding of both the responsibility involved and the positive impact of well-implemented security processes on the business as a whole.

    “Far more often than not, it is the downplaying of risks or ignoring previously identified problems that leads to costs that are disproportionately higher than investments that could have been made in advance – before the incident occurred.”

    However, we still encounter an approach in which cyber security is seen as an unnecessary constraint – something that hinders operations and generates costs without generating direct revenue.

    Building awareness of the risks, analysing the impact of IT on business operations and identifying scenarios where the organisation could be paralysed or suffer significant losses as a result of business disruption are key elements in changing this perspective.

    It is worth emphasising that ensuring the security of systems and networks does not have to involve huge expenditure. Far more often, it is the downplaying of threats or ignoring previously identified problems that leads to costs that are disproportionately higher than investments that could have been made in advance – before the incident occurred.

  • Microsoft is playing for a long position in AI. Opening up to Grok marks the beginning of the end of the OpenAI monopoly

    Microsoft is playing for a long position in AI. Opening up to Grok marks the beginning of the end of the OpenAI monopoly

    At the Build 2025 conference, Microsoft announced the availability of Grok, a language model developed by the xAI start-up founded by Elon Musk, on its Azure cloud platform. While this may seem like just another step in the expansion of its AI offering, it actually signals a significant change of course. Microsoft is betting on openness towards a variety of artificial intelligence providers – including those that compete with its strategic partners, such as OpenAI.

    The move opens up new opportunities for Azure customers, but also raises questions about the future of Microsoft’s entire cloud ecosystem. Is openness an asset at a time of dominance by a few big AI players, or a strategic risk?

    From Copilot to Grok: Microsoft seeks balance

    Over the past few years, Microsoft has been building its image as a leader in the field of generative AI, largely based on its close collaboration with OpenAI. GPT-4 models drive a number of the company’s products, from Microsoft 365 to developer tools. In this context, the arrival of Grok in Azure is a signal that the company does not want to be held hostage to a single vendor.

    xAI, founded by Elon Musk, positions Grok as an alternative to what it sees as the overly tame models of other companies. The model itself gained notoriety for, among other things, its integration with X (formerly Twitter), but its arrival on Azure is more than just another integration. Microsoft is signalling that it does not want to be associated with a single approach to AI – and that the Azure platform is intended to be a space for multiple perspectives.

    Diversity as an advantage … and a challenge

    From the point of view of business customers, this is good news. Different AI models offer different capabilities, and being able to choose can bring real benefits – a better fit for a given industry, domain language, operating costs or data-processing policies. Companies increasingly want options: not just ‘GPT or nothing’, but, for example, Grok for fast social media processing, Mistral for offline work and Claude for document analysis (a simple routing sketch follows below).
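
    To make the ‘right model for the right task’ idea concrete, here is a minimal, purely illustrative routing sketch. The task categories and model names are assumptions for the example and do not describe any Azure API or product configuration.

    ```python
    # Illustrative only: route a request to a model family based on task type.
    # The routing table is an assumption for the example, not an Azure feature.
    ROUTING_TABLE = {
        "social_media": "grok",       # fast, conversational processing
        "offline_batch": "mistral",   # can be self-hosted for offline work
        "document_review": "claude",  # long-context document analysis
    }

    def pick_model(task_type: str, default: str = "gpt-4") -> str:
        """Return the model family configured for a task, falling back to a default."""
        return ROUTING_TABLE.get(task_type, default)

    if __name__ == "__main__":
        for task in ("social_media", "document_review", "unknown_task"):
            print(task, "->", pick_model(task))
    ```

    Even a routing table this simple makes the governance question visible: someone has to decide, per task and per data category, which model is acceptable – which is exactly where the complexity described below begins.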

    However, openness is not free. Managing multiple models in parallel on a single cloud infrastructure generates complexity – especially in terms of security, visibility and regulatory compliance. What is flexibility for some may be the beginning of chaos for others.

    Ecosystem under pressure

    Microsoft promotes GPT-based Copilots on the one hand, while making competing solutions – such as Grok – available on the other. This dual-track approach can create tension with both partners and end customers. What will happen to integrators and providers of OpenAI-only solutions? Will they be forced to adapt to the ‘new pluralism’, or will they start looking for more closed environments?

    From an end-user perspective, this can also lead to a fragmented experience. When different tools work with different AI models, there is a question of consistency of results, data security and control over the flow of information.

    Security: a new front line

    The biggest challenge, however, relates to security. Every new model in the Azure ecosystem is a new attack vector – not necessarily due to maliciousness on the part of the developers, but through lack of standardisation, configuration imperfections and limited transparency.

    The multi-model AI environment in the cloud means that it is not always clear who is processing the data, how and for what purpose. The line between legitimate and covert use of AI is becoming increasingly difficult to grasp. Companies that don’t have the right tools to inspect, audit and detect anomalies may not even know that their data has ended up in a model they never validated.

    This is forcing organisations to redefine their security strategy. Traditional approaches – such as firewalls or simple DLP systems – are no longer sufficient. What is needed are zero-trust architectures, advanced behavioural analysis mechanisms and least privilege policies that cover not only people but also machines.
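
    What might a least-privilege policy that also covers machines look like? The sketch below is a minimal ‘deny by default’ illustration: a workload identity may call an AI endpoint only if an explicit allow rule exists and the data classification permits it. The identities, endpoints and classifications are hypothetical.

    ```python
    # Minimal "deny by default" sketch for machine identities calling AI endpoints.
    # All identifiers here are hypothetical and used only to illustrate the principle.
    ALLOW_RULES = {
        ("svc-marketing-bot", "grok-endpoint"): {"public"},
        ("svc-legal-review", "claude-endpoint"): {"public", "internal"},
    }

    def is_allowed(identity: str, endpoint: str, data_classification: str) -> bool:
        """Zero-trust style check: no explicit rule means no access, regardless of network location."""
        permitted = ALLOW_RULES.get((identity, endpoint))
        return permitted is not None and data_classification in permitted

    print(is_allowed("svc-marketing-bot", "grok-endpoint", "public"))         # True
    print(is_allowed("svc-marketing-bot", "claude-endpoint", "public"))       # False: no rule
    print(is_allowed("svc-legal-review", "claude-endpoint", "confidential"))  # False: classification
    ```

    The point is not the code but the posture: access to every model endpoint is denied unless an explicit, auditable rule says otherwise, for services just as much as for people.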

    Will Microsoft become a ‘marketplace’ for AI?

    The opening to Grok may be the harbinger of a wider trend – Microsoft may be looking to make Azure something like an ‘App Store’ for AI models. The customer chooses which model they want to use and Microsoft provides the infrastructure, access and integration.

    On the one hand, it’s an interesting business model – Microsoft doesn’t need to invest in its own LLMs as much if it creates an open platform with models from other companies. On the other – it requires strong quality, security and compliance controls, without which such a platform will quickly turn into a minefield.

    The question is: will users trust a platform that gives freedom of choice but shifts some responsibility to the customer?

    Openness is the future – but it requires maturity

    Opening up Azure to alternative AI models is a logical step towards the democratisation of artificial intelligence. Microsoft wants its cloud to be a place where any model can be used, tailored to specific needs.

    But the greater the diversity, the greater the need for order. Companies must not only choose the best models, but also understand how these models work, what data they process and what risks they pose. Without this, openness will turn into uncontrolled exposure.

    Microsoft is trying to play several instruments at once these days. The question is whether it can hold the tune – or whether the performance will descend into noise.