Category: Legislation and regulations

  • The Greenland effect in IT: How unpredictable US policy is driving the European cloud

    Until a few years ago, the term ‘technological sovereignty’ was the domain of academic debates and niche reports prepared by EU officials in Brussels. For a European CEO or CTO, US Big Tech was like gravity – fixed, inevitable and, despite some privacy controversies, a guarantee of stability. Recent months, however, have brutally tested that optimism. Events between Washington and Brussels, including Donald Trump’s staggering territorial ambitions towards Greenland, have catalysed changes that could redraw the map of digital business in Europe for good.

    The end of digital optimism

    Why has the ‘Greenland effect’ become a symbol of change in IT? While the US administration’s territorial designs on the island may have seemed like a media anecdote, for European business leaders it was a clear warning: we live in a time in which the existing rules of the game and alliances can be challenged by a single tweet or an unpredictable political decision.

    Risk is no longer theoretical. European businesses now have to ask themselves a question that until recently sounded like a sci-fi movie script: what happens to my company if access to US SaaS services, cloud computing or data centres is blocked as the result of a diplomatic dispute? The answer today is a new strategy of ‘limited trust’ towards technology.

    The statistics of dependence: Landscape after the battle

    To understand the scale of the challenge, it helps to look at the hard data. In 2024, European customers spent nearly $25 billion on cloud infrastructure provided by the five largest US players. According to IDC, US companies control as much as 83% of the European cloud market.

    The contrast is striking when we recall the Europe of two decades ago. In the age of mobile telephony, it was our continent that dictated the terms, thanks to the power of Nokia and Ericsson. Today, in the age of the data economy, Europe finds itself in the deep shadow of the United States and China. Attempts to build local search engines or social networks have failed, crushed by American scale, a risk-taking culture and almost unlimited access to capital.

    EU business leaders point to three main inhibitors: excessive bureaucracy, market fragmentation into 27 national systems and a fear of risk that paralyses innovation at an early stage.

    Fortress Europe: A new defence strategy

    Faced with rising tensions, Germany and France – the two largest economies in the Union – have stopped waiting for a pan-European consensus and have gone on the offensive. The strategy is clear: if we cannot (yet) create our own Google, we must secure the foundations.

    The German Federal Ministry of Digitalisation has just implemented openDesk, an open source alternative to Microsoft tools. This signals that open source software is ceasing to be the domain of enthusiasts and is becoming an ‘insurance policy’ for state institutions and strategic enterprises. France, on the other hand, is promoting Visio, a local videoconferencing solution, eliminating dependence on US platforms in public administration.

    President Emmanuel Macron is going one step further, offering cheap nuclear power to companies building data centres in the region and actively supporting Mistral AI – the European answer to software from OpenAI. This is no longer just politics; it is the construction of a new business ecosystem in which the ‘origin of technology’ becomes a key parameter of choice.

    Giants’ response: Camouflage or adaptation?

    US tech giants are not going to stand idly by and watch their loss of influence in a region that generates hundreds of billions of dollars in revenue for them. Big Tech’s adaptive strategy is fascinating: they are building ‘European clouds’ to look and act like local companies.

    Microsoft is stepping up its collaboration with Delos Cloud (a subsidiary of SAP), while Google is setting up independent entities based in Germany and staffed exclusively by EU personnel. The aim is clear: to defuse concerns about the US CLOUD Act, which in principle allows US authorities to access data stored abroad.

    However, for the informed CTO, this is still a half-hearted solution. The question of whether the US giant’s ‘local company’ will realistically resist pressure from its own government in a crisis situation remains open.

    Change management: People, not just bits

    As Frank Karlitschek, CEO of Nextcloud, points out, technology is only half the battle. The biggest challenge for European business is change management. Migrating from comfortable, familiar US systems that have been in place for years to European or open-source alternatives is an operationally painful process.

    It requires excellent communication and preparing employees to change their habits. In the new geopolitical paradigm, however, this effort is seen not as a cost but as an investment in business continuity.

    Technology as a diplomatic currency

    “The Greenland effect” has made Europe realise that in the 21st century sovereignty does not end at land borders – it begins at servers. Europe is not seeking complete isolation from American technology, because that would be economic suicide. It is, however, seeking to build in a ‘fuse’.

  • $2 billion in play. Google has avoided a financial knockout in a privacy dispute

    Alphabet can breathe a sigh of relief, at least for a moment. A federal judge in San Francisco, Richard Seeborg, has dismissed consumer claims demanding that the Mountain View giant hand back $2.36 billion in allegedly undue profits. The sum represented profits from collecting data on users who had knowingly turned off activity-tracking features in apps. While the ruling protects Google’s balance sheet from a drastic hit, it also sheds light on the systemic tension between an analytics-driven business model and growing privacy demands.

    Friday’s decision follows a September trial in which a jury found Google liable for secretly collecting activity data on millions of people. At the time, $425 million in damages was awarded – a significant but symbolic sum compared to the astronomical $31 billion originally sought by the plaintiffs. A key victory for Google in the latest iteration of the litigation is the rejection of the ‘disgorgement’ mechanism, i.e. the forced surrender of profits generated by the disputed practices. Judge Seeborg found that the plaintiffs had failed to provide sufficient evidence of “irreparable harm” to justify such a severe penalty or an immediate injunction halting the data processing.

    For executives in the technology sector, the Rodriguez v. Google case sets a significant precedent. Google argued that forcibly blocking the collection of data linked to user accounts could ‘cripple’ the analytics services used by millions of third-party developers. This shows how deeply tracking mechanisms are woven into the Android ecosystem and the digital advertising infrastructure more broadly. Google’s financial victory, however, does not mean the end of its image and legal problems. The judge upheld the status of a class action involving 98 million users, meaning that the battle over the definition of ‘consent’ in the world of Big Tech will continue in the appellate courts.

    In a landscape dominated by increasingly stringent regulations such as Europe’s GDPR and California’s CCPA, this case highlights the giants’ determination to defend the integrity of their data engines. Although Google avoided the bleakest scenario this time, the line between necessary analytics and invasion of privacy remains one of the costliest flashpoints in the relationship between technology and the law.

  • Euro penalties and new moderation duties. Ministry of Digitalisation prepares DSA implementation

    Warsaw is finally pressing the accelerator on digital regulation. The Ministry of Digitalisation, keen to avoid the spectre of multi-million euro fines from the European Commission, has unveiled a two-pronged strategy for implementing the Digital Services Act (DSA). It is a belated move, but one that is critical to the operating model of online platforms active in Poland.

    Instead of a single, comprehensive document, the ministry decided to split the legislation into two separate bills. This pragmatic manoeuvre is intended not only to speed up the legislative process, but also to steer around potential political obstacles: in the background smoulders a conflict between the government and the president, which has already paralysed the implementation of EU rules in the past.

    The foundation of the changes is the appointment of a ‘digital sheriff’. The role falls to the President of the Office of Electronic Communications (UKE), who, as coordinator for digital services, will be given broad powers of control and sanction. For business, this means the end of the era of voluntary content moderation. Technology companies will face a new level of transparency: from justifying every decision to remove a post to disclosing how their advertising algorithms work. Support for UKE from the consumer protection and broadcasting regulators suggests that oversight will be multidimensional, encompassing both consumer protection and information governance.

    The second draft touches on the most explosive issue: the procedure for blocking illegal content. Here the Ministry seeks to balance the effective pursuit of the most serious crimes, such as terrorism or human trafficking, against the protection of freedom of expression. A key safety net for entrepreneurs and users is that blocking orders are not to be immediately enforceable and a robust appeal path through the ordinary courts is to be available. This matters for business stability: the risk of arbitrary takedowns of services by state authorities is to be minimised by judicial review.

  • Grok in the DSA’s sights: Why Brussels wants to ban the xAI chatbot

    Reports in the daily Handelsblatt shed new light on the strained relationship between Brussels and Elon Musk’s tech empire. According to senior EU officials, the European Commission is initiating formal proceedings against the chatbot Grok under the Digital Services Act (DSA). This is no mere admonition – the aim is clear: to force xAI to withdraw the tool from the European market.

    For executives and investors, this signals that EU regulators have moved from merely monitoring the market to actively defending democratic values and data security. The main flashpoint, according to the EU, is the way Grok uses data from users of platform X to train its models without sufficient transparency or consent.

    From a business perspective, this situation sets a dangerous precedent for AI companies operating on a ‘move fast and break things’ model. If xAI succumbs to pressure, Europe could become a digital island with limited access to the most controversial models, which in turn will force local companies to rely on suppliers declaring full compliance with strict EU law. This is a test of strength that will define the cost of innovation in the region.

  • Brussels gives telecoms oxygen, but not Big Tech money

    Europe’s telecoms giants received a clear, albeit bittersweet, signal from the European Commission on Wednesday regarding the future of their business models. As part of the long-awaited Digital Networks Act, Brussels has proposed a revolutionary change in the management of radio resources: granting operators the right to use spectrum for an indefinite period. This is a fundamental change from the current standard, where licences are typically issued for a minimum of 20 years, forcing companies into cyclical uncertainty and the need to build reserves for costly auctions.

    For telecoms CFOs, this proposal is key. Default renewal of licences and harmonised spectrum valuation rules are intended to make investment more predictable. A senior Commission official explicitly acknowledged that indefinite licensing is meant to signal to capital markets that the telecoms sector is a safe haven for long-term capital. This is essential to meet Brussels’ ambitious target: full fibre coverage of the European Union between 2030 and 2035. Henna Virkkunen, the Commission’s executive vice-president responsible for technology, stressed in a statement that resilient infrastructure is a prerequisite for Europe’s digital sovereignty.

    However, operators’ enthusiasm is dampened by the fact that their key financial demand has been ignored by the Commission. The lobbying offensive to force so-called Big Tech (Google, Netflix, Meta) to directly subsidise infrastructure costs – argued on the grounds that these operators generate the lion’s share of network traffic – has failed. Instead of a mandatory ‘traffic tax’, the bill merely proposes a voluntary cooperation mechanism between service providers and tech giants. In practice, this means maintaining the status quo, in which the burden of CAPEX is on the operators and Silicon Valley avoids new regulatory burdens in Europe.

    An additional element of the package, designed to make the technological transition more flexible, is the option for national governments to extend the deadline for phasing out copper networks beyond 2030. The draft now goes before the European Parliament and member states, where further clashes between lobbyists from both sectors are all but certain.

  • Big Tech’s excuses are over. Meta accused of profiting from illegal casinos

    For technology giants such as Meta Platforms, the argument that it is impossible to fully monitor millions of adverts has for years been an effective shield against regulators. A recent speech by Tim Miller, executive director of the UK Gambling Commission, suggests, however, that patience with the ‘react after the fact’ model is running out in Europe. Speaking at ICE Barcelona, Miller made it clear: the owner of Facebook and Instagram not only knows about illegal casino advertising, but deliberately turns a blind eye to it as long as the money keeps flowing.

    Miller’s accusations strike at a sensitive point in Meta’s business model – the effectiveness of its own verification tools. The regulator highlighted a paradox: the publicly available Meta Advertising Library (Ad Library) effortlessly reveals promotions from gambling operators who boast of bypassing the GamStop system. This is the UK’s self-exclusion mechanism designed to protect addicts. Since officials are able to find these adverts using simple keywords, Meta’s claim of ignorance becomes, according to Miller, “simply false”.
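    The keyword search Miller describes is possible because Meta’s Ad Library exposes a public Graph API endpoint, `ads_archive`, which accepts a `search_terms` parameter. The sketch below shows how such a query could be assembled; the API version path, token and keywords are hypothetical placeholders.

```python
# Sketch of building an Ad Library search URL. The endpoint and parameter
# names come from Meta's public Ad Library API documentation; the version
# path, access token and keywords are hypothetical placeholders.
from urllib.parse import urlencode

def ad_library_query(keywords, country="GB", token="ACCESS_TOKEN"):
    """Build an ads_archive search URL for the given keywords."""
    params = {
        "search_terms": " ".join(keywords),        # free-text keyword search
        "ad_reached_countries": f'["{country}"]',  # limit to one market
        "ad_type": "ALL",
        "access_token": token,
    }
    return "https://graph.facebook.com/v19.0/ads_archive?" + urlencode(params)

url = ad_library_query(["casino", "not on GamStop"])
print(url)
```

    Fetching that URL with any HTTP client returns matching adverts as JSON, which is essentially the exercise the regulator says it performed with “simple keywords”.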

    From a business perspective, the situation casts a shadow over Big Tech’s compliance procedures. Miller described the ad library as a ‘window to criminality’, suggesting that Meta has the technical capability to block such content immediately but chooses not to. It is an indictment of a cynical calculation: reputational risk is priced into the cost of revenue until external pressure becomes too strong.

    For Meta’s advertisers and business partners, this is a wake-up call. The Gambling Commission has admitted that progress to date in talks with the giant has been “very limited”. This could herald an impending tightening of regulation that will force platforms to preemptively censor content under threat of gigantic financial penalties. If regulators find platforms complicit in promoting the grey market, the era of passive moderation will be over. As Miller concluded, Meta’s current attitude leaves the impression that the company is content to take money from scammers until someone loudly protests.

  • Nvidia trapped. Why did Beijing reject AI chips despite Trump’s approval?

    Jensen Huang, CEO of Nvidia, is planning a visit to China at the end of January. Although the official background to the trip is the company’s Lunar New Year celebrations, behind the scenes the visit is being treated as an urgent diplomatic mission to unlock a key market. The situation is unprecedented: the Donald Trump administration, ignoring the voices of Washington hawks, formally approved the sale of the powerful H200 chips to China. Meanwhile, it was Beijing that said no.

    China Customs’ 14 January decision to halt H200 imports represents a surprising reversal of roles in the ongoing technology war. Previously it was the Americans who put up the barriers; now Beijing’s resistance suggests either a negotiating tactic or a desire to protect rising domestic manufacturers such as Huawei. Huang, whose itinerary may include meetings in Beijing, must personally find out whether there is still room for Nvidia in the Chinese market. For shareholders, the message is clear: approval from the White House is not enough to guarantee revenue from the Middle Kingdom, and technological decoupling is entering a new, more complicated phase.

  • Brussels tightens course: operators must say goodbye to Huawei and ZTE

    The European Commission, under the leadership of new executive vice-president Henna Virkkunen, presented on Tuesday a draft amendment to the Cybersecurity Act that turns the previously voluntary approach into hard legal requirements. For European telecoms and technology businesses, this marks the start of a costly race against time. The main target of the new rules – although no name is mentioned in the document – is Chinese tech giants such as Huawei and ZTE.

    Brussels is proposing a radical expansion of the definition of critical infrastructure. The new framework will cover as many as eighteen key areas, going far beyond telecommunications alone. The list includes energy management systems, water infrastructure and cloud computing, as well as sensitive sectors such as medical devices, drones and space technology. The enforcement mechanism is uncompromising: if, after a formal risk assessment initiated by the Commission or by at least three member states, a provider is deemed ‘high risk’, mobile operators will be given 36 months to completely remove its key components from their networks.

    For the telecoms industry, this is a wake-up call. The Connect Europe association, which represents the continent’s largest operators, is already warning of billions of euros in compliance costs that could slow investment in modern networks. EU officials counter that ‘technological sovereignty’ is a price worth paying in the face of rising ransomware attacks and espionage threats. Europe is clearly correcting its course, moving closer to the position of the United States, which blocked approval of new Huawei and ZTE equipment as early as 2022.

    Beijing’s reaction was immediate. A spokesperson for the Chinese Foreign Ministry called on the EU to abandon the protectionist path, and Huawei, in a sharply worded statement, accused the Commission of violating World Trade Organisation (WTO) rules. The Chinese conglomerate stresses that the evaluation of suppliers should be based on hard technical evidence, not country of origin. Despite these protests, the political climate in Europe is hardening. Germany, hitherto reticent, has already set up an expert commission to review trade relations with China and has excluded Chinese components from future 6G networks.

    Before it enters into force, the draft must pass negotiations with national governments and the European Parliament. Given the current geopolitical climate, however, European businesses should already be preparing scenarios for diversifying their supply chains rather than waiting for the law’s final signature.

  • Retreat from confrontation. Brussels abandons ‘internet tax’ for Big Tech

    European telecoms operators hoping to systemically force US tech giants to co-finance network infrastructure are likely to be disappointed. Instead of the announced revolution and hard regulation of the ‘fair share’ debate, the European Commission intends to bet on diplomacy.

    According to reports on the draft Digital Networks Act to be presented by Commissioner Henna Virkkunen on 20 January, Brussels is moving away from imposing binding financial obligations on the largest generators of network traffic. Instead, the document envisages the introduction of a framework for voluntary cooperation under the supervision of the Body of European Regulators for Electronic Communications (BEREC). Giants such as Google and Meta would only be encouraged to attend meetings and define ‘best practices’, effectively dismissing the vision of direct cash transfers to European telcos.

    The European Commission’s change of course is a clear sign of geopolitical pragmatism. Faced with the new administration of Donald Trump, who sees every attempt to tax US corporations as an economic provocation, Brussels is opting for a strategy of conflict avoidance. With transatlantic relations strained and Washington reacting ever more aggressively to attempts to regulate its digital champions, the Digital Networks Act becomes part of a delicate diplomatic game. The EU appears to be calculating that escalating trade tensions is too risky at the moment, even at the expense of the interests of local internet providers.

    However, the bill is not only an issue of relations with the US, but also an attempt to harmonise the internal market, which is meeting resistance from member states. Key economies, including France, Germany and Italy, remain sceptical of the centralisation of telecoms governance, preferring to maintain control over regulation at national level.

    The document also addresses infrastructure issues, proposing unified spectrum auction rules and a potential revision of digital targets. The Commission allows for the possibility of postponing the complete switch-off of copper networks and their replacement by fibre, originally planned for 2030. If local authorities demonstrate that this deadline is unrealistic, Brussels is prepared to be flexible – further evidence that the upcoming legislation will be a set of compromises rather than a radical breakthrough.

  • DNS censorship dispute. Why doesn’t Cloudflare want to work with Italy’s ‘Piracy Shield’?

    The escalation of tension between Rome and San Francisco is reaching unprecedented proportions. Cloudflare, the US network infrastructure giant, is considering drastic steps, including the complete withdrawal of servers from Italy. This is in direct response to the €14 million fine imposed by regulator AGCOM and demands to censor the internet, which the company considers technically dangerous and extraterritorial.

    At the heart of the dispute is Italy’s ‘Piracy Shield’ mechanism, introduced at the insistence of sports broadcasters, including those with links to Italian football. This regulation requires DNS providers to block designated IP addresses within just thirty minutes of being reported. Significantly, this process takes place without prior judicial review, giving rights holders a powerful tool to act immediately. Cloudflare, however, has refused to implement these blocking measures in its public DNS resolver (1.1.1.1), arguing that there is a lack of transparency and a demand from the Italian side that the blocking should apply globally, not just locally.
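    The mechanism in dispute can be reduced to a simple rule: before answering any query, the resolver checks the name against a regulator-supplied blocklist and refuses to answer on a match. The sketch below is a minimal illustration of that policy, not Cloudflare’s actual resolver code; all domain names and addresses are hypothetical.

```python
# Minimal sketch of the blocking policy the 'Piracy Shield' demands from
# DNS resolver operators: check each query against a regulator-supplied
# blocklist and refuse to answer on a match. Not Cloudflare's real code;
# all names and addresses below are hypothetical.

BLOCKLIST = {"pirate-stream.example", "illegal-iptv.example"}

def resolve(domain, upstream):
    """Return the IP for `domain`, or None when policy forces a refusal."""
    if domain.lower().rstrip(".") in BLOCKLIST:
        return None  # the resolver censors the answer (NXDOMAIN / refused)
    return upstream.get(domain)  # otherwise answer normally

# Hypothetical upstream zone data.
upstream = {
    "example.org": "93.184.216.34",
    "pirate-stream.example": "203.0.113.7",  # listed, so never returned
}

print(resolve("example.org", upstream))            # resolves normally
print(resolve("pirate-stream.example", upstream))  # None: blocked by policy
```

    The sketch also makes the risk visible: one wrong entry in `BLOCKLIST` silently breaks a legitimate site for every user of the resolver, which is exactly the failure mode behind the Google Drive incident described below.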

    The US company’s concerns are confirmed by recent incidents. The 2024 deployment of the system led to the mistaken blocking of Google Drive, which, according to the Computer & Communications Industry Association (CCIA), caused a service blackout of several hours for thousands of Italian users and businesses. Research by RIPE Labs also found that hundreds of legitimate sites fell victim to the ‘Shield’, blocked without their owners’ knowledge and without a clear redress path. For Cloudflare to comply would mean not only an increase in network latency, but above all an acceptance of censorship based on error-prone automation.

    The response from Matthew Prince and the Cloudflare board has been firm. The company has announced that it will halt all investment plans in Italy and remove its servers from Italian cities if regulatory pressure continues. What’s more, it has threatened to withdraw millions of dollars worth of cyber security services provided pro bono for the upcoming Milan-Cortina Winter Olympics.

    The issue has already outgrown a local administrative dispute. Cloudflare intends to raise it with the US government, pointing to the Italian rules as an example of unjustified barriers to US business in Europe. At a time when the European Union is seeking to enforce the Digital Services and Digital Markets Acts (DSA/DMA), Italy’s unilateral and aggressive actions could become the flashpoint of a wider transatlantic conflict.

  • DORA: IBM officially under EU supervision as a key technology provider

    The decision of European regulators to include IBM among key third-party ICT service providers is no surprise, but it does set an important precedent in the relationship between Big Tech and the financial sector. The European Supervisory Authorities – the EBA, EIOPA and ESMA – have officially confirmed the strategic role of the US company, which in practice brings it under direct EU-level supervision under the DORA (Digital Operational Resilience Act) regulation.

    For the financial market, this is a signal that digital operational resilience is ceasing to be a purely internal problem for banks or insurers, and is becoming a systemic issue, requiring close scrutiny by technology providers. As Piotr Pietrzak, Technical Sales Leader at IBM for Poland, the Baltics and Ukraine, notes, DORA enforces just such a systemic approach to digital resilience. The new regulations cover a broad spectrum of entities – from investment firms to payment institutions – treating technology as integral to market stability and customer security.

    For IBM, key supplier status is, on the one hand, a prestigious acknowledgement of its position as a trusted partner and, on the other, a commitment to even closer cooperation with the supervisory authorities (ESA). The company had been preparing its technology and governance structures for a long time to meet the new requirements. In the run-up to the implementation of the regulations, IBM’s teams were conducting extensive adaptation activities in parallel with the development of global cyber security technologies.

    From the perspective of CIOs of financial institutions, bringing IBM under direct EU supervision is reassuring news. It means that the use of this provider’s infrastructure and services comes with the added assurance of regulatory compliance. IBM promises to continue providing guidance and resources to help clients navigate the complex requirements of DORA without losing sight of innovation.

    The aim of the new regulations is clear: to reduce systemic risk in the European financial ecosystem. Bringing key technology players under direct supervision redefines responsibility for digital security on the Old Continent. IBM declares its full readiness to work constructively with regulators, drawing on its risk management experience to make the adaptation process smooth for both the company itself and its business partners.

  • Digital Poland: digital sovereignty starts with freedom of technology choice, not market isolation

    The State Digitalisation Strategy is intended to be a comprehensive, long-term document setting out the directions for the digital development of the state. The Ministry of Digitalisation has announced renewed consultations on the document, and it was as part of these that experts of the Digital Poland Association prepared an opinion focusing on the elements necessary for the Strategy to be implemented effectively.

    According to the Association, it is the freedom of technological choice – based on interoperability and open standards – that should become the foundation of digital sovereignty. Only such a model allows public administrations to avoid dependence on a single provider, to respond flexibly to technological changes and to effectively improve the security of public systems.

    – Digital sovereignty is not about the state locking itself into one ‘own’ technological ecosystem. It is about being able to consciously choose the best available solution at any time and change it if a better alternative emerges, says Michał Kanownik, President of the Digital Poland Association.

    The organisation points out that the technology market in Poland today accounts for around 10 per cent of GDP and employs nearly 1.5 million people. Global and local technology providers form a common ecosystem that benefits administration, business and citizens alike. According to the Association, the Strategy should clearly reflect this fact instead of suggesting that the presence of global companies is a threat to state sovereignty.

    Cloud and procurement: diagnosis without tools is not enough

    Experts emphasise that the Strategy document in its current form accurately diagnoses many problems, but does not always indicate the tools to solve them. This applies in particular to cloud implementation mechanisms and the organisation of IT procurement.

    – A strategy should be an instruction manual for the administration, not just a set of ambitious goals. Without a simplification of IT purchasing, a viable Cloud First policy and a clear approach to cooperation with the market, neither the digitalisation of public services nor the building of state resilience will be accelerated, states Michał Kanownik.

    The specifics? The Strategy is too cautious about the public cloud, even though it is the standard in the world’s most regulated sectors, from banking to defence. Cyfrowa Polska’s experts call for adopting a Cloud First policy at the statutory level, treating the government cloud and commercial clouds as complementary, requiring justification for any decision not to use the cloud in new IT projects, and promoting multi-cloud and hybrid architectures to reduce vendor lock-in.

    One of the most serious problems in the digitisation of the administration is ineffective purchasing mechanisms. The association points out that the Cloud Service Provisioning System (ZUCH) has not fulfilled its role in its current form, which NIK inspections have also confirmed. The opinion proposes reforming ZUCH, or replacing it with a model based on the British G-Cloud, and simplifying procedures so as to genuinely open up the public procurement market to Polish SMEs and startups.

    – The UK’s G-Cloud programme has enabled thousands of contracts with more than 5,000 suppliers, mainly in the SME sector, with a total value of around £11.5 billion. This is an example we should take advantage of, notes Michał Kanownik.

    It is necessary to prepare for new types of threats

    Effective digitisation of the state cannot be separated from cyber security and the resilience of critical infrastructure. As the opinion of Cyfrowa Polska puts it, the Strategy should focus more on practical solutions, such as physically and logically isolated data processing centres, dedicated communications infrastructure and systems designed to operate in crisis conditions.

    In the Association’s view, the approach to new types of threats, including post-quantum threats, is another significant gap in the Strategy. The organisation argues that work on quantum and post-quantum cryptography should not be confined to research and development facilities, but should include pilot implementations of already available commercial solutions. This applies particularly to critical infrastructure and key data processing nodes, which should be prepared for long-term technological risks.

    The issue of digital identity and the system of electronic signatures also needs to be sorted out. Experts believe that the Strategy should explicitly strengthen the role of qualified electronic signatures and qualified validation as EU-wide solutions for automatic, reliable document verification. In an environment of increasing document fraud, this is crucial for the security of legal transactions.

    – All of these comments aim to create a comprehensive, sustainable strategy that will stay with us for years to come. While we are positive about the direction of the document itself, we believe that the details are of fundamental importance in this case. Therefore, we declare our readiness for further cooperation and dialogue with the Ministry of Digitalisation,” concludes Michał Kanownik.

    Source: Związek Cyfrowa Polska

  • Public cloud in the European Union – between innovation and data responsibility

    Public cloud in the European Union – between innovation and data responsibility

    The development of cloud services in the EU is taking place in parallel with the debate about data sovereignty, ethical computing and the need to build solutions in line with European values. According to the European Commission, investment in computing infrastructure and AI will be one of the most important drivers of growth, but only if businesses and institutions trust that the cloud is a secure, predictable and compliant environment.

    European cloud in practice: from scalability to strategic independence

    The increasing load on systems, the digitalisation of public services and the development of AI models are making the public cloud not just a convenient tool for European organisations, but a key component of business infrastructure. It allows them to rapidly increase computing power, implement new functions and move processes that previously required their own data centres. At the same time, the EU is increasingly emphasising the need to build solutions that provide control over data flows and reduce reliance on non-European jurisdictions.

    – The European model assumes that IT architecture must support auditability, data control and interoperability. This is not a regulatory cost but an investment: it does not limit the European economy’s development in the long term, but ensures that we retain our identity as a European economy, comments Artur Kmiecik, Head of Cloud and Infrastructure at Capgemini Poland.

    Standards and certification: EUCS as the new security map for cloud computing

    In order to structure the requirements for cloud providers, ENISA is preparing the EUCS (European Cybersecurity Certification Scheme for Cloud Services), intended to unify the rules for assessing the security and compliance of services. For organisations, this means clearer criteria for selecting a provider; for public administrations, the ability to use services with a predictable level of protection. The EUCS also simplifies the documentation and integration of systems that have to meet stringent industry standards. In practice, it is a strategic step towards a more transparent and standardised cloud market across the Union.

    Data under protection: how GDPR and EDPB set the framework for responsible processing

    Data protection regulation remains one of the strongest pillars of the European cloud approach. The GDPR and European Data Protection Board guidelines specify how to design processing and how to ensure compliance in an environment that is dynamically changing. This enforces practices based on privacy-by-design, regular risk assessment, access control and documentation of activities. At the same time, organisations need to be fully aware of where their data is and who can process it. The result is a model that reinforces transparency and predictability – including for services operating across national borders.

    AI in the cloud – innovation under regulatory scrutiny

    AI naturally thrives in cloud environments, which provide scale, computing power and the ability to update quickly. At the same time, the AI Act creates a legal framework to guarantee user security and transparency of models. Organisations that want to use more advanced systems need to prepare for documentation obligations, compliance testing and risk assessments, especially in high-responsibility sectors. This ensures that the development of AI does not come at the expense of data quality or user rights. Regulation does not slow down innovation – it puts it in order and gives it clear rules to work by.

    Trust as the currency of the digital economy: transparency and control over data

    The complexity of cloud environments means that organisations increasingly expect not only security, but also full auditability of operations. The ability to track activity, view logs, analyse permissions and verify processes is becoming one of the key criteria for vendor selection. Companies and institutions want to make sure they know who is processing their data and how – and transparency is becoming just as important as technical safeguards.
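    The auditability expectations described above – tracking activity, viewing logs and verifying who processed what – can be sketched in a few lines. This is a minimal illustration under our own assumptions (a hash-chained, append-only log), not any vendor’s actual audit mechanism; all names are hypothetical.

    ```python
    import hashlib
    import json

    class AuditLog:
        """Append-only audit trail: each entry is chained to the previous
        entry's hash, so later tampering breaks verification."""

        def __init__(self):
            self.entries = []
            self._last_hash = "0" * 64  # genesis value

        def record(self, actor, action, resource):
            # Hash the entry body together with the previous hash.
            entry = {"actor": actor, "action": action,
                     "resource": resource, "prev": self._last_hash}
            digest = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
            entry["hash"] = digest
            self.entries.append(entry)
            self._last_hash = digest
            return digest

        def verify(self):
            # Recompute every hash; any modified entry breaks the chain.
            prev = "0" * 64
            for e in self.entries:
                body = {k: e[k] for k in ("actor", "action", "resource", "prev")}
                digest = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest()
                if e["prev"] != prev or e["hash"] != digest:
                    return False
                prev = e["hash"]
            return True

    log = AuditLog()
    log.record("analytics-service", "read", "customers/eu-west")
    log.record("hr-app", "update", "employees/payroll")
    assert log.verify()
    ```

    The point of the sketch is the property the article describes: an auditor can answer “who processed this data, and has the record been altered?” from the log itself, independently of the provider’s goodwill.
    
    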

    – The IT architecture in our region must take into account not only scale and computing power, but also the requirements of the European Union. In practice, trust in the cloud is becoming the currency of the digital economy – organisations that can gain it through control of data flows and responsible use of AI will gain a real competitive advantage. The future of the European cloud is not only interoperability, but also ethical innovation that protects users and strengthens the data economy, adds Artur Kmiecik, Head of Cloud and Infrastructure at Capgemini Poland.

    The future of the European cloud: interoperability, ethics and responsible innovation

    Initiatives such as GAIA-X or European data spaces show that the future of the cloud in the EU is the development of systems that can work together independently of the provider. Interoperability is expected to facilitate cross-sector projects, process automation and data exchange in a way that complies with the highest ethical standards. At the same time, responsible innovation principles are growing in importance to protect users and strengthen the data economy. It is a direction that will allow Europe to develop modern technologies without abandoning the values that define its approach to digitalisation.

    Source: Capgemini

  • NIS2 is not a shopping list for IT. Why is technology alone not enough?

    NIS2 is not a shopping list for IT. Why is technology alone not enough?

    The IT industry likes to think of security in terms of products. New-generation firewalls, EDR systems, advanced network segmentation – these are tangible items that are easy to price, sell and deploy. However, in the face of the EU’s NIS2 directive, this traditional model of thinking is becoming a trap. Experts analysing the new legislation make it clear: NIS2 is not a technical manual for administrators. It is a management revolution that brutally exposes what many companies have so far ignored – the lack of coherent corporate governance.

    Many companies still live with the belief that compliance with new regulations can be ‘bought’ or achieved by updating their infrastructure. This is a dangerous cognitive error. An analysis of the directive’s assumptions shows that the focus shifts radically from ‘IT operations’ to ‘risk management’. This means that even the most expensive technology will not protect an organisation from the consequences if the people, decision-making processes and accountability structure fail.

    The illusion of a digital fortress

    When a security incident occurs, the first instinct is to look for the culprit in the technology department. Did the system fail? Was an update overlooked? Meanwhile, security strategists point in another direction. Cyber security rarely fails because of a lack of technology. Rarely is the problem a physical absence of a firewall or monitoring tools. These are usually in place.

    Systems fail most often because of decisions, priorities and structures that fail to fully map risks. So it is not a question of whether a company ‘has’ the tools, but whether its management structures are configured so that risk is understood and controlled at every level. If the board does not understand what it is protecting and why, even the best-armed digital fortress will have its back door open. Governance therefore becomes, in the light of NIS2, a safety-critical function – a foundation without which technology loses its effectiveness.

    The end of the ‘it’s a problem for IT professionals’ era

    One of the biggest changes NIS2 introduces is the redefinition of accountability. For years, cyber security has been treated as a technical domain, relegated to IT departments, away from boardrooms. The new directive ends this approach.

    NIS2 is a management requirement. It obliges management not only to proactively manage security, but also to demonstrate that decisions made are based on a sound assessment of risk in the context of the business model. Boards face the challenge of combining technical correctness with business relevance. They need to be able to assess how a specific digital threat affects finances, the supply chain or reputation.

    Without this classification, technical analysis remains in a vacuum. Companies are required to be able to demonstrate the ‘decision path’ – how decisions are prepared, prioritised and documented. This is a huge challenge for organisations that lack a structured logic for decision-making. In 2026, accountability will be personal and direct, forcing C-level staff to educate themselves and change their mentality.

    Paper accepts everything, hackers do not

    Another misunderstanding that blocks progress in many organisations is the approach to compliance as a set of documents. There is a perception that compliance can be achieved by creating a sufficient number of procedures or security policies. In practice, NIS2 requires the opposite – a living ecosystem.

    The directive calls for the coherent integration of multiple, often siloed areas: technical safety measures, governance, staff competence development, reporting and supply chain management. If these elements do not mesh, gaps open up. It is in these gaps – between an HR procedure and a server configuration, between the report to the board and the actual state of the network – that the biggest security disasters occur.

    Governance involves more than a formal definition of responsibilities. It is the framework within which risks become visible. If a company fails to connect these dots, it will be left with a cupboard full of documents that in no way increase its real resilience.

    Time – a resource you will not integrate

    The implementation of NIS2 cannot be understood as a one-off legal obligation to be ‘ticked off’. It is a transformation process, and the biggest enemy of companies in this process is time. Many organisations drastically underestimate how long implementation takes, deluding themselves that it can be completed in the few weeks before the deadline.

    Experts warn: even with a good starting point, it takes months to define new roles, coordinate processes and, above all, introduce effective reporting structures in a ‘management language’. For companies with complex supply chains or a distributed structure, this time extends even further. Anchoring security requirements at multiple operational levels is a marathon, not a sprint.

    The coming months are a crucial ‘transfer window’. Those who start the transition process now have the luxury of controlling priorities and allocating resources sensibly. They can take a realistic inventory and determine which measures realistically reduce risk.

    Those who procrastinate will fall into a spiral of time pressure. ‘Last-minute’ implementations usually end up with half-hearted solutions that are not tailored to the company’s individual risk profile. Such a strategy not only increases costs (operating in a fallback mode is always more expensive), but also raises the risk that central requirements remain incomplete.

    Consequences of inaction

    What happens if companies react too late? The consequences go far beyond the regulatory sanctions that are most often discussed. Organisations that fail to implement appropriate governance structures in time lose their ability to manage risks operationally. They become reactive rather than proactive.

    This poses a huge reputational risk. In the new reality, a lack of evidence of effective security management is a straightforward way to lose the trust of customers and investors. What’s more, these companies may be pushed out of the market by their own business partners – as supply chains will require compliance with certain standards that cannot be implemented overnight.

    Turning point

    NIS2 is a turning point for the entire industry. The directive moves cyber security from the technical back office to the strategic core of the business. Governance becomes the new firewall – a factor that will determine economic stability and liability risk in the years to come.

  • Judicial ‘discount’ for Intel. Giant to pay EU 1/3 less

    Judicial ‘discount’ for Intel. Giant to pay EU 1/3 less

    For Intel, a giant currently struggling through one of the most difficult restructurings in its history, any positive financial news is at a premium. On Wednesday, the General Court of the European Union gave the Californian company a rare reason for satisfaction, deciding to significantly reduce its antitrust fine. While the manufacturer’s culpability in blocking competition was not challenged, the size of the fine was mitigated, ending another chapter in a legal saga that has lasted nearly two decades.

    The case, under reference T-1129/23, goes back to the period of the aggressive battle for dominance in the x86 processor market between Intel and Advanced Micro Devices (AMD). At the centre of the dispute were practices between 2002 and 2006, which the European Commission identified as so-called naked restrictions. The mechanism involved payments to key OEM partners – HP, Acer and Lenovo – in return for withholding or deliberately delaying the launch of computers equipped with competitors’ chips.

    Originally, in 2009, Brussels imposed a then record fine of €1.06 billion on Intel. After years of court battles, this mammoth sum was overturned, but in 2023 the Commission came back with a new fine, set at €376 million. It was this decision that the US manufacturer appealed, arguing that the sanction was disproportionate to the actual harm of the act.


    The judges in Luxembourg upheld part of the defence’s arguments. The reasoning of the judgment indicated that the amount of €376 million did not adequately reflect the gravity of the infringement. The Court noted the limited scope of the conduct, which involved a relatively small number of devices, and the fact that the anti-competitive conduct was not continuous – the evidence pointed to a 12-month gap between incidents. As a result, the fine was reduced by around a third, to just under €237 million.

    For the channel market and the IT industry, the ruling is an important signal. It confirms that European regulators remain relentless in protecting competition rules, even if enforcement processes drag on for years. On the other hand, the court’s decision shows that the European Commission needs to calibrate penalties precisely, based on hard data on the scale of infringements and not just on the overall market position of an entity.

    The decision is not yet final. Both Intel and the European Commission can still appeal to the Court of Justice of the EU, which could prolong this legal marathon. However, in the current macroeconomic situation and with Intel’s tight budget, the saving of nearly €140 million is a significant boost, even if it is only a partial victory in a case dating from the company’s days of absolute dominance that still casts a shadow over its reputation.

  • Europe, the US or China? Why regulation could become our ‘killer feature’ in the AI race

    Europe, the US or China? Why regulation could become our ‘killer feature’ in the AI race

    Recent years have seen an unprecedented democratisation of technology. Driven by the falling cost of computing power and rising productivity, artificial intelligence has come out of the labs and straight onto our desks. Looking at the pace of innovation overseas or the scale of activity in China, it is easy to get the impression that the Old Continent is lagging behind. There is a perception that Europe, with its penchant for legislation, is imposing a technological blockade on itself. But what if the exact opposite is true? In a world where algorithms are beginning to decide people’s health and finances, ‘trust’ is becoming a currency more valuable than raw computing speed.

    Artificial intelligence is currently undergoing a phase of exponential development. It is no longer just a novelty for enthusiasts, but a powerful force transforming science and industry. We are seeing a clear convergence of AI with other emerging fields such as biotechnology and neuroscience. However, this rush towards the future raises a fundamental question: can we control it?

    The third way of digital development

    The geopolitical map of artificial intelligence development is clearly divided. The US focuses on speed and market dominance of the big players (Big Tech). China focuses on mass deployment and close integration of technology into the state apparatus. In this context, Europe seems to be taking the ‘third way’.

    Instead of a blind race for parameters, the European Union is focusing on quality, ethics and security. The concept of Trustworthy AI is increasingly emerging in policy documents and industry debates. This approach assumes that maximising technological potential must go hand in hand with respect for fundamental rights and sustainability.

    To many IT managers and software house heads, this sounds like corporate newspeak or, worse still, another bureaucratic hurdle. However, it is worth looking at it from a business perspective. In critical sectors – such as energy, banking, cyber-security or healthcare – customers are becoming increasingly wary of ‘black boxes’. The European framework can become a guarantee of quality that solutions from the ‘digital Wild West’ lack.

    Innovation in a corset of rules – is it worth it?

    To understand why regulation can be a catalyst for innovation, just look at the medical sector. This is where AI-based tools are changing the research paradigm. Advanced Deep Learning models are already assisting doctors in analysing medical images, detecting anomalies faster and more accurately than the human eye.

    However, the real revolution mentioned in industry studies is the possibility of conducting ‘virtual’ clinical trials. With simulations run on digital models, potential therapies can be validated at an early stage without involving real patients. This drastically speeds up drug discovery and reduces R&D costs.

    However, implementing such systems requires absolute confidence in their reliability. A hospital will not buy an algorithm that ‘hallucinates’ or makes decisions based on biases baked into the training data. This is where the European approach becomes an advantage. The requirement for rigorous validation, transparency and ethical design makes systems developed under this regulatory regime safer. For an investor in MedTech or BioTech, compliance with EU standards is not just a ‘checkbox’ in the documentation, but an insurance policy that minimises implementation risk.

    The dark side of algorithms and the regulator’s response

    R&D projects increasingly treat AI as a cross-cutting tool – from the automation of tedious tasks to massive data analysis. However, as the complexity of systems increases, so do the challenges. Lack of transparency (the ‘black box’ problem), vulnerability to adversarial attacks and data privacy concerns are real challenges facing IT departments.

    Initiatives such as the AI Act or the GDPR are the answer to these challenges. Although often criticised for their complexity, they actually establish a framework that brings order to the market. Three pillars become key:

    1. Transparency – the user needs to know that they are interacting with a machine.

    2. Explainability (XAI) – the decisions of the algorithm must be human-understandable and auditable.

    3. Human oversight – the ultimate responsibility always lies with the individual, which is key to maintaining autonomy.
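    The human-oversight pillar, in particular, maps onto a concrete engineering pattern: a gate that routes low-confidence model outputs to a person instead of applying them automatically. The sketch below is a hypothetical illustration of that pattern under our own assumptions – the threshold value and all names are invented, not drawn from the AI Act.

    ```python
    # Illustrative human-in-the-loop gate; the 0.90 threshold is an
    # assumed policy value, not a regulatory figure.
    AUTO_APPROVE_THRESHOLD = 0.90

    def decide(prediction: str, confidence: float, reviewer=None):
        """Return (decision, decided_by). Outputs below the confidence
        threshold require an explicit human decision."""
        if confidence >= AUTO_APPROVE_THRESHOLD:
            return prediction, "model"
        if reviewer is None:
            raise RuntimeError("human review required but no reviewer available")
        return reviewer(prediction, confidence), "human"

    # Usage: a callback stands in for a real review queue.
    def reviewer(prediction, confidence):
        return f"escalated:{prediction}"

    print(decide("approve_loan", 0.97))            # ('approve_loan', 'model')
    print(decide("approve_loan", 0.55, reviewer))  # ('escalated:approve_loan', 'human')
    ```

    Recording which branch was taken – model or human – is exactly the kind of “decision path” auditability the article says regulators will expect.
    
    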

    In research environments, where data integrity is fundamental, the security of AI systems is a priority. The system must be resistant not only to errors, but also to deliberate tampering. European regulations are enforcing a Security by Design approach, which in the long term builds a much more stable innovation ecosystem.

    What does this mean for the IT industry?

    The lesson is clear for technology companies operating in Europe: the era of ‘implement anything, anytime’ is coming to an end. The time of responsible engineering is coming.

    European software houses and systems integrators have an opportunity to create unique market value. Instead of competing with giants from the US or China solely on computing power or price, they can offer ‘Enterprise Grade AI’ products – auditable, legally and ethically secure systems ready for implementation in the most demanding economic sectors.

    The challenge is twofold: on the one hand, we need to maximise the potential of AI so as not to fall out of the global innovation chain, and on the other hand, to ensure that the technology respects individual privacy and rights. Success in this area requires close cooperation between the public and private sectors. Public trust in algorithms will not arise on its own; it must be built on a foundation of robust laws and transparent technology.

    The future of artificial intelligence in Europe is full of complexities, but also huge potential. There are many indications that in the years to come, it will not be the ‘raw power’ of the models, but their predictability and safety that will determine market success. By imposing high ethical and regulatory standards, Europe can paradoxically come out on top, offering the world a technology that is safe to use – and not just to marvel at.

  • Brussels vetoes: Digital regulation is not for sale for steel

    Brussels vetoes: Digital regulation is not for sale for steel

    Brussels is sending a clear signal to Washington that Europe’s digital sovereignty will not be a currency in transatlantic trade negotiations. The firm declaration by Teresa Ribera, vice-president of the European Commission, puts a definitive end to attempts to link customs policy with technological compliance. This is a direct response to Monday’s suggestion by Howard Lutnick, the US Secretary of Commerce, to cut tariffs on steel and aluminium imports in exchange for ‘balancing’ EU restrictions on the technology sector.

    The US administration’s attempt to tie the raw-materials ‘old economy’ to digital regulation shows how painful the framework set by the Digital Markets Act (DMA) and the Digital Services Act (DSA) has become for Silicon Valley giants. Lutnick, in an effort to force concessions, attempted to shift the discussion from the level of legal protection to a purely transactional one. Ribera, however, immediately closed this gateway, emphasising that the European legal framework is about protecting consumers and ensuring fair competition, not protectionism that can be traded away.

    For the channel market and the IT industry in Europe, this exchange of views carries a key message: the implementation period of stringent requirements will not be relaxed by geopolitical pressure. The European Union intends to defend its regulatory model regardless of the costs in other sectors of the economy. Technology integrators and distributors must therefore prepare themselves for the fact that the current demanding regulatory landscape is a permanent feature of the market game and not a temporary political inconvenience.

  • A monopoly on privacy? The OCCP questions Apple’s market play

    A monopoly on privacy? The OCCP questions Apple’s market play

    The Polish regulator joins a global wave of scepticism over Apple’s practices with allegations of abuse of dominance. At stake in the App Tracking Transparency game is not just data protection, but billions of zlotys from the mobile advertising market.

    The President of the Office of Competition and Consumer Protection (OCCP), Tomasz Chróstny, has initiated antitrust proceedings against Apple. The axis of the dispute is the App Tracking Transparency (ATT) policy, implemented in 2021, which in the iOS and iPadOS ecosystems forces developers to obtain user consent to track their activity. While the move appears pro-privacy from a consumer perspective, the Polish regulator sees in it a mechanism for eliminating competition from the market. The essence of the problem is the dual role of the US giant, which acts simultaneously as ‘guardian’ of the ecosystem (regulator) and as an active participant in the advertising market, competing for budgets with third-party app publishers.

    OCCP analysts point to a fundamental asymmetry in user communication. In the case of independent developers, the iOS system displays a warning message asking permission to ‘track’, which evokes negative associations and drastically reduces opt-in rates. Meanwhile, Apple’s own services, pursuing de facto the same business objective, ask the user to enable ‘personalised ads’. This semantic and visual difference – ‘Ask not to be tracked’ versus ‘Enable’ buttons – creates an uneven playing field for those profiting from behavioural advertising.

    In the opinion of the President of the OCCP, such action may constitute an abuse of a dominant position, punishable by a fine of up to 10 per cent of the company’s turnover. Significantly, the President of the OCCP confirmed that ATT’s strict framework does not derive directly from data protection legislation, undermining the corporation’s line of defence based on legal necessity. The effects of these practices are felt most acutely by independent publishers and advertisers, for whom impeded access to data means a reduction in the value of advertising space and a weaker negotiating position.

    The Polish investigation is part of a wider European trend. Similar proceedings are being conducted by antitrust authorities in Germany, Italy and Romania, and the French regulator has already taken decisions resulting in multi-million euro fines. For the IT industry, the signal from Warsaw is clear: the privacy-first argument is no longer an absolute shield against interference with the business model of closed ecosystems.

  • What’s next for AI? Ministry of Digitalisation announces an updated version of the Policy for the Development of Artificial Intelligence in Poland

    What’s next for AI? Ministry of Digitalisation announces an updated version of the Policy for the Development of Artificial Intelligence in Poland

    The document provides, among other things, for the implementation of AI in public administration (AI HUB Poland), the creation of dedicated ‘Sectoral Deployment Maps’ and support for small and medium-sized enterprises. The tasks described in the AI Policy are ultimately intended to contribute to the stated goals – Poland as the heart of the AI continent, thanks to deployments in key sectors of the economy and an efficient state using AI solutions.

    AI HUB Poland

    AI Policy envisages the launch of the AI HUB Poland portal, which is intended to be a tool to support the effective management, development and implementation of AI technologies in the public sector. The aim of the platform is to create an integrated environment that will improve the use of artificial intelligence in public services and in key areas of state functioning.

    Among other things, the project envisages the rapid adoption of AI-based innovations, the upskilling of administrative staff, the harmonisation of data to build artificial intelligence models and the creation of national large language models.

    AI HUB Poland is a joint initiative of experts from the Central Informatics Centre, NASK and partners to support the country’s digital development and strengthen Poland’s international competitiveness. The platform’s activities include the launch of a central system for managing AI projects, building a repository of open automation solutions, sharing best practices and implementation support for smaller administrative units.

    Sector implementation maps

    The Ministry of Digitalisation notes that artificial intelligence is becoming one of the most important drivers of economic transformation. Its use can significantly accelerate the development of innovation, increase the competitiveness of Polish companies and improve the quality of life of society. In order to fully exploit this potential, it is necessary to focus efforts on those projects and sectors that can bring Poland the greatest economic and social benefits.

    The Polish Economic Institute identifies three approaches that help identify priority areas for AI development: analysis of key industries in the economy, assessment of the so-called ‘technology stack’ and identification of ‘grand challenges’ – complex problems where AI can play a particularly important role.

    Based on these analyses and the recommendations of the AI Working Group, the sectors with the greatest potential for AI deployments have been identified. These are: energy, e-commerce, dual-use products, cyber security, BioMedTech, financial services and transport and logistics. It is in these areas that AI can generate the most value – from optimising energy consumption, to faster drug discovery, to autonomous mobility and advanced cyber defence systems.

    The directions set are also in line with the European Commission’s focus on the development of secure, interoperable and high-quality AI systems in strategic sectors across the EU.

    In order to successfully implement artificial intelligence, cross-sector collaboration, a robust data infrastructure, competent staff and consistent regulation are needed. Therefore, dedicated Sector Deployment Maps will be created for each of the key sectors. These will include an analysis of the industry’s needs, key business areas for AI applications, data sharing rules and support mechanisms – so that Polish companies can fully exploit the potential of this breakthrough technology.

    Support for small and medium-sized enterprises

    Small and medium-sized enterprises play a key role in the development of AI in Europe. Thanks to their flexibility, ability to experiment quickly and innovative approach, it is SMEs – and especially startups – that are often the first to implement new technologies and bring breakthroughs to market. At the same time, they face barriers such as limited resources, more difficult access to data and the need to meet ethical and regulatory requirements.

    In order to accelerate the development of AI in this sector, support including funding, computing infrastructure and the ability to test solutions in secure environments is essential. Incubators, accelerators and knowledge-sharing platforms play an important role in helping companies commercialise innovations faster and build technological competence.

    In Poland, the infrastructure being developed – including AI Factories – is to allow entrepreneurs to benefit from technology and regulatory advice, computing power and testing environments. This is complemented by the PFR’s ‘Digital Crate for Companies’ programme, which helps to confirm technology readiness and gain support for AI Act compliance.

    One of the most important elements of the support policy for SMEs is the regulatory AI sandbox, which – under EU regulations – must be provided to them completely free of charge. Sandboxes allow solutions to be tested under controlled conditions and reduce the risks associated with market entry. Specialised sectoral sandboxes will be established in Poland, and their integration with AI Factories will provide access to data and infrastructure.

    In order for Polish companies to realise the full potential of AI, it will be crucial to raise awareness of the benefits of its implementation, provide practical advice and launch dedicated programmes and competitions to support AI projects. In the long term, this will translate into an increase in the competitiveness of SMEs, the development of innovation and the strengthening of Poland’s position in the European technological ecosystem.

    Summary

    The updated AI Policy responds to the challenges posed by the dynamic development of AI technologies. The document sets out directions for action, integrating the needs of government, business, science and society. The Ministry of Digitalisation announces further work on the implementation of the AI Policy.

  • Brussels twist: EU delays key AI legislation under pressure from Big Tech

    Brussels twist: EU delays key AI legislation under pressure from Big Tech

    The European Commission is taking a clear step backwards on the digital regulation front. Responding to growing criticism from US tech giants and concerns about the region’s economic competitiveness, Brussels on Wednesday proposed a package of changes known as the ‘Digital Omnibus’. The proposal involves not only simplifying the bureaucracy, but above all significantly delaying the implementation of the restrictive requirements of the Artificial Intelligence Act (AI Act).

    The most important element of the proposal is the postponement of deadlines for AI systems classified as ‘high-risk’ solutions. Originally, the stringent rules were due to take effect in August 2026, but the Commission is now suggesting postponing them until December 2027. The decision represents a significant breather for companies deploying algorithms in sensitive areas such as employee recruitment, credit scoring, healthcare services, critical infrastructure or biometric identification.

    However, the change of course goes beyond the implementation calendar itself. The proposed adjustments also touch the ‘holy grail’ of European regulation: the GDPR. The new legal framework is intended to make it easier for giants such as Alphabet (Google), Meta and OpenAI to use Europeans’ personal data to train their artificial intelligence models. This is a direct response to the arguments of the industry, which has long maintained that the EU’s stringent data protection rules create an insurmountable barrier to innovation in the race against the US and China. In addition, the package provides for the simplification of the cookie consent mechanisms that annoy users.

    Although officials in Brussels asserted at the briefing that ‘simplification is not deregulation’ but merely a critical review of the regulatory environment, the move is part of a wider trend. As with the recent relaxation of environmental regulations, the EU appears to be bowing to pressure from business and the risk of political retaliation from Washington. The Digital Omnibus has yet to be approved by the member states, but the very fact of its creation signals that Europe is beginning to rethink its role as the global digital sheriff in favour of economic pragmatism.

  • The end of the wild west in the cloud. European Union takes 19 IT giants under the microscope

    The end of the wild west in the cloud. European Union takes 19 IT giants under the microscope

    European Union regulators made an unprecedented move on Tuesday, naming 19 companies – including Amazon Web Services, Google Cloud and Microsoft – as critical service providers for European banking. The decision fundamentally changes the balance of power between Big Tech and financial supervision, moving the relationship from a partnership to a strictly regulated level.

    The move is a direct consequence of the Digital Operational Resilience Act (DORA) regulation coming into force. The new legislation gives European supervisory authorities (EBA, EIOPA, ESMA) the power to directly control technology companies, which until now have only been accountable to their business customers. The regulators make no secret of the fact that their main aim is to mitigate systemic risk. In this era of widespread digitalisation, a failure at one of the leading cloud providers could paralyse a significant part of the European banking system, triggering a domino effect with difficult-to-quantify consequences.

    The list of players covered by the new oversight regime is diverse, showing how deeply technology has penetrated finance. In addition to the cloud ‘big three’ (AWS, Google, Microsoft), it includes IBM, market data providers such as Bloomberg and the London Stock Exchange Group (LSEG), as well as telecoms operators, including Orange, and consultancies such as Tata Consultancy Services. Each of these entities will now have to prove that it has an adequate risk management framework in place and that its infrastructure is resilient to cyber attacks and technical failures.

    The industry reaction to the announcement was measured and diplomatic, suggesting that the tech giants had been preparing for this scenario for a long time. Representatives from Microsoft and Google Cloud immediately declared their full willingness to cooperate, emphasising their commitment to cyber security. The LSEG, in turn, openly welcomed the new designation, seeing it as a confirmation of its key role in the ecosystem. Silence has so far been maintained by Bloomberg and Orange, which may indicate ongoing internal analyses of the new regulatory obligations.

    Brussels’ decision is part of a wider global trend of tightening control over critical infrastructure. The European Central Bank explicitly lists technological disruption alongside geopolitical tensions as the main threats to the sector. Similar steps are being taken by the UK, although the legislative process there is lagging behind that of the EU – London does not plan to designate its critical entities until next year. Europe is thus once again becoming a testing ground for new regulatory standards in the technology world.