Category: Legislation and regulations

  • Big Tech vs Australia. New law to force platforms to pay publishers

    Australia is once again becoming a global testing ground for the state–Big Tech relationship. The government in Canberra has announced plans to introduce a ‘News Bargaining Incentive’ – a mechanism to replace the existing, ineffective 2021 regime. The new rules present giants such as Meta, Alphabet and TikTok with a stark choice: either negotiate commercial deals with local publishers, or face a levy of 2.25% of their local revenues.

    According to the bill, which is expected to come into force in July 2025, proceeds from the new levy will not go into the general state budget but will be redirected directly to media organisations. The key criterion for distributing the funds is to be the number of journalists employed, so as to reward genuine content creation rather than mere reach. Prime Minister Anthony Albanese, despite warnings from the US administration about possible retaliatory tariffs, emphasises the sovereignty of Australian economic policy.
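
    The arithmetic of the scheme can be sketched in a few lines. This is a minimal illustration assuming only the 2.25% rate and the journalist-headcount criterion from the announcement; all revenue and staffing figures below are invented, and it assumes a platform that signs no deals at all:

```python
# Illustrative sketch of the News Bargaining Incentive's arithmetic.
# Only the 2.25% rate and the journalist-headcount criterion come from the
# announced proposal; all company figures below are hypothetical.

LEVY_RATE = 0.0225  # 2.25% of Australian revenue for platforms with no deals

def levy_owed(local_revenue_aud: float) -> float:
    """Levy a platform faces if it signs no commercial deals with publishers."""
    return local_revenue_aud * LEVY_RATE

def distribute_by_headcount(pool_aud: float, journalists: dict) -> dict:
    """Split pooled proceeds among outlets pro rata by journalists employed."""
    total = sum(journalists.values())
    return {outlet: pool_aud * n / total for outlet, n in journalists.items()}

# A platform with AUD 2bn of local revenue that strikes no deals:
pool = levy_owed(2_000_000_000)  # AUD 45,000,000
shares = distribute_by_headcount(pool, {"Outlet A": 300, "Outlet B": 100})
print(shares)  # {'Outlet A': 33750000.0, 'Outlet B': 11250000.0}
```

    How far deal spending would offset the levy, and how exactly headcount is audited, are details the bill still has to settle.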

    Australia’s move marks a shift from a soft negotiation model to hard fiscal enforcement. The previous system allowed platforms to avoid payment by letting agreements lapse or, in extreme cases, blocking news content outright, something Meta already tested in 2021. The current proposal is much harder to neutralise at an operational level – a tax on revenue is a cost that cannot be avoided with a simple algorithm change.

    However, the geopolitical risks are worth noting. Donald Trump’s announcements of tariffs on countries that tax US technology companies suggest that local journalism protection could become the trigger for a wider trade conflict. For the technology sector, this represents a period of increased volatility and the need to review strategies for presence in markets with strong protectionist tendencies.

  • DeepSeek and Chinese AI – Why is the State Department warning allies?

    US diplomacy is entering a new phase of its offensive against China’s artificial intelligence leaders. The State Department has issued global guidance to its diplomatic posts, instructing them to warn foreign governments about the practices of companies such as DeepSeek, Moonshot AI and MiniMax. The crux of the dispute is no longer just access to processors, but the process of so-called distillation, which Washington explicitly calls the theft of American technological know-how.

    From a business perspective, distillation is a tempting shortcut. It allows smaller, cheaper-to-run models to be trained on the outputs generated by powerful systems such as those from OpenAI. For Chinese startups, it is a way to erode the US advantage at a fraction of the research cost. According to the US administration, however, this process not only copies intellectual work but is done without authorisation, striking at Silicon Valley’s commercial foundations.
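
    The mechanics behind this shortcut can be sketched with a toy example: a ‘teacher’ model’s softened output distribution becomes the training target for a much smaller ‘student’, which never needs access to the teacher’s weights or training data. The logits below are made up; only the general technique (temperature-scaled soft targets) is standard:

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

T = 2.0  # distillation temperature

# The teacher's logits for one input. In a real setting these come from
# querying a large model's API; the student never sees its weights.
teacher_logits = np.array([4.0, 1.0, 0.5])
soft_targets = softmax(teacher_logits, T)

# The student fits its own logits to the teacher's soft distribution by
# gradient descent on the cross-entropy; the gradient is (p - q) / T.
student_logits = np.zeros(3)
for _ in range(2000):
    p = softmax(student_logits, T)
    student_logits -= 1.0 * (p - soft_targets) / T

print(np.allclose(softmax(student_logits, T), soft_targets, atol=1e-4))  # True
```

    In practice the student is a full network trained on millions of teacher responses, but the cost asymmetry is the same: querying outputs is far cheaper than replicating the original research.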

    DeepSeek’s situation is key here. The startup, which recently electrified the market with its V3 model, has just unveiled the V4 version, optimised for Huawei hardware. This is a clear signal of an intent to build an independent ecosystem that challenges the hegemony of Nvidia and Microsoft. While DeepSeek has consistently denied using synthetic data from OpenAI, US lawmakers have received reports suggesting the opposite: that it deliberately replicated the behaviour of those models in order to clone them.

    Washington warns that ‘distilled’ models often lack built-in safeguards and controls, making them unpredictable for corporate use. At the same time, many Western institutions are already banning the use of DeepSeek tools, citing data privacy concerns.

    The timing of this escalation is no coincidence. The escalation in rhetoric comes just weeks before President Donald Trump’s planned visit to Beijing. The dispute over AI intellectual property becomes a bargaining chip in a broader technology war, which, after a brief period of relaxation, is again gaining momentum. The choice of AI model supplier is ceasing to be a purely technical decision and is becoming a statement in a growing geopolitical conflict.

  • Azure tax? UK court clears the way for billion-pound lawsuit against Microsoft

    The London Competition Appeal Tribunal (CAT) has made a decision that could fundamentally change the European cloud infrastructure market. Microsoft, after months of trying to dismiss the claims, must brace itself for a massive lawsuit. At stake is £2.1 billion in damages and the future of a licensing strategy that has been controversial for years among finance and technology executives around the world.

    The case, led by Maria Luisa Stasi on behalf of nearly 60,000 UK businesses, strikes at the heart of Microsoft’s business model. The crux of the dispute is not about the quality of cloud services per se, but about the way the Redmond giant prices Windows Server software licences. According to the plaintiffs, Microsoft has a discriminatory pricing policy: companies choosing to run Windows Server on competitors’ platforms, such as Amazon Web Services, Google Cloud or Alibaba, pay much higher wholesale rates than users choosing the native Azure environment.

    From a business perspective, this means that Azure does not just win by technological prowess, but by an artificially generated cost advantage. For many organisations that have historically based their infrastructure on Microsoft solutions, moving to a competing cloud involves a hidden ‘tax’ that is ultimately charged to their margins or passed on to end customers.

    Microsoft has consistently defended its strategy, arguing that an integrated business model fosters innovation and allows it to offer better solutions within its own ecosystem. Company representatives have announced an appeal, challenging the methodology for calculating the alleged losses and pointing to the dynamic nature of the cloud market.

    However, the London tribunal’s decision coincides with increasing regulatory pressure. The UK Competition and Markets Authority (CMA) and authorities in the EU and US are looking increasingly closely at practices that restrict software interoperability.

    The market is no longer willing to accept technology lock-in with impunity. If Microsoft loses or is forced to settle, we will see not only gigantic compensation payments, but above all a levelling of the price playing field in the cloud. This could pave the way for a new wave of data migration, where performance rather than convoluted and expensive licensing provisions will determine the choice of provider.

  • Facebook and Instagram fraud. Meta faces court over advertising profits

    For the tech giants, the line between aggressive monetisation and user safety has been up for debate for years, but a new class action complaint against Meta Platforms may take this dispute to a whole different level of financial accountability. The Consumer Federation of America (CFA) is hitting a sensitive spot in Mark Zuckerberg’s empire with the claim that the company’s business model not only tolerates, but even systemically rewards fraudulent advertising campaigns.

    The case, which has reached the Supreme Court in Washington, is based on extremely incriminating data, allegedly coming from inside the corporation itself. According to Meta’s estimates for 2024, every day Facebook and Instagram users could see up to 15 billion ads classified as ‘high-risk’. What is a risk for the consumer has become a tangible profit for the shareholder. The complaint suggests that revenue from this could have reached $7 billion a year, and the company’s internal projections indicated that as much as one in ten dollars earned by Meta could come from displaying banned or fraudulent content.

    For managers and investors, the key aspect of this battle is not just reputational but, above all, the sustainability of advertising systems. The CFA sheds light on so-called ‘agency accounts’ and collaborations with partners in China who act as intermediaries in the resale of advertising. This complex ecosystem, designed to maximise reach, has, according to the accusers, become a conduit for misleading millions of people while keeping the corporation at a safe distance from the fraud itself.

    Meta is firing back, claiming that the allegations paint a false picture of its operations. The company stresses that it is intensifying its vetting processes for advertisers and introducing blocks on redirects from financial ads to private messaging, a typical mechanism in phishing scenarios. For the technology market, however, this case signals that the era in which platforms could invoke their ‘neutral intermediary’ status is finally coming to an end.

  • NSA uses Claude Mythos despite official Pentagon ban

    According to Axios, citing sources close to the intelligence community, the National Security Agency (NSA) is actively using Anthropic’s latest model, Claude Mythos. There would be nothing unusual about this were it not for the fact that the same administration has officially declared Anthropic a ‘supply chain risk’, which should in theory shut it out of government contracts.

    This rupture within the US security apparatus is indicative of a wider problem: the tension between the ethics of AI developers and the military ambitions of the state. Anthropic was blacklisted not because of technical loopholes or links to foreign intelligence, but as a result of an ideological clash. The company refused to allow the Pentagon to use its models for mass surveillance of citizens and the development of autonomous combat systems. In response, Defence Secretary Pete Hegseth gave the company a risk label, hitherto reserved for entities linked to authoritarian regimes.

    For the technology business, this situation is a lesson in pragmatism. The NSA, whose statutory mandate is to crack ciphers and operate offensively in cyberspace, has apparently decided that Claude Mythos is too powerful a tool to give up. The model has shown remarkable effectiveness in identifying zero-day bugs and finding backdoors in foreign software. In the face of such unique capabilities, the Pentagon’s political pronouncements count for little.

    The current state of affairs is a classic bureaucratic farce with serious market implications. While the Pentagon is publicly warning against Anthropic, the intelligence services are signing new contracts with the company, arguing for national security needs. This sets a dangerous precedent in which security labels are used as a leverage tool in contract negotiations rather than as a real threat assessment.

    The technical value of AI is proving stronger than political arbitration. Anthropic is currently fighting to clear its name through legal means, but it is the actual demand from agencies such as the NSA that may prove its most effective line of defence.

  • ChatGPT as a search engine? EU checks OpenAI for DSA

    When OpenAI integrated search functions directly into ChatGPT, the boundary between an AI assistant and a traditional search engine became blurred. Now the European Commission intends to formalise this boundary. Commission spokesperson Thomas Regnier confirmed that Brussels is analysing whether OpenAI’s flagship product should be classified as a Very Large Online Search Engine (VLOSE) under the Digital Services Act (DSA).

    The decision comes after OpenAI disclosed operational data that puts the company in a difficult negotiating position. Under EU rules, the threshold for enhanced supervision is 45 million monthly users in the EU. Meanwhile, ChatGPT Search recorded an average of 120.4 million monthly active users in the six months ending September 2025, almost three times the limit, and crossing it triggers the strict obligations the DSA imposes on tech giants in algorithmic transparency and systemic-risk management.

    For OpenAI, a possible reclassification would mark the end of an era of freedom to shape search results. As a VLOSE, Sam Altman’s company would have to share its data with researchers, undergo annual external audits and proactively counter misinformation, under threat of penalties of up to 6% of global turnover. Although the Commission says it assesses large language models case by case, the precedent set by ChatGPT could define the future of the entire generative AI sector in Europe.

    The move forces OpenAI investors and partners to re-evaluate operating costs in the European market. Rather than focusing solely on product innovation, the AI market leader now has to expand its powerful compliance apparatus to meet the demands that have so far mainly concerned Google or Bing. Europe is once again showing that there is a high price to pay for access to its gigantic internal market in the form of strict supervision.

  • IBM drops demographic targets. Lesson from the settlement for which the giant paid $17m

    IBM, the technology giant once a symbol of progressive HR management, has agreed to pay $17 million as part of a settlement with the US Department of Justice. This not only brings closure to the legal dispute, but signals to boards that the days of ‘diversity modifiers’ in their current form are coming to an end.

    The IBM case is the first high-profile success of a newly formed entity, the Civil Rights Fraud Initiative. The unit, set up as part of a broad offensive by the Donald Trump administration, adopted an unexpected tactic: rather than focusing solely on ideological debate, officials used civil anti-fraud law to strike at financial mechanisms promoting DEI (Diversity, Equity, Inclusion).

    The main sticking point appeared to be IBM’s bonus system. The government claimed that the company used algorithms that made executive bonuses dependent on the achievement of specific demographic indicators. From Washington’s perspective, such an arrangement is a form of ‘anti-meritocracy’ that discriminates against non-preferred groups. IBM, while not admitting guilt and stressing that the settlement does not constitute an admission of legal liability, has decided to modify its programmes.

    This case shows that HR policy has ceased to be the internal domain of HR departments and has become an area of high regulatory risk. Companies that have built their culture around tough diversity targets over the past decade now have to revise these strategies. The risks are no longer limited to image damage, but include real financial sanctions and potential exclusion from federal contracts.

    We are now seeing a massive retreat from radical policies of inclusivity. Many US corporations, observing the direction of change in the White House, are quietly backing away from public statements on quotas.

  • Is Claude Mythos from Anthropic threatening the banks? Urgent talks in London and the US

    As the Financial Times reports, UK regulators – including the Bank of England and the FCA – are urgently reviewing the potential risks posed by the latest AI model from Anthropic: the Claude Mythos Preview.

    The situation is unprecedented, as the model is not just another chatbot for generating marketing content. Claude Mythos is being developed as part of the enigmatic ‘Project Glasswing’ initiative. According to Anthropic’s official communications, this is a controlled environment in which the model serves a defensive purpose. The problem is that the line between defence and attack in cyberspace is thinner than ever.

    The manufacturer itself has admitted that Mythos has already identified thousands of critical vulnerabilities in operating systems and browsers. What is a breakthrough for security engineers is becoming a nightmare for guardians of the financial system. If the model can pinpoint vulnerabilities in global software with such ease, the critical IT infrastructure of major banks, insurers and stock exchanges could be exposed.

    Concern is not just confined to the City of London. Across the ocean, US Treasury Secretary Scott Bessent has already convened a meeting with Wall Street giants to assess the cyber risks of developing such sophisticated models. The reaction of regulators suggests that we are standing on the threshold of a new era of risk management, where the biggest threat to banks is no longer bad loans, but artificial intelligence capable of autonomously detecting errors in the code on which the global circulation of money is based.

    Over the next two weeks, representatives of the UK financial sector are to receive detailed briefings from the National Cyber Security Centre (NCSC). The message for business leaders is clear: it is time for IT security audits to stop being a formality and become a real battleground against a model that learns faster than any hacker. Project Glasswing was supposed to bring transparency, but for now it has cast a long shadow over confidence in the digital stability of the financial sector.

  • Child safety online. Courts hit back at social media giants

    For nearly three decades, Section 230 of the US Communications Decency Act was the most effective line of defence for technology giants. This provision, which shields platforms from liability for user content, was the foundation on which giants such as Meta and Google grew. However, recent jury verdicts in California and New Mexico suggest that the era of impunity built on this provision is coming to an end, and the focus of litigation is shifting from the content itself to the architecture of the systems.

    In Los Angeles, a jury found Meta and Google liable for a young woman’s mental health problems, ordering the payment of $6 million in damages. An even more severe blow fell on Meta in New Mexico, where it was ordered to pay $375 million for misrepresenting the safety of its products and allowing abuse of minors. The key here, however, is not the damages themselves, but the legal strategy: the plaintiffs successfully proved that it was not the specific post or video that was harmful, but the deliberate design of the algorithms and interfaces to addict the user.

    Courts are beginning to distinguish between a platform’s role as a ‘transmitter’ of information and its role as a ‘designer’ of experiences. If these rulings hold up in the appellate processes, every product feature – from the infinite scroll mechanism to recommendation systems – could become the basis for multi-billion dollar lawsuits.

    The risk is not limited to social media. Similar battles are already being fought by Roblox, and experts warn that all platforms hosting user-generated content, including gaming or e-commerce sites, could be targeted.

    Although Meta and Google are announcing a fight in the higher courts, the mood in the US legal system is changing. Even Supreme Court judges are suggesting that Section 230 cannot be a ‘get-out-of-jail-free card’ that exempts companies from elementary concern for the safety of their customers. For technology leaders, the time is coming when an ethical audit of algorithms will become as important as a financial audit. The outcome of the upcoming appeals will not only decide the fate of thousands of pending cases, but will set new rules of the game for the entire digital economy.

  • Court blocks Pentagon. Anthropic temporarily removed from blacklist

    Federal Judge Rita Lin has temporarily halted the US Department of Defense’s decision to list Anthropic as a threat to the nation’s supply chain. The ruling is the culmination of a high-profile dispute between the maker of Claude and the Pentagon over the limits of military and intelligence use of artificial intelligence.

    The conflict escalated when Defence Secretary Pete Hegseth imposed a rarely used security risk label on Anthropic. This status, usually reserved for companies vulnerable to infiltration by foreign powers, prevented the company from bidding for key defence contracts. Anthropic argued in its lawsuit that the government’s decision was unlawful retaliation for its refusal to adapt Claude’s technology to domestic surveillance and autonomous weapons systems.

    In a 43-page memorandum of reasons, Judge Lin upheld the company’s argument, finding that the administration’s actions amounted to punishment for public criticism of the government’s position, in violation of the First Amendment to the US Constitution. The court also highlighted the government’s failure to provide due process, which prevented Anthropic from effectively challenging the designation before it took effect.

    From the Pentagon’s perspective, Anthropic’s resistance sets a dangerous operational precedent. The Justice Department argues that restrictions imposed by AI vendors can lead to technical uncertainty and the risk of sudden shutdown of military systems during missions. The government maintains that the designation was solely due to the company’s refusal to accept the contract terms, not its ethical views.

    Anthropic executives estimate that exclusion from government contracts could cost the company billions of dollars in lost revenue. While the current ruling gives the company breathing space, the administration has seven days to file an appeal. At the same time, a second civil government contract proceeding is pending in Washington, which remains a separate risk to Anthropic’s business model.

  • Trump appoints tech giants to AI council: Brin, Su and Huang at PCAST

    President Donald Trump’s decision to appoint Mark Zuckerberg, Jensen Huang and Larry Ellison to the President’s Council of Advisors on Science and Technology (PCAST) signals that the administration is abandoning its role as a strict arbiter in favour of that of a business partner, with Washington officially recognising AI as the most important battleground in the strategic rivalry with China.

    The composition of the council resembles the guest list of the world’s most exclusive technology conference. In addition to the leaders of Meta, Nvidia and Oracle, the group included Sergey Brin of Alphabet and Lisa Su of AMD. The presence of these names at one table with David Sacks, acting ‘czar’ for AI and crypto, suggests a new era of pragmatism. Instead of building regulatory barriers, the White House wants to dismantle them, something Trump signalled in his first days in office by commissioning a plan to accelerate innovation.

    The selection of Bob Mumgaard of Commonwealth Fusion Systems to join this group further indicates that the administration recognises the inextricable link between the development of artificial intelligence and the need for the gigantic clean energy resources required to power the data centres of the future.

    This partnership, however, raises important questions about transparency and the influence of large corporations on government policy. While Zuckerberg and Huang publicly declare their desire to empower the US, others, such as Oracle and Alphabet, remain reserved for the time being. Nevertheless, the council’s appointment ends a period of uncertainty about the direction US tech legislation will take.

    The direction is clear: deregulation, market dominance and a close symbiosis between Silicon Valley and Pennsylvania Avenue. In the race for supremacy in the field of AI, the United States has just set its sights on its strongest players, hoping that their private interest will turn out to be the same as the national interest.

  • Snapchat and the EU DSA law: Child safety investigation launched

    Snapchat, the platform that once revolutionised the way young people communicate, has found itself at the centre of Europe’s most important digital responsibility dispute in years. The European Commission has formally launched an investigation against Snap Inc. alleging systemic failures to protect minors. The case goes beyond mere scrutiny – it is a test of the effectiveness of the EU’s Digital Services Act (DSA), which could cost the giant up to 6% of global revenues.

    EU Commissioner Henna Virkkunen has put the case sharply, suggesting that Snapchat has failed to bring its standards up to the strict requirements of the law. The allegations are serious, ranging from ineffective moderation tools that allow for drug and e-cigarette trafficking, to a hollow age verification system.

    Of particular concern is the phenomenon of so-called ‘child grooming’ and the ease with which minors can be exposed to criminal content. Brussels has also taken over an earlier investigation by Dutch regulators focusing on the sale of vaporisers to children.

    This shows that the EU intends to act as a single, centralised supervisory authority, eliminating fragmented attempts at national enforcement.

    From a business perspective, Snapchat’s situation is complicated. The company has struggled for years to monetise its user base in the shadow of Meta and TikTok. The need to overhaul the app’s architecture – including changing default account settings and eliminating so-called ‘dark patterns’ (deceptive interfaces) – could affect user engagement and growth rates.

  • New ban on router sales in the US. A blow to TP-Link and European brands

    The Federal Communications Commission’s (FCC) latest decision to ban the sale of consumer routers manufactured outside the US is a drastic turn that will echo not only in Shenzhen but also in Berlin and Paris. Under the banner of ‘reliable supply chains’, Washington is de facto building a digital wall around its own market.

    Although the original target of the regulation seemed to be Chinese players such as the market-dominant TP-Link, a literal interpretation of the new guidelines is ricocheting into European technology leaders. Germany’s Fritz!Box and other brands from the Old Continent have been lumped into the same basket as Asian manufacturers. For the US regulator, the origin of the equipment becomes a binary choice: either the device carries the ‘Made in America’ label or it is treated as a potential threat to critical infrastructure and citizens’ privacy.

    For the technology business, this decision is a logistical nightmare. Even US giants such as Netgear, which for years optimised costs by manufacturing in Asia, face a murderous dilemma. Obtaining an exemption from the ban requires not only thorough justification, but above all a concrete plan for repatriating production to US soil. This signals that the US administration is no longer looking for compromises on cyber security, but is forcing a complete overhaul of global supply routes.

    From a market perspective, the FCC’s move is adding fuel to the geopolitical fire. While the US aggressively eliminates foreign technology from its homes and offices, any attempt to retaliate against US companies abroad is interpreted by Washington as a personal attack. For investors and business leaders, the message is clear: the era of a global, unified network based on the cheapest hardware is coming to an end. We are entering the era of ‘router sovereignty’, where market success is determined not only by bandwidth or price, but above all by the postcode of the factory where the hardware was made. The American market, hitherto the most receptive in the world, is becoming an exclusive club to which only those who play by the new local rules have access.

  • Digital tax in the Polish government’s work list. Who will the new tax cover?

    Poland’s Minister of Digitalisation, Krzysztof Gawkowski, announced that a bill including a digital tax has been added to the government’s work list. If adopted and enacted, it would position Poland alongside France and Italy, creating a local response to the sluggish progress toward a global tax agreement at OECD level.

    The proposed tax structure precisely targets the largest entities. With the revenue threshold mechanism set at EUR 1 billion on a global scale and PLN 25 million on the local market, the new burden will bypass Polish start-ups and medium-sized platforms. The Ministry of Digitalisation is sending a clear message: we are taxing scale, not innovation.

    Selectivity architecture

    The key to understanding the new regulation is who is missing from it. The government has opted for broad exemptions that protect traditional e-commerce and the financial sector. A fashion brand’s online shop or a bank’s mobile app remain outside the reach of the new tax. Instead, the tax will strike at the heart of the business model of platforms such as Google, Meta or Amazon – where revenue is generated through personalised advertising, marketplace intermediation and monetisation of user data.

    The maximum rate of 3% on gross revenues may seem low, but in the world of technology, where operating margins are under constant pressure, it is a significant amount. An important safety net for companies with a real investment presence in Poland is that the new levy can be reduced by the income tax (CIT) paid. This suggests that the government does not want to penalise companies with a physical presence in the country, but rather those that transfer profits to jurisdictions with more favourable taxation.
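
    Under the stated assumptions (the thresholds, the 3% rate and the CIT offset as described above), the mechanics reduce to a few lines; the example company’s figures are invented for illustration:

```python
# Illustrative sketch of the proposed Polish digital tax. The thresholds, the
# 3% rate and the CIT offset come from the bill as described; the example
# companies' figures are hypothetical.

def digital_tax_due(global_revenue_eur: float,
                    polish_digital_revenue_pln: float,
                    cit_paid_pln: float,
                    rate: float = 0.03) -> float:
    """Levy on in-scope Polish digital revenue, creditable against CIT paid."""
    in_scope = (global_revenue_eur >= 1_000_000_000           # EUR 1bn globally
                and polish_digital_revenue_pln >= 25_000_000)  # PLN 25m locally
    if not in_scope:
        return 0.0
    gross_levy = rate * polish_digital_revenue_pln
    return max(gross_levy - cit_paid_pln, 0.0)  # reduced by CIT already paid

# A global platform with PLN 800m of in-scope Polish revenue, PLN 10m CIT:
print(digital_tax_due(5_000_000_000, 800_000_000, 10_000_000))  # 14000000.0
# A platform below the PLN 25m local threshold owes nothing:
print(digital_tax_due(2_000_000_000, 20_000_000, 0))  # 0.0
```

    The CIT credit is what spares companies with a genuine local presence: a platform whose Polish CIT already exceeds 3% of its in-scope revenue would owe no additional levy at all.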

    Digital arms fund

    Behind the ideological façade of ‘levelling the playing field’ lies hard budget mathematics. Estimates indicate that by 2030, the tax could feed the state coffers with more than PLN 3 billion a year. However, Warsaw does not intend to use these funds for current consumption. The strategy is to create a closed loop: the money collected from the giants is to return to the market in the form of investments in Polish AI, cyber security and digital competence.

    For the local tech ecosystem, this is a double-edged sword. On the one hand, the announcement of billions in AI subsidies is promising. On the other – there is a legitimate fear that the platforms subject to the tax will simply raise commissions for Polish vendors or increase advertising prices, which will ultimately be financed by the domestic consumer.

    International context

    Poland is embarking on a path previously followed by Austria or the UK, among others, while ignoring the cautious attitude of Germany or Ireland. The decision comes at a time when discussions on the so-called OECD First Pillar have stalled. By introducing its own solution, Warsaw gains negotiating leverage, but exposes itself to potential trade retaliation, particularly from the US, which traditionally sees digital taxes as discriminatory towards its domestic champions.

    The details of the definition of ‘digital interface’ and the role of the tax representative will be crucial in the coming months. It is in these technical provisions that the question of how profoundly the new tax will affect the profitability of digital operations in Central Europe will be decided.

  • Vietnam’s cost savings. How Huawei’s 5G could spook EU investors

    Vietnam’s cost savings. How Huawei’s 5G could spook EU investors

    Hanoi, for years wary of Chinese involvement in its critical infrastructure, is making a sharp turn. The decision by Vietnam’s state-owned operators to award 5G network construction contracts to Huawei and ZTE is causing a stir in Brussels and Washington. While Vietnam’s motivations are pragmatic – Chinese equipment is cheaper and proven in the region – the price of this saving may prove high in the currency most valuable to Hanoi: foreign direct investment.

    During the EU-Vietnam Investment Forum, EU Commissioner for International Partnerships Jozef Sikela sounded a clear warning: 5G today is not just faster internet, but a fundamental layer of industrial security. For global players such as Adidas and Lego, which have located key production centres in Vietnam, data integrity is a prerequisite for further scaling of operations. If Western managers come to suspect that their trade secrets are travelling over the infrastructure of a provider considered risky in Europe and the US, they may withhold further tranches of capital.

    This situation puts Vietnam in a difficult geopolitical position. The country has been a beneficiary of the China Plus One strategy, attracting companies fleeing the Middle Kingdom. But now, by integrating Chinese technology into the heart of its digital economy, Hanoi risks losing its ‘safe haven’ status. Local authorities downplay the risks, pointing to the reliability of Huawei’s technology, but for Brussels, 5G is a ‘new battleground’ where trust in the supplier is more important than technical specifications.

    Paradoxically, European giants Ericsson and Nokia continue to build the core of Vietnam’s network, but the admission of Chinese rivals on a wider scale is changing the market dynamics. The European Union, despite criticism, is not withdrawing from Vietnam, announcing new investment packages in the transport and energy sectors. Nevertheless, the message coming from Europe is clear: network security is the foundation of modern business.

    For policymakers in Hanoi, the coming months will be a balancing test. They must decide whether the short-term benefits of cheaper infrastructure outweigh the risk of a long-term outflow of Western capital that has driven Vietnam’s economic miracle for decades. In a world where technology is inextricably linked to politics, choosing a 5G provider becomes one of the most important business decisions of the decade.

  • The end of Huawei equipment in the EU?

    The end of Huawei equipment in the EU?

    For years, European telecoms giants have sheltered behind a convenient argument: that national security is the exclusive domain of national capitals, not EU officials. The latest opinion of the Advocate General of the EU Court of Justice in the case of Estonian operator Elisa drastically changes that dynamic. It signals that the era of voluntary removal of Chinese technology from 5G networks is coming to an end, and that the bill for the transition will fall almost entirely on the private sector.

    Geopolitics over balance sheets

    Advocate General Tamara Ćapeta’s opinion is a powerful tool for ‘security hawks’ in Brussels and Washington. Her confirmation that the EU has the power to mandate the exclusion of high-risk providers such as Huawei and ZTE strikes at the foundations of many operators’ strategies. These companies have long lobbied against radical cuts, calling them an “act of self-harm” that would delay the digitalisation of the continent.

    For the European Commission, represented by Henna Virkkunen among others, this is a long-awaited breakthrough. Until now, the tardiness of the member states in implementing the ‘5G Toolbox’ was due to fears of trade retaliation from Beijing. Now Brussels gains the legitimacy to turn soft guidelines into hard, binding law.

    Billion-dollar risk without amortisation

    A key element of the opinion, and the one that will chill the boards of European telcos, is the issue of compensation. The Advocate General made it clear that operators cannot count on automatic compensation for replacing the flagged equipment. The only exception is a ‘disproportionately heavy’ burden, which is extremely difficult to prove in court practice.

    The scale of the challenge is enormous. Estimates suggest that removing critical components from high-risk suppliers could consume between €3.4 billion and €4.3 billion a year across the bloc. The lack of public support means these funds will be diverted from budgets for innovation and development of the 6G standard, which could undermine Europe’s competitiveness against the US and Asia.

    While Huawei calls for an assessment based on specifics rather than ‘general suspicions’, the trajectory is clear. The market is being forced to turn sharply towards Nokia and Ericsson. The example of Elisa, which has already replaced most of its infrastructure with Finnish solutions, shows that this process is inevitable.

    Regulatory risk related to geopolitics has become a fixed cost. The final CJEU ruling, expected later this year, is likely to seal this direction, forcing operators to fundamentally rethink their investment strategies for the next decade.

    Source: Politico

  • How much does AI replacement cost? Pentagon counts losses after Anthropic blockade

    How much does AI replacement cost? Pentagon counts losses after Anthropic blockade

    Defence Secretary Pete Hegseth’s decision to list Anthropic as a supply chain risk and order the withdrawal of its tools from the Pentagon within six months has created a breach that the US military is unwilling – and perhaps unable – to patch quickly.

    The context of the security barriers (guardrails) dispute between the startup and the Department of Defence exposes the modern military’s deep dependence on specific language models. Claude, Anthropic’s flagship product, became, in July 2025, the first AI model approved for secret military networks. Today, despite being blacklisted, it is still in use, which experts read as proof of its unrivalled performance in critical tasks such as operations planning or intelligence analysis.

    The Pentagon’s problem is not just a matter of user preference, although these users openly criticise alternatives such as Grok from xAI for inconsistency. It is primarily an operational and financial crisis. Joe Saunders, CEO of RunSafe Security, points to a brutal reality: recertifying systems for new AI models can take 12 to 18 months.

    For the Pentagon, this means not only gigantic costs, but above all a drastic drop in productivity. In some units, tasks that Claude used to do in seconds – such as searching through huge data sets – are now done manually using Excel sheets.

    The scale of Claude’s integration with defence infrastructure is striking. Even flagship projects such as Palantir’s Maven Smart System, with contract values in excess of a billion dollars, rely on code and workflows built around Anthropic’s model. Rebuilding them is an arduous and risky process.

    A blame game is now under way at the Pentagon. Some officials and contractors are ‘slowing down’ the decommissioning of the tools, hoping to reach a compromise before the six-month deadline. It is a classic clash between rapid technology adoption and national security policy. If the Pentagon cannot replace Anthropic quickly and effectively, its pursuit of technological sovereignty risks costing it effectiveness, the most important currency on the modern battlefield.

  • The Kremlin’s digital wall. Internet blockades hit Russian business

    The Kremlin’s digital wall. Internet blockades hit Russian business

    Office workers cut off from the network, lost taxi drivers without navigation and young people locked in a constant cat-and-mouse game with VPN blockers. This is not an infrastructure failure, but a new, planned reality in Russia. The Kremlin is drastically tightening its control over the country’s internet, hitting Western platforms and popular messengers and fundamentally changing the conditions for business and society there.

    In major metropolitan areas, mobile internet is now being deliberately switched off at intervals. At the same time, the authorities are throttling bandwidth on WhatsApp and Telegram and blocking virtual private networks en masse. The Kremlin spokesperson justifies these steps on the grounds of state security, pointing to the threat from drones, which can use mobile networks for precision navigation, and to the reluctance of foreign companies to comply with local laws. From a business perspective, however, this sends a clear message about the ultimate subordination of telecommunications infrastructure to the interests of the security apparatus.

    The motive driving these measures is political pre-emption and internal risk management. The new legislation gives the Federal Security Service unprecedented powers, allowing it to demand that any operator immediately cut off its services. Foreign diplomats and analysts suggest that Moscow, taking its cue from Chinese and Iranian surveillance models, is creating an architecture ready for any macroeconomic and geopolitical scenario. The memory of the systemic chaos following the war in Afghanistan at the end of the USSR prompts the current elites to build an airtight ecosystem, designed to prevent loss of control over the information market regardless of how the current armed conflict turns out.

    This clash at the intersection of politics and technology is reshaping the local digital market. Pavel Durov, founder of Telegram, openly criticises the restrictions, calling them evidence of the state’s fear of the free exchange of information. In place of blocked global services from Western giants such as Meta, the state administration is actively pushing its own supervised digital solutions, such as the MAX app. While officials explain this by the need to protect against Western influence, for the Russian economy it means drastic isolation and the need to navigate a highly unstable, manually controlled environment.

  • No more smartphones in primary schools. Government speeds up changes

    No more smartphones in primary schools. Government speeds up changes

    Poland joins the growing number of European countries that are choosing to systemically restrict the presence of smartphones in primary education. The Minister of National Education, Barbara Nowacka, has announced the acceleration of legislation that will introduce a top-down ban on mobile phones in primary schools from 1 September 2026. The decision, which was consulted directly with Prime Minister Donald Tusk, signals a move away from the existing autonomy of establishments towards a unified state strategy.

    The move is not just a response to teachers’ requests, but part of a broader strategy based on hard data. The ministry refers to the findings of the ‘Youth Diagnosis 2026’, which clearly indicate a deepening crisis of digital hygiene and a growing dependence of the youngest on social media. From a business and social perspective, this step can be read as an attempt to save the cognitive capital of future generations of workers, whose ability to focus deeply (deep work) is systematically degraded by the notifications and algorithms of entertainment platforms.

    The rules are to be clear, though not entirely inflexible. The main aim is to eliminate phones from breaks and lessons, where they have so far mainly served as entertainment. However, the ministry leaves an opening for the ‘teaching process’ – the final decision on using devices as educational tools is to rest with the teacher. This approach suggests that the government is not fighting the technology itself, but its uncontrolled presence, which disrupts the social fabric of the school and makes it difficult to build peer relationships.

  • Amazon: VAT avoidance trial in Italy

    Amazon: VAT avoidance trial in Italy

    The Italian justice system has just sent a clear signal to Silicon Valley: a settlement with the tax authorities no longer guarantees immunity in criminal cases. In a move unprecedented in Europe, Milan prosecutors have called for a trial against Amazon and four of its managers. The case concerns the alleged evasion of €1.2 billion in VAT between 2019 and 2021.

    The situation is all the more unusual in that the e-commerce giant voluntarily paid €527 million into the Italian coffers last December in the hope of closing the dispute. Previous practice in Italy was clear: payment meant the end of a corporation’s legal troubles. This time, however, the prosecutor’s office decided to break with that convention, questioning the effectiveness of soft settlements and raising questions about the stability of the regulatory environment for foreign investors.

    “Avoidance algorithm” under the magnifying glass

    At the heart of the dispute is not only the amount, but above all the company’s operational mechanism. Prosecutors claim that Amazon used sophisticated models and algorithms that allowed tens of thousands of sellers from outside the European Union – mainly from China – to offer goods without full tax transparency. Under Italian law, the platform, as an intermediary, shares responsibility for the unpaid VAT of its counterparties.

    For executives in the technology sector, this issue is of strategic importance. VAT is a harmonised tax across the Community, meaning that a possible conviction in Italy could become a ready-made template for prosecutors in other member states. The global marketplace business model, which has so far enjoyed relative freedom to settle cross-border transactions, faces the spectre of a systemic overhaul.

    Implications for investment

    Amazon has announced a robust defence, arguing that the unpredictability of the Italian legal system is hurting the country’s investment attractiveness. The company, however, is fighting on several fronts: parallel investigations by the European Public Prosecutor’s Office (EPPO) and local probes into customs fraud and employee data protection are ongoing.

  • Anthropic losing billions? The implications of the Pentagon’s decision for the AI market

    Anthropic losing billions? The implications of the Pentagon’s decision for the AI market

    The Pentagon’s decision to list the makers of the Claude model as a supply chain risk is a precedent shaking up the technology market. Anthropic is responding with lawsuits and warnings of gigantic financial losses in an attempt to salvage relationships with key customers.

    The Washington-Silicon Valley dispute is entering a decisive phase. Faced with an unprecedented decision by Secretary of Defence Pete Hegseth, who officially labelled Anthropic a “supply chain risk” and banned the Pentagon and its contractors from using the company’s products, the startup has opted for a firm legal response. On Wednesday, the company filed a motion with the US Court of Appeals for the District of Columbia to stay the decision pending full judicial review. This step complements a separate lawsuit filed earlier this week in a California federal court, in which Anthropic directly challenges the legitimacy of the military blacklisting.

    The conflict, which has been going on for a week, is fuelled by fundamental differences in approach to technological barriers. Anthropic, which has positioned itself as a leader in secure artificial intelligence from the outset, adamantly refuses to lift internal restrictions that block the use of their technology for mass surveillance of citizens and the construction of fully autonomous weapons systems. The government administration, on the other hand, takes the position that the military must have unrestricted access to deployed AI solutions.

    For a company valued at tens of billions of dollars, the current impasse is much more than an image problem. In its submission to the appeals court, Anthropic’s lawyers categorically state that the sanctions imposed by the Pentagon will cause the company ‘irreparable damage’. The business impact is already being felt. As the court documents show, the risky-entity status has caused major upheaval in the commercial market: more than a hundred corporate clients have already contacted the startup to assess their own risks arising from the collaboration.

    According to the company’s own estimates, the government’s actions could cost it from hundreds of millions to several billion dollars in lost revenue by 2026. The situation exposes the growing tension between the innovative technology sector and traditional national security. While Anthropic fights in the courts to protect its principles and its business, market rivals are eagerly stepping into the vacated space to take over lucrative government contracts. The outcome of this battle is sure to set a new standard for the entire AI market in its relations with government.