Tag: Law

  • Big Tech vs Australia. New law to force platforms to pay publishers

    Australia is once again becoming a global testing ground in the state-BigTech relationship. The government in Canberra has announced plans to introduce a ‘News Bargaining Incentive’ – a mechanism to replace the existing, ineffective 2021 regulations. The new regulation presents giants such as Meta, Alphabet and TikTok with a stark choice: either negotiate commercial deals with local publishers, or face a tax of 2.25% of their local revenues.

    According to the bill, which is expected to come into force in July 2025, the proceeds of the new levy will not go into the general state budget, but will be redirected directly to media organisations. The key criterion for the distribution of funds is to be the number of journalists employed, in order to promote real content creation and not just coverage. Prime Minister Anthony Albanese, despite warnings from the US administration about possible retaliatory tariffs, emphasises the sovereignty of Australian economic policy.

    Australia’s move is a shift away from a soft negotiation model towards hard fiscal policy. The previous system allowed platforms to avoid payment by letting contracts lapse or, in extreme cases, by blocking news content – something Meta already tested in 2021. The current proposal is much harder to neutralise at an operational level – a tax on revenue is a cost that cannot be avoided with a simple algorithm change.

    However, the geopolitical risks are worth noting. Donald Trump’s announcements of tariffs on countries that tax US technology companies suggest that local journalism protection could become the trigger for a wider trade conflict. For the technology sector, this represents a period of increased volatility and the need to review strategies for presence in markets with strong protectionist tendencies.

  • Is Claude Mythos from Anthropic threatening the banks? Urgent talks in London and the US

    As the Financial Times reports, UK regulators – including the Bank of England and the FCA – are urgently reviewing the potential risks posed by the latest AI model from Anthropic: the Claude Mythos Preview.

    The situation is unprecedented, as the model is not just another chatbot for generating marketing content. Claude Mythos is being developed as part of the enigmatic ‘Project Glasswing’ initiative. According to Anthropic’s official communications, this is a controlled environment in which the model serves a defensive purpose. The problem is that the line between defence and attack in cyberspace is thinner than ever.

    The manufacturer itself has admitted that Mythos has already identified thousands of critical vulnerabilities in operating systems and browsers. What is a breakthrough for security engineers is becoming a nightmare for guardians of the financial system. If the model can pinpoint vulnerabilities in global software with such ease, the critical IT infrastructure of major banks, insurers and stock exchanges could be just as exposed.

    Concern is not just confined to the City of London. Across the ocean, US Treasury Secretary Scott Bessent has already convened a meeting with Wall Street giants to assess the cyber risks of developing such sophisticated models. The reaction of regulators suggests that we are standing on the threshold of a new era of risk management, where the biggest threat to banks is no longer bad loans, but artificial intelligence capable of autonomously detecting errors in the code on which the global circulation of money is based.

    Over the next two weeks, representatives of the UK financial sector are to be instructed in detail by the National Cyber Security Centre (NCSC). The message is clear for business leaders: it is time for IT security audits to stop being a formality and become a real battleground against a model that learns faster than any hacker. Project Glasswing was supposed to bring transparency, but for now it has cast a long shadow over confidence in the digital stability of the financial sector.

  • New space law in Poland opens new chapter for VC and deep tech

    For years, the Polish space sector functioned in a kind of regulatory vacuum. Although domestic companies successfully supplied instruments for ESA and NASA missions, there was no national legal framework defining the rules of the game in orbit. The Act on Space Activities adopted by the Sejm on 13 February changes this state of affairs, transforming Poland from an ambitious observer into a fully-fledged player on the map of the global extraterrestrial economy.

    For investors and entrepreneurs, the most important signal coming from Warsaw is predictability. The law introduces a clear definition of space activities, encompassing the launch, operation and – crucially, given the growing problem of space debris – deorbiting of objects. This approach places the entire life cycle of a mission within a legal framework, which is essential for obtaining debt financing or commercial insurance.

    Unlike many European counterparts, the Polish regulation relies on flexibility. The abandonment of rigid capital thresholds in favour of assessing the risk of a specific mission is a nod to the NewSpace sector. Instead of blocking young spin-off companies from entering the market with prohibitive financial requirements, the President of the Polish Space Agency (POLSA) will assess the entity’s real operational capabilities.

    Equally important is the issue of civil liability. The law limits the maximum sum insured to €60 million, which is a reasonable amount on the scale of the global space industry. This protects smaller players from costs that could stifle innovation at the prototyping stage. In addition, limiting the liability of subcontractors to wilful misconduct only builds a safe ecosystem for a wide supply chain – from sensor manufacturers to software providers.

    The National Register of Space Objects (NROK) is becoming a key oversight tool. Registering an object under the Polish flag means bringing it under the jurisdiction of the Republic of Poland, which, from an international perspective, brings order to issues of ownership and state responsibility. At the same time, POLSA is emerging as a central administrator of satellite data, which is expected to stimulate the market for downstream applications – using data from orbit in agriculture, logistics or insurance.

    The Polish law is not just a formality, but a strategic foundation. By creating a stable legal environment, Warsaw is sending a clear message to VC funds: Polish deep tech is ready to scale beyond the atmosphere, and legal risk is no longer a barrier to entry.

  • CISO Hot Chair. Personal responsibility in the age of NIS2 – when digital risk becomes private

    Until a decade ago, the biggest professional nightmare for a Chief Information Security Officer (CISO) was losing his or her job as a result of a spectacular hacking attack. It was an acute but purely corporate consequence. Today, the landscape is being dramatically transformed. In the face of new EU regulations such as NIS2 or DORA, as well as precedents from Western markets, what is at stake is no longer just a position within the company structure. The issue of personal legal and financial liability is on the table.

    The transformation of the CISO’s role from a technical gatekeeper of infrastructure to a key business strategist is not only due to the natural evolution of the IT market. It is being forced by a confluence of geopolitical factors, the rapid development of artificial intelligence and the coming quantum revolution. However, it is the legislative layer that is making the security chief’s chair one of the ‘hottest’ seats in the modern enterprise.

    The end of the “technical advisor”

    For years, the role of the CISO was seen through the prism of hard skills: configuring firewalls, managing access or monitoring networks. Risk acceptance decisions were often made at lower levels, far from the boardroom. Today’s reality is brutally testing this model. The integration of artificial intelligence with cyber security systems means that the amount of data being processed exceeds human perception. Autonomous systems make decisions to repel attacks in real time, which raises fundamental questions about oversight.

    Who is liable when an AI algorithm makes a mistake resulting in medical data leakage or supply chain paralysis? In light of upcoming regulations, the answer is increasingly less likely to be ‘the software provider’ and more likely to point to the executives who released the system in question.

    The NIS2 Directive and the DORA Regulation are not just sets of technical guidelines. They are pieces of legislation that redefine the concept of ‘due diligence’. They shift the burden of responsibility from IT departments directly to governing bodies. In this arrangement, the CISO ceases to be just an engineer – he or she becomes the guardian of compliance and the guarantor that the company is operating within the boundaries of the law. Unfamiliarity with legislative nuances is becoming as dangerous to security managers as an unpatched zero-day software vulnerability.

    Scapegoat syndrome vs. real perpetration

    For years, there has been a debate in the cyber security community about the disparity between responsibility and decision-making authority. Many CISOs fear a scenario in which they become a convenient ‘buffer’ for the board of directors in a moment of crisis. These fears are not unfounded. With cyber attacks supported by foreign governments or advanced ransomware groups becoming a daily occurrence, it is impossible to completely eliminate risk. The goal becomes resilience – the ability to survive an attack and recover quickly.

    The problem arises when an organisation expects a ‘security guarantee’ from the CISO, while refusing a budget adequate to the risks. In the new legal regime, such asymmetry is dangerous for both parties. If the CISO is held criminally or civilly liable for failing to meet his or her obligations, he or she must have viable tools to block risky business projects.

    The modern labour market is repricing this relationship. Experienced security managers increasingly demand, during contract negotiations, that a clear decision-making framework be written into their contracts and that they be covered by D&O (Directors and Officers) insurance policies, traditionally reserved for board members. This signals a maturing of the industry – professionals are ready to take on the burden of responsibility, provided it goes hand in hand with a mandate to act.

    “Paper Trail” – Bureaucracy as a defence shield

    In the context of legal liability, the approach to documentation is also changing. What was once regarded as burdensome bureaucracy is now becoming a key element of the CISO’s defence strategy. The ‘trust but verify’ principle is giving way to an evidence-based approach.

    In the face of threats from supply chain attacks and advances in quantum computing, which may soon challenge current encryption standards, the CISO must demonstrate that he or she has taken all reasonable countermeasures available at a given stage of technology. Documenting the decision-making process, including formal risk-acceptance forms signed by the board, is no longer a formality. Such records prove that the security manager reliably informed decision-makers about the consequences of, for example, not migrating to quantum-resilient cryptography or not implementing a Zero Trust architecture when integrating OT/IT systems.

    This is because, in legal terms, it is not about being impregnable – there are no such fortresses in the digital world – but about proving that the highest standards of professionalism were adhered to and that any damage was not due to negligence.

    CISO at the table, not in the server room

    The evolution of threats is forcing a change in the positioning of the CISO in the organisational structure. Since cyber security touches on ethics (when implementing AI), geopolitics (when selecting cloud providers) and business continuity, the person responsible for it cannot report to the IT director, whose priority is system performance and availability. Conflicts of interest in such an arrangement are inevitable.

    The modern management model involves the CISO sitting directly at the decision-making table, as a partner to the CEO and the board. His or her job is to translate complex technical issues into the language of business and financial risk. The role is evolving into that of an ‘Architect of Trust’. In the digital economy, customer and partner trust is as hard a currency as share capital. A company that can transparently communicate its approach to data protection and AI ethics gains a competitive advantage.

    Professionalisation through responsibility

    The spectre of legal liability, while it may seem paralysing, has the potential to heal the business-security relationship in the long term. It will force the professionalisation of the CISO function, breaking it away from the stereotype of a ‘brake’ on innovation.

    In the coming years, the market will be looking for hybrid leaders – combining deep technological knowledge with legal and ethical insight. The ability to navigate between the requirements of NIS2, the challenges of the post-quantum era and the pressures on the bottom line will become the definition of competence in this position. For companies, this means that not only cyber security budgets need to be revised, but more importantly – the responsibility structure. This is because security has ceased to be an IT problem and has become a parameter that determines a company’s existence in the regulated market.

  • What’s next for AI? Ministry of Digitalisation announces an updated version of the Policy for the Development of Artificial Intelligence in Poland

    The document provides, among other things, for the implementation of AI in public administration (AI HUB Poland), the creation of dedicated ‘Sector Deployment Maps’, as well as support for small and medium-sized enterprises. The tasks described in the AI Policy are ultimately intended to contribute to the realisation of its stated goals – Poland as the heart of the AI continent, thanks to deployments in key sectors of the economy and an efficient state using AI solutions.

    AI HUB Poland

    AI Policy envisages the launch of the AI HUB Poland portal, which is intended to be a tool to support the effective management, development and implementation of AI technologies in the public sector. The aim of the platform is to create an integrated environment that will improve the use of artificial intelligence in public services and in key areas of state functioning.

    Among other things, the project envisages the rapid adoption of AI-based innovations, the upskilling of administrative staff, the harmonisation of data to build artificial intelligence models and the creation of national large language models.

    AI HUB Poland is a joint initiative of experts from the Central Informatics Centre, NASK and partners to support the country’s digital development and strengthen Poland’s international competitiveness. The platform’s activities include the launch of a central system for managing AI projects, building a repository of open automation solutions, sharing best practices and implementation support for smaller administrative units.

    Sector Deployment Maps

    The Ministry of Digitalisation notes that artificial intelligence is becoming one of the most important drivers of economic transformation. Its use can significantly accelerate the development of innovation, increase the competitiveness of Polish companies and improve the quality of life of society. In order to fully exploit this potential, it is necessary to focus efforts on those projects and sectors that can bring Poland the greatest economic and social benefits.

    The Polish Economic Institute identifies three approaches that help identify priority areas for AI development: analysis of key industries in the economy, assessment of the so-called ‘technology stack’ and identification of ‘grand challenges’ – complex problems where AI can play a particularly important role.

    Based on these analyses and the recommendations of the AI Working Group, the sectors with the greatest potential for AI deployments have been identified. These are: energy, e-commerce, dual-use products, cyber security, BioMedTech, financial services and transport and logistics. It is in these areas that AI can generate the most value – from optimising energy consumption, to faster drug discovery, to autonomous mobility and advanced cyber defence systems.

    The directions set are also in line with the European Commission’s focus on the development of secure, interoperable and high-quality AI systems in strategic sectors across the EU.

    In order to successfully implement artificial intelligence, cross-sector collaboration, a robust data infrastructure, competent staff and consistent regulation are needed. Therefore, dedicated Sector Deployment Maps will be created for each of the key sectors. These will include an analysis of the industry’s needs, key business areas for AI applications, data sharing rules and support mechanisms – so that Polish companies can fully exploit the potential of the breakthrough technology.

    Support for small and medium-sized enterprises

    Small and medium-sized enterprises play a key role in the development of AI in Europe. Thanks to their flexibility, ability to experiment quickly and innovative approach, it is SMEs – and especially startups – that are often the first to implement new technologies and bring breakthroughs to market. At the same time, they face barriers such as limited resources, more difficult access to data and the need to meet ethical and regulatory requirements.

    In order to accelerate the development of AI in this sector, support including funding, computing infrastructure and the ability to test solutions in secure environments is essential. Incubators, accelerators and knowledge-sharing platforms play an important role in helping companies commercialise innovations faster and build technological competence.

    In Poland, the infrastructure being developed – including AI Factories – is to allow entrepreneurs to benefit from technology and regulatory advice, computing power and testing environments. This is complemented by the PFR’s ‘Digital Crate for Companies’ programme, which helps to confirm technology readiness and gain support for AI Act compliance.

    One of the most important elements of the support policy for SMEs are the regulatory AI sandboxes, which – according to EU regulations – must be completely free of charge for them. They allow solutions to be tested under controlled conditions and reduce the risks associated with market entry. Specialised sectoral sandboxes will be established in Poland and their integration with AI Factories will provide access to data and infrastructure.

    In order for Polish companies to realise the full potential of AI, it will be crucial to raise awareness of the benefits of its implementation, provide practical advice and launch dedicated programmes and competitions to support AI projects. In the long term, this will translate into an increase in the competitiveness of SMEs, the development of innovation and the strengthening of Poland’s position in the European technological ecosystem.

    Summary

    The updated AI Policy responds to the challenges posed by the dynamic development of AI technologies. The document sets out directions for action, integrating the needs of government, business, science and society. The Ministry of Digitalisation announces further work on the implementation of the AI Policy.

  • The end of the wild west in the cloud. European Union takes 19 IT giants under the microscope

    European Union regulators made an unprecedented move on Tuesday, naming 19 companies – including Amazon Web Services, Google Cloud and Microsoft – as critical service providers for European banking. The decision fundamentally changes the balance of power between Big Tech and financial supervision, moving the relationship from a partnership to a strictly regulated level.

    The move is a direct consequence of the Digital Operational Resilience Act (DORA) regulation coming into force. The new legislation gives European supervisory authorities (EBA, EIOPA, ESMA) the power to directly control technology companies, which until now have only been accountable to their business customers. The regulators make no secret of the fact that their main aim is to mitigate systemic risk. In this era of widespread digitalisation, a failure at one of the leading cloud providers could paralyse a significant part of the European banking system, triggering a domino effect with difficult-to-quantify consequences.

    The list of players targeted by the new surveillance regime is diverse, showing how deeply technology has penetrated finance. In addition to the cloud ‘big three’ (AWS, Google, Microsoft), IBM, market data providers such as Bloomberg and the London Stock Exchange Group (LSEG), as well as telecoms operators, including Orange, and consultancies such as Tata Consultancy Services have also been targeted. Each of these entities will now have to prove that they have an adequate risk management framework in place and that their infrastructure is resilient to cyber attacks and technical failures.

    The industry reaction to the announcement was measured and diplomatic, suggesting that the tech giants had been preparing for this scenario for a long time. Representatives from Microsoft and Google Cloud immediately declared their full willingness to cooperate, emphasising their commitment to cyber security. The LSEG, in turn, openly welcomed the new designation, seeing it as a confirmation of its key role in the ecosystem. Silence has so far been maintained by Bloomberg and Orange, which may indicate ongoing internal analyses of the new regulatory obligations.

    Brussels’ decision is part of a wider global trend of tightening control over critical infrastructure. The European Central Bank explicitly lists technological disruption alongside geopolitical tensions as the main threats to the sector. Similar steps are being taken by the UK, although the legislative process there is lagging behind that of the EU – London does not plan to designate its critical entities until next year. Europe is thus once again becoming a testing ground for new regulatory standards in the technology world.

  • OpenAI refuses to hand over 20 million ChatGPT logs. Legal dispute with The New York Times continues

    The legal dispute between OpenAI and The New York Times is escalating, shifting the focus from general accusations of copyright infringement to the thorny ground of user privacy. On Wednesday, lawyers for the creators of ChatGPT asked a federal judge in New York to block an injunction that obliges the company to disclose more than 20 million anonymised ChatGPT chat records.

    For OpenAI, this is an attempt to protect the confidential information of millions of users. The company argues that 99.99% of these transcripts are irrelevant to the case, and that the release of the logs, even after de-identification, constitutes a “speculative fishing expedition” and an invasion of privacy. Dane Stuckey, director of information security at OpenAI, described the potential disclosure as a forced handover of “tens of millions of very personal conversations”.

    For The New York Times, however, the chat logs are key evidence in the case. The publisher, which accuses OpenAI of illegally using millions of its articles to train models, needs this data for two reasons. First, to prove that ChatGPT actually replicates copyrighted content in response to queries from ordinary users.

    Secondly, the logs are to be used to refute OpenAI’s central defence thesis. The company claims that the NYT deliberately ‘hacked’ the chatbot, using specific, misleading queries (prompts) to forcibly extract evidence of a breach from the model. The logs are meant to show whether such results are the norm or just the result of manipulation.

    The two sides disagree over what adequate protection means. A spokesperson for the NYT called OpenAI’s position “deliberately misleading”, insisting that “no user privacy is at risk”. He pointed out that the court only ordered the delivery of a sample of chats, anonymised by OpenAI itself and covered by a protective order. Judge Ona Wang, in granting the original injunction, also found that “exhaustive de-identification” would be sufficient protection.

  • Copyright vs AI: GEMA wins lawsuit against OpenAI. It’s about training data

    Tuesday’s decision by a court in Munich is a significant signal for developers of generative artificial intelligence and a potential milestone in the dispute over the ‘fair use’ of training data. In a closely watched copyright case, the court sided with GEMA, the German collecting society, against OpenAI. Judge Elke Schwager ruled that the US company could not use song lyrics without a licence and ordered it to pay damages for past infringements.

    GEMA, representing nearly 100,000 creators (including musician Herbert Groenemeyer), argued that ChatGPT was reproducing protected song lyrics without authorisation. Crucially, the organisation claimed that these works had been used without permission to train the model. OpenAI retorted during the trial that these arguments demonstrate a fundamental misunderstanding of ChatGPT’s operating principles and architecture.

    Although the verdict can be appealed, the case is seen as a key precedent for AI regulation in Europe, well beyond music. GEMA is openly pursuing a licensing framework that would strike at the current operating model of many AI companies: technology companies would have to pay for the use of protected content both at the model-training stage and in the output generated by AI. Both parties said they would issue broader statements later on Tuesday.

  • They sold LinkedIn data for $15,000. The backstory of the lawsuit against ProAPIs

    LinkedIn has filed a lawsuit in federal court in California against ProAPIs and its chief executive, Rahmat Alam, accusing the company of running a sophisticated and massive data scraping operation of user profiles. The case sheds light on the ongoing battle between social media platforms and entities that commercialise unauthorised access to information.

    According to LinkedIn, ProAPIs created an automated system that generated thousands of fake accounts on the site every day. The aim was to bypass security and mass-copy data that was only available to logged-in members – including profile information, company data, posts or reactions. Although the platform claims to be able to detect and block such accounts within a few hours, this short time was enough for the bots to collect huge amounts of data.

    The information obtained in this way was then to be processed and sold to third parties. LinkedIn reports that ProAPIs offered access to its services for sums as high as $15,000 per month, advertising its collections as “up-to-date and comprehensive”. At the same time, the company allegedly made unlawful use of LinkedIn’s logo and trademarks, which may have suggested an official partnership.

    In the lawsuit, the platform stresses that such actions are a blatant violation of its terms of use, which prohibit the creation of false identities and the automated reading of data. LinkedIn argues that this practice not only violates its security architecture but, above all, undermines users’ trust and exposes them to real risks – from spam and phishing attempts to the resale of sensitive information.

    LinkedIn is seeking damages for reputational and economic losses, in a bid to send a clear message to the data scraping industry. The case is not an isolated one and is part of a wider trend of digital platforms increasingly aggressively defending access to their ecosystems. It demonstrates how big and lucrative a business automated data scraping has become, and how difficult it is to protect against such attacks effectively.

  • Pact with the devil or deal of the century? Hollywood and OpenAI sit down at the table on Sora

    OpenAI is getting ahead of potential legal disputes surrounding its Sora video generation tool by offering film studios and copyright owners future control over their intellectual property and a share of the revenue. This strategic move aims to win over Hollywood before the technology becomes widely available and has time to trigger the legal battles that plague other generative AI models.

    The company plans to give content owners detailed options to manage how their characters or works are used by the Sora engine. A key feature is expected to be the ability to completely block the use of a particular intellectual property. The initiative is an attempt to build a bridge with the creative industry, which has been watching the advances in video generation with growing concern. Behind the scenes, it is said that major players such as Disney are approaching similar technologies with great caution, fearing uncontrolled use of their iconic characters.

    A key element of the proposal is a revenue-sharing model with rightsholders who choose to make their assets available in the Sora ecosystem. Sam Altman, CEO of OpenAI, acknowledged that developing a fair and effective monetisation system will take experimentation and time. Testing of different approaches is expected to begin soon in a closed group of Sora users, with a coherent model eventually to be implemented across the company’s broader suite of products.

    Although Sora is not yet publicly available – it is currently being tested by a small group of creators and filmmakers – its announcements have caused a stir in the industry. OpenAI’s proactive strategy to settle copyright issues before the product’s wide debut could be a significant competitive advantage against similar tools being developed by Google or Meta. Securing partnerships with large content providers is seen as key to commercial success and legitimising the AI-generated video market.

  • Privacy Tech market: how RODO and AI have created a new billion-dollar industry?

    We live in an age of fundamental paradox. On the one hand, artificial intelligence, driven by large language models (LLMs), is becoming the lifeblood of modern business, promising unprecedented innovation.

    On the other hand, its insatiable appetite for data clashes head-on with the global push for privacy. This conflict is no longer just a matter of ethics, but a hard regulatory reality that is creating and transforming entire technology markets before our eyes.

    Public sentiment has reached critical mass. Research shows that as much as 86% of the US population expresses growing concern about how their data is being processed, and more than half believe AI will make it harder to protect personal information.

    In response, governments around the world are building a legislative wall. What started with the groundbreaking RODO (GDPR) in Europe has quickly spread globally, creating a dense web of legislation, from the CCPA in California to the LGPD in Brazil.

    Today, more than 137 countries already have national data protection laws, covering almost 80% of the world’s population.

    The stakes in this game are astronomical. Regulators do not hesitate to use their most powerful weapon: financial penalties. The record €1.2 billion fine imposed on Meta for data transfers between the EU and the US or the €746 million fine for Amazon are powerful signals to the market.

    Any such decision is a direct growth stimulus for the ‘Privacy Tech’ sector – a market that has not grown organically out of consumer needs, but has been almost entirely created by legislative action.

    The law does not just regulate technology – it creates it. In this new landscape, a key conclusion emerges: the tool that created the problem – artificial intelligence – is simultaneously becoming the key to solving it.

    We are entering the era of ‘Privacy 2.0’, in which compliance becomes intelligent, proactive and, ultimately, autonomous.

    From manual work to intelligent automation

    Prior to the era of RODO, privacy management in many organisations was based on manual data mapping, endless spreadsheets and tedious processes for responding to user requests (DSARs).

    The cost of this inefficiency was huge – it was estimated that manually handling a single DSAR request cost an average of more than $1,500. In a world where companies process petabytes of data, such a model was untenable.

    Artificial intelligence (AI) has become the engine that is driving a revolution in this area, transforming privacy management platforms into intelligent command centres. Modern systems are using AI to automate key, once manual processes.

    AI algorithms scan a company’s entire infrastructure, from local servers to the cloud, for personal data, understanding its context and creating a dynamic map in real time. AI models then analyse data flows and access permissions to proactively identify and assess risks, alerting to potential privacy by design violations.

    AI also automates the entire user consent lifecycle and DSAR request fulfilment, reducing processes from weeks to hours.
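
    As an illustration of the direction of travel, the sketch below (Python; the record layout, regex patterns and helper names are hypothetical assumptions, not any vendor’s API) shows the crudest possible version of two of these tasks: a rule-based scan that maps which fields of a record contain personal data, and a naive handler that gathers a data subject’s records for a DSAR access request. Production platforms use ML/NLP classifiers over live data stores, not regexes over dictionaries.

    ```python
    import re

    # Hypothetical, demo-only patterns: real Privacy Tech platforms rely on
    # ML/NLP classifiers and connectors to live data stores, not bare regexes.
    PII_PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "phone": re.compile(r"\+?\d[\d \-]{7,}\d"),
    }

    def scan_record(record: dict) -> dict:
        """Map each field of a record to the PII types detected in it."""
        findings = {}
        for field, value in record.items():
            hits = [name for name, rx in PII_PATTERNS.items()
                    if isinstance(value, str) and rx.search(value)]
            if hits:
                findings[field] = hits
        return findings

    def handle_dsar(records: list[dict], subject_email: str) -> list[dict]:
        """Collect every record referencing the data subject (a DSAR access request)."""
        return [r for r in records if subject_email in str(list(r.values()))]

    crm = [{"name": "Jan Kowalski", "contact": "jan@example.com"},
           {"name": "Anna Nowak", "contact": "+48 600 100 200"}]
    print(scan_record(crm[0]))                  # {'contact': ['email']}
    print(handle_dsar(crm, "jan@example.com"))  # the subject's records
    ```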

    The financial impact of this transformation is measurable. Organisations that make extensive use of AI and automation in the security space save an average of $1.76 million in data breach costs compared to companies that do not.

    This is hard evidence of the return on investment in smart privacy management platforms, which turn the cost of compliance into operational savings.

    The trust frontier: The world of privacy-enhancing technologies (PETs)

    Automation, however, is only the beginning. The real revolution is taking place at the border of cryptography and advanced mathematics, in the world of Privacy-Enhancing Technologies (PETs).

    It is a set of tools aiming at the ‘holy grail’ of analytics: the ability to extract valuable information from sensitive datasets without revealing the data itself.

    One of the key technologies is homomorphic encryption (HE). It allows calculations to be performed on encrypted data, as if the analyst were performing operations on a closed box without seeing its contents.

    Only the owner of the data, who holds the key, can open the box and see the result. The technology, developed by giants such as Microsoft and IBM, is used in medicine to analyse patient data pooled from multiple hospitals and in finance for collaborative fraud detection.
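
    To make the ‘closed box’ intuition concrete, here is a toy sketch in Python of the Paillier cryptosystem, one of the classic additively homomorphic schemes. The primes and values are demo-only assumptions; production systems use keys thousands of bits long or lattice-based schemes such as those in Microsoft SEAL. The point of the sketch: two ciphertexts are multiplied without decryption, yet the key holder recovers the sum of the plaintexts.

    ```python
    import math, random

    # Toy Paillier cryptosystem - demo-sized primes only.
    p, q = 293, 433
    n, n2 = p * q, (p * q) ** 2
    g = n + 1
    lam = math.lcm(p - 1, q - 1)
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)     # inverse of L(g^lam mod n^2)

    def encrypt(m: int) -> int:
        r = random.randrange(2, n)                  # fresh randomness per ciphertext
        while math.gcd(r, n) != 1:
            r = random.randrange(2, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c: int) -> int:
        return ((pow(c, lam, n2) - 1) // n * mu) % n

    a, b = encrypt(42), encrypt(58)
    c_sum = (a * b) % n2         # the 'analyst' works only on the closed box:
    print(decrypt(c_sum))        # multiplying ciphertexts adds plaintexts -> 100
    ```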

    Another groundbreaking tool is the zero-knowledge proof (ZKP). This is a cryptographic protocol that allows you to prove that you know a certain piece of information without revealing the information itself.

    It’s like being able to prove you are over 21 without showing an ID card with your date of birth and address. ZKP is revolutionising decentralised identity and private financial transactions.
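
    To give the intuition some substance, below is a toy version of the classic Schnorr protocol in Python, made non-interactive with the Fiat–Shamir heuristic (hashing the commitment to derive the challenge). The group parameters are deliberately tiny, illustrative assumptions; real systems use elliptic-curve groups and vetted libraries. The prover convinces the verifier that it knows the secret exponent x behind y = g^x mod p without ever transmitting x.

    ```python
    import hashlib, random

    # Toy Schnorr proof of knowledge of a discrete log - demo parameters only.
    p = 2039                 # safe prime: p = 2q + 1
    q = 1019                 # prime order of the subgroup generated by g
    g = 2                    # generator of that subgroup

    x = random.randrange(1, q)           # prover's secret
    y = pow(g, x, p)                     # public value

    # Prover: commitment, hash-derived challenge (Fiat-Shamir), response.
    k = random.randrange(1, q)
    t = pow(g, k, p)
    c = int.from_bytes(hashlib.sha256(f"{g},{y},{t}".encode()).digest(), "big") % q
    s = (k + c * x) % q

    # Verifier: checks g^s == t * y^c (mod p) and learns nothing about x.
    assert pow(g, s, p) == (t * pow(y, c, p)) % p
    print("claim verified - secret never revealed")
    ```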

    The problem of analysing data on distributed, private sets is solved by differential privacy and federated learning. Differential privacy involves adding precisely calculated ‘noise’ to a dataset that prevents the identification of a single individual, while preserving overall statistical trends.
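
    A minimal sketch of the first technique, under illustrative assumptions (a simple counting query, invented data, stdlib Python): a count has sensitivity 1, so Laplace noise with scale 1/ε yields ε-differential privacy.

    ```python
    import random

    # Minimal Laplace mechanism: a counting query has sensitivity 1, so
    # noise with scale 1/epsilon yields epsilon-differential privacy.
    def laplace(scale: float) -> float:
        # the difference of two Exp(1) draws is Laplace(0, 1)-distributed
        return scale * (random.expovariate(1.0) - random.expovariate(1.0))

    def private_count(flags: list[bool], epsilon: float) -> float:
        return sum(flags) + laplace(1.0 / epsilon)   # sensitivity of a count = 1

    opted_in = [random.random() < 0.3 for _ in range(10_000)]
    print(sum(opted_in))                  # exact count - never released
    print(private_count(opted_in, 0.5))   # noisy count - safe to publish
    ```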

    In contrast, federated learning is an approach in which AI models are trained directly on end devices (e.g. smartphones) and only aggregated, anonymised model ‘enhancements’ are sent to a central server, rather than raw user data.

    Giants such as Apple and Google are already using these techniques.
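
    The federated side admits an equally small sketch. The toy federated averaging (FedAvg) below uses Python with numpy; the data, model and device sizes are invented for illustration. Each ‘device’ fits a linear model locally, and only the weight vectors – never the raw data – are sent up and averaged, weighted by sample count.

    ```python
    import numpy as np

    # Toy FedAvg: local training on private data, server-side weight averaging.
    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])

    def local_update(n_samples: int) -> np.ndarray:
        X = rng.normal(size=(n_samples, 2))              # the device's private data
        y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
        w, *_ = np.linalg.lstsq(X, y, rcond=None)        # local training step
        return w                                         # only weights leave the device

    sizes = (50, 80, 120)
    device_weights = [local_update(n) for n in sizes]
    global_w = np.average(device_weights, axis=0, weights=sizes)  # FedAvg aggregation
    print(global_w)                                      # close to [ 2. -1. ]
    ```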

    The deployment of these technologies signals a fundamental shift. Data is no longer an asset whose value lies in exclusive ownership. It is becoming a resource that can be securely shared and collaborated on, unlocking the enormous economic value that was previously trapped in corporate silos. Privacy becomes not a barrier, but a technology that enables innovation.

    The endgame: the dawn of autonomous privacy

    The evolution so far sets out a clear trajectory, the logical culmination of which is a vision of the future in which data protection is managed by autonomous AI systems. A distinction must be made here between automation and autonomy.

    Automation performs defined tasks. Autonomy is the ability of a system to learn, adapt and make decisions on its own to achieve a goal.

    Such a system of the future will be based on the convergence of several technologies. The foundation is autonomous databases that use AI to become self-governing, self-securing and self-repairing.

    This is the basis for a new generation of agent-based AI – systems that can autonomously interact with databases and perform complex tasks to achieve a goal, such as ‘ensure continuous compliance with global regulations’.

    The nervous system is an intelligent data pipeline that filters and redacts personal data in real time, before it reaches analysis.

    The combination of these elements paints a picture of a future in which an autonomous system will continuously monitor the global legal landscape, automatically translate legal language into enforceable policies and reconfigure data flows across a company’s infrastructure in real time.

    It will also autonomously detect and neutralise potential breaches before they can escalate.

    This technological trajectory leads to the inevitable ‘commoditisation of compliance’, where core tasks will become a universally available service. However, this does not mean the end of the privacy professional’s profession. On the contrary, its role will be transformed – from operational ‘firefighting’ to strategic oversight and ethics management of autonomous systems.

    In this new reality, the key competences will no longer be just interpreting the law, but auditing algorithms and defining operational boundaries for AI agents.

    Privacy 2.0 is not an end in itself. It is the operating system for the future of the digital economy.

  • Nvidia under the magnifying glass of China. Company accused of monopoly

    Beijing has launched an antitrust investigation against Nvidia, marking another instalment in its escalating technology conflict with Washington. The timing of the announcement – coinciding with trade talks – suggests that the chip dispute is becoming a key pressure point in the relationship between the two powers.

    China’s State Administration for Market Regulation (SAMR) has announced that it has launched a preliminary investigation into Nvidia’s business practices. While the official announcement was very succinct, the move was seen as a strategic response to US export restrictions that have cut off Chinese companies from cutting-edge AI processors.

    Representatives of the US administration have described this step as being taken at the “wrong time”, which only underlines its importance in the ongoing negotiations.

    The actions of the Chinese regulator are no coincidence. They are part of a broader strategy in which Beijing is responding to US tariffs and the placement of more Chinese companies on trade blacklists.

    Similar antitrust investigations have affected other US giants in the past, signalling a willingness to use regulatory tools as a form of retaliation.

    Analysts point out that the formal pretext for the proceedings is most likely Nvidia’s acquisition of Israeli company Mellanox Technologies five years ago. China approved the deal on the condition that the GPU manufacturer would continue to supply advanced technology to the Chinese market.

    Currently, due to US restrictions, Nvidia cannot sell its most powerful integrated solutions (combining GPUs with Mellanox networking technology), which Beijing may interpret as a violation of previous commitments.

    The situation puts Nvidia in an extremely difficult position. On the one hand, China accounted for 13% of the company’s total sales last year, and demand for AI chips from local tech giants remains huge.

    On the other hand, the company has to manoeuvre between increasingly stringent US export regulations and growing pressure from Beijing. Attempts to circumvent the restrictions, such as the creation of a special H20 chip for the Chinese market, face further obstacles – from security concerns on the Chinese side to unclear payment rules imposed by Washington.

    Potential consequences of the investigation could include financial penalties of up to 10% of Nvidia’s annual revenue. However, analysts point out that more severe than the potential penalties are China’s long-term strategic goals of pursuing technological self-sufficiency and promoting domestic alternatives.

    The antitrust investigation is therefore first and foremost a warning shot and a powerful tool of pressure, demonstrating that any further hardening of the US line will have direct, painful consequences for American corporations.

  • Artificial intelligence systems and copyright

    Artificial intelligence (AI) systems can be divided into two basic groups, which differ both technologically and legally:

    • traditional artificial intelligence (AI),
    • generative artificial intelligence (Generative AI).

    The above breakdown is important for a proper understanding of the obligations involved, the potential risks and the regulations that apply to them, especially in terms of copyright.

    Traditional AI systems operate on the basis of clearly defined algorithms and rules that process data, detect patterns and make decisions. In this case, copyright is primarily concerned with the input data and the effects of human work on the system, while the algorithms and models themselves are not protected as works. Usually, serious copyright issues do not arise here because AI does not generate new autonomous works.

    Generative AI is a different matter. It creates entirely new content – text, images, sounds or code – based on patterns learned from huge data sets, which often contain copyrighted material. This raises a number of legal issues, such as:

    • Use of protected material to train AI models: In the context of the development of generative AI, a key issue is how AI models are trained on huge datasets, which often contain copyrighted material such as texts, images, music or source code. Under the EU’s Artificial Intelligence Regulation (AI Act), which aims to regulate the use of AI in the European Union, providers of generative models are required to respect copyright owners’ objections to the use of their works in the training process. This procedure is called ‘opt-out’ and allows creators or rights owners to object to the use of their content for training AI systems.
    • Copyright protection of generated works: AI creations are usually not considered copyrighted works because they are not created by humans, which means that they often end up in the public domain. At the same time, however, there is a major problem with the fact that AI uses huge datasets when generating content, which often contain copyrighted material. If the generated work contains elements that approximate or even copy protected works, third-party copyright infringement may occur. Such infringements may incur legal liability on the part of users or AI providers, even if they were not fully aware of the use of protected material.
    • Lack of a clear limit of protection: In the case of more complex creations that result from multiple human-AI interactions (e.g. adapting prompts), the question of granting legal protection is unclear and requires further regulation. In addition, current regulations, such as the EU’s 2019 Digital Single Market Copyright Directive (CDSM) and the 2024 AI Act Regulation, introduce some mechanisms to protect creators’ rights, but leave a lot of ambiguity in interpretation. These provisions do not make it clear how to treat copyright in the context of AI-generated works, which causes difficulties for both creators and AI technology developers.

    The division of AI into traditional and generative systems carries important legal implications. Traditional AI is primarily subject to standard regulations on data processing and liability for decisions made. Generative AI, on the other hand, which creates new content, poses additional legal challenges, especially in the areas of copyright, confidential data protection and contractual compliance. In addition, it is subject to increasingly detailed and extensive regulation.

    Copyright law vis-à-vis artificial intelligence faces significant challenges. Currently, works created autonomously by AI are not protected, and the use of protected materials to train models requires consideration of the owners’ rights. The future of copyright regulation will depend on further legislative work and case law, which will need to define more precisely the rules for the use of AI in creation and the protection of the rights of creators and users.

    As such, both developers and users of these systems should carefully read the applicable regulations and consciously assess the risks associated with their use.


    Author: r.pr. Damian Lipiński, GFP_Legal | Grzelczak Fogel i Partnerzy | Wrocław Law Firm

  • Microsoft’s shocking confession. One US law leaves them helpless

    The issue of data sovereignty in the European Union is becoming increasingly important, and Microsoft’s recent statement to the French Senate only heightens concerns. The company admitted that it could not fully guarantee that European customer data would not be shared with US authorities. This admission, while candid, casts a shadow over the concept of the ‘sovereign cloud’ promoted by major US providers.

    The problem lies in a conflict of jurisdictions. On the one hand, US tech giants such as Microsoft, AWS and Google are investing in European data centres, promising that their EU customers’ information will remain on the continent. Initiatives such as the ‘Microsoft Sovereign Cloud’ are supposed to ensure compliance with local regulations and protect against unauthorised access. On the other hand, these same companies are subject to the US CLOUD Act, which obliges them to make data available to US law enforcement agencies upon request, regardless of where it is stored.

    Microsoft representatives emphasise that the company is not defenceless and can challenge unfounded demands. So far, the company says, there has been no case of data from European servers being handed over under the CLOUD Act. But for many privacy experts and European decision-makers, this is not enough. The risk, even if theoretical, is unacceptable, especially in the context of the processing of sensitive data by public institutions.

    The European Union, aware of its dependence – around 72% of the cloud market in Europe is in the hands of three US companies – is looking for alternatives. Initiatives such as Gaia-X aim to create a federated European data infrastructure that would provide greater control and sovereignty. However, this is a lengthy and expensive process, and it is extremely challenging to match the scale and technological sophistication of the US leaders.

    As a result, European companies and institutions are faced with a difficult choice. They can continue to work with US providers, accepting the legal risks, or seek local alternatives that may not yet offer such advanced and comprehensive services. Microsoft’s acknowledgement that it ‘cannot guarantee’ full sovereignty is an important voice in this debate that will certainly accelerate Europe’s drive towards digital independence.

  • Not just NIS2: the new cyber security certification regulations

    At the beginning of May 2025, a government bill on a national cyber security certification system was submitted to the Sejm. This is not only a reaction to European regulations (specifically – EU Regulation 2019/881), but also an opportunity to sort out a market that today tends to be opaque and based on trust in ‘logos’.

    Why do we need a cyber security certification scheme?

    To date, there has been no legislation in Poland that regulates cyber security certification. Yes, the market offers the possibility to obtain various types of cyber certificates, but these are private certificates, where each owner of the “certification programme” sets its own rules. Without questioning the sense and merit of such certificates, it must be remembered that the lack of uniform certification rules/criteria may – at least in some cases – raise questions as to how much reliance can be placed on such certificates. It is therefore welcome that there will soon be statutory provisions in this area.

    What will change in practice?

    The entry into force of the Cyber Security Certification Regulations will not mean that private certificates can no longer be issued. They will still remain and interested persons/entities will be able to continue issuing or applying for them. In addition to private certificates, however, there will be the additional possibility of certification by accredited bodies within the legal framework established by the state. Importantly, the new provisions do not impose any additional obligations on entities not interested in participating in the certification scheme.

    What will the certification levels be?

    Certificates can be granted under European certification schemes (we currently have the EUCC, i.e. the European cybersecurity certification scheme based on Common Criteria, which can be applied to ICT products such as hardware or software; further schemes are under development for 5G and cloud services) and – in addition – under national certification schemes, which will be created by regulations issued by the minister responsible for IT. At the European level, a three-tier classification will apply (assurance levels: basic, substantial and high), while the national classification is to be single-tier.

    European certification programmes will focus on ICT products, services and processes, and certificates issued under them will be automatically recognised throughout the European Union.

    National certification will be possible not only for ICT products, services and processes, but also for the entity’s cyber-security management system (as a whole) or the personal qualifications of individuals.

    What will the certification system look like?

    The bill stipulates that the certification scheme will involve:

    • Minister responsible for IT (responsible, inter alia, for the creation of national schemes, supervision and control),
    • Polskie Centrum Akredytacji (responsible for granting accreditation to conformity assessment bodies),
    • assessment bodies, i.e. certification bodies, including private companies,
    • entrepreneurs and individuals who wish to undergo certification.

    When will the certification regulations come into force?

    Although the draft law on the national cyber-security certification system was ahead of the planned amendment to the law on the national cyber-security system (implementing the NIS2 directive) in the legislative race, we will have to wait a while longer for its enactment. It has now been referred to parliamentary committees and must then go through the entire legislative procedure in the Sejm and the Senate. Realistically, it should appear at the turn of Q2/Q3 2025.


    Author: r.pr. Piotr Grzelczak, GFP_Legal Law Firm (Grzelczak Fogel and Partners sp.p.)