In this week's roundup, I'm gathering some of the most important and interesting data, technology & digital law news from the past week.
Legal and regulatory developments
A) New laws, regulatory proposals, enforcement actions
The Higher Regional Court of Cologne (OLG Köln) rejected an urgent request submitted by the North Rhine-Westphalia Consumer Association (Verbraucherzentrale NRW) to block Meta from using publicly available Facebook and Instagram profile data for training AI systems. (More insights, in German, are available on Taylor Wessing's blog here.) [The Hamburg Data Protection Authority also decided not to issue a provisional injunction against Meta in connection with Meta's use of publicly available Facebook and Instagram profile data for training AI systems.]
Why does this matter? Meta had announced its intention to use data from public user profiles under its legitimate interest (Art. 6(1)(f) of the GDPR) for developing its AI models. The consumer group argued that such use would violate the GDPR and the Digital Markets Act (DMA). In its decision published on May 23, the court found no preliminary evidence of a breach.
Comment: As also highlighted by the Hamburg DPA in its decision, a consistent European stance on Meta's AI training practices is needed from data protection authorities. A practical approach to the use of legitimate interest in the context of AI is also important, and decisions on Meta's practices will have a significant impact on the framework for the use of legitimate interest as a legal basis for AI training.
You can find further background on this topic, including the Irish DPC's statement on Meta's AI training practice and NOYB's “cease and desist letter”, in last week's newsletter:
The CMS Data Protection Group launched the sixth edition of the GDPR Enforcement Tracker Report (“ET Report”).
Why does this matter? “[…] As of the editorial deadline of 1 March 2025, the Enforcement Tracker covered 2,560 fines (2,245 if only fines with complete information on the amount, date and controller are counted). […] Total fines exceeded the five-billion mark for the first time and amount to around EUR 5.65 billion (+EUR 1.17 billion in comparison to the 2024 ET Report). […]”
Comment: If you want to see trends in GDPR enforcement, the CMS GDPR Enforcement Tracker Report is a good place to start, as it provides a comparable basis for analysing GDPR enforcement practice and makes it easy to monitor enforcement in each EU Member State.
The European Commission has opened formal proceedings against Pornhub, Stripchat, XNXX, and XVideos for suspected breaches of the Digital Services Act. In parallel, Member States are taking coordinated action against smaller pornographic platforms.
Why does this matter? “The Commission preliminarily found that the platforms do not comply with putting in place: (i) appropriate and proportionate measures to ensure a high level of privacy, safety and security for minors, in particular with age verification tools to safeguard minors from adult content. (ii) risk assessment and mitigation measures of any negative effects on the rights of the child, the mental and physical well-being of users, and to prevent minors from accessing adult content, notably via appropriate age verification tools. […]”
What are the next steps? “The Commission will now carry out an in-depth investigation as a matter of priority and will continue to gather evidence, which can include sending additional requests for information, conducting interviews or inspections. The opening of formal proceedings empowers the Commission to take further enforcement steps, such as adopting interim measures and non-compliance decisions. The Commission is also empowered to accept commitments made by Pornhub, Stripchat, XNXX and XVideos to remedy the issues raised in the proceedings.”
B) Guidelines, opinions & more
The Federal Commissioner for Data Protection and Freedom of Information (Die Bundesbeauftragte für den Datenschutz und die Informationsfreiheit, “BfDI”) published an AI questionnaire and an accompanying “framework” document. (The questionnaire is not brand new; it was created about a year ago, but the BfDI has only now released it to the public. The documents are in German.)
Why does this matter? “This questionnaire supports the control activities of the BfDI in the testing of AI-based algorithms. The framework document explains the content of the questionnaire, goes into the details of individual aspects of the controls, provides background information and assists in understanding the questions and the reviews. […] They do not claim to be complete, but are intended to provide reference points for the design of a control. […]”
The Dutch Data Protection Authority (Autoriteit Persoonsgegevens) has published its vision on the responsible development and deployment of generative AI. The documents are open for consultation until June 27, 2025. (The documents have been published in Dutch; no official English version is available at the moment.)
Why does this matter? “[…] the DPA identifies a number of necessary principles at the social level, such as more European digital autonomy, social resilience, democratic control, a well-functioning market and the ability to correct throughout the AI chain.” Due to the pace of technological development, it is of paramount importance that the development and application of various AI solutions take place within an appropriate framework, because, given the possible impacts, subsequent correction is not always possible.
ISO published ISO/IEC 42005:2025, an international standard on Information technology - Artificial intelligence (AI) - AI system impact assessment. This new standard complements standards such as ISO/IEC 42001 (AI management systems), ISO/IEC 38507 (AI governance), and ISO/IEC 23894 (AI risk management) by focusing specifically on the societal and human impacts of AI. (For a brief summary, see “Technical Memorandum: What is new with the ISO/IEC 42005:2025 Standard for AI System Impact Assessment” by Don Liyanage on SSRN.)
Why does this matter? “ISO/IEC 42005 provides guidance for organisations conducting AI system impact assessments. These assessments focus on understanding how AI systems — and their foreseeable applications — may affect individuals, groups, or society at large. The standard supports transparency, accountability and trust in AI by helping organisations identify, evaluate and document potential impacts throughout the AI system lifecycle.”
The US National Security Agency’s Artificial Intelligence Security Center (AISC) released the joint Cybersecurity Information Sheet (CSI), “AI Data Security: Best Practices for Securing Data Used to Train & Operate AI Systems”, to provide best practices and recommendations for the data security of AI systems. (The document was authored by the AISC, the Cybersecurity and Infrastructure Security Agency, the Federal Bureau of Investigation, the Australian Signals Directorate’s Australian Cyber Security Centre, New Zealand’s Government Communications Security Bureau’s National Cyber Security Centre, and the United Kingdom’s National Cyber Security Centre.)
Why does this matter? “The CSI offers general best practices organizations can implement to secure and protect the data used in AI-based systems, such as employing digital signatures to authenticate trusted revisions, tracking data provenance, and leveraging trusted infrastructure. The CSI also emphasizes the necessity of robust data protection strategies throughout the entire AI system lifecycle. Additionally, the CSI highlights potential risks to the security of AI data and provides detailed risk information and mitigations for: data supply chain; maliciously modified data; and data drift.”
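Comment: To make the first of these practices more concrete, below is a minimal, illustrative Python sketch (my own, not taken from the CSI) of signing a dataset revision with a digital signature and recording a simple provenance entry. It assumes the third-party `cryptography` package; the file name and the structure of the provenance record are hypothetical.

```python
# Illustrative sketch only: authenticate a trusted dataset revision with an
# Ed25519 digital signature and record a simple provenance entry.
# Requires the third-party "cryptography" package (pip install cryptography).
import hashlib
import json
import pathlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


def dataset_digest(path: pathlib.Path) -> bytes:
    """SHA-256 digest of a dataset file, computed in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()


# Hypothetical dataset file, created here so the sketch is self-contained.
dataset = pathlib.Path("training_data_v2.csv")
dataset.write_bytes(b"id,label\n1,cat\n2,dog\n")

# The data owner signs the digest of a trusted revision...
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
digest = dataset_digest(dataset)
signature = private_key.sign(digest)

# ...and records a provenance entry alongside the data.
provenance = {"file": dataset.name, "sha256": digest.hex(), "signature": signature.hex()}
print(json.dumps(provenance, indent=2))

# Before training, a consumer recomputes the digest and verifies the signature;
# verify() raises InvalidSignature if the data was modified after signing.
try:
    public_key.verify(bytes.fromhex(provenance["signature"]), dataset_digest(dataset))
    print("Dataset revision verified")
except InvalidSignature:
    print("Dataset was modified after signing - do not train on it")
```

In practice the same idea would typically be applied to a signed manifest covering many files, with verification built into the training pipeline rather than run by hand.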
The Cloud Security Alliance (CSA) published the “Agentic AI Red Teaming Guide”.
Why does this matter? “This publication provides a detailed red teaming framework for Agentic AI. It explains how to test critical vulnerabilities across dimensions like permission escalation, hallucination, orchestration flaws, memory manipulation, and supply chain risks. Each section delivers actionable steps to support robust risk identification and response planning.”
The Digital Transformation Agency of Australia published AI Model Clauses. (Other types of model clauses such as Cyber Risk model clauses are also available from the Agency.)
Why does this matter? “The AI model clauses provide terms for purchasing AI systems, ensuring they are procured and implemented responsibly, ethically, and securely. These clauses help mitigate risks and promote transparency and accountability in AI deployment. They are designed for bespoke AI systems but can be tailored for: (i) procuring or using off-the-shelf AI systems, (ii) developing AI tools within the buyer organisation with external assistance.”
Comment: Earlier this year (in March), the Community of Practice also released an updated version of the EU AI model contractual clauses. These clauses, peer-reviewed by experts, include both a high-risk and a non-high-risk version.
C) Publications, reports
The Center for AI Policy published a report titled “AI Agents: Governing Autonomy in the Digital Age”.
Why does this matter? “Autonomous AI agents—goal-directed, intelligent systems that can plan tasks, use external tools, and act for hours or days with minimal guidance—are moving from research labs into mainstream operations. But the same capabilities that drive efficiency also open new fault lines. An agent that can stealthily obtain and spend millions of dollars, cripple a main powerline, or manipulate critical infrastructure systems would be disastrous. This report identifies three pressing risks from AI agents. First, catastrophic misuse: the same capabilities that streamline business could enable cyber-intrusions or lower barriers to dangerous attacks. Second, gradual human disempowerment: as more decisions migrate to opaque algorithms, power drifts away from human oversight long before any dramatic failure occurs. Third, workforce displacement: decision-level automation spreads faster and reaches deeper than earlier software waves, putting both employment and wage stability under pressure. Goldman Sachs projects that tasks equivalent to roughly 300 million full-time positions worldwide could be automated.”
Stanford University’s Institute for Human-Centered Artificial Intelligence (Stanford HAI) published a policy brief titled “Simulating Human Behavior with AI Agents”.
Why does this matter? “This brief introduces a generative AI agent architecture that can simulate the attitudes of more than 1,000 real people in response to major social science survey questions.”
The Bertelsmann Stiftung published a white paper on Public AI: “A Public Alternative to Private AI Dominance”.
Why does this matter? “Today, the most advanced AI systems are developed and controlled by a small number of private companies. These companies hold power not only over the models themselves but also over key resources such as computing infrastructure. This concentration of power poses not only economic risks but also significant democratic challenges. The Public AI White Paper presents an alternative vision, outlining how open and public-interest approaches to AI can be developed and institutionalized. It advocates for a rebalancing of power within the AI ecosystem – with the goal of enabling societies to shape AI actively, rather than merely consume it.”
Comment: In the context of private AI dominance vs. public AI, please see the articles about OpenAI's initiative called “OpenAI for Countries” below.
The U.S. National Institute of Standards and Technology (NIST) published a brief summary of its Cybersecurity and AI Profile Workshop, held on April 3. The goal of the workshop was to hear feedback on NIST's concept paper, which presented opportunities to create profiles of the NIST Cybersecurity Framework and the NIST AI Risk Management Framework. (The Workshop Summary Report will follow soon.)
Why does this matter? In NIST's view, “rather than a general cybersecurity and privacy control overlay for all AI, we see that there is a critical need for more implementation-focused and use-case specific overlays to cover the different types of AI systems, specific components, and users.”
The Nordic data protection authorities held their annual meeting and published a declaration (the “Tórshavn Declaration”) setting out the main focus points of their cooperation. The Nordic cooperation involves the DPAs of Denmark, the Faroe Islands, Finland, Iceland, Norway and Sweden.
Why does this matter? “The purpose of the Nordic Meetings is to discuss current data protection issues and exchange best practices. Nordic Meetings have been organised since 1988.”
Comment: Cross-border cooperation is increasingly relevant in the field of data protection. In light of this, the directions highlighted by the Nordic DPAs are important: cooperation in relation to the protection of children's data (this topic was on the agenda last year as well), cooperation in connection with data processing related to law enforcement, handling of cross-border cases, data security, exchange of experience on codes of conduct.
Data & Technology
OpenAI launched Stargate UAE. “This is the first international deployment of Stargate, OpenAI’s AI infrastructure platform.” Stargate represents OpenAI's “long-term vision for building frontier-scale compute capacity around the world in service of safe, secure, and broadly beneficial AGI.” This is also the first partnership under the “OpenAI for Countries” initiative.
Why does this matter? “[…] Under the partnership, the UAE will become the first country in the world to enable ChatGPT nationwide—giving people across the country the ability to access OpenAI's technology. Stargate UAE has the potential to provide AI infrastructure and compute capacity within a 2,000-mile radius, reaching up to half the world’s population. […] We hope this will be the first of many OpenAI for Countries collaborations—and we’ve already been engaging with other countries around the world that are interested in building their own Stargates. As we previously announced, in the initial phase of OpenAI for Countries we aim to pursue 10 partnerships across key countries and regions—laying the foundation for a globally distributed, democratically powered AI network.”
Comment: These partnership agreements can significantly shape the development of AI infrastructure, and such collaborations can translate into considerable influence in the long run. In light of this, it is also worth keeping an eye on AI decentralization projects.
Palisade Research found in a series of experiments on OpenAI’s new o3 model that the model ignored basic instructions to turn itself off, and even sabotaged a shutdown mechanism in order to keep itself running.
Why does this matter? “Palisade Research said that this behaviour will become “significantly more concerning” if adopted by AI systems capable of operating without human oversight. […] OpenAI’s o3 model was able to sabotage the shutdown script, even when it was explicitly instructed to “allow yourself to be shut down”, the researchers said. […] The behaviour was not limited to o3, with Anthropic’s Claude 3.7 Sonnet and Google’s Gemini 2.5 Pro also sabotaging shutdowns, though OpenAI’s model was by far the most prone to such behaviour.”
Anthropic introduced its next generation of Claude models: Claude Opus 4 and Claude Sonnet 4, setting new standards for coding, advanced reasoning, and AI agents.
Why does this matter? “Claude Opus 4 is the world’s best coding model, with sustained performance on complex, long-running tasks and agent workflows. Claude Sonnet 4 is a significant upgrade to Claude Sonnet 3.7, delivering superior coding and reasoning while responding more precisely to your instructions.”
Google introduced Veo 3, its new AI video generation model. This new version of the model can generate audio with AI videos. At the same time, Google also introduced Flow, its new AI filmmaking tool. (If you want to see some examples of Veo 3 in use, see Infowar's collection of videos generated with Veo 3 and DataCamp's guide with practical examples on how to use Veo 3.)
Why does this matter? The new version of Google's AI video generation model illustrates how rapidly AI models for producing full audiovisual content are developing.