In this week's roundup, I'm gathering some of the most important and interesting data, technology & digital law news from the past week.
Legal and regulatory developments
A) New laws, regulatory proposals, enforcement actions
The UK Information Commissioner's Office (ICO) fined 23andMe GBP 2,310,000 for infringements of Articles 5(1)(f) and 32(1) of the UK GDPR (i.e., 23andMe failed to implement appropriate security measures).
Why does this matter? “As a result of the Infringements, a threat actor was able to perpetrate a credential stuffing attack over the course of at least five months (the “Data Breach”), during which they obtained access to personal data relating to 155,592 UK-based customers of 23andMe (“Affected UK Data Subjects”). The personal data exfiltrated by the threat actor was offered for sale on a number of online forums in August and October 2023, with the relevant posts indicating that the threat actor had targeted 23andMe customers according to their racial and ethnic background. Whilst the nature of the personal data accessed by the threat actor will have varied between the Affected UK Data Subjects, at least some of it constituted special category data. This special category data included personal data relating to health and genetic data, as well as data relating to the racial or ethnic origin of some customers, which could be inferred from the personal data processed by 23andMe.”
The Data (Use and Access) Act (DUAA) became law in the UK on June 19, 2025. The UK Information Commissioner's Office (ICO) has provided several useful guidance documents on the DUAA. For a summary of the main changes, please see the ICO's summary here.
Why does this matter? “The DUAA is a new Act of Parliament that updates some laws about digital information matters. It changes data protection laws in order to promote innovation and economic growth and make things easier for organisations, whilst it still protects people and their rights. […] The changes will be phased in between June 2025 and June 2026.” “The DUAA amends, but does not replace, the UK General Data Protection Regulation (UK GDPR), the Data Protection Act 2018 (DPA) and the Privacy and Electronic Communications Regulations (PECR).”
The European Commission found earlier this year (in April) that Meta breached the Digital Markets Act (DMA) obligation to give consumers the choice of a service that uses less of their personal data, and imposed a fine of EUR 200 million on Meta. The non-confidential version of the Decision concerning Meta is now available. (Apple was also found in breach of the DMA and fined EUR 500 million. For more details, see the Commission's press release published in April.)
B) Guidelines, opinions & more
The Conference of Independent Data Protection Authorities of Germany (Datenschutzkonferenz, DSK) published guidance on recommended technical and organisational measures for the development and operation of AI systems. (The document is in German.) Based on the guidance, I prepared an overview of the most important measures to be taken at different stages of the AI lifecycle (available on LinkedIn). [At the same time, the DSK also published a short resolution on “Confidential Cloud Computing”.]
Why does this matter? “[…] This paper is primarily aimed at manufacturers and developers of AI systems and is intended to serve as an aid to them in the data protection-compliant development of AI systems that can be used in compliance with data protection regulations. Controllers who want to use AI systems can […] consult the position paper presented here to take into account the technical development opportunities in the procurement process.”
Comment: A key strength of the document is that it covers the entire AI lifecycle, from design through deployment and use of the system.
The French Data Protection Authority (CNIL) published its recommendations on the development of AI systems, specifying the conditions for using legitimate interest, in particular in the case of web scraping.
Why does this matter? The CNIL confirms that legitimate interest can be a valid legal basis for the development of AI systems under strict conditions. The CNIL's position aligns with the European Data Protection Board (EDPB) opinion from December 2024, and it provides practical examples for the use of legitimate interest.
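One safeguard that features prominently in discussions of web scraping under legitimate interest is honouring publishers' opt-out signals. As a purely illustrative sketch (the crawler name and URL below are invented, and this is not code from the CNIL), a scraper can at minimum check a site's robots.txt before collecting anything:

```python
from urllib import robotparser
from urllib.parse import urlparse

USER_AGENT = "example-ai-training-bot"  # invented crawler name for illustration

def may_fetch(url: str) -> bool:
    """Return True only if the site's robots.txt allows our user agent
    to fetch this URL -- one opt-out signal a scraper should respect."""
    parts = urlparse(url)
    rp = robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()  # downloads and parses the site's robots.txt
    return rp.can_fetch(USER_AGENT, url)

if __name__ == "__main__":
    target = "https://example.com/profiles/jane-doe"  # illustrative URL
    if may_fetch(target):
        print("robots.txt permits fetching; other safeguards still apply")
    else:
        print("opt-out signal detected; skip this page")
```

Respecting such signals is, of course, only one element of a broader legitimate interest assessment.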
The National Institute of Standards and Technology (NIST) published guidance offering 19 example zero trust architecture (ZTA) implementations that use off-the-shelf commercial technologies, giving organizations valuable starting points for building their own architectures.
Why does this matter? “ZTA implements a risk-based approach to cybersecurity — continuously evaluating and verifying conditions and requests to decide which access requests should be permitted, then ensuring that each access is properly safeguarded. Zero trust also prevents attackers who have gained access from roaming freely within the network and wreaking havoc as they go. Because of its effectiveness against both internal and external threats, ZTA adoption is increasing, and some organizations are required to use a ZTA.”
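To make the quoted idea concrete, here is a minimal, hypothetical sketch of the per-request evaluation a zero trust policy engine performs. The signals, attribute names and rules are invented for illustration and are not drawn from the NIST guidance:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    # Illustrative signals only; real deployments draw these from
    # identity providers, device management and network telemetry.
    mfa_verified: bool
    device_compliant: bool     # e.g., patched OS, disk encryption on
    resource_sensitivity: str  # "low" or "high"
    anomalous_location: bool

def evaluate(req: AccessRequest) -> bool:
    """Decide a single access request from current conditions;
    no request inherits trust from network location or past sessions."""
    if not req.mfa_verified:
        return False   # identity must be verified for every request
    if not req.device_compliant:
        return False   # unhealthy devices are denied regardless of user
    if req.resource_sensitivity == "high" and req.anomalous_location:
        return False   # risk-based: sensitive resource plus unusual context
    return True

# Each access is evaluated independently, so an attacker who gets in
# cannot "roam freely" -- the next request is checked all over again.
print(evaluate(AccessRequest(True, True, "high", False)))  # permitted
print(evaluate(AccessRequest(True, False, "low", False)))  # denied
```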
The Philippines' National Privacy Commission issued guidelines (NPC Circular No. 2025-01) on the processing of personal data collected using body-worn cameras.
Why does this matter? “The Circular sets out requirements for law enforcement and security, as well as vloggers using body-worn cameras and other recording devices.”
C) Publications, reports
The OECD published a policy paper on “Sharing trustworthy AI models with privacy-enhancing technologies”.
Why does this matter? “Privacy-enhancing technologies (PETs) are critical tools for building trust in the collaborative development and sharing of artificial intelligence (AI) models while protecting privacy, intellectual property, and sensitive information. This report identifies two key types of PET use cases. The first is enhancing the performance of AI models through confidential and minimal use of input data, with technologies like trusted execution environments, federated learning, and secure multi-party computation. The second is enabling the confidential co-creation and sharing of AI models using tools such as differential privacy, trusted execution environments, and homomorphic encryption. PETs can reduce the need for additional data collection, facilitate data-sharing partnerships, and help address risks in AI governance. However, they are not silver bullets. While combining different PETs can help compensate for their individual limitations, balancing utility, efficiency, and usability remains challenging. Governments and regulators can encourage PET adoption through policies, including guidance, regulatory sandboxes, and R&D support, which would help build sustainable PET markets and promote trustworthy AI innovation.”
Comment: Along with the TOMs recommendations from the DSK (see above), this OECD paper is a useful resource for applying PETs in the AI context.
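For readers who want a concrete feel for one PET named in the report, below is a minimal sketch of differential privacy using the Laplace mechanism: calibrated noise is added to an aggregate query so that no individual record can be singled out. The query, toy data and epsilon values are all illustrative, and a production system would rely on a vetted DP library rather than this hand-rolled version:

```python
import numpy as np

def dp_count(records: list[bool], epsilon: float) -> float:
    """Differentially private count. A counting query has sensitivity 1,
    so Laplace noise with scale 1/epsilon yields epsilon-DP."""
    true_count = sum(records)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Toy data (illustrative): which records in a training set carry a flag.
data = [True, False, True, True, False]
print(dp_count(data, epsilon=1.0))  # noisy answer near the true count of 3
print(dp_count(data, epsilon=0.1))  # smaller epsilon: more privacy, more noise
```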
The OECD also published a working paper, “Developments in Artificial Intelligence markets: New indicators based on model characteristics, prices and providers”.
Why does this matter? “Given AI’s potential to generate productivity and welfare gains, the paper provides new empirical evidence about AI markets to assess whether potential AI users benefit from favourable market developments regarding prices, quality and variety. It leverages an extensive data collection covering Generative AI model characteristics, including their performance and price, developers, cloud providers, and downstream AI-powered applications globally over the past two years. It finds several trends that are indicative of dynamism for the time being – including declining quality-adjusted prices and a growing number of market players and model offerings – but several risks remain, related to bottlenecks in the key inputs to AI, notably data, computing power and skills.”
The European Parliamentary Research Service (EPRS) published a policy briefing titled “What role for AI skills in (re-)shaping the future European workforce?”
Why does this matter? “[…] The growing skills gap in the EU, with almost half of the population lacking basic digital skills, including AI skills, poses a significant challenge for the future that needs to be addressed for the EU to maintain its competitiveness and manage regional disparities. […] Fostering anticipatory governance, a culture of innovation, supporting diversity and inclusiveness in the AI workforce, and strengthening digital infrastructure are all critical to ensuring that the benefits of AI are shared by all, while minimising its negative impacts. […] Targeted investment in EU-wide digital infrastructure and education that emphasises lifelong learning and skills development could ensure balanced economic growth and competitiveness in the global talent market. By examining the multifaceted interaction between AI, skills and jobs, a way forward may be identified that focuses on the needs of EU citizens and ensures that the future European workforce – and citizens in general – are equipped to succeed in an increasingly automated and AI-driven economy.”
The Joint California Policy Working Group on AI Frontier Models released a final version of its report, “The California Report on Frontier AI Policy”, outlining a policymaking framework for AI.
Why does this matter? “This report leverages broad evidence—including empirical research, historical analysis, and modeling and simulations—to provide a framework for policymaking on the frontier of AI development. Building on this multidisciplinary approach, this report derives policy principles that can inform how California approaches the use, assessment, and governance of frontier AI—principles rooted in an ethos of “trust but verify.” This approach takes into account the importance of innovation while establishing appropriate strategies to reduce material risks. The report does not argue for or against any particular piece of legislation or regulation. Instead, it examines the best available research on foundation models and outlines policy principles grounded in this research that state officials could consider in crafting new laws and regulations that govern the development and deployment of frontier AI in California. […]”
A new research paper has been published that “empirically investigates the current scope of DPAs' activities in enforcing regulations concerning AI-driven services and automated decision-making (ADM) systems to assess their suitability for enforcing the AI Act.” (Mazur, Joanna - Novelli, Claudio - Choińska, Zuzanna: “Should Data Protection Authorities Enforce the AI Act? Lessons from EU-wide Enforcement Data”, June 12, 2025)
Why does this matter? “Our findings suggest that, although many DPAs have some experience with AI-related enforcement, their activity level in this area varies significantly. Combining these results with a legal analysis of the AI Act's provisions, we argue that, despite being heterogeneous, the expertise developed by DPAs represents a valuable resource for effectively enforcing the AI Act.”
The Digital Economist published a position paper, “The ROI of AI Ethics: Profiting with Principles for the Future”.
Why does this matter? “This paper explores the intersection of return on investment (ROI) and AI ethics, making a compelling case for why prioritizing ethical considerations in AI implementation is not only a moral imperative but also a strategic business decision. Significant consequences are revealed upon closer examination of the financial, reputational, and operational risks that arise from neglecting ethics considerations or unethical AI practices (SCoRe 2025; EY 2025). […] Central to this paper is a proposed ethical AI ROI calculator, a tool designed to help technology practitioners, along with leaders in business, government, and organizations, assess the potential returns and risks associated with their AI initiatives. By inputting relevant data points and considering a range of ethical factors, users could generate custom ROI projections and make data-driven decisions about prioritizing ethical AI practices.”
The European Commission's Joint Research Centre (JRC) published a study, the “Outlook Report on Generative AI - Exploring the Intersection between Technology, Society and Policy”.
Why does this matter? “This Outlook report, prepared by the European Commission’s Joint Research Centre (JRC), examines the transformative role of Generative AI (GenAI) with a specific emphasis on the European Union. It highlights the potential of GenAI for innovation, productivity, and societal change. […] the Outlook report begins with an overview of the technological aspects of GenAI, detailing their current capabilities and outlining emerging trends. It then focuses on economic implications, examining how GenAI can transform industry dynamics and necessitate adaptation of skills and strategies. The societal impact of GenAI is also addressed, with focus on both the opportunities for inclusivity and the risks of bias and over-reliance. Considering these challenges, the regulatory framework section outlines the EU’s current legislative framework, such as the AI Act and horizontal Data legislation to promote trustworthy and transparent AI practices. Finally, sector-specific ‘deep dives’ examine the opportunities and challenges that GenAI presents. […] The report concludes that GenAI has the potential to bring significant social and economic impact in the EU, and that a comprehensive and nuanced policy approach is needed to navigate the challenges and opportunities while ensuring that technological developments are fully aligned with democratic values and EU legal framework.”
Europol published its 2025 Internet Organised Crime Threat Assessment (IOCTA) report (“Steal, deal and repeat - How cybercriminals trade and exploit your data”). The report is Europol's analysis of evolving threats and trends in the cybercrime landscape, with a focus on how it has changed over the last 12 months.
Why does this matter? “During the past year, organised crime has continued to evolve at an unprecedented pace. The rapid adoption of new technologies and the continued expansion of our digital infrastructure has further shifted criminal activities to the online domain. This shift has meant that digital infrastructure and the data within it have become prime targets, making data become a key commodity, serving both as a target and an enabler in the cybercrime threat landscape.”
Data & Technology
German automaker VW (through its subsidiary Moia) may be close to launching self-driving (Level 4) services in 2026.
Why does this matter? “With the presentation of the ID. Buzz AD, Moia is entering the next stage of autonomous mobility. In Hamburg, the subsidiary of the Volkswagen Group presented the production version of the ID. Buzz AD. This is nothing less than the first fully autonomous production vehicle from Volkswagen, optimized for use in public and private mobility services. The shuttle is to become the heart of a holistic ecosystem that combines vehicle, software platform and operator support. The so-called "Moia Turnkey Solution" aims to support cities, municipalities and fleet operators in the fast and safe introduction of autonomous driving services. In addition to the ID. Buzz AD including an integrated "self-driving system" from camera, software and AI specialist Mobileye, it also includes the self-developed "AD MaaS" platform (which stands for "Autonomous Driving Mobility-as-a-Service"). The software controls fleets in real time, takes over security functions, supports passengers digitally and can be seamlessly integrated into existing booking systems.”
NVIDIA and Deutsche Telekom announced a partnership to advance Germany's sovereign AI. (Deutsche Telekom's press release is available here.)
Why does this matter? “This AI factory, to be located in Germany and operated by Deutsche Telekom, will enable Europe’s industrial leaders to accelerate manufacturing applications including design, engineering, simulation, digital twins and robotics. […] This AI infrastructure — Germany’s single largest AI deployment — is an important leap for the nation in establishing its own sovereign AI infrastructure and providing a launchpad to accelerate AI development and adoption across industries. In its first phase, it’ll feature 10,000 NVIDIA Blackwell GPUs — spanning NVIDIA DGX B200 systems and NVIDIA RTX PRO Servers — as well as NVIDIA networking and AI software.”
Comment: Together with Mistral's Compute initiative, significant steps have been taken to develop AI infrastructure in Europe. France's President Emmanuel Macron emphasized the importance of AI infrastructure, calling Mistral's initiative “our fight for sovereignty, for strategic autonomy”. Meanwhile, UK Prime Minister Keir Starmer committed £1 billion to national AI infrastructure.
Mattel and OpenAI announced a strategic collaboration to support AI-powered products and experiences based on Mattel’s brands.
Why does this matter? “[…] By using OpenAI’s technology, Mattel will bring the magic of AI to age-appropriate play experiences with an emphasis on innovation, privacy, and safety. […] Mattel and OpenAI will emphasize safety, privacy, and security in the products and experiences that come to market. […] In addition, Mattel will incorporate OpenAI’s advanced AI tools like ChatGPT Enterprise into its business operations to enhance product development and creative ideation, drive innovation, and deepen engagement with its audience. With OpenAI, Mattel will have advanced AI capabilities that can power the development and operations of consumer products and experiences.”
Comment: Integrating AI into toys is a sensitive topic. On the one hand, it's important to empower the next generation to use AI confidently but cautiously. On the other hand, exposing children to the technology too early can cause significant harm. (A new empirical MIT study reports that “ChatGPT may be eroding critical thinking skills.” Note that the paper has not yet been peer-reviewed and has a relatively small sample size.) Such initiatives should also be examined through the lens of research-based recommendations, such as those from the Alan Turing Institute's study titled “Understanding the Impacts of Generative AI Use on Children”. Fewer and fewer areas of life remain where the youngest among us can fully escape the effects of digital technologies, including generative AI. Children should not be exposed to such technologies at an age when they cannot yet be prepared for their effects. Just because something can be done doesn't mean it should be.
Neo Performance Materials, a Canadian company, is close to officially opening its new European magnet facility in Narva, Estonia. (The official opening ceremony is scheduled for September 2025.) This is a significant milestone for the European automotive and renewable energy supply chains.
Why does this matter? “Neo’s new facility in Estonia marks a critical step forward in one of the most strategically crucial permanent magnet projects in Europe and globally. This strategy aims to scale magnet manufacturing across Europe and beyond, advancing Neo’s mission to build resilient, parallel global supply chains for rare earth magnetics and other critical materials, serving rapidly accelerating markets. […] Neo’s advanced industrial materials are critical to the performance of many everyday products and emerging technologies.”