In this week's roundup, I'm gathering some of the most important and interesting data, technology & digital law news from the past week.
Legal and regulatory developments
A) New laws, regulatory proposals, enforcement actions
The European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) have adopted a joint letter, addressed to the European Commission, on the upcoming proposal to simplify the record-keeping obligations under the GDPR. The letter replies to the European Commission's letter to the EDPB and the EDPS, in which the Commission explained how it intends to introduce specific modifications to the GDPR.
Why does this matter? “Simplification of GDPR requirements is certainly an important objective, but easing the obligation of medium-sized companies to keep proper records of data processing is a rather symbolic measure. Presumably, this amendment looked like "low-hanging fruit": it can easily be made without serious consideration of the logic of data protection, and it can immediately be presented as an important step against bureaucracy. On the other hand, narrowing the records of processing activities (RoPA) obligation affects one of the fundamental tools of data protection compliance, as no other data protection obligation can be fulfilled without up-to-date knowledge of the ongoing data processing activities.”
noyb (none of your business), the Austrian privacy advocacy group led by Max Schrems, has sent Meta a “cease and desist” letter over its AI training plans.
Why does this matter? “Meta has announced it will use EU personal data from Instagram and Facebook users to train its new AI systems from 27 May onwards. Instead of asking consumers for opt-in consent, Meta relies on an alleged 'legitimate interest' to just suck up all user data. The new EU Collective Redress Directive allows Qualified Entities such as noyb to issue EU-wide injunctions. As a first step, noyb has now sent a formal settlement proposal in the form of a so-called Cease and Desist letter to Meta. Other consumer groups also take action. If injunctions are filed and won, Meta may also be liable for damages to consumers, which could be brought in a separate EU class action. Damages could reach billions. In summary, Meta may face massive legal risks – just because it relies on an "opt-out" instead of an "opt-in" system for AI training.”
B) Guidelines, opinions & more
The Commission’s AI Office published its “AI Literacy - Questions & Answers”.
Why does this matter? “Providers and deployers of AI systems should take measures to ensure a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf. They should do so by taking into account their technical knowledge, experience, education and training of the staff and other persons, as well as the context the AI systems are to be used in and the persons on whom the AI systems are to be used. […] Article 4 of the AI Act entered into application on 2 February 2025; therefore, the obligation to take measures to ensure AI literacy of their staff already applies. The supervision and enforcement rules apply from 3 August 2026 onwards.”
The Irish Department of Public Expenditure published “Guidelines for the Responsible Use of AI in the Public Service”.
Why does this matter? “The Guidelines for the Responsible Use of Artificial Intelligence in the Public Service have been developed to actively empower public servants to use Artificial Intelligence (AI) in the delivery of services. By firmly placing the human in the process, these guidelines aim to enhance public trust in how Government uses AI. The Guidelines complement and inform strategies regarding the adoption of innovative technology and ways of working already underway in the public service, and seek to set a high standard for public service transformation and innovation, while prioritising public trust and people’s rights.”
The European Commission seeks feedback on its draft guidelines on the protection of minors online under the DSA (“Commission guidelines on measures to ensure a high level of privacy, safety and security for minors online pursuant to Article 28(4) of Regulation (EU) 2022/2065”).
Why does this matter? “The guidelines aim to support platforms accessible by minors in ensuring a high level of privacy, safety, and security for children, as required by the DSA. The draft guidelines are open for final public feedback until 10 June 2025. The Commission is seeking contributions from all stakeholders, including children, parents and guardians, national authorities, online platform providers, and experts. The publication of the guidelines is expected by the summer of 2025.”
The UK Information Commissioner's Office (ICO) has launched a public consultation on its draft updated guidance on encryption. The ICO has updated the guidance to follow its ‘must, should, could’ framework, giving greater clarity on which encryption measures it expects organisations to implement. It has also updated the ‘encryption in practice’ section of the guidance to reflect the current state of technology, especially Hypertext Transfer Protocol Secure (HTTPS).
Why does this matter? “Processing personal information securely is essential to maintaining trust and confidence in digital services. Encryption is an effective technical measure that helps you achieve this.”
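The ICO's guidance is technology-neutral, but the core idea of encrypting personal data at rest is easy to illustrate. Below is a minimal sketch using the Python cryptography library's Fernet recipe (authenticated symmetric encryption); the data field is hypothetical, and a real deployment would pull the key from a key-management service rather than generating it inline.

```python
# Minimal sketch: encrypting a personal-data field at rest with the
# "cryptography" library's Fernet recipe (AES-CBC plus HMAC, i.e.
# authenticated symmetric encryption). Key management (storage,
# rotation, access control) is the hard part in practice and is
# out of scope here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in production: fetch from a KMS, never hard-code
fernet = Fernet(key)

email = "jane.doe@example.com"           # hypothetical personal data
token = fernet.encrypt(email.encode())   # ciphertext safe to persist

# An authorised process can decrypt later; tampering raises InvalidToken.
assert fernet.decrypt(token).decode() == email
```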
C) Publications, reports
The NATO Strategic Communications Centre of Excellence published a study on the impact of the Digital Services Act on the spread of harmful content on social media (“Impact of the Digital Services Act: A Facebook Case Study”).
Why does this matter? “The adoption of the EU Digital Services Act (hereinafter the DSA) aimed to create a safe online information environment, specifically within the EU/EEA area. The aim of this research was to measure the effects of the DSA in curbing the spread of harmful content on social media. As measuring the results of such a broad goal was challenging, our study focused on one of the dominant social media platforms: Facebook. To assess the impact of the DSA, we compared the share of harmful content published by Polish and Lithuanian accounts on Facebook before and after the DSA entered into force.
Our multi-stage approach involved using a small AI model, GPT-4o mini, to initially flag harmful content, followed by applying larger models for validating and in-depth reasoning. In total we classified 959 harmful posts from 2023 and 1,392 posts from 2024.
As a result, the study demonstrated that despite certain improvements the platform made in creating a safe online environment, we could not claim an overall enhancement after DSA enforcement. Additionally, our results highlight the current vulnerabilities and areas for improvement that the platform should address.”
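The study's prompts and code are not published; purely to illustrate the two-stage pattern it describes (a small model flags candidates, larger models validate), here is a hedged sketch using the OpenAI Python SDK. The prompt wording, labels, and the choice of validator model are assumptions, not the authors' actual setup.

```python
# Illustrative sketch of a two-stage classification pipeline: a cheap
# model screens every post, a larger model re-checks only the posts
# flagged as harmful. Prompts and labels are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def is_harmful(model: str, post: str) -> bool:
    """Return True if `model` labels the post as harmful."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "Answer HARMFUL or OK for the following social media post."},
            {"role": "user", "content": post},
        ],
    )
    return "HARMFUL" in resp.choices[0].message.content.upper()

def classify(posts: list[str]) -> list[str]:
    # Stage 1: the small model flags candidates.
    flagged = [p for p in posts if is_harmful("gpt-4o-mini", p)]
    # Stage 2: a larger model validates only the flagged subset.
    return [p for p in flagged if is_harmful("gpt-4o", p)]
```

The economics are the point of the design: the cheap model touches every post, while the expensive model only sees the much smaller flagged subset.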
The European Commission released an analysis of stakeholder feedback from the public consultations on AI definitions and prohibited practices. The report was prepared by the Centre for European Policy Studies (CEPS) for the EU AI Office.
Why does this matter? “The report presents a comprehensive analysis of responses to each of the 88 questions of the stakeholder consultation, organised into nine key sections. […] Respondents called for clearer definitions of technical terms such as "adaptiveness" and "autonomy", cautioning against the risk of inadvertently regulating conventional software. […] Among other findings, the report highlights that prohibited practices - such as emotion recognition, social scoring, and real-time biometric identification - occasioned significant concern. Stakeholders called for concrete examples of what is prohibited and what not.”
The OECD published a Regulatory Policy Working Paper titled “A mapping tool for digital regulatory frameworks - Including a pilot on efforts to regulate AI”.
Why does this matter? “This paper presents a mapping tool for governments to systematically assess, identify and bridge gaps in regulatory governance as applied to digital technologies. It describes how the mapping tool was built and piloted, highlighting definitional and methodological considerations. The paper then details each component of the tool, including findings from applying the tool to a sample of thirteen whole-of-government efforts to regulate artificial intelligence (AI). Finally, the paper provides further insights that can be drawn from using the mapping tool, presents a more systematic application of the tool to digital technology regulations, and outlines next steps to promote application of the mapping tool.”
The United States Copyright Office published the third part of its report on Copyright and Artificial Intelligence, focusing on Generative AI Training (pre-publication version). (Part 1 of the report covers Digital Replicas [July 2024] and Part 2 covers Copyrightability [January 2025].)
Why does this matter? “This Part of the Copyright Office’s Report on Copyright and Artificial Intelligence addresses the use of copyrighted works in the development of generative AI systems. The groundbreaking technologies involved draw on massive troves of data, including copyrighted works, to enable the extraordinary capabilities they now offer to the public. Do any of the acts involved require the copyright owners’ consent or compensation? And to the extent they do, how can that feasibly be accomplished? These issues are the subject of intense debate. Dozens of lawsuits are pending in the United States, focusing on the application of copyright’s fair use doctrine. Legislators around the world have proposed or enacted laws regarding the use of copyrighted works in AI training, whether to remove barriers or impose restrictions. The stakes are high, and the consequences are often described in existential terms. Some warn that requiring AI companies to license copyrighted works would throttle a transformative technology, because it is not practically possible to obtain licenses for the volume and diversity of content necessary to power cutting-edge systems. Others fear that unlicensed training will corrode the creative ecosystem, with artists’ entire bodies of works used against their will to produce content that competes with them in the marketplace. The public interest requires striking an effective balance, allowing technological innovation to flourish while maintaining a thriving creative community. […]”
The European Union Intellectual Property Office (EUIPO) also published a study on copyright issues concerning the development of generative AI (“Development of Generative Artificial Intelligence from a Copyright Perspective”).
Why does this matter? “This study explores the developments in GenAI from the perspective of EU copyright law. It is structured around three main components: (1) a technical, legal and economic analysis to further understand the functionality of GenAI and the implications of its development, as well as a detailed examination of copyright-related issues regarding the (2) use of content in GenAI services development and the (3) generation of content.”
The European Defence Agency (EDA) published a white paper titled “Trustworthiness for AI in Defence - Developing Responsible, Ethical, and Trustworthy AI Systems for European Defence”.
Why does this matter? “The purpose of this document is to collect, present and describe the aspects of Trustworthiness for AI in Defence in a ‘food for thought’ approach reflecting the combined view of AI experts and stakeholders from Defence Industry, Academia and Ministries of Defence. This effort is performed in the context of the European Defence Agency’s (EDA) Action Plan on Artificial Intelligence for Defence and tries to address the topics of trusted AI and verification, validation and certification requirements analysis. The topics covered and analysed in this document will provide the appropriate knowledge of the current global status considering the AI regulations, standards and frameworks for AI trustworthiness and will also recommend the follow-up activities that will further assist the EU Member States and Defence Industry to better prepare, plan and develop the future AI systems aligned with the identified expectations. The target audience is EU Member States, especially the MoDs, that will further evaluate the whitepaper’s recommendations and may use it as a reference point for future related AI research activities. The target audience will also be expanded to the Defence Industry to highlight the key aspects and requirements for developing AI systems for military use considering all regulations, standards and methodologies assisting them in better planning and design to deliver trusted AI services and applications.”
The Centre for Information Policy Leadership (CIPL), as part of its EU Artificial Intelligence Act Implementation Project, also published a document concerning AI literacy: “AI Act Article 4: AI Literacy Best Practices and Recommendations for Practitioners”.
Why does this matter? “In this white paper, CIPL shares emerging AI literacy best practices and recommendations that can support practitioners and organisations seeking to build or evolve AI programs to promote responsible and trustworthy innovation.”
The Chartered Institute of Arbitrators published a “Guideline on the Use of AI in Arbitration (2025)”.
Why does this matter? The Guideline “seeks to give guidance on the use of AI in a manner that allows dispute resolvers, parties, their representatives, and other participants to take advantage of the benefits of AI, while supporting practical efforts to mitigate some of the risk to the integrity of the process, any party’s procedural rights, and the enforceability of any ensuing award or settlement agreement.”
IBM released a whitepaper titled “Agentic AI in Financial Services: Opportunities, Risks, and Responsible Implementation”.
Why does this matter? “The global economy is currently experiencing an AI supercycle, driven by unprecedented progress and investment in AI technologies. This cycle is igniting business transformation initiatives aimed at accelerating growth and uncovering new efficiencies. […] Agentic AI introduces a novel risk landscape, requiring a shift in risk management practices. The autonomous nature of AI agents complicates human oversight, making real-time intervention challenging. This necessitates a proactive approach to risk management, integrating AI systems with robust guardrails and real-time monitoring to ensure safety and reliability.”
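The whitepaper stays at the level of principles. As one concrete reading of "robust guardrails and real-time monitoring", the sketch below wraps an agent's proposed actions in a policy check that executes low-risk actions, escalates risky ones to a human, and logs everything. Every name, field, and threshold here is invented for illustration, not taken from IBM's paper.

```python
# Hypothetical guardrail wrapper: every action an agent proposes passes
# through a policy check before execution; risky actions are held for
# human approval instead of executing autonomously, and all decisions
# are written to an audit log for real-time monitoring.
from dataclasses import dataclass

@dataclass
class Action:
    kind: str       # e.g. "payment", "query" (illustrative categories)
    amount: float   # monetary exposure; 0 if not applicable

APPROVAL_LIMIT = 10_000.0  # above this, require a human in the loop

def execute_with_guardrail(action: Action, audit_log: list[str]) -> str:
    audit_log.append(f"proposed: {action.kind} ({action.amount})")
    if action.kind == "payment" and action.amount > APPROVAL_LIMIT:
        audit_log.append("escalated to human reviewer")
        return "PENDING_APPROVAL"
    audit_log.append("executed")
    return "EXECUTED"

log: list[str] = []
print(execute_with_guardrail(Action("payment", 25_000.0), log))  # PENDING_APPROVAL
```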
The International Labour Organisation (ILO) published a global report titled “Revolutionizing health and safety: The role of AI and digitalization at work”.
Why does this matter? “Digitalization and automation are impacting millions of jobs worldwide, presenting unprecedented opportunities to enhance occupational safety and health. Automation and smart monitoring systems can reduce hazardous exposures, prevent workplace injuries and improve overall working conditions. Nevertheless, proactive policies are needed to address the potential risks.”
The European Union Agency for Cybersecurity (ENISA) has developed and published the European Vulnerability Database (EUVD) as provided for by the NIS2 Directive.
Why does this matter? “The objective of the EUVD is to ensure a high level of interconnection of publicly available information coming from multiple sources such as CSIRTs, vendors, as well as existing databases. […] The database is accessible to the public at large to consult information related to vulnerabilities impacting IT products and services. It is also addressed to suppliers of network and information systems and entities using their services.”
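ENISA exposes the EUVD through a public website and a service API. The sketch below shows the general pattern of polling such a feed from Python; the endpoint URL and JSON field names are assumptions based on the announced service and should be checked against ENISA's current documentation before use.

```python
# Sketch of consuming a public vulnerability feed over HTTP. The endpoint
# below is an assumption and may differ from the live EUVD API; check
# ENISA's documentation for the current paths and schema.
import requests

EUVD_API = "https://euvdservices.enisa.europa.eu/api/lastvulnerabilities"  # assumed

resp = requests.get(EUVD_API, timeout=10)
resp.raise_for_status()

for vuln in resp.json():  # assumed: a JSON list of vulnerability records
    # Field names are likewise assumptions; adapt to the actual schema.
    print(vuln.get("id"), str(vuln.get("description", ""))[:80])
```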
The U.S. National Institute of Standards and Technology (NIST) has begun reviewing its guidelines under the Internet of Things (IoT) Cybersecurity Improvement Act. A draft of the updated guidelines is anticipated to be released during a virtual workshop on June 18.
Why does this matter? “The IoT Cybersecurity Improvement Act called for NIST to revisit our IoT cybersecurity guidelines every five years. With that in mind, as well as the evolution of IoT product components and technologies, NIST will be beginning our five-year revision” of the respective guidelines.
Data & Technology
Perplexity AI is in late-stage talks to raise USD 500 million at a valuation of USD 14 billion.
Why does this matter? Perplexity is an AI-powered search engine that provides direct, comprehensive answers to user queries, drawing on real-time data from the internet and backing its responses with citations. The company competes with tech giants such as Google and with Microsoft-backed OpenAI. Its valuation has grown rapidly: it was reportedly valued at USD 9 billion in December 2024 and USD 3 billion in June 2024.
DeepSeek published a new paper titled “Insights into DeepSeek-V3: Scaling Challenges and Reflections on Hardware for AI Architectures”, examining how hardware constraints shaped the design of the DeepSeek-V3/R1 models.
Why does this matter? “The rapid scaling of large language models (LLMs) has unveiled critical limitations in current hardware architectures, including constraints in memory capacity, computational efficiency, and interconnection bandwidth. DeepSeek-V3, trained on 2,048 NVIDIA H800 GPUs, demonstrates how hardware-aware model co-design can effectively address these challenges, enabling cost-efficient training and inference at scale. This paper presents an in-depth analysis of the DeepSeek-V3/R1 model architecture and its AI infrastructure, highlighting key innovations such as Multi-head Latent Attention (MLA) for enhanced memory efficiency, Mixture of Experts (MoE) architectures for optimized computation-communication trade-offs, FP8 mixed-precision training to unlock the full potential of hardware capabilities, and a Multi-Plane Network Topology to minimize cluster-level network overhead. Building on the hardware bottlenecks encountered during DeepSeek-V3's development, we engage in a broader discussion with academic and industry peers on potential future hardware directions, including precise low-precision computation units, scale-up and scale-out convergence, and innovations in low-latency communication fabrics. These insights underscore the critical role of hardware and model co-design in meeting the escalating demands of AI workloads, offering a practical blueprint for innovation in next-generation AI systems.”
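For readers unfamiliar with the Mixture of Experts idea the abstract refers to, the sketch below shows generic top-k expert routing in PyTorch: a gate scores the experts for each token and only the top-k experts run, so per-token compute stays roughly flat while total parameter count grows. This is a textbook illustration, not DeepSeek's MLA/MoE implementation.

```python
# Generic top-k Mixture-of-Experts routing. Each expert is a small MLP;
# a learned gate picks k experts per token and mixes their outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=64, n_experts=8, k=2):
        super().__init__()
        self.k = k
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):                       # x: (tokens, d_model)
        scores = self.gate(x)                   # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)    # renormalise over chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):              # plain loops for clarity, not speed
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e        # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

y = TopKMoE()(torch.randn(5, 64))  # -> shape (5, 64)
```

The paper's argument is that where these experts sit relative to the cluster's network topology determines the communication cost of routing, which is why it treats MoE design and hardware design as a single co-design problem.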
The MIT Technology Review reported that “police and federal agencies have found a controversial new way to skirt the growing patchwork of laws that curb how they use facial recognition: an AI model that can track people using attributes like body size, gender, hair color and style, clothing, and accessories.”
Why does this matter? According to MIT Technology Review’s report, “the tool, called Track and built by the video analytics company Veritone, is used by 400 customers, including state and local police departments and universities all over the US. […] The product has drawn criticism from the American Civil Liberties Union, which—after learning of the tool through MIT Technology Review—said it was the first instance they’d seen of a nonbiometric tracking system used at scale in the US. They warned that it raises many of the same privacy concerns as facial recognition but also introduces new ones […]”