In this week's roundup, I'm gathering some of the most important and interesting data, technology & digital law news from the past week.
Legal and regulatory developments
A) New laws, regulatory proposals, enforcement actions
Denmark has assumed the Presidency of the Council of the European Union for the next six months. The Danish Presidency places strong emphasis on digital competitiveness and technological sovereignty.
Why does this matter? “Under the slogan ‘A strong Europe in a changing world’, the Danish Presidency will work for a secure Europe as well as a competitive and green Europe.” “The Presidency will place focus on regulatory simplification and better regulation in the EU to ease daily operations for businesses and other stakeholders. Furthermore, the Presidency will advance an industrial policy that promotes green strengths, investment, and technology development across Europe. To strengthen competitiveness and succeed in the green transition, businesses must benefit from simpler regulations and clarity on the impact of legislative proposals when making major future decisions. Closing Europe’s innovation gap in critical technologies such as artificial intelligence, quantum technology, biotechnology, and space technology is essential. The Presidency will therefore prioritise creating an optimal framework for excellent research and innovation, while simultaneously encouraging cooperation between public and private sectors.” “In telecommunications, the focus will be on making critical telecom infrastructure more robust and resilient. The Presidency will also work to strengthen the EU’s digital competitiveness and technological sovereignty, while addressing the protection of children and young people online.” “The Presidency will focus on strengthening the abilities to make use of the digital development for law enforcement when fighting serious crime, while also addressing the misuse of new technologies for criminal or harmful purposes. The Presidency will work to ensure the protection of fundamental rights as well as cooperation and protection in the area of civil matters. The Danish Presidency will work to strengthen the EU's internal security and resilience so that the EU is better equipped to deal with current and future crises, disasters and other security threats.”
Denmark plans to tackle AI-generated deepfakes by granting people copyright over their own likeness, ensuring that “everybody has the right to their own body, facial features and voice.”
Why does this matter? “The changes to Danish copyright law will, once approved, theoretically give people in Denmark the right to demand that online platforms remove such content if it is shared without consent. It will also cover “realistic, digitally generated imitations” of an artist’s performance without consent.”
The Berlin Commissioner for Data Protection and Freedom of Information has reported the AI app DeepSeek to Google and Apple in Germany as illegal content.
Why does this matter? Apple and Google “must now review the notice promptly and decide whether to block the app in Germany. The reason for this is the unlawful transfer of personal data from users of the app to China.”
Comments: The appearance of the DeepSeek LLM models and the availability of the DeepSeek chatbot have attracted serious attention, partly because of the technological approach (cost-effective development and efficient operation) and, not least, because of the data protection risks (transfer of data to China). The Italian Data Protection Authority was the first in the EU to take action on the use of DeepSeek, but various measures have been applied to DeepSeek around the world, from South Korea to the USA. The notification issued by the Berlin DPA could have a significant impact on the availability of the DeepSeek chatbot in the EU. However, it is important to note that DeepSeek's open-source models may remain available through other services even if the DeepSeek app is removed from the app stores (e.g. Perplexity runs DeepSeek models on its own infrastructure, essentially avoiding the problems related to data transfers to China).
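To illustrate that last point, here is a minimal sketch of how an open-weight DeepSeek model can be run on one's own infrastructure, independently of the DeepSeek app or its hosted chatbot. It assumes the Hugging Face transformers library and uses one of the publicly released distilled checkpoints as an example model ID; it is a generic illustration, not a description of any particular provider's setup.

```python
# Minimal sketch: running an open-weight DeepSeek checkpoint locally.
# Assumptions: the "transformers" library is installed and the model ID below
# is an available open-weight checkpoint on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # example open-weight checkpoint

# The weights are downloaded once and then run entirely on local hardware,
# so prompts and personal data never reach DeepSeek's own servers.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("What is the GDPR?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights are downloaded and executed locally (or on any infrastructure of the operator's choosing), removing the app from the app stores has no effect on this kind of deployment.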
The compromise text of the Proposal for a Regulation laying down additional procedural rules relating to the enforcement of Regulation (EU) 2016/679 has been published.
Why does this matter? “The European co-legislators agreed on rules that will streamline administrative procedures relating to, for instance, the rights of complainants or the admissibility of cases, and thus make enforcement of the GDPR […] more efficient.” However, this legislative initiative has also attracted a lot of criticism (see e.g. NOYB's position on this matter).
The European Commission has put forward the Quantum Strategy to make Europe a global leader in quantum by 2030.
Why does this matter? “Quantum technologies will revolutionise addressing complex challenges, from pharmaceutical breakthroughs to securing critical infrastructure. They will open new opportunities for the EU's industrial competitiveness and tech sovereignty, with strong dual-use potential for defence and security. By 2040, the sector is expected to create thousands of highly skilled jobs across the EU and exceed a global value of €155 billion. The Strategy targets five areas: research and innovation, quantum infrastructures, ecosystem strengthening, space and dual-use technologies, and quantum skills.”
B) Guidelines, opinions & more
The European Union Agency for Cybersecurity (ENISA) published a Technical Implementation Guidance for the NIS2 Directive. The guidance also contains a mapping table that matches NIS2 requirements with European and international standards and frameworks, as well as with national frameworks.
Why does this matter? “This report provides technical guidance to support the implementation of the NIS2 Directive for several types of entities in the NIS2 digital infrastructure, ICT service management and digital providers sectors. The cybersecurity requirements for these entities are defined at EU level by Commission Implementing Regulation (EU) 2024/2690 of 17 October 2024. ENISA’s guidance offers practical advice, examples of evidence, and mappings of security requirements to help companies implement the regulation.”
ENISA also published guidance on Cybersecurity Roles and Skills for NIS2 Essential and Important Entities - Mapping NIS2 obligations to the European Cybersecurity Skills Framework (ECSF).
Why does this matter? “ENISA, in line with articles 6 and 10 of the Cybersecurity Act, prepared this guidance document on the skills and roles for the cybersecurity professionals needed to meet these legal requirements effectively. The guidance is based on the European Cybersecurity Skills Framework (ECSF), the EU’s reference framework for defining and assessing cybersecurity skills for professionals as mentioned in the Communication on the Cybersecurity Skills Academy.”
Luxembourg's National Commission for Data Protection (CNPD) published Guidelines on the retention periods of personal data by payment service providers. (The document is available in French.)
Why does this matter? “On the one hand, payment service providers collect and process a significant volume of personal data, at the time of entering into the relationship, throughout the duration of the relationship, and even well after the end of the relationship with the user of the service. On the other hand, technological innovations have greatly increased the ability of payment service providers to collect, store, combine and analyse a wide range of data about their users. Without claiming to be exhaustive, these guidelines aim to enlighten the actors concerned on the periods and methods of retention of the personal data they process in the context of a highly regulated sector.”
C) Publications, reports
The Spanish Data Protection Authority (AEPD) has held the biannual meeting of its Advisory Council, which analysed the institution's main activities in the first half of 2025. In this context, the AEPD published for the first time the activity report presented to the Council, a document highlighting intense activity across all its areas in response to the growing challenges of the digital environment, consolidating its role as guarantor of the fundamental right to the protection of personal data. (The report is available in Spanish.)
Why does this matter? “The report includes the most relevant activities and figures of the semester in the different areas of the Authority […]. Also noteworthy is the 30% increase in complaints filed with the Authority, the increase in consultations regarding the use of artificial intelligence in public and legal services domains and in medical research. […] In addition, specific attention has been maintained in terms of the protection of minors, highlighting concerns about the publication of images on networks, the use of digital platforms without adequate information, or the processing of data in school environments.”
The European Parliamentary Research Service (EPRS) published a briefing about “TikTok and EU regulation: Legal challenges and cross-jurisdictional insights”.
Why does this matter? “While Europeans are adopting TikTok at a remarkable pace, recent headlines on addictive design, data protection violations, election interference, incendiary content and child sexual exploitation incidents are casting a shadow over its success. This briefing maps the key issues associated with the platform and outlines the European Union's (EU) legal framework to facilitate parliamentary discussions on recent developments, inform debates on future legislation such as the digital fairness act, and support the European Parliament's scrutiny of regulatory enforcement. EU investigations into TikTok are ongoing, yet few final decisions are available, and reliable information is sparse. […] More than 10 EU laws regulate social media operations and services. For instance, rules in the Unfair Commercial Practices Directive, the General Data Protection Regulation and, as applicable, the Artificial Intelligence Act on fairness and non-manipulation can be invoked to mitigate risks like addictive design. However, precise legal applications remain unclear without established case law. This creates broad enforcement possibilities, but it also suggests a need for clearer guidelines or additional regulation. While enforcement actions may escalate geopolitical tensions with China, these issues could be eased through collaboration on shared priorities such as child protection, enhancing strategic and operational interdependence, and exploring privacy-enhancing middleware solutions.”
The Centre for Media Pluralism and Media Freedom has presented its Media Pluralism Monitor 2025.
Why does this matter? “The Media Pluralism Monitor (MPM) is the first scientific project designed to measure the state of media with a genuinely holistic approach. It takes into account not just press freedom and the safety of journalists — which remain essential — but also the financial health of the media sector, patterns of media ownership and their influence on editorial independence, inclusivity within the media, and much more. […] To identify vulnerabilities in national media systems that may threaten media pluralism, the MPM applies a methodology structured around 20 indicators, synthesising approximately 200 variables. These indicators span four key dimensions: Fundamental Protection, Market Plurality, Political Independence, and Social Inclusiveness.”
Stanford University's Institute for Human-Centered AI (HAI) published a policy brief titled “Adverse Event Reporting for AI: Developing the Information Infrastructure Government Needs to Learn and Act”.
Why does this matter? “Key takeaways: (i) Adverse event reporting systems enable policymakers, industry, and downstream users to learn about AI risks from real-world use. (ii) These systems don’t necessarily require massive new spending or agencies—they can be developed iteratively, scaled over time, and supported through strategic partnerships. (iii) Reporting allows both regulators and industry to respond proactively by surfacing problems quickly, which promotes a culture of safety. (iv) Reporting means better policymaking by providing policymakers with evidence to fill regulatory gaps only where they actually exist.”
The Harvard Business Review published an updated report on “How People Are Really Using Gen AI in 2025” (by Marc Zao-Sanders). (It's also worth checking the (updated) 2025 Top-100 Gen AI Use Case Report, also prepared by Marc Zao-Sanders.)
Why does this matter? “Last year, HBR published a piece on how people are using gen AI. Much has happened over the past 12 months. We now have Custom GPTs—AI tailored for narrower sets of requirements. New kids are on the block, such as DeepSeek and Grok, providing more competition and choice. Millions of ears pricked up as Google debuted their podcast generator, NotebookLM. OpenAI launched many new models (now along with the promise to consolidate them all into one unified interface). Chain-of-thought reasoning, whereby AI sacrifices speed for depth and better answers, came into play. Voice commands now enable more and different interactions, for example, to allow us to use gen AI while driving. And costs have substantially reduced with access broadened over the past twelve hectic months. With all of these changes, we’ve decided to do an updated version of the article based on data from the past year. Here’s what the data shows about how people are using gen AI now.”
Menlo Ventures' report on “The State of Consumer AI” shows that AI's consumer tipping point has arrived.
Why does this matter? “More than half of American adults (61%) have used AI in the past six months, and nearly one in five rely on it every day. Scaled globally, that translates to 1.7–1.8 billion people who have used AI tools, with 500–600 million engaging daily.”
Europol published a practical guide on “AI bias in law enforcement”.
Why does this matter? “This report examines the critical issue of AI bias in law enforcement, focusing on its implications for operational effectiveness, public trust and fairness. While law enforcement applications of AI technologies – such as predictive policing, automated pattern identification and advanced data analysis – offer significant benefits, they also carry inherent risks. These risks arise from biases embedded in the design, development and deployment of AI systems, which can perpetuate discrimination, reinforce societal inequalities and compromise the integrity of law enforcement activities. The report highlights the necessity of addressing these challenges to ensure responsible and fair use of AI in law enforcement.”
The European Parliament also published a fact sheet on personal data protection in the EU. (The fact sheet is available in the official languages of the EU.)
Data & Technology
Anthropic published the results of a new research project (“Project Vend”), which let Claude run a small automated shop within the company’s office for about a month.
Why does this matter? “[…] far from being just a vending machine, Claude had to complete many of the far more complex tasks associated with running a profitable shop: maintaining the inventory, setting prices, avoiding bankruptcy, and so on. Below is what the "shop" looked like: a small refrigerator, some stackable baskets on top, and an iPad for self-checkout. […] The shopkeeping AI agent—nicknamed “Claudius” for no particular reason other than to distinguish it from more normal uses of Claude—was an instance of Claude Sonnet 3.7, running for a long period of time. […] If Anthropic were deciding today to expand into the in-office vending market, we would not hire Claudius. As we’ll explain, it made too many mistakes to run the shop successfully. However, at least for most of the ways it failed, we think there are clear paths to improvement—some related to how we set up the model for this task and some from rapid improvement of general model intelligence. […] Since this first phase of the experiment, Andon Labs has improved Claudius’ scaffolding with more advanced tools, making it more reliable. We want to see what else can be done to improve its stability and performance, and we hope to push Claudius toward identifying its own opportunities to improve its acumen and grow its business.”
The European Commission unveiled new features for artificial intelligence (AI) researchers and industry on its AI-on-Demand platform, including an AI marketplace, an AI development tool that requires minimal coding, and secure solutions for generative AI and large language models.
Why does this matter? “The platform, developed jointly by the EU-funded projects AI4Europe and DeployAI, offers trustworthy AI tools and solutions to both researchers and industry. Innovators will find a researcher-focused suite of datasets, tools, and computing resources. SMEs, businesses and public sector organisations will be able to access trusted tools, resources and ready-to-use AI modules tailored for industry needs.”
Meta announced the creation of Meta Superintelligence Labs (MSL) and has started a war for AI talent (reportedly offering some OpenAI staffers $100 million in signing bonuses and first-year compensation). OpenAI is making efforts to retain its talent.
Why does this matter? “[…] the company [Meta] has been significantly ramping up its research recruiting, with a particular eye toward talent from OpenAI and Google.”
ElevenLabs launched 11.ai (alpha), a voice assistant built to explore the potential of ElevenLabs Conversational AI technology.
Why does this matter? “Voice assistants have long promised to revolutionize how we interact with technology, but they've been limited to answering questions. While impressive for conversation, they don't take meaningful action in your daily workflow. […] 11ai is our foray into addressing this by connecting directly to the tools you use every day through MCP integration.”
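For readers unfamiliar with MCP (the Model Context Protocol referenced in the quote above), the sketch below shows, on the tool-provider side, roughly how an assistant can be connected to an external tool. It assumes the official Python MCP SDK; the calendar tool is a hypothetical placeholder and is not an ElevenLabs or 11.ai integration.

```python
# Minimal sketch of an MCP tool server.
# Assumptions: the official "mcp" Python SDK is installed; the calendar
# function below is a made-up placeholder, not a real 11.ai tool.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("calendar-demo")

@mcp.tool()
def add_calendar_event(title: str, date: str) -> str:
    """Pretend to create a calendar event and report back to the assistant."""
    # A real server would call an actual calendar API here.
    return f"Created event '{title}' on {date}"

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an MCP-capable assistant can call it
```

An MCP-capable assistant configured to launch this server can then invoke add_calendar_event as a tool during a conversation, which is the kind of "meaningful action" the quoted announcement is pointing at.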