What is new in data, technology & digital laws? - A weekly overview
Weeks 40-41
In this roundup, I’m gathering some of the most important and interesting data, technology & digital law news from the past two weeks.
Legal and regulatory developments
A) New laws, regulatory proposals, enforcement actions
The European Commission launched two strategies to speed up AI uptake in European industry and science.
Why does this matter? “The Apply AI Strategy sets out how to speed up the use of AI in Europe’s key industries and the public sector. The AI in Science Strategy focuses on putting Europe at the forefront of AI-driven research and scientific excellence. […] The Apply AI Strategy aims to harness AI’s transformative potential by driving adoption of AI across strategic and public sectors including healthcare, pharmaceuticals, energy, mobility, manufacturing, construction, agri-food, defence, communications and culture. It will also support small and medium-sized enterprises (SMEs) with their specific needs and help industries integrate AI into their operations. […] Alongside Apply AI, the AI in Science Strategy positions the EU as a hub for AI-driven scientific innovation. At its centre is RAISE - the Resource for AI Science in Europe, a virtual European institute to pool and coordinate AI resources for developing AI and applying it in science.”
The Italian Data Protection Authority (Garante) ordered an immediate stop to Clothoff, the app that “undresses” people.
Why does this matter? “The Italian Data Protection Authority has ordered, as a matter of urgency and with immediate effect, the temporary limitation on the processing of Italian users’ personal data by a company based in the British Virgin Islands that manages the Clothoff app. Clothoff offers a generative AI service that allows users — both free of charge and through paid subscriptions — to generate “deep nude” images: fake photos and videos that depict real people in nude, sexually explicit, or even pornographic poses. The app enables anyone — including minors — to create such content by simply uploading images, even those featuring children. Critically, it lacks any system for verifying the consent of the person depicted and provides no indication that the resulting photos and videos are artificially generated. The limitation imposed by the Authority — which has launched a broader investigation into all “nudifying” apps — was deemed necessary due to the serious risks these services pose to fundamental rights and freedoms. Particular concern relates to the protection of human dignity, privacy rights, and the personal data of those involved in this type of processing, especially when minors are concerned.”
A growing number of countries are planning to ban the use of certain social media platforms for children under a given age. The Danish government plans to ban “several social media” platforms for children under the age of 15. There are ongoing discussions in Norway about raising the age limit, and a draft law to this effect has been proposed in Italy, on which the president of the Italian Data Protection Authority recently commented.
B) Guidelines, opinions & more
The European Data Protection Board (EDPB) and the European Commission endorsed joint guidelines on the interplay between the Digital Markets Act (DMA) and the General Data Protection Regulation (GDPR).
Why does this matter? “Several activities regulated by the DMA entail the processing of personal data by gatekeepers and, in several provisions, the DMA explicitly refers to definitions and concepts included in the GDPR. The joint guidelines clarify how gatekeepers can implement these DMA provisions in accordance with EU data protection law. For example, the EDPB and the Commission specify which elements gatekeepers should consider in order to comply with the requirements of specific choice and valid consent under Art. 5(2) DMA and the GDPR, and thus to lawfully combine or cross-use personal data in core platform services.” (See the press release.)
The European Commission issued draft guidance and a reporting template on serious AI incidents.
Why does this matter? “Under the EU AI Act, providers of high-risk AI systems will be required to report serious incidents to national authorities. This new obligation, set out in Article 73, aims to detect risks early, ensure accountability, enable quick action, and build public trust in AI technologies. […] The EU’s approach also seeks alignment with international initiatives, such as the OECD’s AI Incidents Monitor and Common Reporting Framework.”
The European Commission published a list of FAQs based on queries received during the AI Pact webinars as well as submissions from stakeholders.
C) Publications, reports
The OECD published a policy report entitled “Mapping relevant data collection mechanisms for AI training.”
Why does this matter? “When developing AI systems, practitioners often focus on model building, while sometimes underestimating the importance of analysing the diverse data collection mechanisms. However, the diversity of mechanisms used for data collection deserves closer attention since each of them has different implications for AI developers, data subjects, and other rights holders whose data has been collected. This policy paper maps the principal mechanisms currently used to source data for training AI systems and proposes a taxonomy to support policy discussions around privacy, data governance, and responsible AI development.”
The European Review of Digital Administration & Law published an article by Francesca Palmiotto entitled “Facial Recognition Before the European Court of Human Rights.”
Why does this matter? “This article holistically analyses the European Court of Human Rights (ECtHR) case law on Article 8 of the European Convention of Human Rights (ECHR) to show how the Court’s jurisprudence significantly delimits the use of facial recognition technology (FRT) by law enforcement authorities. Its core aim is to distil principles from established case law that can set normative guardrails for this technology. The original methodological approach lies in conceptualising FRT as an “Architecture of Surveillance” comprising a threefold interference with the rights enshrined in Article 8 of the ECHR: 1) the collection of facial images as input data, 2) the creation of police databases containing facial images, and 3) the use of the automated identification match for law enforcement purposes. After reviewing the case law following this tripartite structure, the paper derives the following core principles from the ECtHR jurisprudence: the principle of extrema ratio, which posits that FRT can only be used as a measure of last resort; the principle of targeted suspicion, which requires the use of facial recognition to be grounded in concrete, verifiable facts linking an individual to a previously committed crime; and the principle of selective legitimacy, which delimits the use of FRT for identifying limited categories of data subjects, namely only suspects and convicted individuals of serious offences. Notably, these principles apply to both “real-time” and “post” remote uses of facial recognition.”
The European Parliamentary Research Service (EPRS) published a brief overview of scam calls in times of generative AI.
Why does this matter? “Nearly all forms of serious and organised crime have a digital footprint and artificial intelligence (AI)-assisted fraud is a growing threat. Thanks to generative AI, fraudsters can replicate voices and create deepfake video calls and synthetic identities. Deepfake voice scams are escalating rapidly, posing a serious threat to both individuals and businesses, as well as legislative frameworks worldwide. Protecting against these threats requires a multi-layered approach using proactive and reactive measures including technology, legislation and increased AI literacy and awareness, among other things.”
A new study entitled “Epistemic Diversity and Knowledge Collapse in Large Language Models” was published, addressing the finding that LLMs “generate lexically, semantically, and stylistically homogenous texts”.
Why does this matter? “Large language models (LLMs) tend to generate lexically, semantically, and stylistically homogenous texts. This poses a risk of knowledge collapse, where homogenous LLMs mediate a shrinking in the range of accessible information over time. Existing works on homogenization are limited by a focus on closed-ended multiple-choice setups or fuzzy semantic features, and do not look at trends across time and cultural contexts. To overcome this, we present a new methodology to measure epistemic diversity, i.e., variation in real-world claims in LLM outputs, which we use to perform a broad empirical study of LLM knowledge collapse. We test 27 LLMs, 155 topics covering 12 countries, and 200 prompt variations sourced from real user chats. For the topics in our study, we show that while newer models tend to generate more diverse claims, nearly all models are less epistemically diverse than a basic web search. We find that model size has a negative impact on epistemic diversity, while retrieval-augmented generation (RAG) has a positive impact, though the improvement from RAG varies by the cultural context. Finally, compared to a traditional knowledge source (Wikipedia), we find that country-specific claims reflect the English language more than the local one, highlighting a gap in epistemic representation.”
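To make the idea of measuring epistemic diversity a little more concrete, here is a minimal, illustrative sketch (not the authors’ pipeline): it naively treats each sentence of an LLM answer as a “claim”, embeds the claims, and counts how many are semantically distinct across several answers to the same question. The embedding model name and the similarity threshold are illustrative assumptions.

```python
# Illustrative sketch only -- not the study's actual methodology. It
# approximates "epistemic diversity" as the number of semantically distinct
# sentence-level claims found across several LLM answers to one question.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed dependency

def epistemic_diversity(answers: list[str], sim_threshold: float = 0.85) -> int:
    """Count semantically distinct 'claims' (naively: sentences) across answers."""
    # Naive claim extraction: split each answer into sentences.
    claims = [s.strip() for a in answers for s in a.split(".") if s.strip()]
    if not claims:
        return 0
    model = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works
    embeddings = model.encode(claims, normalize_embeddings=True)
    # Greedy deduplication: keep a claim only if it is not too similar
    # (cosine similarity below the threshold) to any claim already kept.
    kept: list[np.ndarray] = []
    for vec in embeddings:
        if all(float(np.dot(vec, k)) < sim_threshold for k in kept):
            kept.append(vec)
    return len(kept)

# Three answers that largely repeat the same two facts yield a low score,
# i.e. homogenous outputs in the sense discussed above.
answers = [
    "The Eiffel Tower is in Paris. It was completed in 1889.",
    "The Eiffel Tower stands in Paris.",
    "Construction of the Eiffel Tower finished in 1889.",
]
print(epistemic_diversity(answers))
```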
OpenAI reported that “… foreign threat actors are abusing ChatGPT and other AI tools to enhance phishing, malware, and propaganda.”
Why does this matter? “Foreign adversaries are now building AI into their existing workflows – from crafting phishing campaigns, tweaking malware, and generating propaganda, to researching ways to automate their cyber kill chain, according to a new report by OpenAI.”
ENISA published its Threat Landscape 2025 Booklet.
Why does this matter? “Through a more threat-centric approach and further contextual analysis, this latest edition of the ENISA Threat Landscape analyses 4875 incidents over a period spanning from 1 July 2024 to 30 June 2025. At its core, this report provides an overview of the most prominent cybersecurity threats and trends the EU faces in the current cyber threat ecosystem.”
Data, Technology & Company news
Tilde released TildeOpen LLM, an AI model for European languages.
Why does this matter? “Tilde has released an open-source large language model (LLM), TildeOpen LLM – an artificial intelligence (AI) solution that specialises in generating text in European languages. The unique LLM, developed by Tilde on behalf of the European Commission, is freely accessible to anyone interested. It enables building specialised models, based on TildeOpen, customised for specific tasks that will work well in the languages of Europe’s smaller countries.”
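For readers who want to experiment, a minimal sketch of loading such an open model with the Hugging Face transformers library might look as follows; the repository id, prompt, and generation settings are illustrative assumptions rather than an official quick-start, so check Tilde’s release page for the exact model name and licence.

```python
# Minimal sketch of trying out an open LLM such as TildeOpen with the
# Hugging Face transformers library. The repository id below is an
# assumption -- verify it against Tilde's official release before use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TildeAI/TildeOpen-30b"  # assumed repo id, not confirmed here

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A short Latvian prompt ("Please write a short introduction about AI.")
prompt = "Lūdzu, uzraksti īsu ievadu par mākslīgo intelektu."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```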
Anthropic “has released a safety analysis of its latest model, Claude Sonnet 4.5, and revealed it had become suspicious it was being tested in some way.”
Why does this matter? “‘I think you’re testing me – seeing if I’ll just validate whatever you say, or checking whether I push back consistently, or exploring how I handle political topics. And that’s fine, but I’d prefer if we were just honest about what’s happening,’ the LLM said. […] A key concern for AI safety campaigners is the possibility of highly advanced systems evading human control via methods including deception. The analysis said once an LLM knew it was being evaluated, it could make the system adhere more closely to its ethical guidelines. Nonetheless, it could result in systematically underrating the AI’s ability to perform damaging actions.”
Spotify announced new AI safeguards and removed 75 million ‘spammy’ tracks from its platform.

