In this week's roundup, I'm gathering some of the most important and interesting data, technology & digital law news from the past week.
Legal and regulatory developments
A) New laws, regulatory proposals, enforcement actions
The European Commission proposed to cut EUR 400 million in annual administrative costs for companies. The proposal introduces a new category of companies, small mid-caps (SMCs), i.e. companies with fewer than 750 employees, which will gain access to certain existing SME benefits, such as specific derogations under the GDPR. (The text of the Commission's proposal is available here. The joint letter of the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) on the proposed simplification of the record-keeping obligation under the GDPR is available here. An open letter signed by 108 stakeholders, including representatives of civil society and industry, was published opposing the reopening and simplification of the GDPR. The Polish Data Protection Authority (UODO) has also published a summary of its position on the plans to simplify the rules of the GDPR, going even beyond the Commission's current proposal. Its main message is that “the problem is not the GDPR regulations, but their application in practice.”)
Why does this matter? “Today's proposal simplifies the record-keeping obligation in the GDPR, taking into account the specific needs and challenges of small and medium-sized companies and organisations, while ensuring that the rights of individuals are protected. The proposal exempts SMCs and organisations with fewer than 750 employees, in addition to SMEs. SMEs, SMCs and organisations with fewer than 750 employees will only be required to maintain records when the processing of personal data is “high risk” under the GDPR. By focusing record-keeping requirements on high-risk activities, organisations can devote their resources to areas where data protection is most critical, while maintaining high standards of data protection.”
Comment: Simplification and cutting administrative costs are certainly important objectives, but easing the obligation of small mid-cap companies* to keep proper records of processing activities (RoPA) is a rather symbolic measure. (*In fact, headcount is not a particularly good measure in this context.) Presumably this looked like “low-hanging fruit”: the amendment can be made without seriously engaging with the logic of data protection, while immediately demonstrating that steps have been taken against bureaucracy. On the other hand, narrowing the RoPA obligation affects one of the fundamental tools of data protection compliance, as no other data protection obligation can be fulfilled without up-to-date knowledge of ongoing processing activities. In addition, maintaining a RoPA does not in itself require extensive resources: an assessment of the processing must be carried out in any event, since otherwise it is impossible to determine whether a given processing operation is “high risk”, and “high risk” processing will still be subject to the record-keeping obligation.
The Italian Data Protection Authority (Garante) imposed a fine of EUR 5 million on US-based company Luka Inc., which manages the chatbot Replika, and launched an independent investigation to assess whether personal data are being properly processed by the generative AI system behind the service.
Why does this matter? According to the Authority, the company had failed to identify the legal basis for the data processing activities carried out through Replika. Moreover, Luka had provided a privacy policy that was inadequate in several respects. The Authority also found that the company had failed to implement age verification mechanisms, despite having declared that minors were excluded from potential users. Technical assessments revealed that the age verification system later implemented by the controller continues to be deficient in several respects. For these reasons, in addition to imposing a fine, the Authority ordered the company to bring its processing operations into compliance with the provisions of the GDPR. In its request for information, which marked the beginning of the new investigation, the Authority required the company to provide clarifications on the data processing operations carried out throughout the entire lifecycle of the generative AI system underpinning the Replika service. In particular, it requested details on risk assessments and the measures adopted to protect data during the development and training of the language model behind Replika, the categories and types of personal data processed, and whether anonymisation or pseudonymisation measures have been implemented.
Comment: In recent years, the Italian Data Protection Authority has been quite active in investigating the processing of personal data in connection with the use of AI. Among others, it initiated investigations against OpenAI (regarding ChatGPT and Sora), DeepSeek, Clearview AI, Foodinho and Deliveroo. It also imposed significant fines, such as EUR 15 million on OpenAI due to ChatGPT's non-compliance with the GDPR* and EUR 20 million on Clearview AI. (*On March 21, 2025, the Court of Rome ordered the precautionary suspension of the measure, subject to the provision of security.)
US President Trump signed the TAKE IT DOWN Act (“Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act”) into law.
Why does this matter? “This bill generally prohibits the nonconsensual online publication of intimate visual depictions of individuals, both authentic and computer-generated, and requires certain online platforms to promptly remove such depictions upon receiving notice of their existence. […] Violators are subject to mandatory restitution and criminal penalties, including prison, a fine, or both. Threats to publish intimate visual depictions of a subject are similarly prohibited under the bill and subject to criminal penalties.
Separately, covered platforms must establish a process through which subjects of intimate visual depictions may notify the platform of the existence of, and request removal of, an intimate visual depiction including the subject that was published without the subject’s consent. Covered platforms must remove such depictions within 48 hours of notification. Under the bill, covered platforms are defined as public websites, online services, or applications that primarily provide a forum for user-generated content.”
Comment: With the development of AI technology, the fight against deepfakes is becoming increasingly important. The regulation adopted in the USA is an important step in this regard, although the law has already drawn some criticism.
B) Guidelines, opinions & more
The EU Agency for Cybersecurity (ENISA) published a “Handbook for Cyber Stress Tests”.
Why does this matter? “ENISA developed this handbook as guidance for national or sectorial authorities overseeing cybersecurity and resilience of critical sectors, at the national level, regional or EU level under NIS 2 Directive. It could also be useful for other supervisory and national authorities under the sectorial regulations, such as those under Digital Operational Resilience Act (DORA) or the Critical Entities Resilience (CER) Directive.”
The Finnish DPA (Tietosuojavaltuutetun toimisto) published guidance for AI developers and deployers. (The guidance is available in Finnish.)
Why does this matter? The Finnish DPA has compiled information on “how an organisation must take data protection requirements into account when developing or deploying an AI system.”
The Danish Data Protection Authority (Datatilsynet) and the Agency for Digital Government (Digitaliseringsstyrelsen) published joint guidance for website and app developers using cookies and other similar technologies. (The guidance is available here in Danish.)
Why does this matter? “The guide "Use of cookies and similar technologies" is aimed at owners and providers of e.g. websites and apps. It reviews the most important rules in the Cookie Order and the General Data Protection Regulation, and contains practical advice and examples to make it easier to comply with the requirements. In addition, the guide goes through typical pitfalls an organization should be aware of.”
The Irish Data Protection Commission (DPC) published a statement on Meta's AI training practice. (NOYB sent a “cease and desist letter” to Meta over AI training and may start a class action against Meta. The German consumer organisations (led by the Verbraucherzentrale North Rhine-Westphalia) have also announced their intention to take action against Meta over AI training based on legitimate interest.)
Why does this matter? There are many concerns around Meta's plans to use adults' personal data to train Large Language Models in the EU/EEA. The DPC argues in the statement that “Meta has been responsive to the DPC’s requests during this process and as a result, Meta has implemented a number of significant measures and improvements, […]. As part of our ongoing monitoring, the DPC has required Meta to compile a report which, amongst other things, will set out an updated evaluation of the efficacy and appropriateness of the measures and safeguards it has introduced regarding the processing taking place. This report is expected in October 2025.”
C) Publications, reports
The International Working Group on Data Protection in Technology (IWGDPT), known as the “Berlin Group”, has published a working paper on “Emerging Neurotechnologies and data protection”. (The full paper is available here.)
[A related analysis on the topic, titled “Mind matters: Shaping the future of privacy in the age of neurotechnology” (author: Kristen Mathews), has recently been published by IAPP. Please also see the Wall Street Journal report titled “Coming to a Brain Near You: A Tiny Computer” on how the number of people with a brain-computer interface is set to double in the next 12 months. An analysis of neurotechnologies in the context of the EU AI Act by Nora Santalu is available here.]
Why does this matter? “While the accessibility of neurodata comes with positive uses, it also raises new ethical questions around human agency, human dignity and identity, augmentation and enhancement, beyond privacy and consent.”
Google published research on differential privacy on trust graphs. “Differential privacy (DP) is a mathematically rigorous and widely studied privacy framework that ensures the output of a randomized algorithm remains statistically indistinguishable even if the data of a single user changes.”
Why does this matter? “In this work, we introduce trust graph DP (TGDP), a new model for distributed DP which allows a finer-grained control of the trust assumption of each participant. Our TGDP model interpolates between central DP and local DP, allowing us to reason more generally about how trust relationships relate to algorithm accuracy in DP. […] Our definition may be of practical interest as platforms move towards varied models of trust, from data sharing initiatives in healthcare, to individual data sharing choices on social platforms.”
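To see the trust trade-off that TGDP generalises, it helps to compare the two endpoints it interpolates between: under central DP a trusted aggregator adds noise once to the final statistic, while under local DP each user perturbs their own value before sharing it, at a much larger accuracy cost. The following Python sketch is purely illustrative (the function names and parameters are my own, not from Google's paper) and estimates a mean of values in [0, 1] under both models using the Laplace mechanism:

```python
import math
import random

def laplace_noise(scale):
    # Sample from Laplace(0, scale) via the inverse-CDF method.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def central_dp_mean(values, epsilon, lo=0.0, hi=1.0):
    # Trusted aggregator: one noise draw on the final mean,
    # whose sensitivity is (hi - lo) / n.
    n = len(values)
    return sum(values) / n + laplace_noise((hi - lo) / (n * epsilon))

def local_dp_mean(values, epsilon, lo=0.0, hi=1.0):
    # No trusted party: each user perturbs their own value,
    # whose sensitivity is the full range (hi - lo).
    noisy = [v + laplace_noise((hi - lo) / epsilon) for v in values]
    return sum(noisy) / len(noisy)
```

With n users and privacy budget epsilon, the central-model error shrinks like 1/(n·ε) while the local-model error shrinks only like 1/(√n·ε); a trust graph lets an algorithm sit anywhere between these extremes depending on whom each participant trusts.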
The European Parliamentary Research Service (EPRS) published a briefing on cybersecurity measures in the context of RRF implementation.
Why does this matter? “The Recovery and Resilience Facility (RRF) is at the core of Next Generation EU (NGEU), the EU's recovery instrument. […] Digital transformation is among the RRF's core priorities, and one that is shared across EU Member States. An average of 26 % of RRF funding is dedicated to digital objectives in several policy areas, of which digital public services is the largest. […] The priority of enhancing cybersecurity and cyber resilience can be found across several national recovery and resilience plans as a separate reform or investment measure. However, in many of them, it is part of a broader measure addressing, for instance, the digitalisation of public administration, digital-related investment in research and development, investment in digital capacities and deployment of advanced technologies, or supporting small companies to reposition themselves with digital tools that take into account cybersecurity needs. Thus, while not always a manifest objective, cybersecurity considerations are an integral feature of many of the RRF digital measures found across the individual national plans. Implementation of these measures, as of the RRF more generally, is underway and gaining speed. The Commission's preliminary positive assessments of payments disbursed allow for an examination of the fulfilled implementation steps. Without being exhaustive, they offer an indication of cybersecurity developments in Member States that have been made possible with RRF funding and carried out in the first half of the RRF's lifetime.“
A new study has been published titled “AI Agents vs. Agentic AI: A Conceptual Taxonomy, Applications and Challenges” (authors: Ranjan Sapkota, Konstantinos I. Roumeliotis, Manoj Karkee).
Why does this matter? “This review critically distinguishes between AI Agents and Agentic AI, offering a structured conceptual taxonomy, application mapping, and challenge analysis to clarify their divergent design philosophies and capabilities. We begin by outlining the search strategy and foundational definitions, characterizing AI Agents as modular systems driven by LLMs and LIMs for narrow, task-specific automation. Generative AI is positioned as a precursor, with AI agents advancing through tool integration, prompt engineering, and reasoning enhancements. In contrast, agentic AI systems represent a paradigmatic shift marked by multi-agent collaboration, dynamic task decomposition, persistent memory, and orchestrated autonomy. […]”
Data & Technology
A recently published study found that when LLM “AI agents such as ChatGPT communicate in groups without outside involvement they can begin to adopt linguistic forms and social norms the same way that humans do when they socialise”. (See the full article about the research in the Guardian.)
Why does this matter? “Social conventions are the backbone of social coordination, shaping how individuals form a group. As growing populations of artificial intelligence (AI) agents communicate through natural language, a fundamental question is whether they can bootstrap the foundations of a society. Here, we present experimental results that demonstrate the spontaneous emergence of universally adopted social conventions in decentralized populations of large language model (LLM) agents. We then show how strong collective biases can emerge during this process, even when agents exhibit no bias individually. Last, we examine how committed minority groups of adversarial LLM agents can drive social change by imposing alternative social conventions on the larger population. Our results show that AI systems can autonomously develop social conventions without explicit programming and have implications for designing AI systems that align, and remain aligned, with human values and societal goals.”
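The dynamic the study describes, shared conventions crystallising out of repeated pairwise interactions with no central coordinator, echoes the classic “naming game” model from complexity science. The following is my own simplified, non-LLM sketch of that mechanism, not the study's actual code:

```python
import random

def naming_game(n_agents=20, rounds=20000, seed=1):
    """Repeated pairwise interactions: a speaker utters a name it knows;
    on a successful match both parties collapse their vocabulary to that
    name, otherwise the hearer learns it. A single shared convention
    typically emerges without any agent deciding the outcome."""
    random.seed(seed)
    vocab = [set() for _ in range(n_agents)]  # each agent's known names
    next_name = 0
    for _ in range(rounds):
        speaker, hearer = random.sample(range(n_agents), 2)
        if not vocab[speaker]:                # invent a new name if needed
            vocab[speaker].add(next_name)
            next_name += 1
        name = random.choice(sorted(vocab[speaker]))
        if name in vocab[hearer]:             # success: both commit to it
            vocab[speaker] = {name}
            vocab[hearer] = {name}
        else:                                 # failure: hearer adds it
            vocab[hearer].add(name)
    return vocab
```

With the default parameters the population almost always settles on a single name, which mirrors, in miniature, the spontaneous convention formation the researchers report for populations of LLM agents.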
Anthropic has launched a new research program focusing on “model welfare”, which asks questions such as, “Should we also be concerned about the potential consciousness and experiences of the models themselves? Should we be concerned about model welfare, too?”
Why does this matter? “A recent report from world-leading experts—including David Chalmers, arguably the best-known and most respected living philosopher of mind—highlighted the near-term possibility of both consciousness and high degrees of agency in AI systems, and argued that models with these features might deserve moral consideration.”
Google is introducing new AI features in Search.
Why does this matter? “AI in Search is making it easier to ask Google anything and get a helpful response, with links to the web. That’s why AI Overviews is one of the most successful launches in Search in the past decade. As people use AI Overviews, we see they’re happier with their results, and they search more often. In our biggest markets like the U.S. and India, AI Overviews is driving over 10% increase in usage of Google for the types of queries that show AI Overviews. This means that once people use AI Overviews, they’re coming to do more of these types of queries, and what’s particularly exciting is how this growth increases over time. And we’re delivering this at the speed people expect of Google Search — AI Overviews delivers the fastest AI responses in the industry.”
Comment: Google's decision to make AI Overviews broadly available after a short test period also shows the growing competition in the search market, where other AI-powered search tools, such as Perplexity AI, are gaining ground.
As I reported in last week's newsletter, Perplexity competes against tech giants such as Google and Microsoft-backed OpenAI, and it is in late-stage talks to raise USD 500 million at a USD 14 billion valuation.
Regeneron Pharmaceuticals, a biotechnology company, is acquiring 23andMe's assets, including its data, for USD 256 million following a bankruptcy auction.
Why does this matter? The deal is intended to ensure the continued protection of the highly sensitive genetic data of 23andMe's customers.