In this week's roundup, I'm gathering some of the most important and interesting data, technology & digital law news from the past week.
Legal and regulatory developments
A) New laws, regulatory proposals, enforcement actions
The European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) have adopted a Joint Opinion on the Proposal for simplification measures for SMEs and SMCs, in particular the record-keeping obligation under Art. 30(5) GDPR.
Why does this matter? “The EDPB and the EDPS support the general objective of the Proposal to reduce the administrative burden for SMEs and SMCs as long as pursuing this objective does not result in lowering the protection of fundamental rights of individuals, in particular the fundamental right to protection of personal data. […] Regarding the record keeping obligation under Article 30 GDPR, the EDPB and the EDPS highlight that, in addition to facilitating ex-post compliance demonstration, records of processing activities constitute a very useful means to support compliance with several GDPR requirements. The EDPB and the EDPS therefore encourage enterprises and organisations employing fewer than 750 persons that do not engage in high-risk processing to choose the most appropriate methods to adequately support compliance with GDPR obligations and not have a negative impact on data subjects’ rights.”
The EDPB has also adopted a statement (“Helsinki Statement”) on “new initiatives to facilitate easier GDPR compliance, to enhance the dialogue with a broad range of stakeholders, to strengthen consistency and to develop cross-regulatory cooperation in the new digital regulatory landscape.”
Why does this matter? “These initiatives enable, in particular, micro, small and medium organisations; empower responsible innovation; and reinforce competitiveness in Europe. The EDPB members underlined the essential role of data protection in upholding European values and individuals’ fundamental rights, and supporting the human-centric use of personal data.”
The Irish Data Protection Commission (DPC) announced an inquiry into TikTok Technology Limited’s transfers of EEA users’ personal data to servers located in China.
Why does this matter? “[…] during that previous inquiry, TikTok maintained that transfers of EEA users’ personal data to China took place by way of remote access only and that EEA user data were not stored on servers located within China i.e. EEA user data were stored on servers located outside of China and were accessed remotely by TikTok staff from within China. Accordingly, the DPC’s decision of 30 April 2025 did not consider TikTok’s storage of EEA users’ personal data on servers located in China. However, in April 2025, TikTok informed the DPC of an issue that it had discovered in February 2025, namely that limited EEA user data had in fact been stored on servers in China, contrary to TikTok’s evidence to the previous inquiry. […] The inquiry will consider the following provisions of the GDPR: Articles 5(2) (accountability), 13(1)(f) (transparency information in relation to third country transfers), 31 (obligation to cooperate with the supervisory authority) and Chapter V GDPR (compliance with the relevant requirements for third country transfers).”
B) Guidelines, opinions & more
The General-Purpose AI Code of Practice has been published. More information on the code is available in this dedicated Q&A. The Code has three chapters: Transparency, Copyright, and Safety and Security.
Why does this matter? “The General-Purpose AI (GPAI) Code of Practice is a voluntary tool, prepared by independent experts in a multi-stakeholder process, designed to help industry comply with the AI Act’s obligations for providers of general-purpose AI models. […] The Code was published on July 10, 2025. In the following weeks, Member States and the Commission will assess its adequacy. Additionally, the code will be complemented by Commission guidelines on key concepts related to general-purpose AI models, to be published still in July. […] The Chapters on Transparency and Copyright offer all providers of general-purpose AI models a way to demonstrate compliance with their obligations under Article 53 AI Act. The Chapter on Safety and Security is only relevant to the small number of providers of the most advanced models, those that are subject to the AI Act's obligations for providers of general-purpose AI models with systemic risk under Article 55 AI Act.”
The International Organization for Standardization released the ISO/IEC 42006:2025 standard, which sets out additional requirements for bodies that audit and certify artificial intelligence management systems (AIMS) according to ISO/IEC 42001. It builds on ISO/IEC 17021-1 and ensures that certification bodies operate with the competence and rigour necessary to assess organisations developing, deploying or offering AI systems.
Why does this matter? “AI systems present unique challenges in areas like ethics, data quality, risk, and transparency. To certify that an organisation responsibly manages these challenges, auditors themselves need specialised knowledge and clear rules for conducting assessments. ISO/IEC 42006 ensures that the audit and certification of AI management systems is performed consistently and credibly, giving customers and stakeholders confidence that certified organisations meet the expectations set out in ISO/IEC 42001.”
The French Data Protection Authority (CNIL) has published the final version of its guide on data transfer impact assessments.
C) Publications, reports
The European Parliamentary Research Service (EPRS) published a briefing titled “Revisiting the GDPR: Lessons from the United Kingdom experience”.
Why does this matter? “[…] The UK's reform proposals, aimed at boosting economic growth and innovation, focused on reducing administrative burdens, promoting data reuse for research, and facilitating artificial intelligence development. However, critics warned these reforms unduly weaken fundamental rights and jeopardise the UK's adequacy status with the European Union (EU). As the United States pressures the EU to adopt a more lenient regulatory stance, and industry voices call for a wider review of the General Data Protection Regulation, the UK's experience offers cautionary lessons. Any broader reform effort that is not carefully designed and fails to account for the public's strong data protection expectations will likely face significant opposition from civil society.”
A new paper was published on the challenges of Agentic AI, focusing on those posed by cross-border Agentic AI in the banking sector: “Cross-Border Agentic AI and the EU AI Act's Global Reach” (by Theodoros Karathanasis).
Why does this matter? “Agentic Artificial Intelligence is transforming banking operations from reactive systems to autonomous decision-makers. However, cross-border use raises regulatory challenges. It is evident that even activities which are categorised as “wholly offshore”, such as data processing or model training, may be subject to the provisions of the EU AI Act, should they be found to be part of a broader strategy that has substantial effects in the EU. The opaque logic of many AI systems further complicates the establishment of accountability and traceability across borders. A further question to be addressed is how Agentic AI is treated under the EU AI Act when its operations span multiple countries.”
UNESCO published a Playbook on Red Teaming artificial intelligence for social good.
Why does this matter? “This Red Teaming PLAYBOOK provides a step-by-step guide to equip organizations and communities with the necessary tools to design and implement their own Red Teaming initiatives for social good. Based on UNESCO’s own Red Teaming experience testing AI for gender bias, it offers clear, actionable guidance on how to run structured evaluations of AI systems for both technical and non-technical communities. Making AI testing tools like this Red Teaming PLAYBOOK accessible to all gives communities the power to engage in responsible technological developments and actively advocate for actionable change that promotes social good and helps mitigate the risks of, for example, technology-facilitated gender-based violence (TFGBV), the use of technology to enact or mediate violence that disproportionately affects women and girls.”
A new study, commissioned by the European Parliament’s Policy Department for Justice, Civil Liberties and Institutional Affairs at the request of the Committee on Legal Affairs, was published on Generative AI and Copyright.
Why does this matter? “This study examines how generative AI challenges core principles of EU copyright law. It highlights the legal mismatch between AI training practices and current text and data mining exceptions, and the uncertain status of AI-generated content. These developments pose structural risks for the future of creativity in Europe, where a rich and diverse cultural heritage depends on the continued protection and fair remuneration of authors. The report calls for clear rules on input/output distinctions, harmonised opt out mechanisms, transparency obligations, and equitable licensing models. To balance innovation and authors’ rights, the European Parliament is expected to lead reforms that reflect the evolving realities of creativity, authorship, and machine generated expression.”
Anthropic proposed a “Frontier Model Transparency Framework”.
Why does this matter? “Frontier AI development needs greater transparency to ensure public safety and accountability for the companies developing this powerful technology. AI is advancing rapidly. While industry, governments, academia, and others work to develop agreed-upon safety standards and comprehensive evaluation methods—a process that could take months to years—we need interim steps to ensure that very powerful AI is developed securely, responsibly, and transparently. We are therefore proposing a targeted transparency framework, one that could be applied at the federal, state, or international level, and which applies only to the largest AI systems and developers while establishing clear disclosure requirements for safety practices. Our approach deliberately avoids being heavily prescriptive. We recognize that as the science of AI continues to evolve, any regulatory effort must remain lightweight and flexible. It should not impede AI innovation, nor should it slow our ability to realize AI's benefits—including lifesaving drug discovery, swift delivery of public benefits, and critical national security functions. Rigid government-imposed standards would be especially counterproductive given that evaluation methods become outdated within months due to the pace of technological change.”
Data & Technology
OpenAI is set to release a web browser in a challenge to Google Chrome.
Why does this matter? “The browser is slated to launch in the coming weeks, three of the people said, and aims to use artificial intelligence to fundamentally change how consumers browse the web. It will give OpenAI more direct access to a cornerstone of Google’s success: user data. If adopted by the 400 million weekly active users of ChatGPT, OpenAI’s browser could put pressure on a key component of rival Google’s ad-money spigot. Chrome is an important pillar of Alphabet’s ad business, which makes up nearly three-quarters of its revenue, as Chrome provides user information to help Alphabet target ads more effectively and profitably, and also gives Google a way to route search traffic to its own engine by default.”
Ukraine is set to become the first country in Europe to launch direct-to-cell mobile services via Starlink, following an agreement between Kyivstar and SpaceX.
Why does this matter? “The new service will allow standard mobile phones to connect directly to Starlink’s satellite network, bypassing traditional cell towers, and will be rolled out in two phases. Over-the-top (OTT) messaging services such as WhatsApp and Signal are expected to go live by the end of 2025, with full mobile satellite broadband data and voice services to follow in early 2026.”
Nvidia became the first publicly traded company to surpass a $4 trillion market valuation.
Why does this matter? “The ravenous appetite for Nvidia’s chips is the main reason that the company’s stock price increased 10-fold since early 2023, catapulting its market value from about $400 billion to $4 trillion.”