The European AI Office recently released the third draft of the Code of Practice for general-purpose AI (GPAI) models. Scheduled for finalisation in May, the Code aims to complement the AI Act by setting out the commitments and measures GPAI model providers must follow to meet their obligations under the Act. The Centre for Democracy and Technology Europe (CDT Europe) has expressed disappointment with this latest draft, which largely excludes fundamental rights from mandatory risk assessments.
A key aspect of the Code is its systemic risk taxonomy, which specifies the risks that GPAI model providers should proactively assess and mitigate. CDT Europe, among others, has consistently advocated for improvements to this taxonomy to address known risks such as discrimination, privacy violations, and risks relating to child sexual abuse material and non-consensual intimate imagery. In the third draft, however, these fundamental rights risks have been relegated to a secondary list of optional considerations, while the primary taxonomy remains focused on existential threats such as loss of control and chemical or nuclear risks.
Laura Lazaro Cabrera, CDT Europe's Counsel and Director of the Equity and Data Programme, stated: “The removal of discrimination from the selected systemic risk list is a significant regression in the drafting process, and an alarming step backwards for the protection of fundamental rights. We emphasised in each round of feedback the importance of preserving and strengthening the discrimination risk, as well as including privacy risks, child sexual abuse material and non-consensual intimate imagery in the list.”
Lazaro Cabrera further commented: “Instead, the third draft confirms what many of us had feared – that consideration and mitigation of the most serious fundamental rights risks would remain optional for general-purpose AI model providers. Fundamental rights are not ‘add-ons’. They are a cornerstone of the European approach to AI regulation.”
CDT Europe also raised concerns that the draft actively discourages providers from assessing even the optional fundamental rights risks: providers are instructed to consider these risks only where they are reasonably foreseeable, and to select them for further assessment only if they relate specifically to high-impact capabilities carrying systemic risk. This change removes providers' incentive to address fundamental rights risks at all.
“It is not too late for the drafters to course-correct. But this draft is the closest we have to a final product, and it foreshadows a significant erosion of fundamental rights in the AI landscape,” added Lazaro Cabrera.