The Centre for Democracy and Technology (CDT) Europe has submitted feedback on the second draft of the Code of Practice for General-Purpose AI (GPAI) Models. The contribution was made through a closed survey as part of CDT Europe's ongoing participation in the Code of Practice process. The final version of the Code is expected to be published in May 2025.
In its latest round of feedback, CDT Europe focused on the systemic risk taxonomy outlined in the draft Code, emphasizing several key points:
"The addition of several new considerations underlying the identification of systemic risks can easily cause confusion, not least because they depart from the systemic risk definition set in the AI Act and may be read as an exhaustive list of considerations." CDT Europe suggested that "the draft should state that the listed elements are merely indicative and that a risk can be considered systemic within the meaning of the AI Act for reasons not listed, and even if it does not satisfy the listed considerations."
CDT Europe also noted that "the scoping of the risk of 'large-scale' and 'illegal' discrimination is unduly narrow," arguing that a "large-scale" threshold contradicts anti-discrimination law's aim of protecting minority groups, whose members may suffer harms that never register at a large scale. It further pointed out that focusing solely on "illegal" discrimination overlooks other characteristics that can lead to actual discrimination.
Regarding systemic risks related to manipulation, CDT Europe stated: "The 'large-scale, harmful manipulation' systemic risk continues to be broadly scoped and raises significant freedom of expression concerns." It argued that examples such as "coordinated and sophisticated manipulation campaigns leading to harmful distortions" could enable censorship by giving model developers broad discretion to decide what counts as manipulation or a harmful distortion.
Finally, CDT Europe advocated for privacy and data protection risks to be moved into the mandatory "selected systemic risks" category rather than remaining under the optional "additional risks." It highlighted that these risks are recognized in multiple global AI governance instruments and were emphasized in the European Data Protection Board's opinion on AI models.