The third draft of the General-Purpose AI Code of Practice was released last week. CDT Europe has expressed disappointment with recent changes made to the systemic risk taxonomy in this draft.
The AI Act sets obligations for general-purpose AI (GPAI) models, especially those posing systemic risks. A model can be designated as posing systemic risk either on a case-by-case basis using specific criteria, or by presumption if its training compute exceeds a set threshold (currently 10^25 floating-point operations). Providers of GPAI models identified as posing systemic risks must undertake additional risk assessment and mitigation efforts. The Act itself does not specify which risks must be addressed; instead, it delegates that task to the Code of Practice through the Code's systemic risk taxonomy.
Incorporating fundamental rights risks into the Code's systemic risk taxonomy is essential to ensuring that GPAI model providers assess and mitigate potential threats to those rights. Since its first draft, the Code has taken a two-tiered approach to systemic risks: risks that providers must assess, listed in Appendix 1.1, and risks they may optionally consider, listed in Appendix 1.2. Most fundamental rights risks sit in the optional tier in Appendix 1.2, and the latest draft adds illegal large-scale discrimination to that optional list.
This analysis reviews the implications of how the systemic risk taxonomy is currently structured and responds to some of the justifications the Code offers for its latest version.
Read our full analysis.