The European Union has introduced a pioneering framework for regulating artificial intelligence technologies through the EU Artificial Intelligence Act. This legislation includes specific provisions for general-purpose AI (GPAI) models, also known as foundation models. Although the law itself is finalized, it leaves the details of how providers can demonstrate compliance to be worked out post-adoption through a multi-stakeholder consultative process.
This Code of Practice initiative is led by the European Commission’s AI Office and involves GPAI model providers, academics, rightsholders, and civil society organizations like CDT Europe. The goal is to establish a final Code by May 2025 before the AI Act's GPAI provisions become enforceable in August 2025.
The Code of Practice will complement the AI Act. Adherence to it will be voluntary, but following its measures will give GPAI model providers a way to demonstrate compliance with their obligations under the Act until a harmonized standard is published.
CDT Europe expressed its honor at being selected to participate in developing this Code of Practice and emphasized its commitment to ensuring robust protection of fundamental rights.
The inclusion of GPAI models in the AI Act was prompted by the rising popularity of large language models like ChatGPT. These models present challenges due to their potential to produce illegal or inaccurate content, use copyrighted materials without permission, exhibit bias, and compromise privacy and data protection.
The AI Act uses a two-tiered system for GPAI models: default rules applicable to all models and specific rules for those presenting "systemic risks." Systemic risks are determined based on quantitative criteria or designation by the European Commission if a model significantly impacts public health, safety, security, or fundamental rights.
Codes of Practice have been used before by the European Commission but never in this capacity. The new Code aims to enable entities regulated by EU law to comply with legislation through its guidelines until harmonized standards are developed.
According to the AI Act, the Code can address any obligations set for GPAI model providers. It focuses on transparency and copyright issues, risk identification and assessment, risk mitigation, and internal governance. Once finalized, it requires approval from the European Commission.
Although adherence to the Code is not mandatory for compliance with obligations under the AI Act, it will provide legal certainty for GPAI model providers until August 2027, the deadline by which models already on the market must comply with the new regulations.
Civil society involvement is crucial to this process, and the AI Act explicitly invites stakeholder participation. Organizations like CDT Europe play an essential role in ensuring that the rules governing GPAI models are rigorously applied and that the process promotes transparency and accountability.
CDT Europe confirmed its official participation in developing the Code of Practice alongside other civil society organizations, aiming to ensure that advocates for fundamental rights and the public interest have their voices heard throughout the process.