The EU’s Artificial Intelligence Act (AI Act) covers a broad taxonomy of actors, ranging from providers and deployers of AI systems to importers and distributors. The obligations of each of these actors have been the subject of extensive compliance coverage. Often lost in the conversation are the obligations the AI Act imposes on public authorities in the European Union, including the changes they may need to make when considering the use of AI.
Public authorities and entities acting on their behalf are not exempt from any of the Act’s obligations; rather, they are subject to additional ones. This brief explores key considerations for public authorities contemplating the deployment of AI.
**Providers vs deployers: a permeable divide**
Public authorities can be providers and/or deployers under the AI Act. While both roles carry clear, concurrent obligations, providers bear the largest share under the Act. Resource-conscious public authorities contemplating the use of AI may therefore prefer the comparatively lighter compliance burden placed on deployers, which in theory they can secure by acquiring an AI system rather than developing one themselves. However, authorities should consider that deployers of AI may nonetheless be categorized as “providers” if they:
- Make a substantial modification to a high-risk AI system such that it remains high-risk.
- Modify the intended purpose of an AI system that was not previously classified as high-risk in such a way that it becomes high-risk.
- Put their name or trademark on a high-risk AI system integrated into their products or services.
**Public authorities as deployers of high-risk AI systems**
Despite their potential roles as providers under the Act, public authorities will largely fall into the category of deployers. Deployers face two sets of obligations: general obligations and obligations specific to public authorities.
General obligations include:
- Following instructions for use, exercising human oversight over high-risk AI systems, and reporting risks.
- Ensuring input data is relevant and sufficiently representative.
- Observing approval processes for biometric identification used for law enforcement purposes.
- Notifying individuals when a high-risk AI system is used to make or support decisions concerning them.
- Providing explanations of such decision-making upon request.
- Disclosing certain uses of AI systems, except where nondisclosure is permitted by law.
Additional specific obligations for public authority deployers include:
- Refraining from using high-risk AI systems that have not been registered in the EU database managed by the European Commission.
- Undertaking fundamental rights impact assessments before deploying any high-risk AI system.
- Submitting specific information about their use of high-risk AI systems to the EU database.
**Checklist for Deployers**
In view of these varied obligations, CDT offers a three-step framework for public authorities contemplating the deployment of an AI system (an illustrative sketch of this triage follows the list):
1. **Assessment of risk level:** Establish whether the AI system is considered high-risk under Annex III or under the relevant Union legislation covered by Annex I.
2. **Ensure provider compliance:** Verify that the system is registered in the EU-wide database and that the instructions for use prepared by the provider meet the standards set in the Act before proceeding with any deployment.
3. **Assessment of impacts and capacity:** Conduct a fundamental rights impact assessment; assess readiness for human oversight; ensure mechanisms exist to inform individuals about decisions made using high-risk AI systems; develop processes for required disclosures; manage requests for explanations; field complaints related to fundamental rights harms; and notify workers if deploying an AI system in the workplace.
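To picture how an authority might work through this checklist in practice, the sketch below encodes it as a simple intake record. It is purely illustrative: nothing in it comes from the Act or from CDT’s framework beyond the three steps above, and every class, field, and function name (`AISystemCandidate`, `outstanding_actions`, and so on) is a hypothetical assumption, not a prescribed tool.

```python
from dataclasses import dataclass

@dataclass
class AISystemCandidate:
    """Hypothetical intake record for an AI system a public authority is evaluating."""
    name: str
    high_risk_under_annex_iii: bool        # step 1: listed Annex III use case?
    high_risk_under_annex_i: bool          # step 1: covered by Annex I legislation?
    registered_in_eu_database: bool        # step 2: provider registration verified?
    instructions_meet_act_standards: bool  # step 2: provider instructions adequate?
    fria_completed: bool = False           # step 3: fundamental rights impact assessment done?
    oversight_capacity_confirmed: bool = False   # step 3: human oversight readiness?
    notification_processes_ready: bool = False   # step 3: notification/explanation/complaint processes?

def outstanding_actions(c: AISystemCandidate) -> list[str]:
    """Return the checklist items still open before deployment can proceed."""
    if not (c.high_risk_under_annex_iii or c.high_risk_under_annex_i):
        return []  # not high-risk: the high-risk deployer obligations do not attach
    actions = []
    if not c.registered_in_eu_database:
        actions.append("verify registration in the EU database")
    if not c.instructions_meet_act_standards:
        actions.append("obtain provider instructions that meet the Act's standards")
    if not c.fria_completed:
        actions.append("conduct a fundamental rights impact assessment")
    if not c.oversight_capacity_confirmed:
        actions.append("confirm capacity for human oversight")
    if not c.notification_processes_ready:
        actions.append("stand up notification, explanation, and complaint processes")
    return actions

# Example: a high-risk system whose provider checks out, but whose step-3
# institutional preparations have not yet started.
candidate = AISystemCandidate(
    name="eligibility-screening-tool",
    high_risk_under_annex_iii=True,
    high_risk_under_annex_i=False,
    registered_in_eu_database=True,
    instructions_meet_act_standards=True,
)
for action in outstanding_actions(candidate):
    print(action)
```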
Deploying an AI system is complex in its own right: the system must be effective, cost-efficient, and compliant with existing law such as the GDPR. Given these challenges, alongside the new requirements of the AI Act, public authorities must adopt a careful approach that focuses on risks and the mitigations they require, while preparing institutionally to implement the mandated processes effectively.