Federal agencies are approaching the December 16 deadline to publish updated AI use case inventories detailing their implementation of the risk management practices required by the Office of Management and Budget's memorandum M-24-10, "Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence." The memorandum emphasizes the importance of identifying and addressing potential harms associated with AI.
The Center for Democracy & Technology (CDT) has reviewed 38 publicly available compliance plans from federal agencies. The review highlighted inconsistencies in AI governance approaches that could lead agencies to under-identify potential harms or fail to address them. However, some agencies have adopted innovative governance models that may serve as examples for others.
Each agency's Chief AI Officer (CAIO) is tasked with "instituting the requisite governance and oversight process to achieve compliance with this memorandum and enable responsible use of AI in the agency." Compliance plans offer insight into how agencies intend to govern their AI use: how they solicit information about AI use cases, review those cases for impacts on the public's rights and safety, and ensure compliance with M-24-10.
The review found that approaches vary across agencies, reflecting differences in the maturity of their AI governance programs and in how they integrate new obligations into existing operations. Some agencies have established multi-phase processes that involve multiple expert reviews throughout an AI system's lifecycle. For instance:
- The Department of Housing and Urban Development uses "review gates" during deployment.
- The Department of Labor employs a "Use Case Impact Assessment Framework."
- The Department of Veterans Affairs requires approval from its AI Governance Council.
- The Department of State mandates independent reviews for significant changes.
Other agencies rely solely on their CAIOs, without cross-agency review processes, potentially leaving gaps in governance.
Some agencies integrate standardized decision-making processes into existing structures, while others create new systems tailored for AI technologies. For example:
- Agencies such as the Department of the Interior integrate AI governance into existing risk management programs.
- Others, such as the Department of Health and Human Services, have developed new systems that set standard procedures for their subcomponents.
Some agencies also implement oversight mechanisms that go beyond M-24-10's requirements, such as semi-annual reviews or audits of non-public use cases. Most, however, commit only to annual reviews, which may not suffice given the rapid pace of AI adoption.
M-24-10 emphasizes protecting the public's rights and safety, including by involving civil rights officials in identifying rights- and safety-impacting AI within agencies. While many agencies have appointed senior civil rights officials to their AI governance boards, additional steps are needed to embed these experts in decision-making processes.
Emerging practices show promise for enhancing AI governance effectiveness:
1. Partnering with academia: The Department of Labor collaborates with Stanford University.
2. Independent review processes: Established by the Department of Labor.
3. Centralized permissions: Implemented by the Department of the Treasury and the Social Security Administration.
OMB should encourage experimentation with these approaches, and the CAIO Council can facilitate the sharing of best practices across agencies.
In conclusion, while the compliance plans indicate progress toward meeting M-24-10's deadlines, their impact will depend on effective implementation. As the new year approaches, maintaining momentum is crucial to putting in place the safeguards needed to address the risks AI poses.