Jason Oxman, President and Chief Executive Officer, Information Technology Industry Council
WASHINGTON – Today, global tech trade association ITI released a first-of-its-kind set of consensus tech sector practices that companies are using to develop and deploy AI technology safely and securely, with the aim of building trust with consumers.
ITI’s AI Accountability Framework defines responsibilities across the entire AI ecosystem, outlining steps AI developers, deployers, and integrators—a newly defined term for an intermediate actor in the supply chain—are taking to address high-risk AI uses, including for frontier AI models. It also introduces the concept of auditability, where an organization retains documentation of risk assessments to increase transparency in AI systems. ITI’s AI Accountability Framework can inform both governments looking to develop AI policies and organizations seeking to advance their AI risk management practices.
“The technology industry appreciates the important role that consumer trust plays in advancing the adoption of AI and furthering innovation. ITI’s AI Accountability Framework serves to deepen that trust by detailing practices that developers, deployers and integrators are taking to increase AI safety and mitigate risk, and is a guide that policymakers can build on as they contemplate approaches to AI governance,” said ITI’s Vice President of Policy Courtney Lang.
The Framework details seven practices being used by actors across the AI ecosystem:
- Conducting early and continuous risk and impact assessments throughout the AI development lifecycle, which can help an organization address specific risks and make more informed decisions about how an AI deployment might impact different groups;
- Testing frontier models to identify and address flaws and vulnerabilities prior to release;
- Documenting and sharing information about the AI system with others in the AI value chain, allowing those who are integrating or deploying AI systems to better understand the system and prior risk management activities;
- Undertaking explanation and disclosure practices so that end users have a basic understanding of the AI system and know when they are interacting with one;
- Using secure, accurate, relevant, complete, and consistent training data, which can help mitigate biased outputs and produce consistent results across applications;
- Ensuring that AI systems are secure by design to protect end users;
- Appointing AI Risk Officers and training employees who interact with or use AI systems.
Read ITI’s full AI Accountability Framework here.
This Framework is the latest in ITI’s series of policy guides charting key issues in artificial intelligence:
- April 2024: Building an AI-Powered Government: A Blueprint for U.S. Federal, State, and Local Policymakers
- January 2024: Authenticating AI-Generated Content: Exploring Risks, Techniques & Policy Recommendations
- August 2023: Understanding Foundation Models & the AI Value Chain: ITI’s Comprehensive Policy Guide