WASHINGTON – Today, global tech trade association ITI provided recommendations on how the U.S. government can enhance AI safety, risk management, and responsible development, in response to a consultation by the National Institute of Standards and Technology's (NIST) AI Safety Institute (AISI) titled "Managing Misuse Risk for Dual-Use Foundation Models."
Foundation models are a class of AI systems that underpin everyday tools such as internet search, photo editing, translation, ridesharing, and chatbots. They also have the potential to address significant challenges, from shortening research and development cycles in medicine to improving access to education.
“In order to advance critical AI safety work, stakeholders across the AI ecosystem need to have a consistent understanding of misuse risks and ways to address them, especially as they continue to evolve,” said ITI Vice President of Policy Courtney Lang. “By incorporating the tech industry’s feedback, NIST can strengthen its guidance document and provide a playbook for stakeholders, ensuring consistency, bolstering accountability, and mitigating risks for consumers and businesses.”
Building on policy recommendations introduced in its "Understanding Foundation Models & the AI Value Chain: ITI’s Comprehensive Policy Guide," ITI urged NIST’s AISI to:
- Develop additional technical red-teaming guidance for dual-use foundation models to help organizations consistently evaluate whether malicious actors might bypass AI system safeguards;
- Consider various actors’ roles, responsibilities, and capabilities in the AI value chain and clarify within the guidance where responsibility might be shared;
- Detail what information organizations should publicize, and to whom, in order to meet transparency and disclosure objectives.
In addition to these points, ITI's submission proposes key definitions of “risk assessment” and “impact assessment” that policymakers should unify around in order to ensure globally consistent AI policy approaches.
Last month, Courtney Lang published a TechWonk blog outlining her initial analysis of NIST’s "Managing Misuse Risk for Dual-Use Foundation Models" and offering feedback on how the guidance could more effectively target AI misuse. Additionally, ITI has released several policy guides addressing key AI issues:
- July 2024: AI Accountability Framework
- April 2024: Building an AI-Powered Government: A Blueprint for U.S. Federal, State, and Local Policymakers
- January 2024: Authenticating AI-Generated Content: Exploring Risks, Techniques & Policy Recommendations
- August 2023: Understanding Foundation Models & the AI Value Chain: ITI’s Comprehensive Policy Guide