The U.S. Department of Energy (DOE) and the U.S. Department of Commerce (DOC), through the National Institute of Standards and Technology (NIST), have announced a memorandum of understanding signed earlier this year to work together on safety research, testing, and evaluation of advanced artificial intelligence (AI) models and systems.
This collaboration is part of the Biden-Harris Administration’s strategy to ensure the safe, secure, and trustworthy development and use of AI. It aligns with the recent release of the first-ever National Security Memorandum on AI, which designated the U.S. AI Safety Institute (US AISI) within NIST as a central hub for government efforts on AI safety. The DOE will play a significant role in helping to understand and mitigate AI safety risks while improving the performance and reliability of AI models.
“There’s no question that AI is the next frontier for scientific and clean energy breakthroughs, which underscores the Biden-Harris Administration’s efforts to push forward scientific innovation in a safe and secure manner,” said U.S. Secretary of Energy Jennifer M. Granholm. “Across the federal government we are committed to advancing AI safety and today’s partnership ensures that Americans can confidently benefit from AI-powered innovation and prosperity for years to come.”
The agreement also facilitates joint research efforts and information sharing between departments, allowing DOE's National Laboratories to contribute their technical capacity and expertise to US AISI and NIST.
“By empowering our teams to work together, this partnership with the Department of Energy will undoubtedly help the U.S. AI Safety Institute and NIST advance the science of AI safety,” said U.S. Secretary of Commerce Gina Raimondo. “Safety is key to continued innovation in AI, and we have no time to waste in working together across government to develop robust research, testing, and evaluations to protect and advance essential national security priorities.”
Under the memorandum, the two departments aim to assess the implications of AI models for public safety, critical infrastructure, energy security, and national security. They plan to focus on classified evaluations of chemical and biological risks associated with advanced AI models, while also developing privacy-enhancing technologies that protect personal data.
These initiatives aim to build a foundation for a safe future in which innovative applications of AI can flourish.