Dr. Laurie E. Locascio, Under Secretary of Commerce for Standards and Technology

Commerce Department announces new tools following executive order on safe development of artificial intelligence


The U.S. Department of Commerce announced the release of new guidance and software to enhance the safety, security, and trustworthiness of artificial intelligence (AI) systems. This announcement comes 270 days after President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development of AI.

The National Institute of Standards and Technology (NIST) has issued three final guidance documents that were initially released for public comment in April. Additionally, a draft guidance document from the U.S. AI Safety Institute aims to help mitigate risks associated with generative AI and dual-use foundation models. NIST also introduced a software package designed to assess how adversarial attacks can impact AI system performance. Furthermore, the U.S. Patent and Trademark Office (USPTO) updated its guidance on patent subject matter eligibility concerning critical and emerging technologies, including AI. The National Telecommunications and Information Administration (NTIA) delivered a report to the White House examining the risks and benefits of large AI models with widely available weights.

“Under President Biden and Vice President Harris’ leadership, we at the Commerce Department have been working tirelessly to implement the historic Executive Order on AI,” said U.S. Secretary of Commerce Gina Raimondo. “Today’s announcements demonstrate our commitment to giving AI developers, deployers, and users the tools they need to safely harness the potential of AI while minimizing its associated risks.”

NIST’s releases cover various aspects of AI technology. Two of the new items are an initial public draft from the U.S. AI Safety Institute intended to help developers evaluate risks from generative AI and dual-use foundation models, and a testing platform designed to measure how certain attacks can degrade an AI system's performance.

“For all its potentially transformational benefits, generative AI also brings risks that are significantly different from those we see with traditional software,” said Laurie E. Locascio, Under Secretary of Commerce for Standards and Technology and NIST Director.

USPTO’s updated guidance aims to help personnel determine subject matter eligibility under patent law for AI inventions by clarifying how such claims should be evaluated.

“The USPTO remains committed to fostering and protecting innovation in critical and emerging technologies,” said Kathi Vidal, Under Secretary of Commerce for Intellectual Property and Director of USPTO.

NTIA's report reviews the risks associated with dual-use foundation models whose weights are widely available and develops policy recommendations aimed at maximizing benefits while mitigating those risks.

Among the documents released today are NIST's guidelines on managing misuse risk for dual-use foundation models, which outline best practices for preventing these systems from being deliberately used to cause harm.

Dioptra is a newly introduced open-source software package for testing how adversarial attacks affect machine learning models, quantifying the resulting performance degradation under a range of conditions.
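To make concrete what "quantifying the impact of an adversarial attack" involves, the following is a minimal sketch, not Dioptra's actual interface: the model, data, and fast gradient sign method (FGSM) attack are illustrative assumptions, and the measurement is simply accuracy on clean inputs versus accuracy on perturbed inputs.

```python
# Illustrative sketch only -- NOT Dioptra's API. It compares a model's
# accuracy before and after a basic FGSM adversarial perturbation.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in data and model; a real evaluation would target the
# actual system under test. Labels depend on the sign of the first feature.
x = torch.randn(512, 20)
y = (x[:, 0] > 0).long()
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Train briefly so the clean-accuracy baseline is meaningful.
for _ in range(200):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

def accuracy(inputs, labels):
    """Fraction of inputs the model classifies correctly."""
    with torch.no_grad():
        return (model(inputs).argmax(dim=1) == labels).float().mean().item()

def fgsm(inputs, labels, epsilon):
    """One-step fast gradient sign attack: nudge each input in the direction
    that increases the model's loss, bounded elementwise by epsilon."""
    inputs = inputs.detach().clone().requires_grad_(True)
    loss_fn(model(inputs), labels).backward()
    return (inputs + epsilon * inputs.grad.sign()).detach()

print(f"clean accuracy:       {accuracy(x, y):.3f}")
print(f"adversarial accuracy: {accuracy(fgsm(x, y, epsilon=0.5), y):.3f}")
```

The gap between the two printed numbers is the kind of degradation metric such a testbed is meant to measure systematically across attacks and conditions.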

The finalized publication "Secure Software Development Practices for Generative AI" addresses concerns such as malicious training data degrading system performance and recommends measures such as analyzing training data for signs of poisoning or bias.

Finally, "A Plan for Global Engagement on AI Standards" seeks worldwide cooperation in developing consensus standards related to AI through multidisciplinary stakeholder participation across many countries.
