President Biden signing the Executive Order | twitter.com/POTUS

Biden issues Executive Order on AI, industry remains committed to security and US leadership

President Joe Biden has issued an Executive Order (EO) aimed at ensuring that the United States leads the world in developing and mitigating risks related to artificial intelligence (AI). The EO sets standards for safety and security in AI development, intending to protect consumers' privacy, promote innovation, and advance America's leadership in the field, according to a fact sheet released by the White House.

To protect Americans from potential risks related to AI systems, the EO requires AI developers to share the results of safety tests and other critical information with the U.S. government. Companies will need to disclose whether an AI model they have developed poses any threats to national security, public health and safety, or economic security. 

To guard against AI-enabled fraud and deception, the U.S. Department of Commerce will develop guidelines for authenticating and labeling AI-generated content. The federal government will use those guidelines to help Americans verify that any communications they receive from the government are authentic. The EO also establishes a cybersecurity program that will support the development of AI tools that can identify and address vulnerabilities in critical software.

To protect Americans' privacy, Biden is calling on Congress to pass legislation that will accelerate "the development and use of privacy-preserving techniques," enhance research and technologies that protect privacy, and strengthen federal agencies' guidelines for AI-related risks in data collection and use, according to the fact sheet. Biden has also called on federal departments and programs to ensure that AI is not engaging in discrimination, according to the fact sheet.

To promote innovation and competition, the EO establishes a pilot program of the National AI Research Resource, which students and researchers can use to access data and research materials, and expands grants to support AI development in areas like healthcare. The U.S. will also work with international partners to ensure that AI development and implementation are safe, secure, and trustworthy.

The fact sheet said that the EO "builds on previous actions the President has taken, including work that led to voluntary commitments from 15 leading companies to drive safe, secure, and trustworthy development of AI."

Leading AI developers, including Google, signed onto the White House's voluntary AI commitments earlier this year, agreeing to prioritize safety, security, and trust.

Google said in a blog post that, to build on the momentum of the AI commitments, it partnered with government and industry leaders to host a forum on Oct. 17 to discuss AI and security. During the forum, Google leaders discussed the company's new report, Building a Secure Foundation for American Leadership in AI, which highlights the importance of mitigating potential cyber threats before attacks take place.

Google leaders emphasized three "key organizational building blocks" during the forum: understanding that bad actors are interested in and can use AI, deploying AI systems that are secure, and strengthening security through AI. Google Public Sector CEO Karen Dahut and Google Cloud VP and Chief Information Security Officer Phil Venables said those building blocks can contribute to American leadership in the AI ecosystem and increase the benefits of AI technologies for Americans. They said in the post that they believe AI advancements will lead to "the biggest technological shift we will see in our lifetimes."
