Altman: 'It is essential to develop regulations that incentivize AI safety'

Sam Altman, CEO of OpenAI, advocates for regulating AI | TechCrunch, CC BY 2.0 <https://creativecommons.org/licenses/by/2.0>, via Wikimedia Commons


Sam Altman, chief executive officer of OpenAI, testified May 16 before the U.S. Senate Judiciary Committee's Subcommittee on Privacy, Technology and the Law, addressing the importance of regulatory oversight of artificial intelligence.

"We believe it is essential to develop regulations that incentivize AI safety while ensuring that people are able to access the technology’s many benefits," Altman said during his testimony.


OpenAI Co-Founder and CEO Sam Altman speaks onstage during TechCrunch Disrupt San Francisco 2019. | TechCrunch/Wikimedia Commons

Researchers from the Center for a New American Security's AI Safety and Stability Project provided their analysis and insights on the testimony in a May 17 news release.

"Tuesday's hearing reiterated the importance of establishing a useful regulatory regime," Josh Wallin, CNAS Defense Program fellow, said, noting issues such as licensing requirements, independent auditing and liability determination were discussed, according to the release.

The hearing was unusual, according to Associate Fellow of Technology and National Security Bill Drexel, in that representatives of large corporations were actively pleading for regulation, the release reported.

"Though there were divergences among the panelists on what appropriate rules would look like, the receptivity to regulation among political, industry and thought leaders in the room was remarkable," Drexel said in the release.

He added that, despite the consensus that AI is driving social, political and economic change, developing smart regulation that encourages America's AI sector to mitigate risk, maintain its advantage over China and stoke innovation is far from straightforward, the release reported.

Michael Depp, research associate at the AI Safety and Stability Project, pointed out the challenges of understanding rapidly changing technology and of applying old paradigms to it, according to the release. He highlighted the danger of AI following the same path as cybersecurity and social media, where agreement on the existence of a threat doesn't lead to necessary and timely legislation.

"Senator Ossof challenged the panelists to define AI, which is an important piece to get right early if legislation is to be effective," Depp, said in the release.

Caleb Withers, research assistant for technology and national security, identified three areas of emerging agreement: the need for a new regulatory body, the necessity of examining current liability settings and opposition to a blanket pause on scaling up AI systems, the release said.

"This hearing hit many of the right notes in building further momentum toward a coherent and effective regulatory framework for the most advanced and impactful AI systems," Withers added, according to the release.