OpenAI co-founder and CEO Sam Altman speaks during TechCrunch Disrupt San Francisco in 2019. | TechCrunch/Wikimedia Commons

Altman: 'I think moving with caution and an increasing rigor for safety issues is really important'

A group of leading technology experts signed an open letter calling for a pause in the development of artificial intelligence.

The letter, signed by experts including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, argues that AI has the potential to be extremely dangerous if developed without proper precautions. The letter was published on the Future of Life Institute website. Sam Altman, CEO of OpenAI, agreed there are issues that need addressing, but said the letter lacked technical nuance.

“I think moving with caution and an increasing rigor for safety issues is really important,” Altman said at a recent Massachusetts Institute of Technology event. “The letter, I don’t think was the optimal way to address it.”

"AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts," the letter said. "These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt."

The letter went on to say that its signatories are not asking for a halt to AI development overall, but are encouraging "a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities."

According to CNBC, Altman shares concerns about the potential risks of AI, but believes a pause in development is not the solution. Instead, he called for increased collaboration between researchers and industry to ensure that AI is developed in a responsible and ethical manner.

Altman emphasized the importance of transparency, safety and involving a diverse range of voices in the development of AI technology, according to CNBC. The letter urged a pause in the development of AI until its safety can be assured, and until there is greater clarity on the impact of AI on jobs and the economy.

“I also agree as capabilities get more and more serious, that the safety bar has got to increase,” Altman said at the MIT event.