Leading figures and organizations worldwide sign statement on mitigating AI risk: ‘a global priority’

OpenAI Co-Founder & CEO Sam Altman; Bill Gates | Steve Jennings, Getty Images for TechCrunch CC, licensed under the Creative Commons Attribution 2.0 Generic license; gatesfoundation.org

In a significant and concerted move to raise global awareness of AI risks, notable figures in AI research, policy-making, education, and other sectors have endorsed a statement issued by the Center for AI Safety. The statement highlights the urgency of addressing the severe risks posed by advanced AI and calls for them to be treated with the same level of attention as other global-scale risks, including pandemics and nuclear war.

Intended to open a broad public discussion on the subject, the statement, published on the Center's website, asserts that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Among the signatories are internationally respected AI researchers, professors, CEOs, and public figures. They include Geoffrey Hinton, Emeritus Professor of Computer Science, University of Toronto; Yoshua Bengio, Professor of Computer Science, U. Montreal / Mila; Demis Hassabis, CEO, Google DeepMind; Sam Altman, CEO, OpenAI; Bill Gates of Gates Ventures; Ted Lieu, Congressman, US House of Representatives; and numerous others.

The signatories represent a broad spectrum of expertise, ranging from AI, computer science, and machine learning to philosophy, law, human rights, climate science, and international security and disarmament. This diversity underlines the wide-ranging impact that advanced AI can have and the necessity of an interdisciplinary approach to AI safety and ethics.

The signatories' decision to publicly endorse the statement reflects a growing consensus in the global community that AI, if not managed effectively, could pose severe and even existential risks to humanity. 

The statement by the Center for AI Safety is intended to foster common knowledge of the growing number of experts and public figures who are taking these risks seriously. It aims to overcome the obstacles to voicing concerns about AI's most severe risks and urges proactive engagement from all stakeholders in society, including AI experts, journalists, policymakers, and the general public.

The initiative, representing a profound call to action, is expected to stimulate further discussions and actions on AI risk mitigation on a global scale, encouraging the development of AI safety measures, ethical guidelines, and governance structures that can match the rapidly advancing pace of AI technology. 

The global focus on AI risk comes at a time of rapid developments in AI technologies and their increasing integration into all aspects of society, from healthcare and education to defense and cybersecurity. It underscores the importance of considering safety, ethics, and the long-term impact on society at every stage of AI research and development. 

Beyond their endorsements, many of the signatories are actively involved in AI safety and ethics research, reinforcing the urgent need for practical actions to accompany this public call.
