The Future of Life Institute (FLI) has raised concerns over Meta's plans to develop artificial general intelligence (AGI) and make it open-source. The announcement was made by Meta's founder and CEO, Mark Zuckerberg, via an Instagram post last week.
In his January 18th Instagram video, Zuckerberg discussed Meta's long-term vision of building AGI. He stated that the company aims to open-source it "responsibly, and make it widely available so everyone can benefit." He also revealed that Meta is bringing together its two major artificial intelligence (AI) research initiatives, FAIR and GenAI, to pursue this vision. The company is also training its next-generation model, Llama 3, and expanding its compute infrastructure.
FLI criticized Meta's approach in a recent post stating, "Meta's pursuit of open-source AGI—despite admittedly lacking a definition of it—reflects the recklessness with which Big Tech are developing and deploying advanced AI systems, disregarding warnings from leading AI experts and putting profits over public safety."
Founded in 2014, FLI aims to "steer transformative technologies away from extreme, large-scale risks and towards benefiting life," according to the institute's website. Max Tegmark, a cosmologist and professor at the Massachusetts Institute of Technology (MIT), currently serves as FLI's president. The non-profit institute's external advisors include MIT Physics Professor Alan Guth; Elon Musk; Martin Rees, astronomer and co-founder of the Centre for the Study of Existential Risk; actor Morgan Freeman; and Nick Bostrom, director of the Future of Humanity Institute at Oxford University.
In March 2023, FLI published an open letter calling for a six-month pause on training AI systems more powerful than GPT-4, the OpenAI large language model released that same month. To date, the letter has garnered more than 33,000 signatures from scientists, Big Tech executives, scholars and other concerned individuals. Among the signatories are Yoshua Bengio, considered one of the godfathers of AI; Musk; and Apple co-founder Steve Wozniak.
The open letter highlighted the risks associated with advanced AI systems: "As stated in the widely-endorsed Asilomar AI Principles, 'Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.' Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control."