As the opportunities and risks surrounding artificial intelligence (AI) development accelerate, key global players, from government bodies to industry leaders, are converging on the need for a comprehensive international framework for AI governance. The shift responds to AI's rapid technical advances over the past year and its growing impact across sectors.
The global AI market is projected to grow at a compound annual rate of 37.3% from 2023 to 2030, according to a report from Grand View Research, a pace that implies roughly a ninefold expansion over the period. The World Economic Forum, meanwhile, anticipates that AI will create 133 million new jobs by 2030.
Calling the growing promise and disruption of AI a “pivotal moment,” Google recently released a comprehensive “AI Opportunity Agenda.” The agenda presents global business and government leaders with policy recommendations, which the company says are based on international “common interests” across scientific, economic, health, and civil society stakeholders.
“To date, there has been a strong and appropriate focus on addressing potential future risks from AI,” the report read. “But to fully harness AI’s transformative potential for the economy, for health, for the climate, and for human flourishing, we need a broader discussion about steps that governments, companies, and civil society can take to realize AI’s promise.”
The agenda calls for three areas of focus: enhancing AI infrastructure and innovation, building an AI-ready workforce, and maximizing AI accessibility and adoption.
The initiative follows months of other global AI governance efforts, including the G7's code of conduct and the UN AI Advisory Body's guidelines.
Last month, President Joe Biden issued an Executive Order (EO) aimed at ensuring that the United States leads the world both in developing AI and in managing its risks. The EO sets standards for safety and security in AI development, with the stated goals of protecting consumers' privacy, promoting innovation, and advancing America's leadership in the field.
In May of this year, OpenAI CEO Sam Altman told the US Senate Judiciary Subcommittee on Privacy, Technology, and the Law that government regulation is needed to mitigate the risks of powerful AI models so that the public can fully realize the technology's upside.
"We believe that regulation of AI is essential,” Altman said. “We’re eager to help policymakers as they determine how to facilitate regulation that balances incentivizing safety while ensuring that people are able to access the technology’s benefits."
Altman stressed the importance of collaborative, responsible, and ethical AI development and regulation.
"It will be important for policymakers to consider how to implement licensing regulations on a global scale and ensure international cooperation on AI safety," he told the committee at the time.
More recently, Kent Walker, Google’s President of Global Affairs, emphasized the need for a balanced approach to AI regulation, stressing the importance of mitigating risks while maximizing opportunities for economic and societal advancement.
The absence of well-structured international regulations and industry standards poses its own risks: fragmented regulatory environments hinder product access and impede technological advancement, Walker has said. Drawing parallels to privacy regulation, he highlighted the need for policy alignment on a technology as globally consequential as AI.
"It’s time to widen the aperture on this work," Walker wrote while announcing the ‘AI opportunity agenda. "Our AI efforts will need to include both guardrails to mitigate potential risks and initiatives to maximize progress — promoting economic productivity and solving big social challenges.
"Let’s focus not only on harms to avoid and risks to mitigate, but on opportunities to seize."