A recent report by the Center for a New American Security (CNAS) predicts that by 2030, cutting-edge artificial intelligence (AI) models will be trained with computing power a million times greater than what is used to train today's advanced models. The report calls on policymakers to brace for explosive growth in AI.
According to the CNAS press release, the report, titled "Future-Proofing Frontier AI Regulation: Projecting Future Compute for Frontier AI Models," anticipates a thousand-fold increase in the computing hardware used to train AI relative to what was used for GPT-4. Combined with advances in algorithms, AI systems are expected to be trained with computing power one million times more effective within roughly five years.
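The report's figures imply a simple decomposition of the projected million-fold increase: hardware growth multiplied by algorithmic efficiency gains. A minimal sketch of that arithmetic, assuming the roughly thousand-fold algorithmic factor implied (but not directly stated) by the report's numbers:

```python
# Rough decomposition of the projected increase in effective training compute.
# The ~1,000x algorithmic factor is an inference from the report's figures,
# not a number the report states directly.

hardware_scale = 1_000  # projected hardware growth vs. what trained GPT-4

# Implied algorithmic-efficiency gain needed to reach a million-fold
# increase in effective compute: 1,000,000 / 1,000 = 1,000x.
algorithmic_scale = 1_000_000 // hardware_scale

effective_compute_scale = hardware_scale * algorithmic_scale
print(effective_compute_scale)  # 1000000
```

The point of the decomposition is that neither factor alone accounts for the projection; the million-fold figure only follows when hardware scaling and algorithmic progress compound.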
CNAS mentioned in the press release, "These leaps forward are possible without government intervention (financed solely by large tech companies) and without fundamental breakthroughs in chip design."
The report, authored by CNAS' Executive Vice President and Director of Studies Paul Scharre, cautions that this surge means AI systems are poised to become significantly more computationally intensive and capable within about five years. Scharre adds that this growth will trigger a boom in demand for hardware and chips. CNAS further stated in the press release, "Given intense geopolitical competition with China, the United States must adopt policies that safeguard America's advantages in chips while ensuring AI's benefits are widely shared."
The publication of the CNAS report coincided with another significant development: a U.S. Department of State-commissioned report making headlines. This document, prepared by Gladstone AI Inc., raises concerns about AI introducing new categories of weapons of mass destruction-like (WMD-like) and WMD-enabling catastrophic risks. It also voices apprehension over AI labs that have publicly announced their intention or expectation to achieve human-level and superhuman artificial general intelligence. The Gladstone report states, "The risks associated with these developments are global in scope, have deeply technical origins, and are evolving quickly. As a result, policymakers face a diminishing opportunity to introduce technically informed safeguards that can balance these considerations and ensure advanced AI is developed and adopted responsibly."