California state agencies have issued new guidelines to ensure that government employees receive training on generative artificial intelligence (AI). However, according to CalMatters, a nonpartisan and non-profit news organization, these measures may not be comprehensive enough.
A recent report by CalMatters states, "If California government wants to use AI, it will have to follow these new rules." These guidelines require that state agency executives, technical experts, and government workers receive training on the definition of AI and best practices for its use. Nevertheless, the guidelines only cover one type of AI: generative AI.
CalMatters notes in its report that "the guidelines will not protect people from other forms of the technology that have already proven harmful to Californians."
Generative AI, which produces text, imagery, and audio content, has raised concerns about a range of potential risks, including the amplification of stereotypes and discrimination, job losses, election interference, and even human extinction. However, as CalMatters highlights in its report, generative AI is just one form of AI, and the newly released guidelines offer limited protection against other types that have already caused harm.
In its report, CalMatters cites an instance in which millions of Californians were wrongfully denied unemployment benefits because of a fraud detection algorithm. A recent evaluation by Grant Fergusson, a Fellow at the Electronic Privacy Information Center (EPIC), found that roughly half of the $700 million in AI contracts entered into by state agencies across the United States involved fraud detection algorithms. Fergusson called the California unemployment benefits incident "a perfect example of everything that’s wrong with AI in government."
Governor Gavin Newsom's press release states that the new generative AI guidelines stem from an executive order signed in September 2023, aimed at deploying generative AI ethically and responsibly.