Alexandra Reeve Givens, President & CEO at Center for Democracy & Technology

Senate committee advances bills addressing harmful impacts of artificial intelligence on elections


On May 15, 2024, the U.S. Senate Committee on Rules & Administration marked up and advanced three bills addressing the role of AI in elections: the Preparing Election Administrators for AI Act, the AI Transparency in Elections Act, and the Protect Elections from Deceptive AI Act. The first two of these bills aim to address some risks posed by AI in elections, though they could benefit from improvements. The third bill raises significant constitutional and implementation concerns.

Senate Majority Leader Schumer emphasized the urgency of these measures, stating that AI “has the potential to jaundice or even totally discredit our election systems.” This comes amid warnings from the FBI and the Cybersecurity and Infrastructure Security Agency (CISA) about generative AI's potential to interfere with elections through deepfakes and disinformation.

Examples include realistic fake images created by the DeSantis campaign depicting Trump hugging Dr. Fauci and a deepfake robocall of President Biden discouraging voting in New Hampshire. Beyond deepfakes, bad actors could use AI to scale disinformation campaigns or clone election officials' voices for fraudulent communications.

While the most severe of these threats remain largely hypothetical, failing to mitigate them could leave vulnerabilities in the lead-up to November’s election.

The Preparing Election Administrators for AI Act requires the U.S. Election Assistance Commission (EAC) to develop voluntary guidelines within 60 days of enactment. These guidelines will address the risks of using AI technologies in election administration, associated cybersecurity risks, and how AI-generated information can affect the sharing of accurate election information and public trust.

Despite its benefits, the legislation's 60-day timeline leaves little room before the upcoming election for the EAC to draft guidelines and for election officials to act on them. The bill would be strengthened by additional appropriations for election security grants and by having the EAC coordinate with CISA and NIST in drafting the guidelines.

The AI Transparency in Elections Act creates labeling standards for political advertising containing media generated by artificial intelligence. It mandates clear statements regarding an advertisement’s use of AI during specific periods around Election Day. This aims to provide transparency but raises questions about feasibility and constitutional protections for parody and satire.

The Protect Elections from Deceptive AI Act seeks to create a federal cause of action for candidates whose likeness appears in materially deceptive AI-generated media. While aiming to protect public discourse from harmful disinformation, it poses significant constitutional issues related to political speech protected under the First Amendment.

This bill would prohibit the knowing distribution of materially deceptive AI-generated content intended to influence an election or solicit funds. It defines such content as any image, audio, or video produced or substantially altered by machine learning that a reasonable person would find fundamentally different from the unaltered media, or would believe accurately depicts conduct the candidate did not actually engage in.

Critics argue that this bill could enable censorship of political speech not just by campaigns but also by ordinary citizens. The Supreme Court has affirmed that political speech holds high value under the First Amendment, and falsehoods that fall short of defamation remain protected speech.

For example, if a realistic AI-generated video depicted former President Trump verbalizing his written posts criticizing President Biden's policies without defaming him, it could still be subject to litigation under this act if deemed materially deceptive.

The legislation's ambiguity could invite strategic litigation aimed at silencing critics and ordinary citizens participating in public debate. Amending the bill to require proof of defamation before speech can be restricted would narrow its reach to constitutionally proscribable content and reduce the risk of misuse.
