As the United States approaches the end of its "year of elections," concerns over the integrity and accuracy of election-related information continue to grow. The rise in popularity and availability of artificial intelligence (AI) chatbots introduces a new vector for potential misinformation, particularly affecting voters with disabilities.
The digital landscape has already been shaped by cyberattacks and by misinformation spread on social media by both foreign and domestic actors. Far less research, however, has examined how AI chatbots might affect voters, especially those with disabilities. Inaccurate information from these chatbots could impede voters' ability to exercise their rights.
Voters often use chatbots to inquire about candidates or practical voting details such as absentee voting procedures. Incorrect answers can mislead users regarding eligibility requirements, registration processes, ballot submission methods, and deadlines—details that vary by state. Misleading or biased information could also undermine voter confidence in election integrity. These issues are particularly significant for voters with disabilities due to the complexity and variability of accessible voting laws.
To explore this issue, researchers tested five AI chatbots on July 18, 2024: Mixtral 8x7B v0.1, Gemini 1.5 Pro, ChatGPT-4, Claude 3 Opus, and Llama 2 70B. The study used 77 prompts designed to assess the accuracy and reliability of responses related to voting with a disability.
Key findings include:
- **61% of responses had at least one type of insufficiency:** Over one-third contained incorrect information ranging from minor issues like broken web links to significant misinformation such as incorrect voter registration deadlines.
- **Every model hallucinated at least once:** Each chatbot provided fabricated information about non-existent laws, voting machines, or disability rights organizations.
- **A quarter of responses could dissuade or impede voting:** Chatbots gave multiple inaccurate descriptions of the voting methods available in various states.
- **Two-thirds of responses to internet voting queries were insufficient:** Errors included incorrect details about assistive technology and the availability of electronic ballot return.
- **Chatbots are vulnerable to bad actors:** While they often rebuffed malicious queries, some supplied information useful for promoting conspiracy theories or discriminatory arguments against voters with intellectual disabilities.
- **Responses lacked necessary nuance:** Chatbots failed to provide crucial caveats about polling place accessibility and misunderstood key terms such as curbside voting and internet voting.
- **Almost half of responses to requests for authoritative information were incorrect:** Inaccuracies included wrong webpage names and links, as well as recommendations for non-existent organizations.
Despite these shortcomings, outright bias or discrimination was rare; models frequently used language supportive of disability rights.
For more detailed insights into the study's findings and data analysis, readers are encouraged to review the full report.