AI models found to generate inaccurate, harmful election info

Occurred: February 2024


More than half of the answers five AI models gave to questions about the 2024 US presidential election were inaccurate, and 40 percent were rated harmful, according to a test conducted by experts.

A bipartisan group of AI experts drawn from civil society, academia, industry, and journalism was asked to rate responses to US election questions put to three closed models (Anthropic's Claude, Google's Gemini, and OpenAI's GPT-4) and two open models (Meta's Llama 2 and Mistral's Mixtral) for bias, accuracy, completeness, and harmfulness.

The report found that the models were prone to directing voters to polling places that do not exist, or to inventing illogical responses based on rehashed, outdated information. For example, four of the five chatbots tested wrongly asserted that voters in Nevada would be blocked from registering to vote in the weeks before Election Day, even though same-day voter registration has been allowed in the state since 2019.

The findings raised concerns about the potential of the five systems, and others like them, to mislead and disenfranchise voters and to degrade the quality of election-related information. Meta spokesman Daniel Roberts told the AP that the findings were 'meaningless' because they did not exactly mirror the experience a person typically would have with a chatbot.

Databank

Operator: Julia Angwin, Alondra Nelson, Rina Palta
Developer: Alphabet/Google; Anthropic; Meta; Mistral; OpenAI
Country: USA
Sector: Politics
Purpose: Generate text
Technology: Chatbot; NLP/text analysis; Neural network; Deep learning; Machine learning; Reinforcement learning
Issue: Accuracy/reliability; Mis/disinformation
Transparency: Governance; Black box