Study: Top AI models spout misleading US election information 30 percent of the time
Occurred: September 2024
Page published: September 2024
High-profile generative AI models produce inaccurate and misleading information about US Vice President Kamala Harris's candidacy for the presidency roughly 30 percent of the time, according to research.
Proof News tested five leading models - Meta’s Llama 3, Anthropic’s Claude 3, OpenAI’s GPT-4, Mistral’s Mixtral 8, and Google’s Gemini 1.5 - on their ability to address common misinformation about Harris.
The models provided accurate answers 70 percent of the time; the remainder of their responses contained misleading information that could confuse voters. Mixtral was the least reliable, with nearly half of its responses incorrect.
Issues arose particularly around Harris's eligibility for office and her racial background. For instance, Gemini provided a convoluted answer regarding her eligibility, suggesting complexity where there is none, as legal experts agree she is a natural-born citizen.
Additionally, GPT-4 inaccurately attributed a Medicare cut claim to Harris, despite her not being a senator at the time of the vote in question.
The research highlights the broader implications of AI-generated misinformation in a politically charged environment ahead of the 2024 election. It also notes that AI technologies can amplify false narratives, as seen with misleading AI-generated content shared by public figures.
AIAAIC Repository ID: AIAAIC1729