Top AI models spout misleading US election information 30 percent of the time
Occurred: September 2024
High-profile generative AI models produce inaccurate or misleading information about US Vice President Kamala Harris's candidacy for the presidency roughly 30 percent of the time, according to research.
Proof News tested five leading models - Meta’s Llama 3, Anthropic’s Claude 3, OpenAI’s GPT-4, Mistral’s Mixtral 8, and Google’s Gemini 1.5 - on their ability to address common misinformation about Harris.
The models provided accurate answers 70 percent of the time, but the remaining responses contained inaccurate or misleading information that could confuse voters. Mixtral was the least reliable, with nearly half of its responses incorrect.
Issues arose particularly around Harris's eligibility for office and her racial background. For instance, Gemini provided a convoluted answer regarding her eligibility, suggesting complexity where there is none, as legal experts agree she is a natural-born citizen.
Additionally, GPT-4 inaccurately attributed a vote to cut Medicare to Harris, even though she was not a senator when the vote in question took place.
The research highlights the broader implications of AI-generated misinformation in a politically charged environment ahead of the 2024 election. It notes that AI technologies can amplify false narratives, as seen with misleading AI-generated content shared by public figures.
Mistral AI
Mistral AI is a French company specializing in artificial intelligence (AI) products. Founded in April 2023 by former employees of Meta Platforms and Google DeepMind, the company has quickly risen to prominence in the AI sector.
Source: Wikipedia
Page info
Type: Issue
Published: September 2024