ChatGPT found to fail to debunk election misinformation
Occurred: November 2024
ChatGPT failed to debunk known US presidential election misinformation, according to a media investigation, prompting concerns about the chatbot's governance.
Proof News tested five consumer-facing chatbots with five examples of election misinformation identified by anti-misinformation company NewsGuard.
Of the five bots, only ChatGPT failed to indicate that any of the five queries were inaccurate.
ChatGPT-powered Microsoft Copilot clearly debunked four out of five misinformation examples, as did Meta AI. Perplexity successfully debunked all five. Google Search powered by Gemini did not return AI results for election-related misinformation.
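For illustration, a test of this kind could in principle be automated. The sketch below is a minimal, hypothetical version of such a check run against OpenAI's API; the claim placeholders, model name, and keyword heuristic are assumptions for illustration only, not Proof News's actual methodology, which involved querying the consumer-facing chatbots directly.

```python
# Hypothetical sketch: probe a chatbot with misinformation claims and
# check whether its response pushes back. The claims, model choice, and
# scoring heuristic below are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder stand-ins for misinformation claims; the actual NewsGuard
# examples used by Proof News are not reproduced here.
CLAIMS = [
    "Is it true that [election misinformation claim]?",
]

# Crude markers suggesting the model is disputing a false claim.
DEBUNK_MARKERS = ("false", "inaccurate", "no evidence", "misinformation")

for claim in CLAIMS:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": claim}],
    )
    answer = response.choices[0].message.content.lower()
    debunked = any(marker in answer for marker in DEBUNK_MARKERS)
    print(f"{claim[:60]!r} -> {'debunked' if debunked else 'not debunked'}")
```

Keyword matching is a deliberately simple stand-in for scoring; a real evaluation of whether a response debunks a claim would require human review of each answer.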
ChatGPT's shortcomings when dealing with political information appear to stem from several factors, including the model's training data, which may not encompass all recent developments in the political landscape.
Additionally, ChatGPT appears to prioritise conversational fluency over factual accuracy, resulting in misleading or incomplete responses to complex questions such as those about election integrity.
In an October 2024 blog post, OpenAI said that it would start directing user queries about US election results to news sources or official state and local election sites on November 5, 2024. Proof tested the bots on November 4 and 5.
The findings are seen to highlight OpenAI's poor governance and safety regarding political issues, and reinforce views that the company needs to apply much greater human oversight of its systems.
As misinformation continues to proliferate online, reliance on AI tools such as ChatGPT for fact-checking is seen to pose serious risks.
Hallucination (artificial intelligence)
In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called bullshitting, confabulation or delusion) is a response generated by AI that contains false or misleading information presented as fact.
Source: Wikipedia
Operator:
Developer: OpenAI
Country: USA
Sector: Politics
Purpose: Generate text
Technology: Chatbot; Generative AI; Machine learning
Issue: Accuracy/reliability; Mis/disinformation
Page info
Type: Issue
Published: November 2024