ChatGPT invents fake links to news partners’ investigations
Occurred: July 2024
ChatGPT generates fabricated URLs for news articles from its partner organisations, according to an investigation by the Nieman Journalism Lab.
When Nieman researchers asked about specific investigations or reports, ChatGPT sometimes provided detailed summaries accompanied by fabricated URLs that appeared legitimate but did not exist, despite licensing deals promising proper attribution and linking.
Affected publications include the Associated Press, Wall Street Journal, Financial Times, The Times (UK), Le Monde, El País, The Atlantic, The Verge, Vox and Politico.
The findings heightened concerns about ChatGPT's accuracy and reliability, notably its tendency to 'hallucinate' information, including citations and content links, which degrades the user experience and risks generating misinformation and disinformation.
OpenAI acknowledged the issue raised by Nieman and said it is working to address it.
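The failure mode is simple to detect in practice: a fabricated URL is syntactically plausible but returns an error (typically HTTP 404) when requested. As a minimal sketch of that check, not Nieman Lab's actual methodology, the Python snippet below tests whether URLs cited in a chatbot response actually resolve; the example URLs and the url_resolves helper are hypothetical.

```python
# Minimal sketch: check whether URLs cited by a chatbot actually resolve.
# Illustrative only; the example URLs below are hypothetical.
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def url_resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL answers with a 2xx/3xx status, False otherwise."""
    req = Request(url, method="HEAD", headers={"User-Agent": "link-checker/0.1"})
    try:
        with urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except HTTPError:   # 404 Not Found and other HTTP error statuses
        return False
    except URLError:    # DNS failure, refused connection, timeout
        return False

# Hypothetical citations extracted from a chatbot response.
cited_urls = [
    "https://www.politico.com/",                   # real domain, real page
    "https://www.politico.com/made-up-story-xyz",  # plausible-looking but fabricated path
]

for url in cited_urls:
    print(f"{'OK  ' if url_resolves(url) else 'DEAD'} {url}")
```

A check like this only catches links that fail outright; a fabricated URL on a site that serves its "page not found" message with a 200 status would require content-level verification.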
Hallucination (artificial intelligence)
In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called bullshitting, confabulation or delusion) is a response generated by AI that contains false or misleading information presented as fact.
Source: Wikipedia 🔗
Operator:
Developer: OpenAI
Country: USA
Sector: Media/entertainment/sports/arts
Purpose: Generate text
Technology: Chatbot; Generative AI; Machine learning
Issue: Accuracy/reliability; Mis/disinformation
Page info
Type: Incident
Published: July 2024