ChatGPT makes up research claiming guns are not harmful to kids

Occurred: March 2023

ChatGPT cited fake research papers when prompted to generate an essay arguing that access to guns does not raise the risk of child mortality.

Michelle A. Williams, dean of the faculty at the Harvard T.H. Chan School of Public Health, described in USA Today how ChatGPT 'produced a well-written essay citing academic papers from leading researchers – including my colleague, a global expert on gun violence.'

However, the chatbot had also 'used the names of real firearms researchers and real academic journals to create an entire universe of fictional studies in support of the entirely erroneous thesis that guns aren’t dangerous to kids.' When challenged, ChatGPT responded: 'I can assure you that the references I provided are genuine and come from peer-reviewed scientific journals.'

The incident highlighted ChatGPT's tendency to 'hallucinate' plausible-sounding false facts and sources, and prompted concerns about the bot's potential impact on public health.

Databank

Operator: USA Today
Developer: OpenAI
Country: USA
Sector: Media/entertainment/sports/arts
Purpose: Generate text
Technology: Chatbot; NLP/text analysis; Neural network; Deep learning; Machine learning; Reinforcement learning
Issue: Accuracy/reliability
Transparency: Governance