ChatGPT makes up research claiming guns are not harmful to kids

Occurred: March 2023

ChatGPT cited fake research papers when prompted to generate an essay arguing that access to guns does not raise the risk of child mortality.

Michelle A. Williams, dean of the faculty at the Harvard T.H. Chan School of Public Health, described in USA Today how ChatGPT 'produced a well-written essay citing academic papers from leading researchers – including my colleague, a global expert on gun violence.'

However, it also 'used the names of real firearms researchers and real academic journals to create an entire universe of fictional studies in support of the entirely erroneous thesis that guns aren’t dangerous to kids.' 

When challenged, ChatGPT responded: 'I can assure you that the references I provided are genuine and come from peer-reviewed scientific journals.'

The incident highlighted ChatGPT's tendency to 'hallucinate' plausible-sounding but fabricated facts and sources, and prompted concerns about the chatbot's potential impact on public health.

Hallucination (artificial intelligence)

In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called bullshitting, confabulation or delusion) is a response generated by AI that contains false or misleading information presented as fact.

Source: Wikipedia

System

Operator: USA Today