ChatGPT falsely tells man he killed his children

Occurred: March 2025

ChatGPT falsely accused a Norwegian man of murdering two of his children and attempting to kill a third, resulting in distress and prompting a privacy complaint against OpenAI.

What happened

Arve Hjalmar Holmen queried ChatGPT about himself and received a fabricated story claiming he had murdered two of his sons and attempted to kill a third, resulting in a fictional 21-year prison sentence. 

The AI-generated response included a number of accurate personal details, such as the number and gender of his children and his hometown, making the false information seem more credible. 

The episode caused significant distress to Mr. Holmen, who fears that people might believe the false information, potentially damaging his reputation and personal life.

Why it happened

The incident occurred due to a phenomenon known as "hallucination", in which AI systems generate fabricated information and present it as fact. Hallucinations can result from errors, biases, or gaps in an AI model's training data.

In ChatGPT's case, the system may have combined fragments of accurate information with fictional elements, creating a false narrative about Mr. Holmen. 

What it means

The case highlights the broader societal implications of AI-generated misinformation, raising concerns about privacy, accuracy, and the potential for AI systems to cause harm through false accusations. 

It also underscores the need for improved AI safeguards, including meaningful human oversight of these systems and stronger transparency in AI decision-making processes.

The case also makes clear that individuals need better mechanisms to correct false information generated by AI systems.

System 🤖

Operator: Arve Hjalmar Holmen
Developer: OpenAI
Country: Norway
Sector: Media/entertainment/sports/arts
Purpose: Generate text
Technology: Generative AI; Machine learning
Issue: Accuracy/reliability; Misrepresentation; Privacy

Legal, regulatory 👩🏼‍⚖️