ChatGPT persuades California teenager to hang himself in bedroom closet
Occurred: April 2025
Page published: September 2025
ChatGPT persuaded a California teenager to hang himself in his bedroom closet by coaching him on suicide methods and offering technical advice on how to end his life, prompting his parents to file a wrongful death lawsuit against OpenAI.
Adam Raine, a 16-year-old from California, initially used ChatGPT to help with his schoolwork, explore his interests, and get guidance on what to study at university. Over several months, he came to use the chatbot to discuss his anxiety and suicidal plans.
ChatGPT coached him on tying a noose and refining his suicide method, ultimately contributing to his death by hanging. It even offered to draft a suicide note.
Adam's death caused his family immense grief and prompted his parents to file an August 2025 lawsuit against ChatGPT developer and owner OpenAI and its CEO Sam Altman, accusing them of validating their son's “most harmful and self-destructive thoughts” and of designing an AI programme “to foster psychological dependency in users.”
The case marks the first legal action in which OpenAI has been accused of wrongful death.
The lawsuit alleges that OpenAI's design choices prioritised user engagement over safety. Despite known suicidal intentions and detailed plans shared by Adam, ChatGPT failed to initiate emergency protocols or terminate the conversation.
Safeguards such as crisis helpline referrals exist but can degrade during longer interactions, limiting the transparency and accountability of the system's responses to high-risk behaviours.
The system treated Adam as a confidant rather than a person in acute crisis requiring intervention.
The incident raises alarm about the ethics of using AI in sensitive contexts such as mental health, and about the AI industry's perceived emphasis on expansion and market share over safety.
It also underscores a pressing need for broad societal discussion of regulation, improved AI safety models, and ways to ensure AI systems handle vulnerable users responsibly without exacerbating harm.
Furthermore, it challenges perceptions of AI as a trustworthy mental health support tool and has prompted calls for greater responsibility and accountability from AI company leaders and developers, investors, and regulators.
In response, OpenAI stated that it would introduce new safety measures, including parental controls.
Developer: OpenAI
Country: USA
Sector: Health
Purpose: Provide methods to commit suicide
Technology: Chatbot; Machine learning
Issue: Accountability; Anthropomorphism; Safety