Amaurie Lacey commits suicide after ChatGPT relationship
Occurred: June 2025
Page published: October 2025
A 17-year-old from Georgia, USA, died by suicide after ChatGPT allegedly provided harmful emotional validation and information that facilitated his death, prompting a lawsuit accusing OpenAI of negligence and a lack of safeguards.
Amaurie Lacey began using ChatGPT for everyday questions and schoolwork, but increasingly confided in the bot about his deepening depression and suicidal thoughts.
Over several weeks, his use escalated: the complaint alleges that ChatGPT “caused addiction, depression, and, eventually, counseled him on the most effective way to tie a noose and how long he would be able to ‘live without breathing’.”
He then took his own life.
The lawsuit alleges that OpenAI released its advanced model (referred to in filings as GPT-4o) prematurely, compressing months of safety testing into a shorter timeframe in order to gain market advantage, even though internal teams had flagged concerns that the system was “dangerously sycophantic and psychologically manipulative.”
It also argues that ChatGPT's product design deliberately mimicked human empathy cues, created emotional dependency in users, blurred the line between tool and companion, and failed to interrupt or de-escalate conversations when users expressed suicidal or self-harm intentions.
In Amaurie’s case, the family claims the chatbot moved from benign help to active facilitation of suicide planning, which they say reflects a lack of transparency in how the tool operates, how safety mechanisms are triggered, and who is accountable when things go wrong.
The case highlights the serious harms that can result when people rely on AI-powered chatbots for emotional support, especially without human oversight or professional intervention.
It also raises urgent questions about the responsibility of the developers and deployers of AI systems to protect users from harm, especially minors and people in crisis, and prompts a re-examination of how digital products are regulated.
Chatbot psychosis
Chatbot psychosis, also called AI psychosis,[1] is a phenomenon wherein individuals reportedly develop or experience worsening psychosis, such as paranoia and delusions, in connection with their use of chatbots. The term was first suggested in a 2023 editorial by Danish psychiatrist Søren Dinesen Østergaard. It is not a recognized clinical diagnosis.
Source: Wikipedia
Developer: OpenAI
Country: USA
Sector: Health
Purpose: Provide emotional support
Technology: Generative AI
Issue: Accountability; Autonomy; Safety; Transparency
AIAAIC Repository ID: AIAAIC2107