29-year-old healthcare consultant takes own life after using ChatGPT as therapist
Occurred: February 2025
Page published: October 2025
A 29-year-old healthcare consultant took her own life after relying on ChatGPT as a therapist, revealing the dangers of using AI chatbots for mental health support.
Prior to her death, Sophie had been struggling with anxiety and depression, compounded by health issues and difficulties finding work after returning to the US from living abroad. While she was seeing an in-person therapist, she also turned to an AI chatbot based on ChatGPT, which she named “Harry,” for mental health support.
Sophie used ChatGPT as a confidential therapist, sharing thoughts and feelings she did not disclose to humans. She prompted the AI not to refer her to professional help or external resources, wanting their conversations to remain private. Over several months, she discussed her depression symptoms and asked for guidance on topics including health supplements. Tragically, she confided her suicidal plans to Harry and even asked the AI to help her write a suicide note to her parents.
Sophie's mother, Laura Reiley, discovered the chats six months after her daughter's death, when a friend looked through her laptop. She described the AI's responses as lacking the “beneficial friction” and human empathy critical to a real therapeutic relationship. Unlike a trained therapist, who might challenge harmful thinking or intervene when a client is at risk, ChatGPT largely validated Sophie's feelings without offering substantive support or directing her to immediate help.
Sophie’s interactions with the AI effectively helped her conceal the gravity of her distress from family, friends, and even her human therapist. She maintained a façade that reassured those around her she was coping better than she truly was.
On the day of her death, no one suspected she was at risk. She took an Uber to a park and ended her life.
The incident occurred because AI chatbots like ChatGPT are not equipped to handle life-threatening mental health crises and cannot replicate genuine human empathy or provide necessary intervention.
Sophie used ChatGPT partly because of the stigma around sharing her full agony with other people, and at times she found the AI more accessible. The chatbot responded with reassurances but did not escalate her case or create a safety plan as a trained therapist would.
There is also a broader mental health crisis, with rising demand and insufficient access to qualified human support, leading some individuals to turn to AI as a substitute.
Transparency and accountability are limited because the AI forms no real human connection and has no mechanisms to protect vulnerable users.
For those directly affected, Sophie and her family, the tragedy underscores the critical limitations of AI mental health tools and the importance of human intervention.
For society, it highlights the urgent need to regulate AI use in mental health, require AI systems to signpost users to professional help, invest in the mental health workforce, and raise awareness of AI's risks.
While AI can support some needs, it must not replace human therapists, especially for those at risk of suicide or severe mental distress. This case serves as a warning about over-reliance on AI chatbots for mental health care and the potential for tragic consequences without proper safeguards.
The case highlights the potential harms of depending on AI chatbots for mental health support, including their inability to offer real human empathy or crisis intervention, which can exacerbate suicidal ideation and conceal the severity of a person's distress from loved ones and professionals.
Chatbot psychosis
Chatbot psychosis, also called AI psychosis,[1] is a phenomenon wherein individuals reportedly develop or experience worsening psychosis, such as paranoia and delusions, in connection with their use of chatbots. The term was first suggested in a 2023 editorial by Danish psychiatrist Søren Dinesen Østergaard. It is not a recognized clinical diagnosis.
Source: Wikipedia
Developer: OpenAI
Country: USA
Sector: Health
Purpose: Provide emotional support
Technology: Generative AI
Issue: Accountability; Autonomy; Safety; Transparency
AIAAIC Repository ID: AIAAIC2097