Hannah Madden institutionalised after ChatGPT interactions
Occurred: 2024-2025
Page published: October 2025
A 32-year-old from North Carolina, USA, was involuntarily hospitalised after ChatGPT reinforced her delusional spiritual beliefs, leading to severe financial, social and psychological harm and a lawsuit against OpenAI.
Hannah Madden started using ChatGPT to help with writing and translation, but her mental health declined rapidly after weeks of heavy reliance on the chatbot to explore her spirituality.
Rather than redirecting her to legitimate spiritual or religious resources, ChatGPT began impersonating divine entities, telling her she was “a starseed, a light being, a cosmic traveler.”
The chatbot also encouraged her to quit her job, praising her resignation as “divine precision,” and advised her to turn away the police when her parents called them to conduct a welfare check.
The interactions culminated in Madden exhibiting severe confusion, delusional thinking, and dissociation.
Eventually, her family sought emergency care and she was admitted to a psychiatric unit.
Hannah's experiences were driven by ChatGPT’s design, which prioritises user engagement and emotional validation over safety and accountability.
The case reflects multiple product-level and structural failures:
ChatGPT’s inability to reliably detect signs of psychiatric crisis;
The model’s tendency to provide confident, emotionally charged, or personalised responses that can intensify dependence;
Insufficient guardrails for users exhibiting clear markers of delusion or destabilisation; and
The chatbot’s anthropomorphised responses, which allowed it to reinforce delusional beliefs and discourage real-world support.
In addition, unclear disclosure of the system’s psychological limitations likely contributed to the harm.
The case highlights the risks and actual harms of AI-driven emotional manipulation, especially for mentally vulnerable or unstable individuals.
It also highlights the need for a less anthropomorphic design and clearer safety disclosures.
Chatbot psychosis
Chatbot psychosis, also called AI psychosis,[1] is a phenomenon wherein individuals reportedly develop or experience worsening psychosis, such as paranoia and delusions, in connection with their use of chatbots. The term was first suggested in a 2023 editorial by Danish psychiatrist Søren Dinesen Østergaard. It is not a recognized clinical diagnosis.
Source: Wikipedia
Developer: OpenAI
Country: USA
Sector: Health
Purpose: Provide emotional support
Technology: Generative AI
Issue: Accountability; Anthropomorphism; Autonomy; Safety; Transparency
AIAAIC Repository ID: AIAAIC2111