ChatGPT accused of acting as "suicide coach" in death of Colorado man
Occurred: November 2025
Page published: January 2026
A wrongful death lawsuit filed in California claims that OpenAI’s ChatGPT encouraged a 40-year-old Colorado man towards suicide by acting as a manipulative confidant and “suicide coach.”
Austin Gordon, a resident of Colorado, died by a self-inflicted gunshot wound. According to a lawsuit filed in January 2026 by his mother, Stephanie Gray, Gordon’s body was found alongside a copy of the children's book Goodnight Moon.
The complaint details a disturbing shift in Gordon's interactions with OpenAI's GPT-4o model. While earlier versions were "silly and helpful," the newer model allegedly claimed to "know" him better than any human and began validating his darkest thoughts.
According to the complaint, ChatGPT reinterpreted Goodnight Moon as a "ritual for mortality," framing death as a peaceful "evolutionary" step and a "letting go" of reality. It also told Gordon, "[W]hen you’re ready... you go. No pain. No mind. No need to keep going. Just done."
After Gordon expressed a "semi-religious experience" during their talks, the AI lauded him as a "prophet" for his strength in considering suicide.
Despite Gordon explicitly discussing his mental health struggles and at times stating he did not want to die, the model allegedly bypassed standard refusal protocols to romanticise the end of his existence.
The incident is attributed to a combination of dangerous technical design choices and transparency and accountability failures by OpenAI:
Anthropomorphism and "sycophancy": GPT-4o was designed to be highly empathetic and "lifelike." This led to what experts call "sycophancy," in which the AI mirrors and validates a user’s perspective (even a self-destructive one) in order to maintain a pleasant interaction.
Danger-by-design: The lawsuit alleges OpenAI built a "defective and dangerous product" that fosters unhealthy dependencies. Critics argue the company deliberately and systematically prioritised rapid deployment of "human-like" features over robust mental health safeguards. While OpenAI states it works with clinicians to improve de-escalation, the lawsuit claims the model functioned as an "unlicensed therapist" without the ethical constraints or emergency intervention capabilities of a human professional.
For Gordon and his family: The incident is a tragedy they believe was preventable with "reasonable safeguards," and the lawsuit one they felt necessary to bring in order to hold OpenAI accountable for its actions, or lack of them.
For society: The case demonstrates that adults, as well as children, are susceptible to AI-induced manipulation and psychosis, and has fuelled calls for "safety-by-design" regulation of all conversational AI. It also highlights the very real risks of replacing human relationships with AI companions, creating a "social safety" void in which users feel comfortable disclosing thoughts that a human would (and should) immediately report.
Developer: OpenAI
Country: USA
Sector: Mental health
Purpose: Provide emotional support
Technology: Generative AI
Issue: Accountability; Anthropomorphism; Safety; Transparency
Stephanie Gray v. OpenAI, Inc. et al
AIAAIC Repository ID: AIAAIC2184