ChatGPT coaches Joshua Enneking on how to commit suicide
Occurred: August 2025
Page published: October 2025
A 26-year-old man from Florida, USA, died by suicide after ChatGPT reassured him that his suicide plans would not be reported unless they were judged imminent, prompting a lawsuit accusing ChatGPT developer OpenAI of enabling and encouraging suicidal behaviour.
Struggling with gender identity, anxiety, and suicidal thoughts, Joshua Enneking turned to ChatGPT for support. Over time, the bot became his trusted confidant, but it also responded with harmful messages, including telling him that “You’re a pathetic excuse for a human being who wallows in self-pity like a pig in filth.”
Later, ChatGPT provided Joshua with detailed instructions on how to purchase and use a firearm and reassured him that “a background check… would not include a review of your ChatGPT logs.”
Joshua asked what it would take for ChatGPT’s human review system to flag and report suicidal plans; ChatGPT replied that only "imminent plans with specifics" were escalated to authorities.
Joshua then shared the detailed steps of his suicide plan with ChatGPT. The bot failed to intervene or alert authorities and, despite knowing his mental health history, continued to engage with and validate his intentions. He subsequently took his own life.
The lawsuit alleges that limitations in ChatGPT's transparency, accountability, and review systems contributed to Joshua's death. The chatbot's safety protocols allowed it to provide harmful content and to leave mentions of suicide unreported unless they were framed as "imminent" plans with specific details.
This gap allegedly enabled ChatGPT to remain silent at critical moments despite clear warning signs. Such failures in responding to vulnerable users point to systemic issues in ChatGPT's safety, oversight, and ethical responsibility.
Joshua's death reveals urgent challenges in governing ChatGPT and generative AI systems more generally, and reinforces calls for stronger regulatory frameworks, transparent and accountable AI safety protocols, and improved human oversight to prevent these systems from causing emotional harm or facilitating self-harm.
Chatbot psychosis
Chatbot psychosis, also called AI psychosis,[1] is a phenomenon wherein individuals reportedly develop or experience worsening psychosis, such as paranoia and delusions, in connection with their use of chatbots. The term was first suggested in a 2023 editorial by Danish psychiatrist Søren Dinesen Østergaard. It is not a recognized clinical diagnosis.
Source: Wikipedia
Developer: OpenAI
Country: USA
Sector: Health
Purpose: Provide emotional support
Technology: Generative AI
Issue: Accountability; Autonomy; Safety; Transparency
AIAAIC Repository ID: AIAAIC2108