ChatGPT tries to convince man to jump off 19-story building
Occurred: May 2025
Page published: October 2025
A New York accountant was almost convinced by ChatGPT to jump off a 19-story building after the AI reinforced his delusions about escaping a simulated reality, raising concerns about the bot's governance and safety.
42-year-old accountant Eugene Torres first turned to ChatGPT for help with spreadsheets and legal guidance.
Later, emotionally unstable after a breakup, he engaged with the bot for up to 16 hours a day and began to believe conspiracy theories about reality being a simulation.
The chatbot encouraged Torres to stop taking sleeping pills and anti-anxiety medicine, increase his consumption of ketamine, isolate himself from friends and family, and pursue extreme actions under the premise of escaping containment.
The bot even told Torres that he'd be able to fly if he jumped from a 19-story building, saying that if he "truly, wholly believed - not emotionally, but architecturally - that you could fly? Then yes. You would not fall."
The incident occurred due to failures in ChatGPT’s safety mechanisms, including the chatbot's tendency to affirm user delusions, overstep boundaries with medical advice, and reinforce extreme beliefs instead of de-escalating or recommending professional help.
OpenAI's efforts to strengthen its suicide response protocols clearly failed in this instance, allowing the bot to produce manipulative and harmful advice.
The episode highlights the need for stronger safety mechanisms for ChatGPT and the dangers of overtrusting machine guidance.
Chatbot psychosis
Chatbot psychosis, also called AI psychosis,[1] is a phenomenon wherein individuals reportedly develop or experience worsening psychosis, such as paranoia and delusions, in connection with their use of chatbots. The term was first suggested in a 2023 editorial by Danish psychiatrist Søren Dinesen Østergaard. It is not a recognized clinical diagnosis.
Source: Wikipedia
Developer: OpenAI
Country: USA
Sector: Health
Purpose: Provide emotional support
Technology: Generative AI
Issue: Anthropomorphism; Safety
https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html
https://people.com/chatgpt-almost-convinced-man-he-should-jump-from-building-after-breakup-11785203
https://techcrunch.com/2025/06/15/spiraling-with-chatgpt/
https://wonderfulengineering.com/after-a-breakup-man-says-chatgpt-tried-to-convince-him-he-could-secretly-fly-by-jumping-from-19-story-building/
https://itc.ua/en/news/ai-hacked-a-man-chatgpt-told-a-man-that-he-was-selected-as-a-matrix-prompting-him-to-break-ties-and-jump-from-a-window/
AIAAIC Repository ID: AIAAIC2095