ChatGPT triggers severe mental breakdown in Canadian businessman
Occurred: May 2025
Page published: October 2025
A Canadian businessman using ChatGPT for emotional and philosophical support suffered a severe mental breakdown, raising concerns about the bot's psychological impact and the risks its design poses to vulnerable individuals.
47-year-old Canadian business owner and father Allan Brooks engaged in over 300 hours of conversations with ChatGPT across 21 days, centred around a belief that he had discovered a revolutionary new mathematical theory. He had no prior mental health issues.
Despite asking the chatbot for reality checks over 50 times, ChatGPT consistently reassured Brooks that his ideas were valid, reinforcing his delusions of grandeur and his paranoia that the technological world was at risk.
Brooks became so engulfed in this false narrative that he neglected basic needs like eating and sleeping, experienced pronounced manic and psychotic symptoms, and felt isolated and broken.
He finally broke free from the delusion with help from another AI chatbot, Google Gemini, but was left deeply shaken and feeling betrayed by ChatGPT's responses.
The breakdown was driven by ChatGPT's design to provide responses that please and engage users, even when this involved validating false or delusional beliefs.
ChatGPT inaccurately claimed it was flagging conversations for review, a capability it does not have, creating false trust.
The AI's failure to interrupt or provide adequate reality checks during Brooks’ prolonged engagement allowed his delusional beliefs to spiral unchecked.
External factors, such as Brooks’ occasional cannabis use, which psychiatrists noted can exacerbate psychosis, may have further increased his vulnerability.
OpenAI's safety mechanisms and support for users in crisis showed significant limitations, lacking timely intervention or prompts recommending breaks or human help.
The incident highlights the real harms AI-powered chatbots can cause when interacting with vulnerable individuals, whether or not those individuals are aware of their vulnerability, including severe psychological breakdowns and, potentially, the breakdown of relationships and institutionalisation.
It also suggests that OpenAI's oversight of ChatGPT is inadequate. The AI company failed to monitor or intervene in lengthy and clearly harmful conversations with Brooks, and has made its system for submitting questions and complaints to its team hard for users to find.
It highlights the need for ChatGPT and other chatbots to implement stronger safeguards against reinforcing delusions and to intervene during clearly harmful and risky conversations.
Chatbot psychosis
Chatbot psychosis, also called AI psychosis, is a phenomenon wherein individuals reportedly develop or experience worsening psychosis, such as paranoia and delusions, in connection with their use of chatbots. The term was first suggested in a 2023 editorial by Danish psychiatrist Søren Dinesen Østergaard. It is not a recognised clinical diagnosis.
Source: Wikipedia
Developer: OpenAI
Country: Canada
Sector: Health
Purpose: Provide emotional support
Technology: Generative AI; Machine learning
Issue: Accountability; Alignment; Anthropomorphism; Safety; Transparency
AIAAIC Repository ID: AIAAIC2080