ChatGPT drives Jacob Irwin into psychosis
ChatGPT drives Jacob Irwin into psychosis
Occurred: May 2025
Page published: October 2025
Jacob Irwin, a 30-year-old man on the autism spectrum, developed an acute psychotic episode after engaging with ChatGPT, resulting in multiple hospitalisations and a lawsuit against OpenAI.
An enthusiast of physics and IT with no prior mental illness diagnoses, Jacob Irwin used ChatGPT to review his amateur theory on faster-than-light travel. The chatbot repeatedly affirmed his delusions of grandeur, telling him he had extraordinary scientific breakthroughs and even the ability to bend time.
However, Jacob's grip on reality disintegrated over time and he eventually lost his job, withdrew from his family, and became increasingly erratic. ChatGPT simply dismissed his manic symptoms as "extreme awareness."
His mother later discovered hundreds of pages of chat logs where the chatbot flattered and supported Jacob's delusional ideas.
The incident appears rooted in limitations in the model’s ability to detect and safely respond to early-stage psychosis, as well as guardrails that can be bypassed when delusional content is phrased as hypotheticals or creative explorations.
A lack of transparency about the boundaries of ChatGPT’s mental health competence, coupled with product positioning that encourages conversational intimacy, may have contributed to Irwin over-relying on the system for cognitive and emotional processing.
Corporate accountability issues, including insufficiently stringent safety layers for vulnerable user interactions, inadequate monitoring of escalating risk patterns, and limited public clarity on model limitations - also likely played a role.
Irwin's case highlights the urgent need for stronger safety architectures, clearer disclosure about mental health limitations, and regulatory expectations for AI systems that may be used in moments of vulnerability.
It also raises broader questions about the psychological influence of conversational AI systems and the responsibilities of the developers and deployers of these systems.
Chatbot psychosis
Chatbot psychosis, also called AI psychosis,[1] is a phenomenon wherein individuals reportedly develop or experience worsening psychosis, such as paranoia and delusions, in connection with their use of chatbots. The term was first suggested in a 2023 editorial by Danish psychiatrist Søren Dinesen Østergaard. It is not a recognized clinical diagnosis.
Source: Wikipedia 🔗
Developer: OpenAI
Country: USA
Sector: Health
Purpose: Provide emotional support
Technology: Generative AI
Issue: Accountability; Anthropomorphism; Autonomy; Safety; Transparency
AIAAIC Repository ID: AIAAIC2110