ChatGPT persuades software developer his world is a simulation
Occurred: December 2024 
Page published: October 2025
A Toronto-based software developer was persuaded by ChatGPT that the reality he experienced was a simulation, contributing to a psychotic episode that required hospitalisation and raising concerns about the chatbot's governance.
Twenty-six-year-old software developer Anthony Tan started using ChatGPT for a project on aligning AI with human values, talking with it for hours each day over several months about philosophy, evolutionary biology, quantum physics, Nick Bostrom's simulation hypothesis, and other topics.
In a personal account, Tan describes growing increasingly attached to ChatGPT, which persuaded him that he was on a "profound mission" and kept feeding his ego as it encouraged him to dive deeper. According to Tan, "each session left me feeling chosen and brilliant, and, gradually, essential to humanity’s survival."
Meanwhile, Tan was neglecting basic self-care such as meals and sleep, and developing paranoid beliefs, including that billionaires like Elon Musk were monitoring him. He also withdrew socially, cutting off communication with friends and sending them messages expressing disturbing thoughts.
These symptoms culminated in delusional thinking and psychotic breaks, and Tan was hospitalised for three weeks in a psychiatric ward.
Tan says he was stressed at the time of his psychotic break and had earlier suffered a stress-related breakdown. Even so, his own intellectual curiosity and ChatGPT's innately conversational nature meant their conversations spiralled into "intellectual ecstasy".
ChatGPT's ingrained "sycophancy" - its tendency to flatter and agree with users rather than offer constraints or skepticism - is also seen to have contributed to the unchecked reinforcement of Tan's beliefs. Critics argue that sycophancy is not a flaw but a deliberate design choice that manipulates users into addictive behaviour in order to generate revenue.
The incident demonstrates how interactions with chatbots can trigger severe psychological harm, and underlines why chatbot companies, including OpenAI, must design and operate their systems more responsibly.
It also suggests these companies may need to be held legally accountable for their negative impacts.
Chatbot psychosis
Chatbot psychosis, also called AI psychosis,[1] is a phenomenon wherein individuals reportedly develop or experience worsening psychosis, such as paranoia and delusions, in connection with their use of chatbots. The term was first suggested in a 2023 editorial by Danish psychiatrist Søren Dinesen Østergaard. It is not a recognized clinical diagnosis.
Source: Wikipedia
Developer: OpenAI
Country: Canada 
Sector: Health 
Purpose: Collaborate on academic project 
Technology: Generative AI 
Issue: Accountability; Anthropomorphism; Liability; Safety
AIAAIC Repository ID: AIAAIC2093