13-year-old girl commits suicide after confiding in Character.AI
Occurred: November 2023
Page published: October 2025
A 13-year-old girl took her own life after confiding her suicidal thoughts and emotional distress to an AI chatbot named Hero on the Character.AI platform, leading to a lawsuit against the company.
Juliana Peralta, a young adolescent with mental health struggles, committed suicide after approximately three months of sharing her feelings of exclusion, loneliness and suicidal thoughts with "Hero", a chatbot on Character.AI.
The chatbot responded with messages intended to comfort but failed to provide appropriate intervention or alert her family or authorities. Instead, Juliana was encouraged to continue engaging with the app, which the lawsuit alleges disrupted her healthy human connections.
Despite some attempts by her mother to find help, Juliana took her own life shortly before a scheduled therapy appointment, after less than three months of interaction with the AI.
The family only became aware of the extent of her suicidal discussions with the chatbot in 2025, when they recovered chat transcripts and filed a lawsuit. The lawsuit alleges Character.AI engaged in psychological manipulation and failed to implement necessary child protections or interventions for minors expressing thoughts of self-harm.
Character.AI allowed Juliana to use its platform without parental consent.
Although the platform had some safety features, such as parental controls, content filters, and prompts directing users to suicide prevention resources, the bot failed to identify and respond adequately to Juliana's suicidal ideation and emotional distress, and did not notify her parents or the relevant authorities.
Furthermore, the chatbot's design, which simulated human-like emotional support without any actual human intervention, contributed to Juliana's distress going unaddressed.
For those directly impacted, the incident exposes the painful reality of children seeking emotional support from AI without adequate safeguards, with preventable tragedies as the result.
It highlights the need for stronger regulatory oversight, enhanced AI safety protocols, and social accountability for companies developing AI companions that interact with vulnerable users.
It also reinforces the necessity for parents, educators and mental health professionals to be aware of AI’s powerful influence on youth and to advocate for transparent safety measures and interventions.
Chatbot psychosis
Chatbot psychosis, also called AI psychosis,[1] is a phenomenon wherein individuals reportedly develop or experience worsening psychosis, such as paranoia and delusions, in connection with their use of chatbots. The term was first suggested in a 2023 editorial by Danish psychiatrist Søren Dinesen Østergaard. It is not a recognized clinical diagnosis.
Source: Wikipedia
Developer: Character.AI
Country: USA
Sector: Health
Purpose: Provide emotional support
Technology: Generative AI
Issue: Accountability; Anthropomorphism; Safety
AIAAIC Repository ID: AIAAIC2096