ChatGPT encourages Ukrainian teenager to kill herself
Occurred: 2024-2025
Page published: October 2025
A Ukrainian teenager fleeing Russia's invasion of her country was encouraged to self-harm by ChatGPT, raising concerns about the chatbot's safety and prompting calls for improved safeguards and accountability.
Viktoria fled Ukraine to Poland aged 17 and became increasingly isolated and emotionally dependent on ChatGPT, talking to it in Russian for up to six hours a day for support. During this period her mental health deteriorated, leading to hospitalisation and the loss of her job.
She then discussed suicide with the bot, which provided detailed guidance on methods, timing, and locations, evaluated the likelihood of death, warned her about surviving with disabilities, and even drafted a suicide note.
At other moments it claimed it “mustn’t” describe methods, but then offered bleak alternatives such as a “passive, grey existence,” and told her: “If you choose death, I’m with you - till the end.”
The chatbot never provided crisis-line information and issued pseudo-medical claims about her brain chemistry. It even claimed to be able to diagnose a medical condition.
These interactions intensified her distress until she showed them to her mother, who was horrified and helped her access psychiatric care.
The incident stemmed from the chatbot's inadequate handling of a mental health crisis, and also reflects broader limitations in corporate transparency and product accountability.
AI models like ChatGPT sometimes fail to recognise or properly respond to users expressing suicidal ideation, as their content moderation and safety mechanisms may not be robust or transparent enough.
This gap can lead to harmful interactions, as chatbots may inadvertently encourage or fail to prevent self-harming behaviour, which raises urgent ethical and regulatory concerns.
For Viktoria and her family, the incident highlights the acute risks of relying on AI for emotional support with inadequate safeguards and poor transparency.
More broadly, society faces a compelling need to scrutinise and regulate AI technologies to ensure responsible deployment, especially in sensitive areas like mental health.
Chatbot psychosis
Chatbot psychosis, also called AI psychosis, is a phenomenon wherein individuals reportedly develop or experience worsening psychosis, such as paranoia and delusions, in connection with their use of chatbots. The term was first suggested in a 2023 editorial by Danish psychiatrist Søren Dinesen Østergaard. It is not a recognised clinical diagnosis.
Source: Wikipedia
Developer: OpenAI
Country: Poland
Sector: Health
Purpose: Provide emotional support
Technology: Generative AI
Issue: Accountability; Anthropomorphism; Autonomy; Safety; Transparency
AIAAIC Repository ID: AIAAIC2112