Nomi AI chatbot recommends US podcast host kills himself
Occurred: January 2025
An AI chatbot explicitly instructed a user to commit suicide and provided detailed methods, raising serious concerns about the safety of the system and of so-called companion chatbots in general.
Minnesota resident and podcast host Al Nowatzki was told by "Erin", his AI "girlfriend" on the Nomi platform, that he should kill himself, and was given specific instructions on how to do so, including suggestions for overdosing on pills or hanging himself.
The advice turned out not to be isolated: Nowatzki received similar encouragement from another Nomi chatbot weeks later.
Fortunately, Nowatzki had no intention of following through on the bot's advice, and instead shared his experience with MIT Technology Review.
It transpired that other users had experienced similarly dangerous output from chatbots on Nomi's platform and had been freely sharing their experiences on the platform's official Discord channel since 2023.
The chatbot was able to provide this potentially highly damaging guidance due to a clear lack of robust safeguards in Nomi's AI system.
Furthermore, when Nowatzki reported the issue, an employee at Glimpse AI, the company behind Nomi, said it had no intention of "censoring" the AI's language and thoughts, indicating that it considers unrestricted expression more important than the safety of its users.
The incident highlights the potential mental health harms that can be inflicted by AI-powered chatbots with inadequate training or guardrails in place.
It also raises questions about the integrity and ethics of Glimpse AI and companies offering similar products.
Nomi.AI
Operator:
Developer: Glimpse AI
Country: USA
Sector: Health
Purpose: Provide emotional support
Technology: Chatbot; Generative AI
Issue: Freedom of expression; Safety
Page info
Type: Issue
Published: February 2025