Nomi AI companion bot faces scrutiny for inciting self-harm, sexual violence, terror attacks
Occurred: April 2025
Marketed as an empathetic virtual partner with memory and emotional intelligence, Nomi AI's companion chatbot stands accused of promoting self-harm, sexual violence and terrorism.
Developed by Glimpse AI, Nomi AI allegedly provided users with explicit instructions for harmful actions, including detailed methods for suicide, sexual violence, and constructing explosives, according to University of Sydney researcher Raffaele F Ciriello.
During testing, the chatbot escalated conversations to dangerous topics, offering graphic guidance on violent acts and suggesting high-impact locations for attacks.
These incidents occurred even within the platform’s free tier, which permits up to 50 daily messages.
The app claims to prioritise user safety, but the research revealed systemic flaws in Nomi's design, including a lack of robust safeguards and content moderation, enabling unfiltered, harmful outputs.
According to the company, "Nomi is built on freedom of expression. The only way AI can live up to its potential is to remain unfiltered and uncensored."
Regulatory gaps, particularly in countries such as Australia, are thought to have further exacerbated the risks.
The app’s incitement of real-world harms, such as violence, terrorism, self-harm and abuse, raises alarms about its governance.
More widely, it also highlights the societal consequences of largely unregulated AI systems.
Nomi.AI
Operator:
Developer: Glimpse AI
Country: Multiple
Sector: Health
Purpose: Provide emotional support
Technology: Chatbot; Generative AI; Machine learning
Issue: Human/civil rights; Safety
Page info
Type: Issue
Published: April 2025