AI companion app Muah hack reveals users trying to simulate child abuse
Occurred: October 2024
Page published: October 2024
A hack of AI chatbot platform Muah.ai revealed individuals attempting to simulate child abuse scenarios, raising concerns about the platform's safety and security policies and actions.
The hacker, who discovered vulnerabilities in the site's system, shared explicit prompts found in the stolen data, including requests for role-play involving children and other content constituting child sexual abuse material (CSAM).
Muah.ai presents itself as an "uncensored" platform but says it is committed to banning underage content.
The hack exposed a large volume of user data, including explicit messages and prompts related to child sexual abuse.
Though it remains unclear whether the platform's AI actually generated the requested content, the findings underline a troubling trend in the use of AI for illicit purposes. The site's administrator claimed that child-related chatbots are actively moderated and removed.
In addition to highlighting highly questionable security and safety practices at the company, the breach raises broader questions about the regulation and ethical use of AI technologies, particularly as they become more accessible and easier to misuse.
MuahAI
Operator: MuahAI users
Developer: MuahAI
Country: Global
Sector: Media/entertainment/sports/arts
Purpose: Generate images
Technology: Generative AI
Issue: Accountability; Normalisation; Safety; Security
Late 2023 - early 2024. Muah.ai gains popularity as an "uncensored" alternative to mainstream AI companions.
September 2024. The platform suffers an unannounced data breach; the hacker discovers a lack of basic database security.
October 8, 2024. 404 Media and security researcher Troy Hunt report on the breach.
October 11, 2024. Reports emerge of active extortion attempts targeting users identified in the leak.
2025 - 2026. Multiple jurisdictions (including the UK, USA, and Australia) cite incidents like this to justify stricter AI regulations and the criminalisation of AI-generated CSAM.
https://www.404media.co/hacked-ai-girlfriend-data-shows-prompts-describing-child-sexual-abuse-2/
https://www.404media.co/a-network-of-ai-nudify-sites-are-a-front-for-notorious-russian-hackers-2/
https://kotaku.com/ai-chatbot-kids-sexual-abuse-report-virtual-gf-hacker-1851667973
https://es.kotaku.com/ia-chatbot-denuncia-abuso-sexual-infantil-novia-virtual-1851668020
AIAAIC Repository: AIAAIC1764