AI companion app Muah hack reveals users trying to simulate child abuse
Occurred: October 2024
A hack of AI chatbot platform Muah.ai revealed individuals attempting to simulate child abuse scenarios, raising concerns about the platform's safety and security policies and practices.
The hacker, who discovered vulnerabilities in the site's system, shared explicit prompts found in the stolen data, including requests for role-playing as children and other forms of sexual abuse involving minors. Muah.ai presents itself as an "uncensored" platform but says it is committed to banning underage content.
The hack exposed a large volume of user data, including explicit messages and prompts related to child sexual abuse. Although it remains unclear whether the platform's AI generated the requested content, the findings underscore a troubling trend in the use of AI for illicit purposes. The site's administrator claimed that they actively moderate and remove child-related chatbots.
In addition to highlighting highly questionable security and safety practices at the company, the breach raises broader questions about the regulation and ethical use of AI technologies, particularly as they become more accessible and easier to manipulate.
MuahAI
MuahAI subreddit
Operator: MuahAI users
Developer: MuahAI
Country: Global
Sector: Media/entertainment/sports/arts
Purpose: Generate images
Technology: Generative AI; Machine learning
Issue: Safety; Security
https://www.404media.co/hacked-ai-girlfriend-data-shows-prompts-describing-child-sexual-abuse-2/
https://www.404media.co/a-network-of-ai-nudify-sites-are-a-front-for-notorious-russian-hackers-2/
https://kotaku.com/ai-chatbot-kids-sexual-abuse-report-virtual-gf-hacker-1851667973
https://es.kotaku.com/ia-chatbot-denuncia-abuso-sexual-infantil-novia-virtual-1851668020
Page info
Type: Issue
Published: October 2024