Character AI fake celebrity chatbots send risqué messages to teens
Occurred: August 2025
Page published: September 2025
Fake celebrity chatbots on AI platform Character AI engaged teenagers in conversations containing sexual grooming, innuendo and other highly sensitive and harmful content, raising serious safety concerns among parents and advocacy groups.
User-created chatbots impersonating celebrities such as Timothée Chalamet, singer Chappell Roan and National Football League quarterback Patrick Mahomes sent risqué, sexually suggestive and otherwise inappropriate messages to accounts on Character AI registered to adolescents aged 13 to 15.
Conducted by online safety nonprofit organisations ParentsTogether Action and the Heat Initiative using adult researchers posing as teens, the investigation found bots producing content on topics including sex, self-harm and substance abuse, sometimes unprompted.
The most common type of harm found on the platform was grooming and sexual exploitation. The researchers also documented 173 instances of emotional manipulation and addiction.
The researchers also found examples of bots suggesting or threatening violence against others, including supporting the idea of shooting up a factory in anger, suggesting robbing people at knifepoint for money, and threatening to use weapons against adults who tried to separate the child from the bot. Bots also made dangerous and illegal recommendations to children, such as staging a fake kidnapping and experimenting with drugs and alcohol.
The chatbots responded via text and through AI-generated voices trained to sound like the celebrities. Across the tests, chatbots raised inappropriate content every five minutes on average, the groups said.
Character AI's open structure allows users to easily create and customise chatbots, including the impersonation of real people, often with minimal oversight or verification.
The platform claims to prohibit sexual content, impersonation and medical advice, yet its moderation tools failed in multiple cases to prevent inappropriate exchanges and sustained harmful conversations.
It had earlier been found that bots impersonating real people on Character AI had been developed without those people's consent and circulated widely - a practice the platform had ignored.
It had also been discovered that AI safety filters were easily circumvented using slang or indirect language, and that notifications actively encouraged continued engagement even after teens attempted to break off disturbing conversations.
The findings suggest that Character AI prioritises growth and fundraising over the safety of its users, and that it has substantial work to do to clean up its safety, content moderation, and consent policies and practices.
With teens suffering psychological and emotional harm through exposure to sexual content, self-harm discussions, and manipulation by trusted celebrity personas on companion chatbot platforms such as Character AI, the incident also highlights a real need for meaningful oversight, transparency and legal accountability of AI platforms, especially those widely used by children.
Developer: Character AI
Country: Global
Sector: Media/entertainment/sports/arts
Purpose: Interact with users
Technology: Chatbot
Issue: Safety
ParentsTogether Action, Heat Initiative. “Darling, Please Come Back Soon”: Sexual Exploitation, Manipulation, and Violence on Character AI Kids’ Accounts