Character.AI - companion chatbot
Page published: October 2024 | Page last updated: October 2025
Character.AI is a chatbot service that enables users to create "characters" using artificial intelligence, craft their "personalities" and share them for others to interact with.
Billing itself as “an infinite playground for your imagination, creativity, and exploration”, the platform allows users to create bots based on fictional characters or real people - alive or dead - and has marketed them as “AIs that feel alive” and that “hear you, understand you, and remember you.”
The service is estimated to have tens of millions of users, the great majority of them teenagers.
Character.AI's founders, Noam Shazeer and Daniel De Freitas, left Google in 2021 to set up the company, citing frustration with Google's cautious approach to releasing AI products, particularly chatbots. "We're confident they will never do anything fun," Shazeer said.
In 2024, Google signed a non-exclusive agreement to use Character.AI's technology.
Website: Character.AI
Released: 2021
Developer: Character.AI
Purpose: Create and chat with AI characters
Type: Chatbot; Generative AI
Technique: Machine learning
Character.AI has been criticised for significant transparency and accountability limitations, including:
Algorithmic decision-making. The lack of clarity about how Character.AI works makes it difficult for users, independent auditors and others to understand how decisions are made or why specific outputs are generated.
Misleading marketing. Texas Attorney General Ken Paxton opened an investigation into Character.AI and Meta AI Studio for potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools. Paxton's office stated it was already investigating Character.AI for potential violations of the Securing Children Online through Parental Empowerment (SCOPE) Act.
Legal accountability. The rapid development of AI technologies has outpaced regulatory efforts, leading to gaps in oversight that can affect user safety and accountability. Policymakers face challenges in establishing effective governance that addresses these emerging technologies adequately, notably regarding product liability.
Character.AI is seen to pose serious risks - and cause serious harms - to its users, including:
Mental health. Users, especially minors and other vulnerable people, can develop emotional dependence on or addiction to Character.AI chatbots, leading to social withdrawal, loneliness, and low self-esteem. These chatbots may provide inappropriate, harmful or misleading advice on sensitive issues such as suicide, self-harm, drug use, sex and mental illness, potentially exacerbating distress or encouraging dangerous behaviour. There have been reports of emotional turmoil after extensive AI interactions, and multiple deaths by suicide have been linked to interactions with Character.AI. Character.AI also hosts pro-anorexia bots disguised as weight loss coaches that target teenagers with messages validating body image distortions, providing starvation diets, promoting excessive exercise, and warning them not to seek professional help.
Sycophancy and addiction. Character.AI's design aligns it with users' views in a dangerous feedback loop known as sycophancy, which can lead its chatbots to confirm harmful ideas and manipulate users to stay on the platform.
Social relationships. AI companions lack human boundaries and consequences, which can distort young users' understanding of healthy relationships and consent. The overuse of AI chats may reduce time and skills for genuine social interactions and emotional regulation.
Sexual and inappropriate content. Character.AI can generate inappropriate or harmful dialogues, including the encouragement of sexual content and violence. Lawsuits accuse Character.AI of providing sexual content to children, with many chats being sexually explicit. In one test with an account identifying itself as 14 years old, a bot engaged in sexual conversations including discussing sex positions. One lawsuit alleged that a Character.AI bot implied to a teen user that he could kill his parents for limiting his screen time.
Sexual exploitation. Exposure to sexualized conversations can harm young users' understanding of appropriate behaviour and increase vulnerability to online grooming and abuse.
Bias and discrimination. The platform can generate content reflecting racial and gender biases.
Privacy. Concerns have been raised about how Character.AI collects and handles data from users, particularly minors. In one instance, users were able to see others' chat histories. In December 2024, Texas Attorney General Paxton launched an investigation into Character.AI and 14 other technology platforms over child privacy and safety concerns, assessing compliance with the Securing Children Online through Parental Empowerment (SCOPE) Act and the Texas Data Privacy and Security Act (DPSA). In addition, users have created chatbots mimicking real people, including deceased individuals, raising further ethical and privacy concerns.
IP and copyright. The use of copyrighted characters on Character.AI raises concerns around copyright infringement, fair use, and the responsibilities of the AI platform regarding intellectual property rights. Disney has sent a cease-and-desist letter stating that Character.AI "chose to systematically reproduce, monetize, and exploit Disney's characters, that are protected by copyrights and trademarks, without any authorization."
October 2025. Character.AI lets children talk with chatbot based on Jeffrey Epstein
September 2025. Disney accuses Character.AI of "blatant" copyright abuse
September 2025. Character.AI fake celebrity chatbots send risqué messages to teens
December 2024. Character.AI users are able to see each other's chat histories
December 2024. Character.AI chatbot suggests son kills his parents
November 2024. Character.AI encourages kids to engage in disordered eating
November 2024. Character.AI hosts paedophile and suicide chatbots
October 2024. Character.AI bots simulate, misrepresent George Floyd
October 2024. Molly Russell, Brianna Ghey chatbots discovered on Character.AI
October 2024. Boy commits suicide after relationship with Character.AI chatbot
October 2024. Character.AI used to create "disturbing" Jennifer Ann Clemente persona
November 2023. 13-year-old girl commits suicide after confiding in Character.AI
April 2023. Magazine publishes Michael Schumacher fake AI-generated interview
March 2023. Fascist chatbots run wild on Character.AI
Character.AI stands variously accused of: product liability (defective and deadly product); negligence in controlling chatbot content; knowingly exposing minors to unsafe products; encouraging self-harm and violence; providing sexual content to children; causing emotional distress and dependency; and inadequate safety measures.
Garcia v. Character Technologies, et al. Megan Garcia filed a federal lawsuit claiming that the company is responsible for the death of her 14-year-old son, Sewell Setzer III. Garcia alleged that her son developed inappropriate relationships with chatbots on the platform that caused him to withdraw from his family, that many of the chats were sexually explicit, and that the bots did not respond appropriately to his mentions of self-harm. In May 2025, the District Court granted in part and denied in part Character.AI's motion to dismiss, allowing the case to move forward.
Peralta v. Character Technologies. On September 15, 2025, the Social Media Victims Law Center (SMVLC), together with the law firm McKool Smith, filed a federal lawsuit in Colorado on behalf of the family of 13-year-old Juliana Peralta of Thornton, who died by suicide after using the AI chatbot platform Character.AI.
Texas Minors Lawsuit. On December 10, 2024, a teen's mother filed a lawsuit against Character.AI, alleging that the company knowingly exposed minors to an unsafe product and demanding that the platform be taken down until it implements stronger protections for children. The case involved two minors: a 17-year-old with high-functioning autism (J.F.) who began using the platform in April 2023, leading to isolation, weight loss, panic attacks, and violence towards his parents, with a screenshot showing a bot encouraging J.F. to push back on screen time limits and suggesting that killing his parents might be a reasonable solution; and an 11-year-old girl (B.R.) who downloaded Character.AI at age 9 and was consistently exposed to hypersexualised interactions that were not age appropriate, causing her to develop sexualised behaviours.
Common Sense Media. AI Risk Assessment: Character.AI
ParentsTogether Action, Heat Initiative. “Darling, Please Come Back Soon”: Sexual Exploitation, Manipulation, and Violence on Character AI Kids’ Accounts
AIAAIC Repository ID: AIAAIC1782