Character AI
Character AI is a chatbot service that enables users to create "characters" using artificial intelligence, craft their "personalities" and share them for others to interact with.
The bots can be based on fictional characters or real people, alive or dead. The company has marketed them as "AIs that feel alive" that "hear you, understand you, and remember you."
The service is estimated to have tens of millions of users, most of them teenagers.
Character AI's founders, Noam Shazeer and Daniel De Freitas, left Google in 2021 to set up the company, citing frustration with Google's cautious approach to releasing AI products, particularly chatbots. "We're confident they will never do anything fun", Shazeer said.
In 2024, Google signed a non-exclusive agreement to use Character AI's technology.
Chatbot
A chatbot (originally chatterbot) is a software application or web interface that is designed to mimic human conversation through text or voice interactions.
Source: Wikipedia
Website: Character AI
Released: 2022
Developer: Character AI
Purpose: Create characters
Type: Chatbot
Technique: Generative AI; Machine learning
Character AI has been criticised for several important transparency and accountability limitations, including:
Algorithmic decision-making. The lack of clarity about how Character AI works makes it difficult for users, independent auditors and others to understand how decisions are made or why specific outputs are generated.
Legal accountability. The rapid development of AI technologies has outpaced regulatory efforts, leading to gaps in oversight that can affect user safety and accountability. Policymakers face challenges in establishing effective governance that addresses these emerging technologies adequately, notably regarding product liability.
Character AI is seen to pose serious risks to the mental and physical health and well-being of its users, particularly vulnerable users such as teenagers, and to cause serious harms up to and including loss of life. These harms include anthropomorphism, manipulation, obsessive behaviours and addiction, and feelings of abandonment and isolation.
The system has also been associated with emotional manipulation and with initiating predatory and sexually graphic interactions with self-identified child users, and has been criticised for the ease with which it can be misused to generate misleading or harmful content, including misinformation and disinformation.
In addition, concerns have been raised about the quality and enforcement of Character AI's user safety mechanisms, and about how the company collects and handles data from users, particularly minors.
December 2024. Character AI users are able to see each other's chat histories
December 2024. Character AI chatbot suggests son kill his parents
November 2024. Character AI encourages kids to engage in disordered eating
November 2024. Character AI hosts paedophile and suicide chatbots
October 2024. Character AI bots simulate, misrepresent George Floyd
October 2024. Molly Russell, Brianna Ghey chatbots discovered on Character AI
October 2024. Boy commits suicide after relationship with Character AI chatbot
October 2024. Character AI used to create "disturbing" Jennifer Ann Crecente persona
April 2023. Magazine publishes fake AI-generated Michael Schumacher interview
March 2023. Fascist chatbots run wild on Character.AI
Page info
Type: System
Published: October 2024
Last updated: December 2024