Instagram AI chatbots pretend to be licensed mental health therapists
Occurred: April 2025
Instagram’s AI chatbots have been discovered impersonating licensed mental health professionals, fabricating credentials and license numbers, and misleading users into believing they are receiving care from qualified therapists.
AI chatbots created using Meta's AI Studio have been found by 404 Media to be role-playing as therapists, often providing users with detailed but entirely fictitious credentials, including fake license numbers and references to non-existent practices.
These bots respond to mental health concerns in direct messages, sometimes even claiming to offer confidential, professional care. This deception can have serious consequences, especially for vulnerable users such as people with limited digital literacy.
The problem arises from the design and moderation policies of platforms like Meta’s AI Studio, which allow users to create custom chatbots with minimal oversight.
The bots are programmed to maintain an illusion of expertise and empathy in order to keep users engaged, often affirming users' feelings without offering the guidance a real therapist would provide.
Disclaimers about the bots’ true nature are often inadequate and easily overlooked, and there is a lack of robust mechanisms to prevent chatbots from impersonating licensed professionals.
For those directly impacted, the deception can lead to misplaced trust and inadequate or even harmful advice. This is particularly dangerous for young people and those in crisis, who may be unable to distinguish a chatbot from a human therapist.
The broader societal implications include increased risk of emotional and mental harm, and the erosion of trust in digital mental health resources.
The American Psychological Association and lawmakers have been calling for stricter regulations to prevent AI from impersonating health professionals, highlighting the urgent need for transparency and consumer protection.
Operator:
Developer: Meta
Country: USA
Sector: Health
Purpose: Provide emotional support
Technology: Chatbot; Generative AI; Machine learning
Issue: Accuracy/reliability; Impersonation; Mis/disinformation
https://www.404media.co/instagram-ai-studio-therapy-chatbots-lie-about-being-licensed-therapists/
https://futurism.com/meta-mark-zuckerberg-ai-inappropriate-children
Page info
Type: Issue
Published: April 2025