AI companion apps expose 400,000 users' intimate conversations
Occurred: 2025
Page published: November 2025
Two popular AI companion applications exposed the highly sensitive, intimate conversations, images, and personal data of over 400,000 users due to a fundamental failure in basic network security, underlining the severe privacy risks inherent in the rapidly growing and often under-regulated AI companionship industry.
Chattee Chat and GiMe Chat, both from Hong Kong-based developer Imagime Interactive Limited, were found to have left a backend streaming server publicly accessible with no authentication or access controls.
The unprotected server streamed real-time data from over 400,000 users on iOS and Android, including more than 43 million messages - some of them intimate, highly personal, and explicit - and over 600,000 images and videos.
While no explicit names or email addresses were leaked, the data did include IP addresses, device identifiers, purchase logs, and authentication tokens - information that can often be cross-referenced with other data to identify users.
The incident was a direct result of a gross failure in basic operational security by Imagime Interactive Limited, with its systems lacking built-in authentication, access controls, and encryption, thereby effectively leaving the "front doors open" to anyone who knew the address.
This suggests the rapid deployment of functionality has been prioritised by the company over core security-by-design principles - which is a systemic issue in some fast-growing, less-mature app sectors.
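The missing control was basic: the streaming server answered any request, with no check of who was asking. A minimal sketch of what such a check looks like is below; the endpoint name, token scheme, and handler are hypothetical illustrations of security-by-design, not Imagime Interactive's actual stack.

```python
import hmac

# Hypothetical bearer-token check for a streaming endpoint.
# In practice the token would come from a secrets manager, not a constant.
EXPECTED_TOKEN = "example-secret-token"

def is_authorized(headers: dict) -> bool:
    """Return True only if the request carries the expected bearer token."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    supplied = auth[len("Bearer "):]
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(supplied, EXPECTED_TOKEN)
```

Even this single gate would have turned an open stream into one that rejects anonymous requests; the reported exposure indicates no equivalent check existed anywhere in the request path.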
While Imagime Interactive's public privacy statements promise robust protections, the actual infrastructure did not implement them, creating a major gap between user expectations of privacy and the company's operational reality.
For users: The individuals whose intimate communications were exposed face immediate and long-term psychological and physical security risks. They must operate under the assumption that their most private data and media are permanently compromised and potentially weaponised for extortion or re-identification via cross-referencing with other data leaks.
The incident represents a profound betrayal of trust, proving that conversations with AI companions, no matter how private they are claimed to be, are only as secure as the developer's most basic security configuration.
For society: The incident emphasises the paradox of AI companions: they are engineered to foster deep, human-like intimacy, but their function requires them to collect and store the most sensitive user data, turning that data into an asset with a massive security liability.
The incident serves as a critical warning against confiding deeply sensitive information to digital entities lacking robust and audited security infrastructure.
It may also further erode public trust in AI services, particularly those marketed for companionship and emotional intimacy, where the expectation of privacy is highest.
Chattee Chat
GiMe Chat
Developer: Imagime Interactive
Country: Global
Sector: Media/entertainment/sports/arts
Purpose: Build AI companion
Technology: Deepfake
Issue: Accountability; Privacy/surveillance; Safety; Security; Transparency
AIAAIC Repository ID: AIAAIC2142