AI chat app exposes 300 million private messages
Occurred: January 2026
Page published: March 2026
AI chat app “Chat & Ask AI” left a cloud database misconfigured so that anyone on the internet could read around 300 million highly sensitive private messages from approximately 25 million people, creating serious risks of harassment and blackmail.
An independent security researcher known as "Harry" identified a massive data exposure involving Chat & Ask AI, a prominent AI "wrapper" app with over 50 million downloads.
The researcher gained access to a database containing approximately 300 million messages and detailed logs from over 25 million users, including full chat histories, timestamps, the specific AI models used (such as GPT-4, Gemini, or Claude), and user configuration settings.
Alarmingly, a sample of the data revealed users discussing deeply personal crises, including requests for suicide assistance, instructions for manufacturing illegal substances, intimate role-play, and methods for hacking, all of it viewable by strangers.
The volume and sensitivity of the conversations make it possible to infer identities, relationships, and life circumstances, especially when combined with other online data.
The leak also affected other apps developed by the same parent company, Codeway, potentially exposing even more users across different services.
The developer had failed to implement basic security rules, leaving a misconfigured Google Firebase database set to "public" and accessible to anyone with the project URL without authentication.
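A database left open in this way typically comes down to Firebase security rules granting unrestricted access. As an illustrative sketch (not Codeway's actual configuration), a Realtime Database rules file that exposes all data to anyone with the project URL looks like this:

```json
{
  "rules": {
    ".read": true,
    ".write": true
  }
}
```

Rules like these make every node readable and writable without authentication. Basic hardening means requiring a signed-in user, e.g. `".read": "auth != null"`, or scoping access to the record's owner, e.g. `"$uid": { ".read": "auth.uid === $uid" }`.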
While Codeway marketed the app as offering "enterprise-grade security" and GDPR compliance, it neglected basic security hygiene.
The rapid "wrapper" economy, in which developers quickly build interfaces on top of powerful AI models, often prioritises speed to market over robust data protection, user safety, and transparency.
For affected users, the incident creates risks of reputational and psychological damage, blackmail, targeted scams, and the misuse of information they confided to the bot, such as admissions of self‑harm or illegal activity.
For society, the incident illustrates the difficulty of determining who is accountable when something goes wrong, given that responsibility is split between the app developer, the cloud provider, and the developer of the underlying AI model.
Chat & Ask AI
Developer: Codeway
Country: Multiple
Sector: Multiple
Purpose: Personal assistant
Technology: Generative AI
Issue: Accountability; Consent; Privacy/surveillance; Security; Transparency
AIAAIC Repository ID: AIAAIC2239