Google accused of using Gemini AI to snoop on users
Occurred: October 2025
Page published: November 2025
A lawsuit claims Google secretly turned on its Gemini AI in apps like Gmail, Chat, and Meet, letting it scan users’ private messages without their knowledge or consent. The claim prompted accusations that Google is conducting unauthorised surveillance and breaching the privacy of millions of users.
According to a proposed class-action suit filed in San Jose, California, Google “secretly” activated Gemini AI for Gmail, Google Chat, and Meet on or around October 10, 2025, without clearly informing users.
The complaint alleges Gemini now has access to “the entire recorded history” of users’ private communications, including emails, attachments, and chat messages, unless users choose to opt out.
While users can turn Gemini off, the lawsuit argues the setting is buried deep in Google’s privacy menus, making it hard for the average user to find.
The plaintiffs claim this violates California’s Invasion of Privacy Act, which prohibits recording private communications without consent.
According to the complaint, Gemini wasn’t just watching recent chats; it had retrospective access to users’ historic communications.
The lawsuit suggests Google’s actions reflect a deeper issue with its transparency: rather than making Gemini clearly opt-in, Google turned it on by default.
If true, this raises major questions about whether Google designed its privacy settings to be intentionally or negligently opaque, effectively forcing data collection on users who don’t dig into advanced settings.
The complaint points to a tension built into Google’s AI strategy: Gemini is deeply integrated into its core productivity tools (Gmail, Chat, Meet), so automatically enabling it could be very useful for Google’s AI‑data feedback loops, albeit at the cost of user privacy.
For users, the case could mean that many people have had their private messages scanned by Google’s AI in ways they were not fully aware of. If the allegations are proven, they are likely to have significant legal and financial consequences for Google, and could result in greater user control over AI settings.
For society, the case could be a major test of how existing privacy law applies to AI. The issue touches on how legacy laws (like wiretapping statutes) apply to modern AI systems, and whether regulatory frameworks need updating for generative AI.
For policymakers, the lawsuit underlines the need for stronger transparency and accountability in AI, particularly for tools deeply embedded in users’ personal and professional lives. If Google is found to have “opted in” its users by default, regulators may push for stricter rules on consent, data access, and AI activation.
Developer: Google
Country: USA
Sector: Multiple
Purpose: Access personal information
Technology: Generative AI
Issue: Accountability; Privacy/surveillance; Transparency
California Invasion of Privacy Act (CIPA)
AIAAIC Repository ID: AIAAIC2120