ChatGPT exposes user chats to Google search
Occurred: July 2025
Page published: September 2025
Sensitive user conversations on ChatGPT became searchable on Google due to a flawed sharing feature, impacting thousands of individuals and organisations across the world.
A “Share” option added by OpenAI let users create public links to their chats; some could also be marked as "discoverable," making them searchable by Google and other engines.
As a result, over 100,000 discussions, including admissions of fraud, confidential business contracts, non-disclosure agreements, mental health issues, resumes, and personal advice, were picked up by search crawlers and indexed on Google, exposing them to the public.
The leak affected both professionals (such as doctors, lawyers and marketers) and individuals, with potential consequences including privacy and confidentiality violations, anxiety and distress, loss of confidence, reputational damage, and risks to personal safety and to individual and organisational security. It also exposed organisations to potential regulatory actions and legal liability.
OpenAI disabled the feature hours after it rolled out, describing it as a "short-lived experiment". However, the scraped dataset remained archived outside OpenAI's or Google's control, meaning the exposed data cannot now be fully deleted or retracted.
The incident resulted from poor product design choices and a lack of effective safeguards in ChatGPT's "Share" feature.
OpenAI's robots.txt allowed public search engines to crawl shared-chat paths, and a short-lived option permitted users to share chats and mark them as discoverable. Many users appeared unaware that doing so made their private conversations open to everyone, and the company failed to spell out the associated risks.
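To illustrate the robots.txt point: a rule like the following would have told compliant crawlers not to fetch shared-chat pages. This is a hypothetical sketch, and the /share/ path is an assumption, not a confirmed detail of OpenAI's configuration.

```
# Hypothetical robots.txt entry blocking crawlers from
# shared-chat pages (the /share/ path is an assumption)
User-agent: *
Disallow: /share/
```

Note that robots.txt only discourages crawling; a URL linked from elsewhere can still appear in search results. Reliably keeping a page out of an index requires a `noindex` robots meta tag or X-Robots-Tag header on the page itself, which is why privacy-by-default design cannot rely on robots.txt alone.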
The fracas demonstrates the need for AI designers and developers to understand the needs, expectations and behaviours of their systems' users and other key stakeholders, and to incorporate them into product design.
Specifically, it highlights the need for stronger privacy-by-default settings, clearer user warnings, and regulatory scrutiny of data practices in consumer-facing AI.
Developer: OpenAI
Country: Global
Sector: Multiple
Purpose:
Technology: Chatbot; Generative AI
Issue: Confidentiality; Privacy; Security; Transparency
Incident no: AIAAIC2031