Samsung employees leak sensitive data to ChatGPT

Occurred: March-May 2023

Samsung employees leaked confidential work information to ChatGPT, compromising the South Korean company's confidentiality and jeopardising its security.

Three employees in Samsung's semiconductor division used ChatGPT to check sensitive database source code for errors, optimise code, and generate minutes from a recorded meeting.

Samsung responded to the data leaks by warning its workers about the potential dangers of leaking confidential information, before banning the use of all generative AI chatbots on company-owned devices and on other devices connected to its internal networks.

ChatGPT's user guide recommends that users ‘do not enter sensitive information.’ According to OpenAI's data policy, user data is used to train its models unless users explicitly opt out.

Databank

Operator: Samsung
Developer: OpenAI
Country: S Korea
Sector: Technology
Purpose: Generate text
Technology: Chatbot; NLP/text analysis; Neural network; Deep learning; Machine learning; Reinforcement learning
Issue: Confidentiality; Security
Transparency: Governance