ChatGPT accused of violating GDPR by not correcting inaccurate personal info
Occurred: April 2024
OpenAI’s AI chatbot, ChatGPT, has been accused of violating the EU’s General Data Protection Regulation (GDPR) by failing to correct inaccurate personal information.
Privacy group noyb (None of Your Business) filed a complaint against OpenAI after ChatGPT, unable to supply the correct date of birth of a public figure, simply invented one.
Noyb argued that this behaviour violates GDPR rules on privacy, on the accuracy of personal data, and on the right of individuals to have inaccurate information about them corrected.
The group also claimed that OpenAI refused to correct or delete the wrong answers, and would not disclose what data it processes about individuals, where that data comes from, or who it is shared with.
The complaint further stated that ChatGPT’s “hallucinations”, its generation of false information about individuals, can have serious consequences. Noyb argued that if a system cannot produce accurate and transparent results, it should not be used to generate data about individuals.
Violating the GDPR can lead to penalties of up to 4 percent of a company’s annual global turnover, or €20 million, whichever is higher.
Noyb has asked the Austrian privacy watchdog to investigate OpenAI to check on the accuracy of the personal data it handles.
Hallucination (artificial intelligence)
In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called bullshitting, confabulation or delusion) is a response generated by AI that contains false or misleading information presented as fact.
Source: Wikipedia 🔗
Operator: OpenAI
Developer: OpenAI
Country: Austria
Sector: Multiple
Purpose: Generate text
Technology: Chatbot; Generative AI; Machine learning
Issue: Accountability; Accuracy/reliability; Mis/disinformation; Privacy
Page info
Type: Incident
Published: May 2024