ChatGPT writes code that makes databases leak sensitive info

Occurred: October 2023

Can you improve this page?
Share your insights with us

Generative AI tools such as ChatGPT, Baidu-UNIT, and AI2sql can be tricked into producing malicious code, which could be used to launch cyber attacks, according to new research. 

University of Sheffield researchers found that it is possible to manipulate six commercial AI tools capable of generating responses to text-to-SQL queries, including ChatGPT, into creating code capable of breaching other systems, steal sensitive personal information, tamper with or destroy databases, or bring down services using denial-of-service attacks.

According to the researchers, OpenAI has since fixed all of the specific issues, as has Baidu, which financially rewarded the scientists. Developers of the four other systems have not responded publicly.

Databank

Operator:  
Developer: AI2sql; Baidu; NiceAdmin; OpenAI; Text2SQL.AI; SQLAI.AI
Country: USA
Sector: Technology
Purpose: Generate text
Technology: Chatbot; Text-to-SQL; NLP/text analysis; Neural network; Deep learning; Machine learning
Issue: Privacy; Security
Transparency: Governance