ChatGPT writes code that makes databases leak sensitive info

Occurred: October 2023

Report incident ๐Ÿ”ฅ | Improve page ๐Ÿ’ | Access database ๐Ÿ”ข

Generative AI tools such as ChatGPT, Baidu-UNIT, and AI2sql can be tricked into producing malicious code, which could be used to launch cyber attacks, according to new research.ย 

University of Sheffield researchers found that it is possible to manipulate six commercial AI tools capable of generating responses to text-to-SQL queries, including ChatGPT, into creating code capable of breaching other systems, steal sensitive personal information, tamper with or destroy databases, or bring down services using denial-of-service attacks.

According to the researchers, OpenAI has since fixed all of the specific issues, as has Baidu, which financially rewarded the scientists. Developers of the four other systems have not responded publicly.

System ๐Ÿค–

Operator: ย 
Developer: AI2sql; Baidu; NiceAdmin; OpenAI; Text2SQL.AI; SQLAI.AI
Country: USA
Sector: Technology
Purpose: Generate text
Technology: Chatbot; Generative AI; Machine learning; Text-to-SQL
Issue: Privacy; Security
Transparency: Governance

Research, advocacy ๐Ÿงฎ

Page info
Type: Issue
Published: November 2023