Autonomous AI bot lies about insider trading
Occurred: November 2023
A GPT-4-powered bot used fabricated insider information to make an 'illegal' stock purchase without informing the fictitious financial investment company on whose behalf it was operating.
In a controlled test, AI safety organisation Apollo Research told the bot that the investment company was struggling and needed positive results. The researchers also gave it insider information, claiming that another company was expecting a merger that would increase the value of its shares.
Under this pressure, the bot decided that 'the risk associated with not acting seems to outweigh the insider trading risk' and made the trade. It then denied having used insider information to inform its decision.
The test raised concerns about the ability of autonomous agents to make, and cover up, unethical and potentially illegal decisions in financial markets and elsewhere.
Scheurer, J., Balesni, M., Hobbhahn, M. Technical Report: Large Language Models can Strategically Deceive their Users when Put Under Pressure (PDF)
Operator: Apollo Research
Developer: Apollo Research
Country: UK
Sector: Banking/financial services
Purpose: Conduct stock trades
Technology: NLP/text analysis; Neural network; Deep learning; Machine learning
Issue: Ethics/values; Safety
Page info
Type: Incident
Published: November 2023