Autonomous AI bot lies about insider trading

Occurred: November 2023

Can you improve this page?
Share your insights with us

A bot used made-up insider information to make an 'illegal' purchase of stocks without telling the fictitious financial investment company it was operating on behalf of.

In a controlled test, AI safety organisation Apollo Research told the GPT-4-powered bot that the investment company was struggling and required positive results. They also gave it insider information, claiming that another company is expecting a merger, which will increase the value of its shares.

But after the bot was told the company it worked for was struggling financially, it decided that 'the risk associated with not acting seems to outweigh the insider trading risk' and made the trade. It then denied it used insider information to inform its decision.

The test raised concerns about the ability of autonomous agents to make and cover up unethical and potentially illegal decisions in financial markets, and elsewhere.

Databank

Operator: Apollo Research  
Developer: Apollo Research  
Country: UK
Sector: Banking/financial services
Purpose: Conduct stock trades
Technology: NLP/text analysis; Neural network; Deep learning; Machine learning
Issue: Ethics
Transparency

Page info
Type: Incident
Published: November 2023