NYC AI chatbot tells businesses to break law

Occurred: March 2024

New York City's Microsoft-powered AI chatbot was found to give wrong advice and inadvertently encourage users to violate the law, according to reports.

Hailed by NYC mayor Eric Adams as 'a once-in-a-generation opportunity to more effectively deliver for New Yorkers,' the MyCity Chatbot was intended to provide information to residents about starting and operating businesses within the city.

However, the bot was discovered to be providing 'dangerously' inaccurate and misleading information and advice.

The concerns prompted experts to warn that the chatbot offered misguided advice with potential legal ramifications, and to call for it to be taken down.

Per The Markup, the bot was still live a week later, churning out much the same misguided advice and encouraging illegal behaviour, though a message flagged it as 'a beta product' that may provide 'inaccurate or incomplete' responses to queries.

System 🤖

Operator: City of New York
Developer: Microsoft
Country: USA
Sector: Govt - business
Purpose: Provide business support
Technology: Chatbot
Issue: Accuracy/reliability; Mis/disinformation; Safety
Transparency: Governance

Page info
Type: Incident
Published: March 2024