Microsoft Tay chatbot tricked into generating offensive tweets
Occurred: March 2016
Tay, an AI chatbot released by Microsoft and programmed to respond to other Twitter users and to caption photographs sent to it, was manipulated into generating racist, homophobic and anti-semitic tweets.
Designed by Microsoft to mimic a 19-year-old American girl, Tay was meant to improve itself by learning from its interactions with people on Twitter. However, Twitter users started trolling the bot, prompting it to spew tens of thousands of racist, homophobic and anti-semitic tweets.
Microsoft deleted the offensive tweets and then suspended the bot's Twitter profile, saying it had suffered a 'coordinated attack by a subset of people' that had 'exploited a vulnerability in Tay.'
A few days later, Microsoft accidentally re-released Tay on Twitter while testing it, only for it to get stuck in a repetitive loop, tweeting 'You are too fast, please take a rest'.
Microsoft pulled the bot a few hours later and apologised.
The episode drew criticism of the poor quality of Tay's safety guardrails and raised questions about Microsoft's artificial intelligence capabilities.
System 🤖
Microsoft Tay (Wayback Machine)
Documents 📃
Microsoft (2016). Learning from Tay’s introduction
Related 🌐
Page info
Type: System
Published: February 2023
Last updated: October 2023