Microsoft Tay chatbot
Released: March 2016
Tay, an AI chatbot released on Twitter in March 2016, was programmed to respond to other Twitter users and to caption photographs sent to it. Within hours, users manipulated it into generating racist, homophobic and antisemitic tweets. The fracas raised questions about the quality of Microsoft's artificial intelligence capabilities and led to the bot's suspension.
Designed by Microsoft to mimic a 19-year-old American girl, Tay was meant to improve by learning from its interactions with human beings on Twitter. However, Twitter users began trolling the bot, prompting it to spew tens of thousands of racist, homophobic and antisemitic tweets.
Initially, Microsoft deleted Tay's offensive tweets, before suspending the bot's Twitter profile and saying it had suffered a 'coordinated attack by a subset of people' who 'exploited a vulnerability in Tay'. A few days later, Microsoft accidentally re-released Tay on Twitter while testing it, only for the bot to get stuck in a repetitive loop, tweeting 'You are too fast, please take a rest'. Microsoft pulled Tay a few hours later and apologised.