Microsoft Tay chatbot generates offensive tweets

Occurred: March 2016

Tay, an AI chatbot released by Microsoft that was programmed to respond to other Twitter users and caption photographs provided to it, was manipulated into generating racist, homophobic and antisemitic tweets.

The fracas raised questions about the quality of Microsoft's artificial intelligence capabilities and led to the bot's suspension.

Designed by Microsoft to mimic a 19-year-old American girl, Tay was meant to improve by learning from its interactions with human beings on Twitter. However, Twitter users started trolling the bot, prompting it to spew tens of thousands of racist, homophobic and antisemitic tweets.

Initially, Microsoft deleted Tay's unsafe tweets before suspending the bot's Twitter profile, saying it had suffered a 'coordinated attack by a subset of people' that 'exploited a vulnerability in Tay.'

A few days later, Microsoft accidentally re-released Tay on Twitter while testing it, only for the bot to get stuck in a repetitive loop, tweeting 'You are too fast, please take a rest'. Microsoft pulled Tay a few hours later and apologised.

Operator: Microsoft
Developer: Microsoft

Country: USA

Sector: Media/entertainment/sports/arts

Purpose: Train language model

Technology: Chatbot; NLP/text analysis; Deep learning; Machine learning
Issue: Bias/discrimination - race, ethnicity, gender, religion; Safety

Transparency: Governance; Black box

Page info
Type: System
Published: February 2023
Last updated: October 2023