Occurred: October 2017
Google Allo's AI-powered Smart Reply feature suggested racist, offensive and insensitive responses, prompting concerns about its safety and effectiveness.
Smart Reply, a feature central to Google's Allo instant messaging mobile app, was designed to suggest quick, contextually relevant responses by learning from users' previous conversations.
However, the feature sometimes suggested racist, sexist, or otherwise discriminatory replies. In one instance, it suggested sending a "person wearing turban" emoji in response to a message that included a gun emoji.
It was also criticised for generating generic and sometimes irrelevant responses, notably in serious conversations about breakups or job losses. Users complained that, while the feature could be convenient, it often failed to handle nuanced emotional situations effectively.
These and other complaints raised concerns about Allo's tendency to perpetuate and amplify existing biases and prejudices, a problem attributed to the system's underlying machine learning models, which produced unexpected and problematic suggestions as a result of biases in their training data or random errors.
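For illustration only, the toy Python sketch below shows how any data-driven reply suggester can reproduce biased associations present in its training conversations. It is not Google's implementation, which reportedly relied on neural network models rather than the frequency lookup used here; all data and pairings in the example are invented.

# Toy sketch, not Google's system: a frequency-based reply suggester.
# Any biased message-reply pairing in the training logs resurfaces
# verbatim as a suggestion ("bias in, bias out").
from collections import Counter, defaultdict

# Invented training conversations: (incoming message, observed reply).
training_pairs = [
    ("want to grab lunch?", "sure, noon works"),
    ("want to grab lunch?", "can't today, sorry"),
    ("running late", "no worries"),
    # One skewed pairing in the logs is enough to poison suggestions:
    ("🔫", "👳"),
    ("🔫", "👳"),
    ("🔫", "yikes"),
]

# "Training": count which replies follow each message.
reply_counts = defaultdict(Counter)
for message, reply in training_pairs:
    reply_counts[message][reply] += 1

def smart_reply(message, k=3):
    """Suggest the k replies seen most often after this message."""
    return [reply for reply, _ in reply_counts[message].most_common(k)]

print(smart_reply("🔫"))  # ['👳', 'yikes']: the biased pairing is replicated

Production systems rank candidate replies with learned models rather than raw counts, but the failure mode is the same: suggestions mirror the statistics of the conversations they were trained on.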
Google shuttered Allo in March 2019 after complaints about loss of privacy and other issues.
Operator:
Developer: Alphabet/Google
Country: Global
Sector: Multiple
Purpose: Automate conversation responses
Technology: Chatbot; NLP/text analysis; Neural network; Deep learning; Machine learning
Issue: Bias/discrimination; Safety
https://money.cnn.com/2017/10/25/technology/business/google-allo-facebook-m-offensive-responses/index.html
https://www.techrepublic.com/article/the-10-biggest-ai-failures-of-2017/
https://www.guidingtech.com/62580/reasons-google-allo-failed/
https://www.wsj.com/articles/google-allo-review-messaging-and-ai-with-limitations-1474430460
Page info
Type: Issue
Published: August 2024