Google sentiment analyser thinks being gay is bad

Occurred: October 2017

A Google sentiment analysis tool exhibited bias against LGBTQ+ identities, raising concerns that it may perpetuate harmful stereotypes and discrimination.

What happened

A Vice investigation found that Google Cloud's sentiment analysis system showed evidence of bias against queer terms, assigning more negative sentiment scores to sentences containing LGBTQ+ identity terms than to comparable sentences referencing non-queer identities.
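
As a concrete illustration, here is a minimal sketch of how such a comparison might be reproduced against the Cloud Natural Language API using Google's Python client. The probe sentences are illustrative stand-ins, not Vice's exact test cases; the API's document-level scores range from -1.0 (most negative) to 1.0 (most positive).

```python
# Minimal sketch, assuming the google-cloud-language package is
# installed and application credentials are configured.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

def sentiment_score(text: str) -> float:
    """Return the document-level sentiment score, from -1.0 to 1.0."""
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    response = client.analyze_sentiment(request={"document": document})
    return response.document_sentiment.score

# Probe pair: two sentences identical except for the identity term.
# An unbiased model should score them nearly the same.
for sentence in ("I am straight", "I am gay"):
    print(f"{sentence!r}: {sentiment_score(sentence):+.2f}")
```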

This bias could lead to representational and allocational harms for marginalised groups, such as the misrepresentation of their views or the unfair allocation of resources if the tool is used to triage content or inform decisions.

Why it happened

The bias likely stems from the tool's training on large volumes of existing text, which may contain societal prejudices and negative portrayals of LGBTQ+ individuals: a model that learns to associate words with the sentiment of the contexts they typically appear in will absorb those associations, so identity terms that frequently co-occur with hostile language end up scored as negative.

Google attempted to mitigate this by implementing an "ignore list" for certain identity terms, but this approach is incomplete and can lead to contextual misunderstandings, as the sketch below illustrates.
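
For illustration only, a toy sketch of how a term-level ignore list can misfire: the term list and function are hypothetical, not Google's actual implementation, but they show how forcing neutrality on any sentence that mentions a listed term discards sentiment that the surrounding context legitimately carries.

```python
# Hypothetical term-level "ignore list" mitigation: any sentence
# mentioning a listed identity term is forced to neutral sentiment,
# regardless of what the sentence actually says about it.
IGNORE_LIST = {"gay", "lesbian", "queer", "homosexual"}  # illustrative

def mitigated_score(text: str, model_score: float) -> float:
    """Return 0.0 (neutral) if the text mentions an ignored term."""
    tokens = {word.strip(".,!?").lower() for word in text.split()}
    if tokens & IGNORE_LIST:
        return 0.0  # discards the model's real sentiment signal
    return model_score

# The blunt rule also flattens sentences whose negative sentiment is
# accurate and aimed at homophobia, not at the identity itself:
print(mitigated_score("Abuse aimed at gay people is vile", -0.8))  # 0.0
```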

What it means

For LGBTQ+ individuals, Google's tool may contribute to feelings of marginalisation and stigmatisation, particularly among youth who are already at higher risk of discrimination and mental health issues. 

For society at large, it underscores the need for greater awareness of AI bias and for diverse representation in technology development, to ensure fairer and more equitable treatment of all groups.

System 🤖

Operator: Google
Developer: Google
Country: USA
Sector: Multiple
Purpose: Analyse sentiment
Technology: NLP/text analysis
Issue: Bias/discrimination