Google Autocomplete search predictions

Available on Google Search and YouTube, Autocomplete is an AI-based system designed to help users complete searches faster and more accurately by predicting keywords and phrases based on what other people search for.
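
How the ranking works in detail is not public (the entry below flags the system as a black box), but the core idea can be illustrated with a deliberately simplified sketch: completions are drawn from queries other users have entered, ranked by popularity. The query log, function names, and ranking rule below are illustrative assumptions, not Google's implementation.

    from collections import Counter

    def build_query_counts(query_log):
        """Count how often each query appears in a (hypothetical) search log."""
        return Counter(q.strip().lower() for q in query_log)

    def suggest(prefix, query_counts, k=5):
        """Return the k most frequent past queries starting with the typed prefix."""
        prefix = prefix.strip().lower()
        matches = [(q, n) for q, n in query_counts.items() if q.startswith(prefix)]
        matches.sort(key=lambda item: (-item[1], item[0]))  # popularity first, then alphabetically
        return [q for q, _ in matches[:k]]

    # Predictions simply reflect what other people have searched for most often.
    log = ["weather today", "weather tomorrow", "weather today", "web mail", "weather radar"]
    print(suggest("we", build_query_counts(log)))
    # ['weather today', 'weather radar', 'weather tomorrow', 'web mail']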

Autocomplete started as an experimental feature in 2004 and was publicly released as Google Suggest in 2008. Google Suggest was renamed Autocomplete in 2010.

System databank

Operator: Alphabet/Google/YouTube
Developer: Alphabet/Google/YouTube
Country: Argentina; Australia; France; Germany; Hong Kong; Italy; Japan; UK; USA
Sector: Banking/financial services; Business/professional services; Health; Media/entertainment/sports/arts; Politics; Private - individual; Retail
Purpose: Predict search results
Technology: NLP/text analysis; Deep learning; Machine learning  
Issue: Accuracy/reliability; Bias/discrimination; Emotional distress/anxiety; Mis/disinformation; Legal; Privacy; Safety
Transparency: Governance; Black box; Complaints/appeals; Legal; Marketing

Risks and harms

Google has a number of systems to prevent Autocomplete predictions that are unhelpful, unexpected, or unreliable, or that violate Google Search’s policies. However, Autocomplete has been in the spotlight many times for the risks it poses and the harms it causes.

These include its ability to generate inaccurate, untrue, inappropriate, offensive, biased, unethical, and illegal search predictions that variously result in the manipulation of end users, religious and other forms of discrimination, emotional distress and anxiety, loss of privacy, and defamation, among other harms.

Google has also been castigated for perceived inadequate transparency about how Autocomplete works, and for persistently maintaining that it is not legally responsible for the harms the system has caused or may cause.