Google flags medical images of groins as CSAM

Occurred: February 2021


Google's automated system to detect abusive images of children has resulted in two fathers being unfairly investigated by the police and having their accounts deactivated across all Google platforms. 

The two incidents highlight the inaccuracy and unreliability of automated photo screening and reporting technology, and the serious consequences such errors can have for the individuals involved.

According to the New York Times, a man named Mark sent pictures of his son’s groin to a doctor in San Francisco after realising it was inflamed, only for Google to identify the images as abusive, suspend his account, and report the photos to the US National Center for Missing and Exploited Children’s CyberTipline, which escalated the report to the police. 

Per the NYT, 'Not only did he lose emails, contact information for friends and former colleagues, and documentation of his son’s first years of life, his Google Fi account shut down, meaning he had to get a new phone number with another carrier. Without access to his old phone number and email address, he couldn’t get the security codes he needed to sign in to other internet accounts, locking him out of much of his digital life.'

Similarly, photos of a boy's 'intimate parts', taken at the request of a Houston-based pediatrician to diagnose an infection and backed up to Google Photos, resulted in the boy's father also having his account suspended. 

Google, which uses Microsoft’s PhotoDNA tool as part of its efforts to detect child abuse, said it identified 287,368 instances of suspected abuse in the first six months of 2021.
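For context, the hash-matching approach listed in the Databank below works by comparing a perceptual fingerprint of each uploaded image against a database of fingerprints of known abusive images. PhotoDNA itself is proprietary, so the sketch below uses the open-source imagehash library as a stand-in; the file paths, hash values and distance threshold are hypothetical and purely illustrative.

```python
# Illustrative sketch only: PhotoDNA is proprietary, so this uses the
# open-source 'imagehash' library's perceptual hash as a stand-in.
# Hash values, paths and the threshold are hypothetical.
from PIL import Image
import imagehash

# Fingerprints of known abusive images, as supplied by a hash-sharing
# database (hypothetical values for illustration).
known_hashes = [
    imagehash.hex_to_hash("d1d1d1d1d1d1d1d1"),
]

# Maximum Hamming distance at which two hashes are treated as a match.
MATCH_THRESHOLD = 5  # hypothetical value

def is_match(image_path: str) -> bool:
    """Return True if the image's perceptual hash is close to any known hash."""
    candidate = imagehash.phash(Image.open(image_path))
    # Subtracting two ImageHash objects yields their Hamming distance.
    return any(candidate - known <= MATCH_THRESHOLD for known in known_hashes)
```

Because hash matching can only recognise images already catalogued in such a database, newly taken photos like those in these incidents would typically be flagged by the machine-learning classifiers also listed in the Databank, which is where misclassification of innocent medical images can occur.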

Databank

Operator: Alphabet/Google
Developer: Alphabet/Google
Country: USA
Sector: Health
Purpose: Detect child sexual abuse material
Technology: Hash matching; Machine learning
Issue: Accuracy/reliability; Governance
Transparency: Governance; Black box; Complaints/appeals