ImageNet contains inaccurate, derogatory and racially offensive labels

Occurred: September 2019

The ImageNet dataset was found to contain labels that were inaccurate or that amounted to racist, misogynistic and other discriminatory and derogatory slurs, resulting in controversy about the dataset's safety and accusations of privacy and copyright abuse.

ImageNet Roulette, a website that encouraged users to upload selfies and analysed them by running each photo through a neural network trained on ImageNet, showed that while many of the labels the code produced were harmless, some were inaccurate or contained racist, misogynistic and other discriminatory and derogatory slurs.

Created by Kate Crawford, co-founder of the AI Now Institute, artist Trevor Paglen and software developer Leif Ryge, ImageNet Roulette was a 'provocation designed to help us see into the ways that humans are classified in machine learning systems.' 

ImageNet's developers were also accused of ignoring user privacy by automatically scraping images from Google, Bing and the photo-sharing platform Flickr without consent in order to build the training dataset, leading lawyers and rights activists to call for stronger privacy and copyright laws.

The ensuing fracas led the developers of ImageNet to scrub 'unsafe' and 'sensitive' labels from the database and to remove links to the associated photographs - an update judged to have minimal impact on the dataset's classification and transfer learning accuracy, though some commentators argued it would damage ImageNet's relevance by stymying its reproducibility.

Computer vision - recognition

The classical problem in computer vision, image processing, and machine vision is that of determining whether or not the image data contains some specific object, feature, or activity.

Wikipedia: Image recognition 🔗
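
To make the recognition task concrete, below is a minimal sketch of ImageNet-style classification using a pretrained torchvision model. The choice of ResNet-50 and the local file name are illustrative assumptions; this is not the classifier behind ImageNet Roulette, which drew its labels from ImageNet's 'person' categories.

```python
# A minimal sketch of ImageNet-style image recognition: a pretrained
# torchvision classifier maps a photo to one of the 1,000 ImageNet classes.
# Model choice and file name are illustrative assumptions, not ImageNet
# Roulette's actual setup.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # resize, crop and normalise as the model expects

img = Image.open("photo.jpg").convert("RGB")  # hypothetical input file
batch = preprocess(img).unsqueeze(0)          # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top_prob, top_idx = probs.max(dim=1)
print(weights.meta["categories"][top_idx.item()], f"{top_prob.item():.2%}")
```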

March 2021. The ImageNet team announced it had blurred the faces visible in 243,198 photographs in its database using Amazon's Rekognition image and video analysis service.
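
The blurring step can be illustrated with a short, hedged sketch: detect face bounding boxes with Amazon Rekognition via boto3, then blur those regions locally with Pillow. The file names, blur radius and overall pipeline here are assumptions for illustration, not the ImageNet team's actual production process.

```python
# A sketch of face blurring, assuming Amazon Rekognition for detection:
# Rekognition returns face bounding boxes as ratios of image width/height,
# which are then Gaussian-blurred locally. File names are hypothetical.
import io

import boto3
from PIL import Image, ImageFilter

rekognition = boto3.client("rekognition")  # assumes AWS credentials are configured

def blur_faces(path: str, out_path: str, radius: int = 12) -> None:
    """Detect faces in the image at `path` and write a face-blurred copy."""
    with open(path, "rb") as f:
        image_bytes = f.read()

    response = rekognition.detect_faces(Image={"Bytes": image_bytes})

    img = Image.open(io.BytesIO(image_bytes)).convert("RGB")
    w, h = img.size
    for face in response["FaceDetails"]:
        box = face["BoundingBox"]
        # Convert ratio coordinates to pixels, clamped to the image bounds.
        left = max(0, int(box["Left"] * w))
        top = max(0, int(box["Top"] * h))
        right = min(w, int((box["Left"] + box["Width"]) * w))
        bottom = min(h, int((box["Top"] + box["Height"]) * h))
        region = img.crop((left, top, right, bottom))
        img.paste(region.filter(ImageFilter.GaussianBlur(radius)), (left, top))

    img.save(out_path)

blur_faces("photo.jpg", "photo_blurred.jpg")
```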

Operator: Kate Crawford; Trevor Paglen; Leif Ryge
Developer: Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Fei-Fei Li
Country: USA
Sector: Research/academia
Purpose: Identify objects
Technology: Dataset; Computer vision; Object detection; Object recognition
Issue: Accuracy/reliability; Bias/discrimination - race, ethnicity, gender, religion, national identity, location; Copyright; Privacy; Safety
Transparency: Governance; Privacy

Investigations, assessments, audits 🧐

News, commentary, analysis 🗞️