Occurred: November 2018
A service that used 'advanced artificial intelligence' to vet potential babysitters by scanning their presence on social media, the wider web, and online criminal databases was accused of being inaccurate, biased, and an abuse of privacy.
California-based Predictim used natural language processing and computer vision to sort through an applicant's images and posts and generate a 'risk rating' flagging people it deemed prone to abusive behaviour, drug use, and posting explicit imagery. Each scan cost USD 24.99.
However, a damning Washington Post investigation castigated the company for the inaccuracy and opacity of its system, its potential for racial and economic discrimination, and misleading marketing.
Facebook and Twitter responded by saying they would revoke Predictim's access to their platforms on the grounds that it had been illegally scraping their users' data.
The company, a product of UC Berkeley’s SkyDeck incubator, closed shortly afterwards.
Predictim
Operator:
Developer: Predictim
Country: USA
Sector: Business/professional services
Purpose: Assess personality
Technology: NLP/text analysis; Computer vision; Machine learning
Issue: Accuracy/reliability; Bias/discrimination; Privacy; Transparency
Page info
Type: Incident
Published: August 2023