Predictim babysitter personality profiling

Occurred: November 2018


Predictim, a California-based service that vetted potential babysitters by using 'advanced artificial intelligence' to scan their presence on social media, the web, and online criminal databases, was accused of inaccuracy, bias, and abuse of privacy.

The service used natural language processing and computer vision to sort through an applicant's images and posts and generated a 'risk rating' that flagged people it deemed prone to abusive behaviour, drug use, and the posting of explicit imagery. Each scan cost USD 24.99.
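Predictim never disclosed how its scores were computed. As a purely hypothetical illustration, the Python sketch below shows one way per-category classifier outputs from NLP and vision models might be weighted into the kind of 1-to-5 risk rating the service reportedly produced; the category names, weights, and scale mapping are all assumptions, not Predictim's actual method.

```python
# Hypothetical sketch only: Predictim never published its scoring method.
# Assumes upstream NLP/vision classifiers have already produced per-category
# probabilities in [0, 1]; category names and weights are illustrative.

CATEGORY_WEIGHTS = {
    "bullying_harassment": 0.4,
    "drug_use": 0.3,
    "explicit_content": 0.2,
    "disrespectful_attitude": 0.1,
}

def risk_rating(scores: dict) -> int:
    """Collapse per-category probabilities into a 1 (low) to 5 (high) rating."""
    weighted = sum(weight * scores.get(category, 0.0)
                   for category, weight in CATEGORY_WEIGHTS.items())
    # Map the weighted probability in [0, 1] onto the 1-5 scale.
    return 1 + round(weighted * 4)

print(risk_rating({"drug_use": 0.9, "explicit_content": 0.5}))  # prints 2
```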

But a damning Washington Post investigation castigated the company for the inaccuracy and opacity of its system, its potential for racial and economic discrimination, and its misleading marketing. Facebook and Twitter responded by revoking Predictim's access to their platforms on the grounds that it had been scraping their users' data in violation of their policies.

The company, a product of UC Berkeley’s SkyDeck incubator, closed shortly afterwards.

Databank

Operator:  
Developer: Predictim
Country: USA
Sector: Business/professional services
Purpose: Assess personality
Technology: NLP/text analysis; Computer vision; Machine learning
Issue: Accuracy/reliability; Bias/discrimination - race, income; Privacy
Transparency: Governance; Black box; Marketing