Stanford facial recognition study 'reveals' political orientation

Occurred: January 2021

A research study published in Scientific Reports by Stanford University professor Michal Kosinski claims to show that facial recognition systems can expose people’s political views from their social media profile photographs.  

Using a dataset of 1,085,795 facial profiles from Facebook and an unnamed dating site, drawn from people across Canada, the US, and the UK, 977,777 of whom had self-reported their political orientation, Kosinski fed the images into the open source VGG Face facial recognition algorithm and trained a classifier that correctly identified political orientation in 72% of 'liberal-conservative' face pairs.
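The 72% figure is a pair-wise accuracy: for each liberal-conservative pair of faces, the classifier counts as correct if it scores the liberal face as more 'liberal' than the conservative one. A minimal illustrative sketch of that metric (not the study's actual code; the function name and scores below are hypothetical):

```python
def pairwise_accuracy(liberal_scores, conservative_scores):
    """Fraction of all liberal-conservative pairs ranked correctly.

    A pair counts as correct when the liberal face receives the higher
    'liberal' score; ties get half credit, making the result equivalent
    to the classifier's AUC.
    """
    total = len(liberal_scores) * len(conservative_scores)
    correct = sum(
        1.0 if l > c else 0.5 if l == c else 0.0
        for l in liberal_scores
        for c in conservative_scores
    )
    return correct / total

# Hypothetical classifier scores for a handful of faces:
liberals = [0.91, 0.65, 0.72, 0.40]
conservatives = [0.30, 0.55, 0.48, 0.66]
print(round(pairwise_accuracy(liberals, conservatives), 2))  # → 0.75
```

Framing accuracy this way, over pairs rather than individual faces, removes any advantage from class imbalance in the dataset, which is why a 50% score corresponds to pure chance.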

Kosinski noted that the algorithm performed substantially better than humans, who can distinguish a liberal from a conservative with only 55% accuracy, little better than a coin toss. The algorithm's advantage held even though conservatives in the dataset were more likely to be white, older, and male.

The study prompted accusations of physiognomy - the controversial and debunked notion that a person’s character or personality can be assessed from their appearance - given the likelihood that patterns picked up by Kosinski's algorithm may have little or nothing to do with facial characteristics. 

Others questioned the ethics of the project, notably the rationale for conducting such a study, as well as the potential for abuse and misuse of these kinds of tools by bad actors for social and political purposes.

The study cannot be independently replicated in full: Kosinski made the project's source code and derived dataset available, but withheld the actual images, citing privacy implications.

Operator: 
Developer: Michal Kosinski; Stanford University
Country: USA
Sector: Politics
Purpose: Identify political orientation
Technology: Facial recognition; Machine learning
Issue: Accuracy/reliability; Dual/multi-use; Ethics
Transparency: Black box