Rite Aid US facial recognition racial, income bias

Released: July 2020


US drugstore chain Rite Aid quietly used facial recognition technology in hundreds of stores in mostly lower-income, non-white neighbourhoods across the US, prompting a civil, legal, and political backlash.

According to a Reuters investigation, Rite Aid quietly added facial recognition systems to hundreds of US stores, deploying the technology largely in lower-income, non-white neighbourhoods in New York and Los Angeles.

The investigation also identified 'serious drawbacks' with Rite Aid's first facial recognition partner, FaceFirst, whose technology several security professionals described as inaccurate, particularly with regard to Black people and other people of colour.

Rite Aid defended its policy by arguing that the technology was used only in a 'data-driven' manner to detect and deter crime and violence, and that the cameras were appropriately flagged to customers. But the chain swiftly shut the system down after the Reuters investigation was published, saying its 'decision was in part based on a larger industry conversation.'

The American Civil Liberties Union (ACLU) had earlier asked whether US retail chains were using facial recognition without telling their customers. Rite Aid, like nineteen other chains, failed to answer.

Operator: Rite Aid
Developer: FaceFirst; DeepCam; Shenzhen Shenmu

Country: USA

Sector: Retail

Purpose: Reduce crime, violence

Technology: Facial recognition
Issue: Accuracy/reliability; Bias/discrimination - race, ethnicity, income; Privacy

Transparency: Governance; Marketing

Page info
Type: Incident
Published: March 2023
Last updated: December 2023