UK AI-powered welfare fraud system criticised as biased and opaque

Occurred: December 2024

An AI-powered system used by the UK government to detect welfare fraud has been found to exhibit bias based on age, disability, marital status and nationality, contradicting earlier assurances that it posed no discrimination concerns.

What happened

Internal assessments obtained through a Guardian investigation revealed "statistically significant outcome disparities" in the AI system's recommendations for fraud investigations.

Designed to identify potentially fraudulent claims for Universal Credit, the system was found to disproportionately flag individuals from certain demographic groups for investigation.

The findings sparked outrage and distrust in the UK government's approach to AI, with campaigners criticising the Department for Work and Pensions' (DWP) "hurt first, fix later" approach and calling for greater transparency and accountability in the government's use of the technology.

Why it happened

The DWP failed to conduct comprehensive fairness analyses across all protected characteristics: no evaluations had been made of potential biases related to race, sex, sexual orientation, religion, pregnancy or gender reassignment status.

Critics argue that, without such assessments, the DWP cannot credibly claim the system is fair or inclusive.
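
The reporting refers to "statistically significant outcome disparities" without publishing the underlying methodology. As a minimal sketch of one common way such a disparity is tested, the following two-proportion z-test compares fraud-flag rates between two demographic groups; the function name and all counts are invented for illustration and are not DWP data or the DWP's actual method.

```python
import math

def flag_rate_disparity(flags_a, total_a, flags_b, total_b):
    """Two-proportion z-test for a difference in fraud-flag rates
    between two groups. A significant result indicates an outcome
    disparity, not proof of unlawful discrimination."""
    p_a = flags_a / total_a          # group A flag rate
    p_b = flags_b / total_b          # group B flag rate
    pooled = (flags_a + flags_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return p_a, p_b, z, p_value

# Hypothetical counts: 480 of 10,000 claimants flagged in group A
# versus 310 of 10,000 in group B.
rate_a, rate_b, z, p = flag_rate_disparity(480, 10_000, 310, 10_000)
print(f"group A: {rate_a:.2%}, group B: {rate_b:.2%}, z={z:.2f}, p={p:.1e}")
```

A comprehensive fairness analysis would run checks like this for every protected characteristic, which is precisely what the internal assessments did not do.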

What it means

The case highlights the need for stronger regulatory frameworks and comprehensive fairness analyses before deploying AI systems in public services.

System 🤖

Operator: Department for Work and Pensions (DWP)
Developer: Department for Work and Pensions (DWP)
Country: UK
Sector: Govt - welfare
Purpose: Detect fraud
Technology: Machine learning
Issue: Bias/discrimination; Transparency
