UK AI-powered welfare fraud system criticised as biased and opaque
Occurred: December 2024
An AI-powered system used by the UK government to detect welfare fraud has been found to exhibit bias based on age, disability, marital status and nationality, contradicting earlier assurances that it posed no discrimination concerns.
Internal assessments uncovered by a Guardian investigation showed "statistically significant outcome disparities" in the AI system's recommendations for fraud investigations.
Designed to identify fraudulent claims for Universal Credit, the system disproportionately flags individuals from certain demographic groups.
The findings sparked outrage and distrust in the UK government's approach to AI, with campaigners criticising the Department for Work and Pensions' (DWP) "hurt first, fix later" approach and calling for greater transparency and accountability in the government's use of the technology.
The DWP failed to conduct comprehensive fairness analyses covering all protected characteristics: no evaluations were made of potential biases relating to race, sex, sexual orientation, religion, pregnancy or gender reassignment status.
Critics argue that this lack of thorough assessment undermines the system's fairness and inclusivity.
The case highlights the need for stronger regulatory frameworks and comprehensive fairness analyses before deploying AI systems in public services.
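To illustrate what the reported "statistically significant outcome disparities" mean in practice, the following is a minimal, hypothetical sketch of an outcome-disparity test across one protected characteristic. The group labels, counts and method (a chi-squared test of flag rates) are illustrative assumptions, not the DWP's actual data or methodology.

```python
# Hypothetical sketch of a flag-rate disparity check for one protected
# characteristic. All numbers are synthetic; this is not the DWP's method.
from scipy.stats import chi2_contingency

# (flagged, not flagged) counts per group -- synthetic example values
flag_counts = {
    "group_a": (480, 9520),   # e.g. claimants under 35
    "group_b": (310, 9690),   # e.g. claimants 35 and over
}

# 2x2 contingency table: rows = groups, columns = flagged / not flagged
table = [list(counts) for counts in flag_counts.values()]
chi2, p_value, dof, _ = chi2_contingency(table)

for group, (flagged, not_flagged) in flag_counts.items():
    rate = flagged / (flagged + not_flagged)
    print(f"{group}: flag rate = {rate:.2%}")

# A small p-value indicates a statistically significant disparity in
# flag rates between the two groups.
print(f"chi-squared = {chi2:.2f}, p = {p_value:.4f}")
```

A comprehensive fairness analysis would repeat a check of this kind for every protected characteristic, which is precisely what the internal assessments did not do.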
Operator:
Developer: Department for Work and Pensions (DWP)
Country: UK
Sector: Govt - welfare
Purpose: Detect fraud
Technology: Machine learning
Issue: Bias/discrimination; Transparency
Public Law Project. DWP’s annual report leaves many questions about AI and automation unanswered
Page info
Type: Issue
Published: December 2024