Rotterdam welfare fraud risk algorithm

The Municipality of Rotterdam used a machine learning algorithm to detect welfare fraud by flagging individuals with high-risk profiles for investigation, resulting in allegations of poor governance and discrimination.

Developed by Accenture and introduced in 2017, the algorithm assigned risk scores based on citizens' gender, age, children, language skills, and other criteria.
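To illustrate the mechanism described above, the sketch below shows in minimal form how a risk-scoring system of this kind can rank people for investigation from demographic inputs. This is a toy sketch only, not the actual Rotterdam model: all feature names, weights, and applicants here are invented for illustration, and the real system was reportedly a more complex machine-learned model.

```python
# Toy risk-scoring sketch (hypothetical features and weights, NOT the real model).
# It shows how demographic attributes can feed directly into a "fraud risk" rank.

def risk_score(person, weights):
    """Weighted sum of feature values -> a single risk number."""
    return sum(weights[f] * person.get(f, 0) for f in weights)

# Hypothetical weights loosely matching the criteria named above.
WEIGHTS = {
    "is_female": 0.8,              # gender
    "age_under_30": 0.5,           # age
    "num_children": 0.4,           # children
    "low_dutch_proficiency": 1.2,  # language skills
}

applicants = [
    {"id": "A", "is_female": 1, "age_under_30": 1,
     "num_children": 2, "low_dutch_proficiency": 1},
    {"id": "B", "is_female": 0, "age_under_30": 0,
     "num_children": 0, "low_dutch_proficiency": 0},
]

# Rank by score, highest risk first; top-ranked recipients are the ones
# flagged for investigation in a system of this design.
ranked = sorted(applicants, key=lambda p: risk_score(p, WEIGHTS), reverse=True)
print([p["id"] for p in ranked])  # → ['A', 'B']
```

The sketch also makes the bias concern concrete: because demographic attributes contribute to the score directly, people matching certain profiles are mechanically ranked as higher risk regardless of their actual behaviour.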

The system was suspended in 2021 after a critical external review.

Operator: Municipality of Rotterdam
Developer: Accenture; Ministry of Social Affairs and Employment (SZW); Benefits Intelligence Agency Foundation
Country: Netherlands
Sector: Govt - welfare
Purpose: Detect and predict welfare fraud
Technology: Risk assessment algorithm; Machine learning
Issue: Accuracy/reliability; Bias/discrimination - race, ethnicity, gender; Ethics/values; Oversight/review; Proportionality
Transparency: Governance; Black box

Risks and harms 🛑

Rotterdam's welfare fraud algorithm has been criticised for incorrectly targeting individuals, leading to unjustified suspicion and the denial of benefits. Critics argue it reinforced social inequalities, deepened distrust in public institutions, and undermined the welfare system's integrity and effectiveness.

Page info
Type: System
Published: April 2023
Last updated: May 2024