Rotterdam welfare fraud risk algorithm
Released: 2017
The Municipality of Rotterdam used a machine learning algorithm to detect welfare fraud by flagging individuals with high-risk profiles for investigation, prompting allegations of poor governance and discrimination.
Developed by Accenture and introduced in 2017, the algorithm assigned risk scores based on citizens' gender, age, parenthood, Dutch language skills, and other criteria.
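As a rough illustration of how such a scorer operates, the sketch below trains a gradient-boosted classifier (reportedly the type of model Rotterdam used) on synthetic data and ranks a caseload by predicted risk. The feature names, data, and flagging cut-off are illustrative assumptions, not the city's actual variables.

```python
# Hypothetical sketch of a feature-based welfare-fraud risk scorer.
# Feature names and data are illustrative; the real Rotterdam model
# reportedly drew on several hundred variables from case files.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 1_000

# Synthetic stand-in for historical investigation data.
cases = pd.DataFrame({
    "age": rng.integers(18, 70, n),
    "is_female": rng.integers(0, 2, n),
    "has_children": rng.integers(0, 2, n),
    "failed_language_test": rng.integers(0, 2, n),
})
# Outcomes of past fraud investigations (synthetic here).
labels = rng.integers(0, 2, n)

model = GradientBoostingClassifier().fit(cases, labels)

# Score the current caseload and flag the highest-risk recipients.
risk_scores = model.predict_proba(cases)[:, 1]
flagged = cases.assign(risk=risk_scores).nlargest(50, "risk")
print(flagged.head())
```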
Governance
A critical external review (pdf) of the system by the Rotterdam Court of Audit (Rekenkamer Rotterdam) led to its suspension in 2021. The auditors found 'insufficient coordination' between the developers of the algorithm and the people using it, potentially resulting in poor ethical decision-making.
The auditors also took issue with the Municipality's failure to assess whether the algorithms were better than the human systems they replaced.
Discrimination
A 2023 Lighthouse Reports/WIRED investigation revealed that criteria such as being female, being young, having children, or having failed a Dutch language test increased a citizen's risk score and made them more likely to be flagged for investigation.
Tests showed that an increase in any one of these variables could raise a person's overall risk score, and with it the likelihood of discriminatory treatment, particularly against vulnerable groups.
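That kind of check can be approximated by holding a profile fixed and varying one attribute at a time. The sketch below does this with the illustrative model and feature names from the earlier example; the profile and score changes are hypothetical, not reproductions of the journalists' tests.

```python
# Hypothetical single-variable sensitivity check: hold a profile fixed,
# change one attribute, and compare the resulting risk scores.
# Assumes the illustrative `model` and columns from the sketch above.
import pandas as pd

profile = pd.DataFrame([{
    "age": 30,
    "is_female": 0,
    "has_children": 0,
    "failed_language_test": 0,
}])

for feature, new_value in [("is_female", 1),
                           ("has_children", 1),
                           ("failed_language_test", 1)]:
    variant = profile.copy()
    variant[feature] = new_value
    base = model.predict_proba(profile)[0, 1]
    changed = model.predict_proba(variant)[0, 1]
    print(f"{feature}: {base:.3f} -> {changed:.3f} "
          f"(delta {changed - base:+.3f})")
```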
Transparency
The Municipality of Rotterdam had jealously guarded its welfare fraud algorithm, blocking attempts by journalists, researchers, and others to obtain information showing how it worked, including its data, code, and model. To its credit, however, the city did engage with the Lighthouse Reports/WIRED investigation - the only government entity across Europe to do so.
That said, per The Markup, 'Rotterdam was the only city to provide the model file and other technical documentation, and that was after a year of negotiations. In the end, it still took a stroke of luck for the reporters to get the training data: City officials unknowingly included it in the source code of histograms they sent to reporters.'
System
Operator: Municipality of Rotterdam
Developer: Ministry of Social Affairs and Employment (SZW); Benefits Intelligence Agency Foundation
Country: Netherlands
Sector: Govt - welfare
Purpose: Detect and predict welfare fraud
Technology: Risk assessment algorithm; Machine learning
Issue: Accuracy/reliability; Bias/discrimination - race, ethnicity, gender; Ethics; Oversight/review; Proportionality
Transparency: Governance; Black box
Research, advocacy
DataEthics (2023). Algorithmic Models for Detecting Welfare Fraud Are Risky
The Markup (2023). It Takes a Small Miracle to Learn Basic Facts About Government Algorithms
Racism and Technology Center (2023). Racist Technology in Action: Rotterdam’s welfare fraud prediction algorithm was biased
Investigations, assessments, audits
WIRED (2023). This Algorithm Could Ruin Your Life
WIRED (2023). Inside a Misfiring Government Data Machine
Lighthouse Reports (2023). Suspicion Machines
Lighthouse Reports (2022). The Algorithm Addiction. Mass profiling system SyRI resurfaces in the Netherlands despite ban
Rekenkamer Rotterdam (2021). Gekleurde Technologie (pdf)
News, commentary, analysis
https://www.wired.co.uk/article/welfare-algorithms-discrimination
https://www.nrc.nl/nieuws/2021/11/10/gebruik-algoritme-ethisch-of-niet-a4065005
https://www.agconnect.nl/artikel/bzk-grijpt-om-bias-overheidsalgoritmen-te-vermijden
https://www.agconnect.nl/artikel/weinig-ethisch-besef-rond-algoritmes-rotterdam
https://rotterdam.raadsinformatie.nl/document/11230084/1/s22bb001743_4_43622_tds
Page info
Type: System
Published: April 2023