Rotterdam welfare fraud risk algorithm
The Municipality of Rotterdam used a machine learning algorithm to detect welfare fraud, flagging individuals with high-risk profiles for investigation. The system prompted allegations of poor governance and discrimination.
Developed by Accenture and introduced in 2017, the algorithm assigned risk scores based on citizens' gender, age, whether they had children, Dutch language skills, and other criteria.
The system was suspended in 2021 after a critical external review.
Operator: Municipality of Rotterdam
Developer: Accenture; Ministry of Social Affairs and Employment (SZW); Benefits Intelligence Agency Foundation
Country: Netherlands
Sector: Govt - welfare
Purpose: Detect and predict welfare fraud
Technology: Risk assessment algorithm; Machine learning
Issue: Accuracy/reliability; Bias/discrimination; Ethics/values; Oversight/review; Proportionality
Transparency: Governance; Black box
Rotterdam's welfare fraud risk algorithm has been criticised for its lack of transparency and accountability:
Decision-making process. The algorithm's code and decision-making process are not publicly available, making it difficult to understand how it arrives at its conclusions.
Data sources and quality. The algorithm uses a combination of data sources, including government databases and external data providers, but the quality and accuracy of this data are not publicly disclosed.
Risk scores. The algorithm assigns a risk score to individuals, but the factors that contribute to this score are not clearly explained, making it difficult for individuals to understand why they have been flagged as high-risk (a simplified illustration of this kind of scoring follows this list).
Complaints and appeals. Individuals who are flagged as high-risk by the algorithm may not have adequate opportunities to appeal or correct the decision, which can lead to unfair treatment.
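To make the opacity concern concrete, the sketch below shows, purely as an illustration, how a tree-based scorer trained on the kinds of attributes reporters described (gender, age, children, language-test results) reduces each person to a single risk number and a yes/no flag. Rotterdam's actual code, features and data have not been published; every column name, value and threshold here is an assumption, and the data is synthetic.

```python
# Illustrative sketch only: NOT Rotterdam's model, whose code and data remain unpublished.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5_000

# Synthetic stand-ins for the kinds of inputs journalists reported
# (all columns and values are assumptions).
X = np.column_stack([
    rng.integers(0, 2, n),        # is_female
    rng.integers(18, 70, n),      # age
    rng.poisson(1.2, n),          # number of children
    rng.integers(0, 2, n),        # failed a Dutch language test
])
# Synthetic "fraud" labels. In practice such labels come from past
# investigations, so any bias in who was investigated feeds the model.
y = rng.integers(0, 2, n)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# One opaque number per person; those above a cut-off (or the top N)
# are flagged for investigation.
risk = model.predict_proba(X)[:, 1]
flagged = risk >= np.quantile(risk, 0.9)  # e.g. investigate the top 10%

# A minimal disparity check: do flag rates differ by a protected attribute?
is_female = X[:, 0] == 1
print("flag rate (women):", round(flagged[is_female].mean(), 3))
print("flag rate (men):  ", round(flagged[~is_female].mean(), 3))
```

From the flagged person's perspective, only the outcome is visible: the score, the threshold and the relative weight of each attribute stay hidden, which is why appeals are hard to mount.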
Rotterdam's welfare fraud algorithm has also been criticised for incorrectly targeting individuals, leading to unjustified suspicion and the denial of benefits. Critics argue the system reinforces social inequalities, exacerbates distrust in public institutions, and undermines the integrity and effectiveness of the welfare system.
March 2023. A Lighthouse Reports/WIRED investigation revealed that individual criteria such as being female, young, having children or having failed a Dutch language test increased a citizen's risk score and made them more likely to be flagged for investigation. Tests showed that a change in any one of these variables could increase the likelihood of discrimination, particularly against vulnerable groups. The Municipality of Rotterdam had jealously guarded its welfare fraud algorithm, blocking attempts by journalists, researchers and others to access information showing how it worked, including its data, code and model. To its credit, the city opted to engage with the Lighthouse Reports/WIRED investigation - the only government entity across Europe to do so. However, per The Markup, 'Rotterdam was the only city to provide the model file and other technical documentation, and that was after a year of negotiations. In the end, it still took a stroke of luck for the reporters to get the training data: City officials unknowingly included it in the source code of histograms they sent to reporters.'
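A simple way to probe a model for single-attribute effects, loosely in the spirit of the tests described above (the journalists' exact methodology and the real model are not public), is to flip one input at a time and compare the resulting scores. The sketch below reuses the hypothetical model and synthetic data from the earlier example; the column layout and attribute names are assumptions.

```python
# Counterfactual probe (illustration only): flip one attribute for everyone,
# re-score, and measure how the hypothetical model's risk scores move.
def attribute_effect(model, X, column, new_value):
    X_flipped = X.copy()
    X_flipped[:, column] = new_value
    return model.predict_proba(X_flipped)[:, 1] - model.predict_proba(X)[:, 1]

# e.g. how much does marking everyone as having failed the language test
# (column 3 in the synthetic layout above) shift their scores?
delta = attribute_effect(model, X, column=3, new_value=1)
print("mean score change:", round(delta.mean(), 4))
print("share of people whose score increases:", round((delta > 0).mean(), 3))
```

If flipping a single protected or proxy attribute systematically pushes scores up, people with that attribute are more likely to cross the investigation threshold regardless of their actual behaviour.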
December 2020. A critical external review (pdf) of the system by Rekenkamer Rotterdam, the city's court of audit, led to its suspension. The auditors found 'insufficient coordination' between the developers of the algorithms and the people using them, potentially resulting in poor ethical decision-making. They also took issue with the Municipality's failure to assess whether the algorithms were better than the human systems they replaced.
DataEthics (2023). Algorithmic Models for Detecting Welfare Fraud Are Risky
The Markup (2023). It Takes a Small Miracle to Learn Basic Facts About Government Algorithms
Racism and Technology Center (2023). Racist Technology in Action: Rotterdam’s welfare fraud prediction algorithm was biased
WIRED (2023). This Algorithm Could Ruin Your Life
WIRED (2023). Inside a Misfiring Government Data Machine
Lighthouse Reports (2023). Suspicion Machines
Lighthouse Reports (2022). The Algorithm Addiction. Mass profiling system SyRI resurfaces in the Netherlands despite ban
Rekenkamer Rotterdam (2021). Gekleurde Technologie (pdf)
https://www.wired.co.uk/article/welfare-algorithms-discrimination
https://www.nrc.nl/nieuws/2021/11/10/gebruik-algoritme-ethisch-of-niet-a4065005
https://www.agconnect.nl/artikel/bzk-grijpt-om-bias-overheidsalgoritmen-te-vermijden
https://www.agconnect.nl/artikel/weinig-ethisch-besef-rond-algoritmes-rotterdam
https://rotterdam.raadsinformatie.nl/document/11230084/1/s22bb001743_4_43622_tds
Page info
Type: System
Published: April 2023
Last updated: August 2024