Netherlands SyRI welfare fraud detection automation
SyRI (or 'System Risk Indication') was a risk classification system developed and operated by the Dutch government to detect and predict social security, tax and employment fraud.
Deployed by the Ministry of Social Affairs and Employment, the system used data about employment, fines, penalties, taxes, property, housing, education, retirement, debts, benefits, allowances, subsidies, permits and exemptions, among other sources, to determine whether a welfare claimant should be investigated.
In February 2020, SyRI was found to be in breach of human rights law by a Dutch court, which ordered an immediate stop to its use. The government had been taken to court by a number of civil rights organisations and two citizens.
The court ruled that SyRI violated Article 8 of the European Convention on Human Rights (ECHR), which protects the right to respect for private and family life, and that by primarily targeting poor neighbourhoods, the system may have discriminated against people on the basis of socioeconomic or migrant status.
The court also ruled that the legislation permitting SyRI contained insufficient safeguards against privacy intrusions, and that it failed to strike a 'fair balance' between the system's objectives and the violation of privacy its use entailed, rendering the legislation unlawful.
SyRI was also roundly criticised for its opacity. The court took the government to task for a 'serious lack of transparency' about how the system's risk scoring algorithm worked, stating that its use was 'insufficiently clear and controllable'.
Human Rights Watch complained that the Dutch government refused during the hearing to disclose 'meaningful information' about how SyRI uses personal data to draw inferences about possible fraud.
Despite the Dutch government's decision not to appeal the ruling and to stop using SyRI, a Lighthouse Reports investigation discovered that it had quietly continued to deploy an adapted version of SyRI in some of the country's most vulnerable neighbourhoods.
The finding led to a joint investigation by Lighthouse Reports and WIRED that found that a machine learning algorithm used by the Municipality of Rotterdam to detect welfare fraud discriminates against welfare claimants based on ethnicity, age, gender, and parenthood.
Operator: Ministry of Social Affairs and Employment (SZW); Benefits Intelligence Agency Foundation; Municipality of Rotterdam
Developer: Ministry of Social Affairs and Employment (SZW); Benefits Intelligence Agency Foundation
Sector: Govt - welfare
Purpose: Detect and predict welfare fraud
Technology: Risk assessment algorithm; Machine learning
Issue: Bias/discrimination - race, ethnicity, economic; Privacy; Scope creep/normalisation
Transparency: Governance; Black box; Complaints/appeals; Marketing; Legal
Platform Bescherming Burgerrechten. Bij Voorbaat Verdacht
de Bruijn H., Warnier M., Janssen M. (2022). The perils and pitfalls of explainable AI: Strategies for explaining algorithmic decision-making
Rachovitsa A., Johann N. (2022). The Human Rights Implications of the Use of AI in the Digital Welfare State: Lessons Learned from the Dutch SyRI Case
van Bekkum M., Borgesius F.Z. (2021). Digital welfare fraud detection and the Dutch SyRI judgment
Privacy International (2020). The SyRI case: A landmark ruling for benefits claimants around the world
Human Rights Watch (2019). Welfare surveillance on trial in the Netherlands
Center for Human Rights and Global Justice (2019). Profiling the poor in the Dutch welfare state
Digital Freedom Fund. NJCM, Platform Bescherming Burgerrechten and others v. The Netherlands (the SyRI case)
The Public Interest Litigation Project (PILP) (2015). Profiling and SyRI