Netherlands SyRI welfare fraud detection automation

Released: 2014


SyRI (or 'System Risk Indication') was a risk classification system developed and operated by the Dutch government to detect and predict social security, tax and employment fraud.

Deployed by the Ministry of Social Affairs and Employment, the system used data about employment, fines, penalties, taxes, property, housing, education, retirement, debts, benefits, allowances, subsidies, permits and exemptions, among other records, to determine whether a welfare claimant should be investigated.
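How these records were combined into a risk indication was never made public (see the transparency criticism below), but functionally the system cross-referenced data held by different agencies and flagged claimants whose combined profile exceeded a risk threshold, after which they could be singled out for investigation. The sketch below is purely illustrative of that pattern; the field names, weights and threshold are assumptions, not the actual SyRI logic.

```python
# Purely illustrative sketch: the real SyRI scoring logic was never disclosed.
# All field names, weights and the threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class ClaimantRecord:
    has_outstanding_fines: bool
    benefit_overlap_months: int   # months of overlapping benefits/allowances
    undeclared_property: bool
    address_mismatch: bool        # housing registration vs. declared address

def risk_indication(record: ClaimantRecord, threshold: float = 0.5) -> bool:
    """Return True if the combined record exceeds the risk threshold,
    i.e. the claimant would be flagged for manual investigation."""
    score = 0.0
    score += 0.3 if record.has_outstanding_fines else 0.0
    score += 0.1 * min(record.benefit_overlap_months, 6)
    score += 0.4 if record.undeclared_property else 0.0
    score += 0.2 if record.address_mismatch else 0.0
    return score >= threshold

# Example: a claimant with an address mismatch and two months of
# overlapping allowances stays below the hypothetical threshold.
print(risk_indication(ClaimantRecord(False, 2, False, True)))  # False
```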

Discrimination

In February 2020, SyRI was found to be in breach of human rights law by a Dutch court, which ordered an immediate stop to its use. The government had been taken to court by a coalition of civil rights organisations and two citizens.

The court ruled that SyRI violated Article 8 of the European Convention on Human Rights (ECHR), which protects the right to respect for private and family life, and that by primarily targeting poor neighbourhoods, the system may have discriminated against people on the basis of socioeconomic or migrant status.

Privacy

The court also ruled that the legislation permitting SyRI contained insufficient safeguards against privacy intrusions and failed to strike a 'fair balance' between the system's objectives and the violation of privacy its use entailed, making the legislation unlawful.

Transparency

SyRI was also roundly criticised for its opacity. The court took the government to task for a 'serious lack of transparency' about how the system's risk scoring algorithm worked, stating that its use was 'insufficiently clear and controllable'.

Human Rights Watch complained that the Dutch government refused during the hearing to disclose 'meaningful information' about how SyRI uses personal data to draw inferences about possible fraud.

Scope creep

Despite the Dutch government's decision not to appeal the ruling and to stop using SyRI, a Lighthouse Reports investigation discovered that it had quietly continued to deploy an adapted version of SyRI in some of the country's most vulnerable neighbourhoods.

The finding led to a joint investigation by Lighthouse Reports and WIRED, which found that a machine learning algorithm used by the Municipality of Rotterdam to detect welfare fraud discriminated against claimants on the basis of ethnicity, age, gender, and parenthood.
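The Rotterdam model itself is not reproduced here; the fragment below is a hypothetical illustration of the mechanism such findings point to: when a demographic attribute (or a proxy for one) carries a non-zero weight in a risk model, two otherwise identical claimants receive different fraud-risk scores. All feature names and weights are invented for the example.

```python
# Hypothetical illustration only: not the Rotterdam model.
# Shows how a non-zero weight on a demographic attribute makes two
# otherwise identical claimants receive different fraud-risk scores.
import math

WEIGHTS = {
    "months_on_benefits": 0.02,
    "missed_appointments": 0.50,
    "is_single_parent": 0.80,   # demographic attribute treated as a risk factor
}
BIAS = -3.0

def risk_score(features: dict) -> float:
    """Logistic score in [0, 1] from a linear combination of features."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

base = {"months_on_benefits": 24, "missed_appointments": 1}
a = risk_score({**base, "is_single_parent": 0})
b = risk_score({**base, "is_single_parent": 1})
print(f"not a single parent: {a:.2f}, single parent: {b:.2f}")  # b > a
```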

Operator: Ministry of Social Affairs and Employment (SZW); Benefits Intelligence Agency Foundation; Municipality of Rotterdam
Developer: Ministry of Social Affairs and Employment (SZW); Benefits Intelligence Agency Foundation
Country: Netherlands
Sector: Govt - welfare
Purpose: Detect and predict welfare fraud
Technology: Risk assessment algorithm; Machine learning
Issue: Bias/discrimination - race, ethnicity, economic; Privacy; Scope creep/normalisation
Transparency: Governance; Black box; Complaints/appeals; Marketing; Legal
