Sweden fraud prediction algorithm found to discriminate against women, migrants
Occurred: 2013-
A government fraud prediction algorithm used by Sweden’s Social Insurance Agency was found to disproportionately flag women, migrants, low-income earners, and people without university degrees for fraud investigations.
Since at least 2013, Sweden's Social Insurance Agency (Försäkringskassan) has used a machine learning algorithm to assign risk scores to welfare applicants, automatically flagging those deemed "high risk" for fraud investigations.
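The agency has not disclosed how the model works, so the details below are not Försäkringskassan's actual system. As a minimal sketch, assuming a generic score-and-threshold design, a pipeline of this kind typically trains a risk model and routes applicants whose score exceeds a cut-off to manual investigation. All features, labels, and the threshold here are hypothetical placeholders:

```python
# Illustrative only: hypothetical features, model, and threshold.
# Nothing here reflects Försäkringskassan's actual (undisclosed) system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))          # placeholder applicant features
y = rng.integers(0, 2, size=1000)       # placeholder "confirmed fraud" labels

model = LogisticRegression().fit(X, y)  # train a simple risk model
risk_scores = model.predict_proba(X)[:, 1]

THRESHOLD = 0.8                          # assumed cut-off for "high risk"
flagged = risk_scores >= THRESHOLD       # cases routed to manual fraud investigation
print(f"{flagged.sum()} of {len(flagged)} applicants flagged for investigation")
```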
However, investigations by Lighthouse Reports and Svenska Dagbladet, supported by Amnesty International, revealed that the system unjustly targets marginalised groups, especially women and people with foreign backgrounds, flagging them at significantly higher rates than others even when there is no evidence of wrongdoing.
Those flagged by the algorithm often face invasive investigations, delays, or suspension of benefits, sometimes leaving them unable to cover basic needs, causing psychological and financial harm, and eroding trust in Sweden's welfare system.
The process is opaque: applicants are not informed of their risk scores or the reasons for being flagged, and the agency has refused to fully disclose how the algorithm works.
The bias stems from the algorithm's design and the data used to train it: the model incorporates demographic and behavioural data, and analysis shows that it systematically overrepresents women, migrants, low-income earners, and people without higher education among those flagged for fraud.
Critics argue that these disparities are the result of embedded bias in the algorithm’s construction and the agency’s flawed methodology for estimating fraud risk.
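One common way to quantify the overrepresentation described above is to compare flag rates across demographic groups, a demographic-parity check. The sketch below is purely illustrative, using synthetic data and hypothetical group labels rather than the investigators' actual methodology or the agency's records:

```python
# Illustrative disparity check on flag rates across groups.
# All column names, groups, and data here are synthetic assumptions;
# this is not Försäkringskassan's data or the investigators' code.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 10_000
gender = rng.choice(["woman", "man"], size=n)
# Synthetic flags, deliberately skewed so women are flagged more often.
p_flag = np.where(gender == "woman", 0.12, 0.05)
flagged = rng.random(n) < p_flag

df = pd.DataFrame({"gender": gender, "flagged": flagged})
rates = df.groupby("gender")["flagged"].mean()   # share of each group flagged
print(rates)
print("parity ratio (woman / man):", rates["woman"] / rates["man"])
```

A ratio well above 1 for groups with no corresponding difference in confirmed wrongdoing is the kind of disparity the investigations reported.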
Despite warnings from Sweden’s Inspectorate for Social Security in 2018 and ongoing criticism, the agency has denied wrongdoing and has not implemented effective safeguards against discrimination.
The scandal undermines public trust in Sweden’s welfare system and raises broader questions about the unchecked use of opaque, data-driven decision-making in public administration.
Human rights advocates warn that such systems, if left unregulated, risk perpetuating and amplifying existing social inequalities, violating rights to equality, privacy, and social security.
Amnesty International and other organisations have called for the immediate discontinuation of the discriminatory system and for greater transparency, independent audits, and legal safeguards to ensure fairness and accountability in the use of AI in welfare administration.
Predictive analytics
Predictive analytics, or predictive AI, encompasses a variety of statistical techniques from data mining, predictive modeling, and machine learning that analyze current and historical facts to make predictions about future or otherwise unknown events.
Source: Wikipedia
Operator: Försäkringskassan
Developer: Försäkringskassan
Country: Sweden
Sector: Govt - welfare
Purpose: Predict fraud
Technology: Prediction algorithm; Machine learning
Issue: Accountability; Bias/discrimination; Privacy; Transparency
Amnesty International (2024). Sweden: Authorities Must Discontinue Discriminatory AI Systems Used by Welfare Agency
Lighthouse Reports (2024). Sweden's Suspicion Machine
Lighthouse Reports (2024). How we investigated Sweden's Suspicion Machine
ISF (2018). Profilering som urvalsmetod för riktade kontroller [Profiling as a selection method for targeted controls]
Computer Weekly. Swedish authorities urged to discontinue AI welfare system. https://www.computerweekly.com/news/366616576/Swedish-authorities-urged-to-discontinue-AI-welfare-system
Svenska Dagbladet. Försäkringskassans AI för vab-fusk granskade kvinnor oftare [Försäkringskassan's AI for VAB (child sick-leave benefit) fraud scrutinised women more often]. https://www.svd.se/a/Rzmg9x/forsakringskassans-ai-for-vab-fusk-granskade-kvinnor-oftare
Svenska Dagbladet. FN-expert kritiserar Försäkringskassans AI-analyser mot vab-fusk [UN expert criticises Försäkringskassan's AI analyses targeting VAB fraud]. https://www.svd.se/a/QMPexQ/fn-expert-kritiserar-forsakringskassans-ai-analyser-mot-vab-fusk
Page info
Type: Incident
Published: April 2025