Sweden fraud prediction algorithm found to discriminate against women, migrants
Occurred: 2013-
Page published: April 2025
A government fraud prediction algorithm used by Sweden's Social Insurance Agency was found to disproportionately flag women, migrants, low-income earners, and people without university degrees for fraud investigations.
Since at least 2013, Sweden's Social Insurance Agency (Försäkringskassan) used a machine learning algorithm to assign risk scores to welfare applicants, automatically flagging those deemed "high risk" for fraud investigations.
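The agency has not disclosed the model, its features, or its decision threshold, so the following is only a minimal sketch in Python of the general pattern described above: a learned model assigns each applicant a risk score, and cases above an assumed cut-off are routed to manual fraud investigation. All names, the threshold value, and the toy scores are hypothetical.

# Hypothetical sketch only: the agency's actual model, features and threshold
# are undisclosed. A model assigns each applicant a risk score, and cases
# above an assumed cut-off are routed to a manual fraud investigation.
from dataclasses import dataclass

@dataclass
class Applicant:
    applicant_id: str
    score: float  # output of the (undisclosed) machine learning model

RISK_THRESHOLD = 0.8  # assumed value, for illustration only

def flag_for_investigation(applicants):
    """Return applicants whose model score meets or exceeds the assumed threshold."""
    return [a for a in applicants if a.score >= RISK_THRESHOLD]

if __name__ == "__main__":
    cases = [Applicant("A1", 0.35), Applicant("A2", 0.91), Applicant("A3", 0.82)]
    for a in flag_for_investigation(cases):
        print(f"{a.applicant_id} flagged for manual investigation")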
However, investigations by Lighthouse Reports and Svenska Dagbladet, supported by Amnesty International, revealed that the system unjustly targeted marginalised groups, especially women and people with foreign backgrounds, at significantly higher rates than others, even when there was no evidence of wrongdoing.
Those flagged by the algorithm often face invasive investigations, delays, or suspension of benefits, sometimes leaving them unable to cover basic needs, causing psychological and financial harm, and eroding trust in Sweden's welfare system.
The process is opaque: applicants are not informed of their risk scores or the reasons for being flagged, and the agency has refused to fully disclose how the algorithm works.
The bias stems from the algorithm's design and the data used to train it. The model incorporates demographic and behavioural data, but analysis shows it systematically overrepresents women, migrants, low-income earners, and those without higher education among those flagged for fraud.
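As a rough illustration of how such overrepresentation can be measured, the sketch below (in Python, with entirely made-up toy numbers, not the investigators' actual methodology) compares each group's share among flagged cases with its share of all applicants; a ratio well above 1 indicates the group is flagged disproportionately often.

# Illustrative audit sketch, not the investigators' actual code or data.
from collections import Counter

def representation_ratios(all_groups, flagged_groups):
    """Map each group to (share among flagged cases) / (share among all applicants)."""
    total_all, total_flagged = len(all_groups), len(flagged_groups)
    base = Counter(all_groups)
    flagged = Counter(flagged_groups)
    return {
        g: (flagged[g] / total_flagged) / (base[g] / total_all)
        for g in base
    }

if __name__ == "__main__":
    # Toy data for illustration; the real analyses used agency case records.
    population = ["women"] * 50 + ["men"] * 50
    flagged = ["women"] * 14 + ["men"] * 6
    print(representation_ratios(population, flagged))
    # {'women': 1.4, 'men': 0.6} -> women overrepresented among flagged cases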
Critics argue that these disparities are the result of embedded bias in the algorithm's construction and the agency's flawed methodology for estimating fraud risk.
Despite warnings from Sweden's Inspectorate for Social Security in 2018 and ongoing criticism, the agency has denied wrongdoing and has not implemented effective safeguards against discrimination.
The scandal undermines public trust in Sweden's welfare system and raises broader questions about the unchecked use of opaque, data-driven decision-making in public administration.
Human rights advocates warn that such systems, if left unregulated, risk perpetuating and amplifying existing social inequalities, violating rights to equality, privacy, and social security.
Amnesty International and other organisations have called for the immediate discontinuation of the discriminatory system and for greater transparency, independent audits, and legal safeguards to ensure fairness and accountability in the use of AI in welfare administration.
Developer: Försäkringskassan
Country: Sweden
Sector: Govt - welfare
Purpose: Predict fraud
Technology: Prediction algorithm; Machine learning
Issue: Accountability; Bias/discrimination; Privacy; Transparency
Amnesty International (2024). Sweden: Authorities Must Discontinue Discriminatory AI Systems Used by Welfare Agency
Lighthouse Reports (2024). Sweden's Suspicion Machine
Lighthouse Reports (2024). How we investigated Sweden's Suspicion Machine
ISF (2018). Profilering som urvalsmetod för riktade kontroller
https://www.computerweekly.com/news/366616576/Swedish-authorities-urged-to-discontinue-AI-welfare-system
https://www.svd.se/a/Rzmg9x/forsakringskassans-ai-for-vab-fusk-granskade-kvinnor-oftare
https://www.svd.se/a/QMPexQ/fn-expert-kritiserar-forsakringskassans-ai-analyser-mot-vab-fusk
AIAAIC Repository ID: AIAAIC1970