Brazil AI-powered welfare app accused of unfairly denying claims
Occurred: 2018-
An AI-powered social security app designed to speed up welfare claims in Brazil is facing accusations of unfairly rejecting legitimate applications, especially those submitted by vulnerable populations.
Brazil's National Social Security Institute (INSS) implemented an AI-powered mobile app to reduce long queues and accelerate the processing of millions of claims for benefits such as sick pay, pensions, and retirement.
While the system has improved efficiency and cleared many straightforward cases, it has also wrongly denied hundreds of claims, particularly from people in remote or rural areas with low digital literacy.
Many of these rejections stem from minor errors or misunderstandings in the digital application process, such as applicants misinterpreting questions or struggling with the app's functionality.
The lack of human oversight means the AI cannot account for complex or atypical cases, leading to increased appeals and, in some cases, forcing applicants to take legal action they can ill afford.
The drive to digitise and automate welfare services aimed to address chronic inefficiency and backlog in the INSS.
However, the AI system’s design prioritises speed and cost-saving over user accessibility and fairness. It relies heavily on existing databases and rigid decision rules, which can misinterpret incomplete or ambiguous information.
The system’s complexity and lack of transparency, combined with inadequate user support and poor digital infrastructure in rural areas, have led to a disproportionate impact on the poorest, oldest, and most rural populations.
These groups often lack the digital skills needed to navigate the app or correct minor mistakes, resulting in automatic rejections that a human reviewer might have resolved.
For those directly affected, wrongful denials mean loss of essential income, increased anxiety, and additional burdens such as navigating appeals or legal processes.
Indirectly, the system risks deepening social inequality by making welfare less accessible to those most in need, undermining trust in public services, and increasing societal division.
More broadly, the controversy highlights the risks of unregulated AI in public administration, where efficiency gains may come at the expense of fairness, transparency, and human rights.
The situation underscores the need for better governance, user-centred design, and regulatory oversight to ensure that technological innovation in welfare systems serves the public interest and protects vulnerable citizens.
System: Meu INSS
Operator: National Social Security Institute (INSS)
Developer: Dataprev
Country: Brazil
Sector: Govt - welfare
Purpose: Process welfare claims
Technology: Computer vision; NLP/text analysis
Issue: Accessibility; Accountability; Accuracy/reliability; Bias/discrimination; Transparency
Page info
Type: Incident
Published: April 2025