ICE facial recognition app misidentifies woman twice
Occurred: October 2025
Page published: February 2026
U.S. immigration's use of a facial recognition tool misidentified a detained woman twice, exposing serious accuracy flaws in the tool and highlighting the risks of wrongful enforcement, civil liberties violations, and unchecked surveillance.
During an immigration enforcement operation in Oregon, USA, in October 2025, agents from Immigration and Customs Enforcement (ICE) used the smartphone facial recognition app Mobile Fortify to identify a detained woman and determine her immigration status.
Instead of returning her correct identity, the tool produced two different, incorrect names for the same person across successive scans.
This error raises questions about the accuracy and reliability of a system that ICE has publicly described as providing “definitive” results about someone’s identity and status, sometimes even over official documents such as birth certificates.
The incident stems from a combination of technical limitations and a lack of procedural guardrails.
Facial recognition is notoriously less reliable in the field, where lighting and angles vary, than in controlled settings; research from the US National Institute of Standards and Technology (NIST) and other bodies also consistently shows that facial recognition algorithms have higher error rates for women and people of colour.
At the same time, internal ICE communications revealed that the agency instructed officers to treat a biometric match as a “definitive determination” of status, indicating institutional over-reliance on the app.
Furthermore, the app was fast-tracked for nationwide deployment in early 2025 without a full Privacy Impact Assessment, and internal policies were reportedly weakened to allow matches to override physical evidence like birth certificates.
For the public: The incident signals a transition to a "biometric checkpoint" society in which individuals in the U.S. can be scanned without consent and a software glitch can lead to immediate loss of liberty. It also highlights a dangerous erosion of the presumption of innocence, with a computer's output prioritised over human testimony and physical documentation.
For policymakers: The failure underscores the need for federal legislation (such as the proposed ICE Out of Our Faces Act) to prevent arrests based solely on AI matches and to mandate strict accuracy and privacy audits.
Mobile Fortify
Developer: NEC Corporation
Country: USA
Sector: Govt - immigration
Purpose: Verify individuals for detention, deportation
Technology: Facial recognition
Issue: Accountability; Accuracy/reliability; Automation bias; Autonomy; Privacy/surveillance; Transparency
January 2025: ICE begins nationwide deployment of Mobile Fortify for field operations.
May 20, 2025: The app is officially integrated into standard ICE enforcement procedures.
October 15, 2025: An Oregon woman is misidentified twice by the app during a raid.
November 2025: Civil rights coalitions (including EPIC) formally urge the DHS to halt the use of the app, citing civil liberties risks.
January 19, 2026: Investigative reports by 404 Media bring the Oregon misidentification incident to public light.
February 2026: A class-action lawsuit is filed against the Department of Homeland Security, alleging the technology is being used to intimidate and track people observing immigration operations.
AIAAIC Repository ID: AIAAIC2218