AI translations jeopardise US asylum applications

Occurred: September 2023

The US immigration system's use of AI-powered language apps is jeopardising the applications of asylum seekers, resulting in unfair and highly consequential decisions.

US immigration authorities say they provide migrants with a human interpreter when needed. However, they increasingly use automated services such as Google Translate and US Customs and Border Protection's (CBP) in-house CBP Translate app to communicate with migrants throughout the asylum process.

Critics note that machine learning-powered translation tools can be unreliable, especially for languages that differ substantially from English or are less well documented, such as Haitian Creole, Dari, and Pashto. The consequences of inaccurate translations can be severe.

In one instance, the US government denied an asylum claim made by a Pashto-speaking Afghan refugee because of an inaccurate automated translation.

AIAAIC view

Given the severe consequences of faulty translation and analysis, immigration authorities need to exercise particular care when processing asylum claims using automated systems.

Government authorities, translation service providers, and technology developers need to disclose clearly and visibly how their systems work, and to set out the known limitations and risks of using them in specific situations, including the processing of asylum claims.

Databank

Operator: Customs and Border Protection (CBP)
Developer: Alphabet/Google; Customs and Border Protection (CBP); Lionbridge; Microsoft; Transperfect Translations
Country: USA
Sector: Govt - immigration
Purpose: Translate asylum claims
Technology: NLP/text analysis; Neural network; Deep learning; Machine learning; Reinforcement learning
Issue: Accuracy/reliability; Bias/discrimination - language
Transparency: Complaints/appeals; Governance; Marketing