US Border Patrol's AI surveillance programme leads to rights violations
Occurred: 2023-
Page published: November 2025
The US Border Patrol has deployed a secretive, nationwide AI-driven surveillance programme that uses automated license plate readers and predictive algorithms to track millions of drivers. The resulting stops, searches, and detentions, civil liberties groups argue, infringe constitutional rights, particularly the Fourth Amendment protection against unreasonable searches.
The U.S. Border Patrol (USBP) and its parent agency, U.S. Customs and Border Protection (CBP), have established a massive, secretive surveillance network across the nation, stretching far beyond the traditional border zone, using Automated License Plate Readers (ALPRs) and AI-driven predictive intelligence.
Nature and Scope: The system scans and records vehicle license plates on public roads, feeding the data into an algorithm that flags drivers whose travel patterns are deemed "suspicious": criteria reportedly include driving on backcountry roads, driving a rental car, or taking short trips to the border region. This dragnet surveillance extends thousands of miles inland, to states such as Illinois and Michigan.
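CBP has not published how these criteria are combined, so any concrete description is speculative. Purely as an illustration of how the reported criteria could be turned into an automated flag, the following sketch scores ALPR sightings with hypothetical rules; every field name, weight, and threshold here is an assumption, not the actual CMPRS logic:

```python
from dataclasses import dataclass

@dataclass
class Sighting:
    plate: str              # plate text extracted by OCR
    road_type: str          # e.g. "highway" or "backcountry"
    is_rental: bool         # registration matches a rental fleet
    near_border: bool       # camera located in the border region
    hours_in_region: float  # dwell time before leaving the region

def suspicion_score(s: Sighting) -> int:
    """Toy rule-based score mirroring the criteria reported in
    coverage of the programme; real weights are unknown."""
    score = 0
    if s.road_type == "backcountry":
        score += 2
    if s.is_rental:
        score += 1
    if s.near_border and s.hours_in_region < 4:
        score += 2  # short trip to the border region
    return score

FLAG_THRESHOLD = 3  # hypothetical cut-off for generating a "tip"

def is_flagged(s: Sighting) -> bool:
    return suspicion_score(s) >= FLAG_THRESHOLD
```

Even this toy version shows the core civil-liberties concern: each input is lawful, everyday behaviour, yet the combination mechanically produces "suspicion" that can trigger a stop.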
Consequences: Based on AI-generated "suspicion," federal agents communicate tips to local law enforcement, who then conduct "pretext stops" (pulling drivers over for minor traffic violations like speeding or a dangling air freshener) to conceal the federal surveillance tip. These stops often result in aggressive questioning, prolonged detentions, vehicle and property searches, and in some cases, arrests or asset seizures without proof of wrongdoing. Individuals who have been targeted have incurred significant legal fees to clear their names.
Harms/Impacts: The programme directly affects the privacy and civil liberties of millions of ordinary American drivers, raising significant constitutional questions, especially under the Fourth Amendment. Critics warn that the system's focus on "suspicious patterns of life" amounts to dragnet surveillance of citizens' daily movements and associations. The secrecy and legal ambiguity surrounding the system's nationwide use constitute both actual harm (unwarranted stops, searches, and seizures) and potential harm (a chilling effect on freedom of movement and association).
The expansion of this AI surveillance programme is driven by a push for technology-centric border security solutions, a desire for efficiency, and a significant lack of mandated transparency and accountability.
AI/product limitations: The core issue lies in the reliance on a predictive algorithm that operates on vaguely defined criteria ("suspicious travel patterns") to determine who warrants federal attention. Such a system can perpetuate inherent biases, is susceptible to error, and engages in mass surveillance rather than targeting known suspects. Other AI tools used by immigration enforcement, such as facial recognition in the CBP One app, have demonstrated lower accuracy for Black and darker-skinned individuals, suggesting the potential for similarly discriminatory outcomes here.
Corporate and government opacity: The programme's development and use have been marked by secrecy. Border Patrol agents and former officials reported efforts to conceal the AI tip as the true reason for a stop in court documents and police reports—a practice sometimes known as "parallel construction." The use of proprietary technologies (like those from companies such as Flock Safety, Vigilant, and Rekor) often involves data sharing between government agencies and private entities, further obscuring the full extent of surveillance and making it nearly impossible for those impacted to legally challenge the process or understand why they were flagged.
Lack of oversight: The Border Patrol has historically been described as a "rogue agency" with a long history of abuses. The rapid adoption of advanced AI technologies has outpaced necessary legislative and regulatory frameworks, allowing the agency to expand its operational scope into an internal intelligence/police force without robust human rights impact assessments or sufficient independent oversight.
For those directly impacted: Individuals targeted by this system face significant stress, legal costs, and the risk of unwarranted detention, search, or seizure of property. Those living in border communities or traveling through them are at higher risk of being monitored and flagged. For immigrants and people of colour, the use of biased or opaque AI systems can exacerbate existing racial discrimination in enforcement, making it more difficult to access rights or achieve legal status.
For society: The existence of a mass, secretive, AI-driven dragnet surveillance system erodes privacy rights, chills free expression and movement, and fundamentally alters the relationship between the government and its citizens. It represents a significant expansion of the government's domestic surveillance capabilities, creating a precedent for turning a border enforcement agency into a nationwide intelligence-gathering body that monitors the "patterns of life" of ordinary Americans. If unchecked, this trend normalises the use of unaccountable AI to pre-judge citizens as "suspicious" based on their otherwise lawful behaviour.
Predictive analytics
Predictive analytics encompasses a variety of statistical techniques from data mining, predictive modeling, and machine learning that analyze current and historical facts to make predictions about future or otherwise unknown events.
Source: Wikipedia
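To make the general technique concrete (this illustrates predictive analytics broadly, not the CBP system), a minimal anomaly-detection example flags new observations that deviate from historical data by more than a chosen number of standard deviations; the data and threshold below are invented for illustration:

```python
import statistics

def anomalies(history, new_values, z_threshold=2.0):
    """Return the new values lying more than z_threshold standard
    deviations from the mean of the historical data."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    return [v for v in new_values
            if stdev and abs(v - mean) / stdev > z_threshold]

# e.g. weekly trip counts past a camera; 14 stands out from a ~10/week baseline
flagged = anomalies([10, 11, 9, 10, 12, 10, 11], [10, 14])
```

The choice of threshold is arbitrary, which is precisely why critics object when such scores are used to justify stops: what counts as "anomalous" is a tunable parameter, not evidence of wrongdoing.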
Conveyance Monitoring and Predictive Recognition System (CMPRS)
Developer: U.S. Customs and Border Protection (CBP)
Country: USA
Sector: Govt - police
Purpose: Identify suspicious drivers
Technology: Anomaly detection; Machine learning; Optical character recognition; Pattern recognition; Prediction algorithm
Issue: Accountability; Bias/discrimination; Privacy/surveillance; Transparency
AIAAIC Repository ID: AIAAIC2135