U.S. Customs and Border Protection, flush with billions in new funding, is seeking “advanced AI” technologies to surveil urban residential areas, increasingly sophisticated autonomous systems, and even the ability to see through walls.
A CBP presentation for an “Industry Day” summit with private sector vendors, obtained by The Intercept, lays out a detailed wish list of tech CBP hopes to purchase, like satellite connectivity for surveillance towers along the border and improved radio communications. But it also shows that state-of-the-art, AI-augmented surveillance technologies will be central to the Trump administration’s anti-immigrant campaign, which will extend deep into the interior of the North American continent, hundreds of miles from international borders as commonly understood.
The recent passage of Trump’s sprawling flagship legislation funnels tens of billions of dollars to the Department of Homeland Security. While much of that funding will go to Immigration and Customs Enforcement to bolster the administration’s arrest and deportation operations, a great deal is earmarked to purchase new technology and equipment for federal offices tasked with preventing immigrants from arriving in the first place: Customs and Border Protection, which administers the country’s border surveillance apparatus, and its subsidiary, the U.S. Border Patrol.
One page of the presentation, describing the wish list of Border Patrol’s Law Enforcement Operations Division, says the agency needs “Advanced AI to identify and track suspicious activity in urban environment [sic],” citing the “challenges” posed by “Dense residential areas.” What counts as “suspicious activity” is left undefined.
Customs and Border Protection did not respond to The Intercept’s questions about the slides.

The reference to AI-aided urban surveillance appears on a page dedicated to the operational needs of Border Patrol’s “Coastal AOR,” or area of responsibility, encompassing the entire southeast of the United States, from Kentucky to Florida. A page describing the “Southern AOR,” which includes all of inland Nevada and Oklahoma, similarly states the need for “Advanced intelligence to identify suspicious patterns” and “Long-range surveillance” because “city environments make it difficult to separate normal activity from suspicious activity.”
Although the Fourth Amendment provides protection against arbitrary police searches, federal law grants immigration agencies the power to conduct warrantless detentions and searches within 100 miles of the land borders with Canada and Mexico or of the U.S. coastline. This zone includes most of the largest cities in the United States, among them Los Angeles and New York, as well as the entirety of Florida.
The document mentions no specific surveillance methods or “advanced AI” tools that might be used in urban environments. Across the Southwest, residents of towns like Nogales and Calexico are already subjected to monitoring from surveillance towers placed in their neighborhoods. A 2014 DHS border surveillance privacy impact assessment warned these towers “may capture information about individuals or activities that are beyond the scope of CBP’s authorities. Video cameras can capture individuals entering places or engaging in activities as they relate to their daily lives because the border includes populated areas,” for example, “video of an individual entering a doctor’s office, attending public rallies, social events or meetings, or associating with other individuals.”
Last year, the Government Accountability Office found the DHS tower surveillance program failed to comply with six out of six privacy policies designed to prevent such overreach. CBP is also already known to use “artificial intelligence” tools to ferret out “suspicious activity,” according to agency documents. A 2024 inventory of DHS AI applications includes the Rapid Tactical Operations Reconnaissance program, or RAPTOR, which “leverages Artificial Intelligence (AI) to enhance border security through real-time surveillance and reconnaissance. The AI system processes data from radar, infrared sensors, and video surveillance to detect and track suspicious activities along U.S. borders.”
The document’s call for urban surveillance reflects the reality of Border Patrol, an agency empowered, despite its name, with broad legal authority to operate throughout the United States.
“Border Patrol’s escalating immigration raids and protest crackdowns show us the agency operates heavily in cities, not just remote deserts,” said Spencer Reynolds, a former attorney with the Department of Homeland Security who focused on intelligence matters. “Day by day, its activities appear less based on suspicion and more reliant on racial and ethnic profiling. References to operations in ‘dense residential areas’ are alarming in that they potentially signal planning for expanded operations or tracking in American neighborhoods.”
Automating immigration enforcement has been a Homeland Security priority for years, as exemplified by the bipartisan push to expand the use of machine learning-based surveillance towers like those sold by arms-maker Anduril Industries across the southern border. “Autonomous technologies will improve the USBP’s ability to detect, identify, and classify potential threats in the operating environment,” according to the agency’s 2024–2028 strategy document. “After a threat has been identified and classified, autonomous technology will enable the USBP to track threats in near real-time through an integrated network.”
The automation desired by Border Patrol seems to lean heavily on computer vision, a form of machine learning that excels at pattern matching to find objects in the desert that resemble people, cars, or other “items of interest,” rather than requiring crews of human agents to monitor camera feeds and other sensors around the clock. The Border Patrol presentation includes multiple requests for small drones that incorporate artificial intelligence technologies to aid in the “detection, tracking, and classification” of targets.
A computer system that has analyzed a large number of photographs of trucks driving through the desert can become effective at identifying similar vehicles in the future. But efforts to algorithmically label human behavior as “suspicious” (an abstract concept compared to “truck”) based only on its appearance have been criticized by some artificial intelligence scholars and civil libertarians as error-prone, overly subjective if not outright pseudoscientific, and often reliant on ethnic and religious stereotypes. Any effort to apply predictive techniques based on surveillance data from entire urban areas or residential communities would exacerbate these risks of bias and inaccuracy.
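For illustration only, here is a minimal sketch of the kind of off-the-shelf computer vision described above, using a generic pretrained object detector (torchvision’s Faster R-CNN trained on the COCO dataset). The image file name and confidence threshold are hypothetical, and nothing in the CBP documents identifies the models or vendors the agency would actually use; the point is only that such systems assign concrete labels like “person” or “truck” with confidence scores, not abstractions like “suspicious activity.”

```python
# Illustrative sketch only: a generic pretrained object detector, not CBP's
# system. It assigns concrete labels ("person", "truck", "car") and confidence
# scores to objects found in a single image frame.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    FasterRCNN_ResNet50_FPN_Weights,
    fasterrcnn_resnet50_fpn,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
categories = weights.meta["categories"]   # COCO class names: "person", "truck", ...
preprocess = weights.transforms()

img = read_image("camera_frame.jpg")      # hypothetical frame from a camera feed
with torch.no_grad():
    detections = model([preprocess(img)])[0]

# Report detections above a (hypothetical) confidence threshold.
for label, score in zip(detections["labels"], detections["scores"]):
    if score > 0.8:
        print(categories[int(label)], round(float(score), 2))
```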
“In the best of times, oversight of technology and data at DHS is weak and has allowed profiling, but in recent months the administration has intentionally further undermined DHS accountability,” explained Reynolds, now senior counsel at the Brennan Center’s liberty and national security program. “Artificial intelligence development is opaque, even more so when it relies on private contractors that are unaccountable to the public — like those Border Patrol wants to hire. Injecting AI into an environment full of biased data and black-box intelligence systems will likely only increase risk and further embolden the agency’s increasingly aggressive behavior.”
The desire to hunt “suspicious” people with “advanced AI” reflects a longtime ambition at the Department of Homeland Security, Mohammad Tajsar, an attorney at the ACLU of Southern California, told The Intercept. Military and intelligence agencies across the world are increasingly working to use forms of machine learning, often large language models like OpenAI’s GPT, to rapidly ingest and analyze varied data sources to find buried trends, threats, and targets — though systemic issues with accuracy remain unsolved.
This proposed use case dovetails perfectly with the Homeland Security ethos, Tajsar said. “They’re addicted to suspicious activity reporting because they fundamentally believe that their targets do suspicious things, and that suspicious things can predict criminal behavior,” a notion Tajsar described as a “fantasy” that “remains unchallenged despite the complete lack of empiricism to support it.” With the rapid proliferation of technologies billed as artificially intelligent, “they think that they can bring to bear all of their disparate sources of data using computers, and they see that as a breakthrough in what they’ve been trying to do for a long, long time.”
While much of the presentation addresses Border Patrol’s wide-ranging surveillance agenda, it also includes information about other departmental tech needs.

The Border Patrol Tactical Unit, or BORTAC, exists on paper to execute domestic missions involving terrorism, hostage situations, or other high-risk scenarios. But the unit has become increasingly associated with suppressing dissent and routine deportation raids: In 2020, the Trump administration ordered BORTAC into the streets of Portland to tamp down protests, and the special operations unit has been similarly deployed in Los Angeles this year.
According to the presentation, CBP hopes to arm the already heavily militarized BORTAC with the ability to see through walls in order to “detect people within a structure or rubble.”
Another page of the document, listing the agency’s “Subterranean Portfolio,” claims CBP is preparing to lay an additional 2,100 miles of fiber optic cable along the northern and southern border in order to detect passing migrants, as part of a sensor network that also includes seismic, laser, visual, and cellular tracking.