The Increasing Use of Artificial Intelligence in Border Zones Prompts Privacy Questions

A U.S. government supercomputer. (Photo: U.S. Department of Energy)

The rise of artificial intelligence (AI) promises to streamline operations in sectors such as health care, human resources, and commerce by compiling huge amounts of data to better evaluate risks, improve predictions, and perform tasks far faster than humans could. The same is true for border management, where governments and technology advocates point to the potential of AI to help secure international borders more efficiently and, in some cases, more safely. In recent years, authorities, particularly in the United States and the European Union, have moved quickly to integrate “smart border” AI capabilities into their operations, heralding a potentially game-changing moment for governments’ ability to patrol their borders.

Border-focused AI technologies come in multiple forms and can include algorithms designed to evaluate travelers’ nuanced and almost imperceptible emotional expressions, biometric analysis of fingerprints and facial recognition, and scanner software that can differentiate humans from wildlife in remote border sections. Many of the systems derive from surveillance tools that have existed in some form for decades but have become increasingly automated so that computers—not human beings—make preliminary determinations about possible threats and how authorities should respond. Artificial intelligence promises to supercharge this surveillance, making tools more powerful and capable of processing and interpreting more data than in the past. Yet the rapid deployment of these technologies, which has often moved faster than legislative and other frameworks to regulate their usage, has also raised concerns about privacy and growing government surveillance of not just migrants and travelers but, at a larger scale, entire populations.

For instance, facial recognition technology has been rolled out globally in airports and other border zones. The Dubai International Airport in 2018 began piloting a “smart tunnel” that uses a system of 80 cameras to scan travelers’ faces and irises, allowing preregistered passengers to verify their identity in a matter of seconds without having to present passports or other documents. The system has since been expanded to more than 120 “smart gates” across the airport. Similar technologies have been unveiled at many airports in the United States and elsewhere, offering travelers a respite from the long security procedures that have come to define modern international travel.
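
To make the matching step concrete, the sketch below shows, in Python, the kind of one-to-many comparison a biometric gate might perform once a face or iris embedding has been extracted by an upstream model: the live embedding is compared against preregistered travelers and accepted only above a similarity threshold. The embeddings, traveler IDs, and threshold are illustrative assumptions, not details of the Dubai system or any other deployment.

```python
# Illustrative sketch only: a minimal 1:N biometric match of the kind a
# "smart gate" might perform, assuming embeddings are produced upstream.
# Names, threshold, and data below are hypothetical.
import numpy as np

MATCH_THRESHOLD = 0.75  # assumed acceptance threshold, tuned per deployment


def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def identify(live_embedding, gallery):
    """Return the ID of the best-matching preregistered traveler, or None."""
    best_id, best_score = None, -1.0
    for traveler_id, enrolled in gallery.items():
        score = cosine_similarity(live_embedding, enrolled)
        if score > best_score:
            best_id, best_score = traveler_id, score
    return best_id if best_score >= MATCH_THRESHOLD else None


# Example: two enrolled travelers represented by toy 4-dimensional embeddings.
gallery = {
    "traveler_001": np.array([0.9, 0.1, 0.3, 0.2]),
    "traveler_002": np.array([0.1, 0.8, 0.4, 0.3]),
}
print(identify(np.array([0.88, 0.12, 0.31, 0.19]), gallery))  # -> traveler_001
```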

But these types of systems have also raised concerns, most notably about individuals’ privacy. Critics have warned of the possibility of technology creep, in which systems pioneered for border zones slowly make their way into mainstream society, where they could be used to surveil the public at large. For instance, China, which has deployed artificial intelligence tools as part of its “zero-COVID” policy against the coronavirus, has faced increasing scrutiny over its surveillance and monitoring practices, which are likely to outlast the pandemic. Generally speaking, it has at times been unclear whether travelers have consented to giving biometric and other information to government authorities, or what rights individuals have in their still-evolving relationships with AI technologies.

In current practice, AI systems tend to be used as complements to border officials, allowing fewer personnel to monitor more territory and screen more migrants and other travelers in less time and at lower cost than might otherwise be possible. But the technologies have grown more advanced and are being designed for new functions, including recent efforts to algorithmically identify asymptomatic travelers infected with the novel coronavirus that causes COVID-19. As these developments progress, understanding how AI is used at international borders will be increasingly crucial, since its application affects not only travelers but also residents. This article reviews the use of AI systems to monitor borders in the United States and the European Union, focusing on detection technologies that make up the so-called smart border.
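
The targeted-testing effort cited above (Bastani et al. 2021) relied on reinforcement learning to decide which arriving travelers to test with a limited testing budget. The sketch below illustrates one standard bandit technique, Thompson sampling, that could drive such an allocation; the traveler groups, counts, and Beta-Bernoulli model are assumptions for illustration and do not reproduce the deployed system.

```python
# Minimal sketch of bandit-style test allocation, loosely inspired by the
# targeted-testing idea described above. The Beta-Bernoulli model, traveler
# groups, and counts are illustrative assumptions, not the deployed system.
import random

# Per-group counts of (positives, negatives) among travelers already tested.
history = {
    "group_A": (4, 96),
    "group_B": (12, 88),
    "group_C": (1, 49),
}


def choose_group_to_test(history):
    """Thompson sampling: draw a prevalence estimate per group, test the max."""
    best_group, best_draw = None, -1.0
    for group, (pos, neg) in history.items():
        draw = random.betavariate(pos + 1, neg + 1)  # Beta(1, 1) prior
        if draw > best_draw:
            best_group, best_draw = group, draw
    return best_group


print(choose_group_to_test(history))  # most often "group_B", given its higher positivity
```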

AI at U.S. Borders: A Digital Wall in the Making

The U.S. government has invested heavily in technological surveillance upgrades, some involving AI, along both its northern and southern borders. In fiscal year (FY) 2021, the Department of Homeland Security (DHS) received more than $780 million for technology and surveillance at the border, according to analysis by the advocacy groups Just Futures Law and Mijente. Homeland security interests have long pitched a vision of a “virtual wall”: an ocean-to-ocean network of drones, sensors, and other technologies that could detect illegal border crossers. Proponents contend such a system would be particularly helpful in stretches of remote and unsurveilled land between ports of entry. The idea has enjoyed bipartisan support, gaining steam under presidents of both parties, largely because of the notion that it would be more effective, less expensive, and less disruptive than physical barriers.

The George W. Bush administration launched an early and mostly unsuccessful automated surveillance program along the U.S.-Mexico border with its vision for a Secure Border Initiative Network (SBInet) that would integrate personnel, technology, and infrastructure to secure the border. About $1 billion had been spent on SBInet by the time the troubled project was canceled in 2011. But efforts have ramped up anew in recent years as technology has evolved. U.S. Customs and Border Protection (CBP) has deployed a system of autonomous surveillance towers that are expected to number 200 by the end of FY 2022, and which use a combination of radar, cameras, and algorithms to scan remote border areas and identify the source of movement. The solar-powered, 33-foot towers can communicate with each other to track objects that move out of range and can be easily packed up and moved to new locations as needed. Data from these towers, as well as from other sources such as cameras, drones, Light Detection and Ranging (LIDAR) laser systems, and infrared sensors, are fed into a system called Lattice, which provides instantaneous interpretation. The AI system has been trained to analyze an object’s movement to distinguish between a tumbleweed, a car, and a person, and to ignore animals and other false positives. When the system detects movement by people or vehicles, it alerts Border Patrol agents to follow up.
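
The internals of Lattice are proprietary, but the general pattern described above, fusing sensor tracks, classifying them, and alerting only on people and vehicles, can be sketched as follows. The Track fields, the rule-based classifier, and the thresholds are stand-in assumptions, not the actual system.

```python
# Hedged sketch of the pipeline described above: fused sensor tracks are
# classified, animals and wind-blown debris are filtered out, and only person
# or vehicle detections generate an alert. Not the actual Lattice pipeline;
# the Track type, classifier rules, and labels are illustrative assumptions.
from dataclasses import dataclass

ALERT_LABELS = {"person", "vehicle"}


@dataclass
class Track:
    track_id: int
    speed_kph: float       # estimated ground speed from radar returns
    size_m: float          # approximate object size from camera/LIDAR fusion
    heat_signature: bool   # whether infrared sensors detect body or engine heat


def classify(track):
    """Toy rule-based stand-in for a trained movement classifier."""
    if track.speed_kph > 25 and track.size_m > 2.5:
        return "vehicle"
    if track.heat_signature and 0.5 <= track.size_m <= 2.5 and track.speed_kph < 15:
        return "person"
    if not track.heat_signature and track.speed_kph < 40:
        return "debris"  # e.g., a tumbleweed pushed by wind
    return "animal"


def tracks_to_escalate(tracks):
    """Return the IDs of tracks that should be flagged to agents."""
    return [t.track_id for t in tracks if classify(t) in ALERT_LABELS]


tracks = [
    Track(1, speed_kph=4.0, size_m=1.7, heat_signature=True),    # likely a person
    Track(2, speed_kph=10.0, size_m=0.8, heat_signature=False),  # likely debris
]
print(tracks_to_escalate(tracks))  # -> [1]
```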

CBP has also used AI technology at the U.S.-Canada border. For instance, the agency has touted the Northern Border Remote Video Surveillance System (NBRVSS), a network of 22 sites with high-resolution cameras and radar systems outfitted with AI capabilities. CBP describes the system as able to detect and monitor vessels leaving the Canadian shoreline from miles away and to send a warning when a vessel enters certain areas, distinguishing “unusual vessel movements from ordinary traffic.” If a suspicious vessel is identified, a camera can reveal what it looks like and how many people are onboard, as well as capture its registration number for background checks.
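
A rough sketch of that kind of geofenced alerting appears below: vessel tracks are checked against a watch zone and a simple rule for out-of-the-ordinary movement. The zone coordinates, speed threshold, and heuristic are assumptions made for illustration and do not reflect CBP’s actual NBRVSS logic.

```python
# Illustrative sketch of geofenced vessel alerting, not CBP's actual system.
# Zone coordinates, thresholds, and the "unusual movement" rule are assumptions.
from dataclasses import dataclass


@dataclass
class VesselTrack:
    vessel_id: str
    lat: float
    lon: float
    speed_knots: float
    heading_deg: float  # 180 = due south, i.e., toward the U.S. shoreline here


# Hypothetical rectangular watch zone on the U.S. side of a boundary water.
WATCH_ZONE = {"lat_min": 44.90, "lat_max": 45.00, "lon_min": -83.50, "lon_max": -83.30}


def in_watch_zone(t):
    return (WATCH_ZONE["lat_min"] <= t.lat <= WATCH_ZONE["lat_max"]
            and WATCH_ZONE["lon_min"] <= t.lon <= WATCH_ZONE["lon_max"])


def unusual_movement(t):
    # Toy heuristic: fast, southbound crossings are treated as out of the ordinary.
    return t.speed_knots > 30 and 135 <= t.heading_deg <= 225


def alerts(tracks):
    return [t.vessel_id for t in tracks if in_watch_zone(t) and unusual_movement(t)]


print(alerts([VesselTrack("V-17", 44.95, -83.40, 35.0, 180.0)]))  # -> ['V-17']
```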

Supporters claim the NBRVSS enables agents to perform at a significantly higher capacity, overcoming possible staffing shortfalls while also increasing agents’ safety. That would be significant, given that border security guards quit at twice the rate of other law enforcement officers, often citing low morale and difficult working conditions. Allowing fewer agents to do more work would seem to better prepare the agency for a fluctuating workforce.

Civil Liberties Proponents Fear a Dragnet

Civil liberties and privacy groups have raised concerns that the use of AI technologies at U.S. borders, especially systems incorporating facial recognition and the use of drones, could infringe on the human rights of foreign and U.S. nationals. The border is essentially exempted from the U.S. Constitution’s Fourth Amendment protections against unreasonable stops and searches. CBP is also allowed to operate immigration checkpoints anywhere within 100 miles of the United States’ international border, an expanded border zone that includes areas in which approximately two-thirds of the U.S. population live.

Critics warn that the use of this technology could lead to endless surveillance and a vast, ever-growing dragnet, as technology deployed to patrol the border is also used by local police far into the U.S. interior. Local police in border communities, and in those far from the border, have been revealed to use facial recognition technology, cellphone-tracking “stingray” systems, license-plate cameras, drones, and spy planes, with immigration authorities sometimes sharing information with law enforcement for non-immigration purposes. According to flight logs, CBP flew nearly 700 surveillance missions on behalf of other law enforcement agencies between 2010 and 2012, some of them not directly related to border protection. During Black Lives Matter protests in Minneapolis in 2020 following the murder of George Floyd, a CBP Predator drone flew over the city and provided live video to authorities on the ground. Similar operations involving helicopters, airplanes, and drones also took place in 14 other cities, broadcasting about 270 hours of footage live to CBP control rooms. Critics’ concerns about the creep of these kinds of technologies from the border into the interior of the country have escalated in recent years as their use has become more widespread.

There is also evidence that the expansion of surveillance infrastructure, much of it bolstered by AI, leads to an increase in deaths by pushing migrants trying to cross illegally toward more remote and dangerous routes. Researchers have found evidence that surveillance systems can have a “funnel effect,” leading migrants to avoid areas where they might be detected and instead head to areas where they face increased risk of dehydration, hyperthermia, injury, and exhaustion.

In some areas these efforts have also received pushback from lawmakers and privacy advocates, including Canadian and Mexican groups that have raised issues with surveillance at their respective borders. The organizations have been especially worried about aerial surveillance conducted by balloons and drones, which they argue would also sweep up Mexican and Canadian citizens. They have also raised concerns that such surveillance, conducted by the United States, could constitute a violation of their countries’ sovereignty.

AI at EU Borders: Patrolling the Seas and Evaluating Expressions

Sea borders tend to be more difficult to patrol than land borders, so the European Union is particularly interested in technologies to monitor the Mediterranean. The area has been an issue of prime concern following the refugee and migration crisis of 2015-16, and leaders have since repeatedly rallied around Member States’ efforts to halt irregular crossings.

A RAND Europe study commissioned by Frontex, the EU border agency, and released in 2021 underscored this interest, finding that AI could potentially be used in five areas: situational awareness and assessment; information management; communication; detection, identification, and authentication; and training and exercise. The study also identified multiple potential barriers, including technological weaknesses; perceptions of high costs and commercial barriers; insufficient understanding and awareness of AI; a lack of skills and expertise; constrained access to relevant technologies; and potential ethical, human-rights, and regulatory issues. Still, the study struck an optimistic tone, framing these barriers as challenges that could be overcome.

Research into the area has been going on for years. One such effort was the four-year Roborder project, which concluded in August 2021. The nearly 8-million-euro effort was part of the EU Horizon 2020 initiative, which dedicated nearly 80 billion euros to boosting Europe’s research and innovation efforts. Key details about the project’s outcome remain classified, but it aimed to develop an AI-powered autonomous border surveillance system of unmanned mobile robots in the air, on the water, and on the ground, capable of operating independently and in swarms. The robots were outfitted with optical, infrared, and thermal cameras, as well as radar and radio-frequency sensors, to find signs of criminal activity along the sea and coasts. Cellphone frequencies were used to triangulate the location of suspected criminals, with cameras used to identify humans, guns, vehicles, and other objects.
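
Triangulating a radio emitter, as described above, can in its simplest form be reduced to intersecting bearing lines measured by two sensors at known positions. The sketch below shows that geometry on a flat local grid; the sensor positions, bearings, and flat-earth simplification are assumptions, not Roborder’s implementation.

```python
# Hedged sketch of bearing-based triangulation of a radio emitter, the general
# technique alluded to above. Positions are local x/y coordinates in meters and
# bearings are measured clockwise from north; all values are illustrative.
import math


def triangulate(p1, bearing1_deg, p2, bearing2_deg):
    """Intersect two bearing lines from known sensor positions.

    Returns the estimated emitter position, or None if the bearings are parallel.
    """
    # Unit direction vectors for each bearing (clockwise from north = +y axis).
    d1 = (math.sin(math.radians(bearing1_deg)), math.cos(math.radians(bearing1_deg)))
    d2 = (math.sin(math.radians(bearing2_deg)), math.cos(math.radians(bearing2_deg)))

    # Solve p1 + t*d1 = p2 + s*d2 for t via the 2x2 cross-product formulation.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None  # parallel bearings: no unique intersection
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])


# Two sensors 1 km apart both detect the same emitter to their northeast/northwest.
print(triangulate((0.0, 0.0), 45.0, (1000.0, 0.0), 315.0))  # approximately (500.0, 500.0)
```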

Notably, Roborder was conceived to detect environmental threats in addition to irregular migration and smuggling. In its first real-world demonstration, the AI technology successfully detected a simulated oil spill off the coast of Portugal by using flying and submarine drones that combined imaging with fluorimeter technology.

However, it is clear that unauthorized migration was the main target. In its two other pilot use cases, the system was tested on detecting illegal border crossings both at sea, around the Greek islands, and on land, in remote areas of Bulgaria’s borderlands. These use cases were based on recent real-world events: unauthorized migration in the Aegean Sea and a 2016 incident at the Hungarian-Serbian border in which border patrols were overwhelmed, a situation developers suggested Roborder might have helped manage.

For human-rights advocates, the potential future uses of Roborder and other AI systems raise concerns, especially given the muscular approach to migrants taken by the European Union and Member States, including alleged pushbacks at sea and on land. In the Mediterranean, EU aerial assets have been deployed to detect migrant boats from the skies and guide the Libyan Coast Guard to them, leading to the return of tens of thousands of people to Libya in moves that have been widely condemned. Such practices have raised alarms that AI surveillance could enable these operations on a far larger scale.

Can AI Detect a Lie? The Story of iBorderCtrl

iBorderCtrl, or iCROSS, was another Horizon 2020 project, running from September 2016 to August 2019 with an EU contribution of 4.5 million euros. The project was meant to speed and smooth border control for non-EU nationals arriving in the Schengen Area. It envisioned a two-stage procedure: pretravel registration involving a short interview with a digital avatar, and a second stage during travel to be performed by a portable unit that checked travel documents and employed facial recognition technology. Both phases would include AI lie detection tests. Like the Roborder project, iBorderCtrl was meant to complement the existing capacity of border control officers and speed processes.

Trials of the project ran for six months in 2018, but iBorderCtrl was never deployed for actual border checks. Questions asked by the AI lie detector included “What’s in your suitcase?” and “If you open the suitcase and show me what is inside, will it confirm that your answers were true?” Travelers answered facing a webcam, and the system analyzed and rated “microgestures” such as minor eyelid movements to determine whether they were lying. Those judged truthful were given a QR code to pass the border, while those flagged as suspicious had to provide biometric data such as fingerprints, palm vein scans, and facial matches before being passed to a human agent.
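
Setting aside the disputed science of microgesture analysis, the decision flow described above can be sketched as a simple score-and-route step: per-question scores from an upstream model are aggregated, and a threshold determines whether the traveler receives a QR code or is sent to secondary screening. The scores, aggregation method, and threshold below are hypothetical.

```python
# Sketch of the decision flow described above, not iBorderCtrl's actual (and
# heavily criticized) model: per-answer "deception" scores are averaged and a
# threshold routes the traveler. Scores, weights, and threshold are hypothetical.
from statistics import mean

SUSPICION_THRESHOLD = 0.5  # assumed cutoff; in practice such thresholds are contested


def route_traveler(answer_scores):
    """answer_scores: per-question scores in [0, 1] from an upstream microgesture
    model, whose scientific validity is itself disputed."""
    risk = mean(answer_scores)
    if risk < SUSPICION_THRESHOLD:
        return "issue_qr_code"              # cleared to proceed to the crossing
    return "secondary_biometric_screening"  # fingerprints, palm veins, human review


print(route_traveler([0.2, 0.1, 0.3]))  # -> issue_qr_code
print(route_traveler([0.7, 0.6, 0.8]))  # -> secondary_biometric_screening
```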

The project ignited a firestorm of criticism. European Parliament Member Patrick Breyer filed a lawsuit seeking the release of documents related to the project in March 2019; in December 2021, the court ruled that some documents not specifically related to iBorderCtrl must be published, although those related to its commercial prospects can remain classified. Opponents also described the system as inaccurate, producing flawed and incorrect results, with some experts suggesting that building a lie detector based on microgestures is fundamentally impossible. iBorderCtrl leaders acknowledged the criticism but argued that new technologies can improve the efficacy, accuracy, cost, and speed of border control, so long as fundamental rights are protected.

AI projects such as iBorderCtrl and Roborder have been criticized by groups arguing that the European Union has for decades been working towards securitizing and militarizing its borders as part of a growing “Fortress Europe.” They contend that these technologies are part of a wider trend that could be supercharged by AI and big data to create tragic costs for migrants and asylum seekers.

Technology Outpaces Regulation

Despite AI’s rapid expansion into border zones and fast uptake by border control agencies, regulations and guidelines for its deployment have been slower to evolve. In April 2021, the European Union proposed the first-ever comprehensive legal framework for AI, in an attempt to regulate the technology before it becomes even more mainstream. Crucially, the proposal for harmonized rules specifically mentions AI systems in migration, asylum, and border control, noting that these processes can affect particularly vulnerable people. It states that ensuring the accuracy, nondiscriminatory nature, and transparency of AI systems is especially important to protecting the rights of vulnerable populations. The draft regulation therefore classifies the use of AI systems in migration management as “high risk,” especially technologies such as polygraphs, risk assessments, document verification, and tools for reviewing applications for immigration status.
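
In code terms, the draft’s treatment of migration-related uses amounts to a lookup from use case to risk tier, as in the simplified sketch below. The category names paraphrase the proposal’s examples, and the tiers and function are illustrative assumptions rather than the regulation’s legal text.

```python
# A simplified, illustrative encoding of the risk-tier logic described above,
# not the regulation itself. Category names paraphrase the draft's migration-
# related high-risk uses; the lookup and tier labels are assumptions.
HIGH_RISK_MIGRATION_USES = {
    "polygraph_or_emotion_detection",
    "individual_risk_assessment",
    "travel_document_verification",
    "asylum_visa_or_residence_application_review",
}


def risk_tier(use_case):
    if use_case in HIGH_RISK_MIGRATION_USES:
        return "high_risk"          # subject to the draft's strictest obligations
    return "assess_separately"      # other tiers (prohibited/limited/minimal) not modeled here


print(risk_tier("polygraph_or_emotion_detection"))  # -> high_risk
```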

This approach could mark a turn from previous EU projects such as Roborder and iBorderCtrl. However, experts have pointed out oversights, including a lack of rules that would affect major technology companies and insufficient focus on the people affected by AI systems. Human Rights Watch has called attention to significant exemptions for law enforcement and migration control authorities from requirements to disclose how technologies work. And although the legal framework was viewed by many as path-breaking, it excludes migrants from protections afforded to EU citizens. Still, the proposed regulation was broadly lauded as a welcome and necessary step that could become a model globally.

The United States has yet to release a similarly comprehensive framework, though there are signals from the Biden administration that AI regulation is taking shape. In October 2021, key White House staffers published an op-ed in Wired calling for a tech “bill of rights” to guard against faulty and harmful uses of AI, and revealed that the White House Office of Science and Technology Policy was developing principles to guard against misuse of powerful technologies. The op-ed pointed out that the failings of AI may be unintentional but can disproportionately affect marginalized individuals and communities. The following month, Lynne Parker, director of the National Artificial Intelligence Initiative Office, said the United States should model its approach to AI regulation on Europe’s.

If border zones serve as a testing ground for AI technologies, there is reason for even native-born publics to be mindful of how these tools are developed for border control. Migrants, refugees, and other people on the move are often thought of as the “other,” but evaluating how they are affected by AI systems has ramifications not only for their own wellbeing but also for societies more broadly. Although travelers and migrants are often legally afforded very different rights than residents or citizens, civil liberties and privacy advocates have raised legitimate worries about the possible creep of technologies from the border. Ambiguity about the limits of border zones and the expanding use of AI are matters of serious concern. As promising as advanced technologies may be in speeding travel, halting smuggling, and identifying environmental disasters, they may also have serious unforeseen ramifications that cannot be ignored.

This article is the product of the author’s research and any opinions reflected therein are entirely her own. They do not represent the opinion of any organizations with which she may be affiliated or organizations she was affiliated with in the past.

Sources

Accenture. N.d. Borders in the Era of AI. Accessed December 23, 2021. Available online.

Agence France Presse. 2020. US Installing AI-Based Border Monitoring System. Agence France Presse, July 2, 2020. Available online.

Alarm Phone, Borderline Europe, Mediterranea – Saving Humans, and Sea-Watch. 2020. Remote Control: The EU-Libya Collaboration in Mass Interceptions of Migrants in the Central Mediterranean. N.p.: Alarm Phone, Borderline Europe, Mediterranea – Saving Humans, and Sea-Watch. Available online.

Anduril Industries. 2021. President Biden Demanded “High-Tech Capacity” for Border Security. Anduril’s Towers Are Delivering It. Blog post, September 16, 2021. Available online.

Bastani, Hamsa et al. 2021. Efficient and Targeted COVID-19 Border Testing via Reinforcement Learning. Nature 599 (7883): 108-13. Available online.

Begault, Lucien. 2019. Automated Technologies at EU Borders and the Future of Fortress Europe. Euronews, March 3, 2019. Available online.

Billington, Francesca. 2021. Anduril Industries Is Getting Hundreds of Millions to Build Border Surveillance Tech. dot.LA, July 17, 2021. Available online.

Breyer, Patrick. 2021. Transparency Lawsuit against Secret EU Surveillance Research: MEP Patrick Breyer Achieves Partial Success in Court. Press release, December 16, 2021. Available online.

Broadbent, Meredith and Sean Arrieta-Kenna. 2021. AI Regulation: Europe’s Latest Proposal is a Wake-Up Call for the United States. Commentary, Center for Strategic and International Studies, May 18, 2021. Available online.

Campbell, Zach. 2019. Swarms of Drones, Piloted by Artificial Intelligence, May Soon Patrol Europe’s Borders. The Intercept, May 11, 2019. Available online.

Chambers, Samuel Norton, Geoffrey Alan Boyce, Sarah Launius, and Alicia Dinsmore. 2021. Mortality, Surveillance and the Tertiary “Funnel Effect” on the US-Mexico Border: A Geospatial Modeling of the Geography of Deterrence. Journal of Borderlands Studies 36 (3): 443-68.

Deahl, Dani. 2018. The EU Plans to Test an AI Lie Detector at Border Points. The Verge, October 31, 2018. Available online.

Deloitte. N.d. The Age of With: The AI Advantage in Defence and Security. N.p.: Deloitte. Available online.

European Union. 2020. Intelligent Portable Border Control System. Last updated October 22, 2020. Available online.

---. 2021. Horizon 2020: Autonomous Swarm of Heterogeneous RObots for BORDER Surveillance. Last updated December 1, 2021. Available online.

---. N.d. Horizon 2020: What Is Horizon 2020? Accessed December 23, 2021. Available online.

Frontex. 2021. Artificial Intelligence-Based Capabilities for the European Border and Coast Guard. Warsaw: Frontex. Available online.

Fussell, Sidney. 2019. The Endless Aerial Surveillance of the Border. The Atlantic, October 11, 2019. Available online.

Gallagher, Ryan and Ludovica Jona. 2019. We Tested Europe’s New Lie Detector for Travelers — and Immediately Triggered a False Positive. The Intercept, July 26, 2019. Available online.

Ghaffary, Shirin. 2020. The “Smarter” Wall: How Drones, Sensors, and AI Are Patrolling the Border. Recode, February 7, 2020. Available online.

Glick, Molly. 2021. Airports Are Embracing Facial Recognition. Should We Be Worried? Discover, November 20, 2021. Available online.

Guardian Editorial Board. 2021. The Guardian View on Fortress Europe: A Continent Losing Its Moral Compass. The Guardian, August 1, 2021. Available online.

Human Rights Watch. 2021. How the EU’s Flawed Artificial Intelligence Regulation Endangers the Social Safety Net: Questions and Answers. Blog post, November 10, 2021. Available online.

iBorderCtrl. N.d. Home. Accessed December 23, 2021. Available online.

Kanno-Youngs, Zolan. 2020. U.S. Watched George Floyd Protests in 15 Cities Using Aerial Surveillance. New York Times, June 19, 2020. Available online.

Kerr, Dara. 2019. Drones, Sensors and AI: Here's the Tech That's Being Used at the Border. CNET, July 1, 2019. Available online.

Koscak, Paul. N.d. Artificial Intelligence Turns the Tide on Securing Northern Border Waterways. U.S. Customs and Border Protection blog post. Available online.

Lander, Eric and Alondra Nelson. 2021. Americans Need a Bill of Rights for an AI-Powered World. Wired, October 8, 2021. Available online.

Lennon, Will. 2021. The Virtual Wall: Documents Show CBP Plans for Surveillance Towers at US-Mexico Border. ShadowProof, April 8, 2021. Available online.

Lipowicz, Alice. 2011. Boeing’s SBInet Contract Gets the Axe. Washington Technology, January 14, 2011. Available online.

Lomas, Natasha. 2021. ‘Orwellian’ AI Lie Detector Project Challenged in EU Court. TechCrunch, February 5, 2021. Available online.

MacCarthy, Mark and Kenneth Propp. 2021. Machines Learn that Brussels Writes the Rules: The EU’s New AI Regulation. Lawfare, April 28, 2021. Available online.

Mijente and Just Futures Law. N.d. Factsheet: The Dangers of a Tech Wall. Accessed December 23, 2021. Available online.

O’Brien, Matt. 2021. White House Proposes Tech ‘Bill of Rights’ to Limit AI Harms. Associated Press, October 8, 2021. Available online.

Reilly, Dan. 2021. White House A.I. Director Says U.S. Should Model Europe’s Approach to Regulation. Fortune, November 10, 2021. Available online.

Roborder. N.d. Home. Accessed December 23, 2021. Available online.

Satariano, Adam. 2021. Europe Proposes Strict Rules for Artificial Intelligence. New York Times, April 21, 2021. Available online.

sUAS News. 2020. European Project ROBORDER Tests the Use of Unmanned Systems for Sea Pollutant Discharge Detection and Management. sUAS News, December 22, 2020. Available online.

Sussman, Heather, Ryan McKenney, and Alyssa Wolfington. 2021. U.S. Artificial Intelligence Regulation Takes Shape. Insight, Orrick, November 18, 2021. Available online.

U.S. Customs and Border Protection (CBP). 2020. CBP’s Autonomous Surveillance Towers Declared a Program of Record along the Southwest Border. Press release, July 2, 2020. Available online.

Whitlock, Craig and Craig Timberg. 2014. Border-Patrol Drones Being Borrowed by Other Agencies More Often than Previously Known. Washington Post, January 14, 2014. Available online.

Wodinsky, Shoshana. 2018. Palmer Luckey’s Border Control Tech Has Already Caught Dozens of People. The Verge, June 11, 2018. Available online.