The wall bleeds rust. If touched, the rust will stain your hand for the rest of the day. Except for the black part, painted to absorb the sun and become hot to the touch. More difficult to climb this way. A hulking structure that runs along El Camino del Diablo, the Devil’s Highway, this wall bisects the Sonoran desert, demarcating the present-day United States and Mexico. But it also hides a different border, largely invisible to the eye: a surveillance dragnet that has claimed thousands of lives.
The Walls Have Eyes: Surviving Migration in the Age of Artificial Intelligence is a global story—a dystopian vision turned reality, where your body is your passport and matters of life and death are determined by algorithm.
Technology at the border is being deployed by governments on the world’s most marginalized with little regulation. Borders are now big business, with defense contractors and tech start-ups alike scrambling to capture this highly profitable market.
A High-Risk Laboratory of Experimentation
The Walls Have Eyes is based on six years of research across the world’s borders. From the US–Mexico borderlands to Greece, Kenya, Poland, and Palestine, racism, technology, and borders intersect in cruel ways.
By gathering evidence and foregrounding the experiences of people on the move who are caught at the sharpest edges of innovation, the book argues that more and more people are getting caught in the crosshairs of an unregulated and harmful set of technologies touted to control borders and ‘manage migration,’ bolstering a multibillion-dollar industry.
From social media scraping to biometrics in refugee camps, drones at the border and even robo-dogs recently introduced by the Department of Homeland Security, to algorithms deciding who to deport and who to let in, technologies now augment every aspect of human movement across the planet. Many of these projects also increasingly rely on automation, whether through algorithmic decision-making or artificial intelligence to augment or replace human officers.
While various technological innovations swirl around us today, automation and robotics have been around for a while. Ismail al-Jazari, the Muslim inventor from Anatolia (present-day Turkey) whom some call the “father of robotics,” first started documenting his various inventions in the late twelfth century.
How would he feel about his work giving rise to robo-dogs and drones used to hunt people escaping persecution in the Middle East? Since the time of al-Jazari, and particularly in the decades since 9/11, there has been a proliferation of automated technologies increasingly used to manage migration, control borders, and exclude the so-called unwanted ‘Others.’
Those who build technology often present it as neutral, but it is always socially constructed. All technologies have an inherently political dimension, whether it be, for instance, about who counts and why, and they replicate biases that put certain communities at risk of harm. In an increasingly divided world, heeding where power accrues in the development and deployment of technology becomes paramount.
Exporting of “Battle Tested” Technology
Technology travels. It may be developed in one jurisdiction, exported to a second, repurposed in a third. And in the growing militarization of the border, defense and military applications are increasingly used at the border. Israel is at the center of much of the world’s surveillance technology.
Journalist Antony Loewenstein writes in his book The Palestine Laboratory: How Israel Exports the Technology of Occupation Around the World that Israel is one of the leading players in border surveillance and spyware, technology that is first normalized and tested on Palestinians and then sold to the rest of the world. For example, Israeli Elbit Systems towers are scattered throughout the Sonoran desert, while Heron drones used by the EU’s border force Frontex patrol the Mediterranean Sea.
Artificial intelligence is also used in war: technologies like Lavender and Gospel, which decide who lives and dies, fuel a tech-facilitated genocide in the Gaza Strip. On November 14, 2024, the UN Special Committee to investigate Israeli practices found Israel’s warfare methods in Gaza consistent with genocide, raising serious concerns about the use of AI-enhanced targeting systems:
“The Israeli military’s use of AI-assisted targeting, with minimal human oversight, combined with heavy bombs, underscores Israel’s disregard of its obligation to distinguish between civilians and combatants and take adequate safeguards to prevent civilian deaths.”
Achille Mbembe’s theory of ‘necropolitics’ comes to mind, both through Israel’s weaponization of technology in war and through the expansion of the carceral and border dragnet that has claimed thousands of lives, both at the US-Mexico border and along the fringes of Europe. 2024 was deemed the deadliest year on record for people on the move by the International Organization for Migration (IOM).
Necropolitics can be defined as a form of racial exclusion premised on the management of the bodies and, indeed, the very lives of populations deemed undesirable or less than human. Through these state practices predicated on centuries of colonial and imperial logics, certain deaths are made not only palatable but inevitable, now through increasingly automated decision-making that dictates who survives and who should be left to die.
And while states tout surveillance and new technologies as an effective deterrent to prevent migration, statistics do not bear this out. People in desperate situations do not stop coming because of surveillance. Instead, they are forced to take riskier routes through uninhabited terrain, leading, for example, to a dramatic increase in deaths at the US-Mexico border since the introduction of smart border technologies.
Move Fast and Break Things
States are not the only actors to pay attention to. In the growing and lucrative border industrial complex worth billions of dollars, it is the private sector that sets the stage for what counts as a ‘solution’ to the ‘problem’ of migration.
From large companies like Palantir and Anduril to smaller players that develop Taser-equipped drones to be tested out at the border, the private sector has tremendous normative power. This is why so much innovation in migration centers on robo-dogs and AI lie detectors at the border, not on using AI to identify racist border guards or to audit unfair immigration decision-makers.
In addition, very little regulation currently governs artificial intelligence generally or border technologies specifically. Even long-awaited efforts like the EU’s Artificial Intelligence Act, which entered into force in August 2024, do not go far enough to safeguard the rights of people on the move.
And the rights infringements are many. In addition to the erosion of the right to life, liberty, and security of the person when people die at borders to avoid surveillance or are placed in immigration detention because of an algorithm, privacy rights around sensitive personal data of refugees as well as equality and freedom from AI discrimination at the border are just some of the fundamental rights that experimental technologies breach.
Not to mention that technologically preventing people seeking safety infringes on the global right to asylum, an internationally protected right guaranteed by the 1951 Refugee Convention and its accompanying protocols, as well as the foundational principle of non-refoulement which guarantees that a person will not be forced to return to a place of persecution.
Yet border spaces continue to serve as viable testing grounds for these new technologies precisely because they are places where regulation is already deliberately limited and where an “anything goes” frontier attitude informs the development and deployment of surveillance at the expense of people’s lives.
The Power of Storytelling as an Act of Resistance
While based on six years of research and on-the-ground interviews with hundreds of people on the move, and undergirded by theory, The Walls Have Eyes ultimately foregrounds personal stories. Each chapter is bookended by ethnographic vignettes, profiling people like Mariam Jamal, a digital rights activist in Kenya, or Little Nasr, a teenage boy with scoliosis trapped in a Greek forest.
The book’s last word goes to Zaid Ibrahim, a Syrian refugee who survived multiple pushbacks at the Greek-Turkish border and years in the squalid Ritsona refugee camp before escaping to Germany to join his family. His story was collected and translated through WhatsApp messages; in a book that tries to highlight the human stakes of technological experimentation at the border, the last word must go to someone like Zaid.
Commitment to storytelling and story sharing is of course not without its pitfalls. With each conversation and growing relationship, issues of witnessing, extraction, and capture became more complex as I grappled with how best to convey the atrocities that continue to occur near shores and borders around the world without becoming part of the exploitative surveillance machine myself. Yet by infusing this book with as much of the lived experience of the people affected as possible, The Walls Have Eyes tries to illuminate the testing grounds of these dangerous technologies and bring them from the realm of the abstract to the realm of the real.
In order to tell this global story of power, violence, and technology, The Walls Have Eyes relies on a sometimes-uneasy mix of law and anthropology. It is a slow and trauma-informed ethnographic methodology, one which requires years of being present in order to begin unraveling the strands of power and privilege, story and memory, through which people’s lives unfold.
Sometimes an interview was not even an interview; rather, it was sharing a meal together on the floor of a container inside a refugee camp, or a silent walk along the seashore. This practice does not yield immediate insights, and indeed, researchers like me have often faced scrutiny about the efficacy and accuracy of our methods, or even the validity of spending such long stretches of time in repeat visits. But it is precisely through this slow unpicking that the real impacts of borders come to light.
Moreover, in the age of ecocide and growing environmental migration, all of us may in one way or another be affected by migration management technologies as we cross borders and move across the world. Surveillance is expanding, and responsibility over grave errors is dissolving. In a way, the abuse of technology is a leveler, encouraging us to move away from rigid binaries between “migrants” and everyone else.
While AI lie detectors may seem like a feature of a dystopian project somewhere far in the future, private companies are busy exhibiting their latest inventions, as Converus did in 2022 when it pitched its EyeDetect lie detector to the CIA. And in the spring of 2023, the New York Police Department proudly announced on Instagram that it would once again deploy robo-dogs for law enforcement.
Predictions about human behavior are already driving various analytics for border interventions, but what if predictive analytics were used to deny health insurance? What if, with ongoing environmental degradation making the planet less livable, analytics were used to dictate where one could live and how one could support a family?
The greatest impact is on traditionally marginalized communities such as refugees and asylum seekers. However, with the incoming Trump administration’s promises to deport millions with the aid of AI surveillance, and the Canadian government acquiescing to these political pressures by devoting $1.3 billion to border technologies, the ecosystem of migration management technologies affects us all.