We are Black and border guards hate us. Their computers hate us too.
Adissu, living without immigration status in Brussels, Belgium

If we want different or better or more just futures and worlds, it is important to notice what kind of knowledge networks are already predicting our futures.
Katherine McKittrick, Dear Science and Other Stories
Unlike many books on borders, we have intentionally devoted very little space in these pages to the border as a physical entity, or to spectacular sites of violence such as Ceuta and Melilla, the Mediterranean, the English Channel, and camps like those in Lesvos, Turkey, Arizona, or Manus and Nauru. This is partly because we are keen to avoid normalising the slow violence of exclusionary immigration controls by emphasising only their most extreme manifestations. But it is also because we believe that, by focusing too narrowly on the physicality of the militarised hard border – in emphasising fences, walls and camps – we miss something important. For us, describing borders as hardening – calling the EU a ‘fortress’, for example – only captures part of what borders do in the world.1
Borders are increasingly mobile, flexible, virtual and externalised. States surveil migrants from the skies, their unmanned aerial vehicles scanning the deserts and the seas, far from national borders. Algorithms make enforcement decisions – who to detain, who to deport – on the basis of vast amounts of data, traces of information on people and things harvested from social media accounts, financial transactions and innumerable government databases. These state practices are being tested, trialled and rolled out as we write. What are the possible futures for immigration control? How are they emerging in the present? And what modes of resistance will be needed against tools that are not yet consolidated?
As ever, thinking through what these technologies mean for migrants and borders leads us to consider the shape of the states that wield them, and new forms of algorithmic knowledge and control more broadly. In particular, the nexus between governments and private companies in this area demands renewed attention. Companies like Palantir, Amazon, and IBM stand to make lots of money out of ‘high-tech’ bordering. Authoritarian, anti-immigrant governments are proving ideal customers. These corporations must be a site of protest, intervention and action for those committed to dismantling borders in the twenty-first century.
ROBORDER and Drone Police
In this 21st century, we have challenges, and I think we can use 21st-century solutions instead of a 14th-century solution called the wall . . . Even if you put in a fence, ‘bad guys’ can use drones to carry drugs over that fence. So we have to be more flexible, more agile.
Henry Cuellar, Democratic US Representative, Texas
As part of its Research and Innovation Programme, Horizon 2020, the EU is financing a project to develop drones that will be piloted by artificial intelligence and will, it hopes, autonomously patrol Europe’s borders. The drones – including quadcopters, small planes, ground vehicles, submarines and boats – will operate in swarms, identifying humans, weapons and vehicles, and sharing information on ‘targets’. When the AI independently identifies people who may have committed a crime, it will notify border police. ‘The system will be equipped with adaptable sensing and robotic technologies that can operate in a wide range of operational and environmental settings’, the ROBORDER website explains, employing state-of-the-art developments in radar, thermal and optical cameras, and radio-frequency sensors to determine threats along the border.2 As ROBORDER’s technical manager explains, ‘The main objective is to have as many sensors in the field as possible to assist patrol personnel.’3
With ROBORDER’s pilot schemes nearing completion, campaigners have raised concerns about the future uses of these technologies: their possible deployment for military purposes and their sale to various states outside Europe. There is also concern that these automated surveillance drones might in the future be weaponised – enlisted not only to surveil migrants but to stop them. After all, there are already several weaponised drones on the market – flying robots armed with tasers, pepper spray, rubber bullets or live rounds, and missiles – and it has been noted that facial-recognition technologies might easily be added to the ROBORDER system at a later date.
These automated and semi-automated surveillance technologies are also being rolled out at border walls. In 2016 it was reported that the Turkish authorities were adding weapons to their smart border posts along the border with Syria: the AI would warn anyone within 300 metres of the border that they would be shot if they did not leave the area, before opening fire if its orders were ignored. As is often the case with press releases about new tools, the towers’ capabilities were subsequently described in less lurid, but still alarming, terms: ‘equipped with various types of surveillance paraphernalia and connected to equipment that will detect any sort of irregular movement across the green border’.4
Meanwhile, Saudi Arabia has built thousands of miles of border wall, both at the northern border with Iraq and the southern border with Yemen, employing watchtowers with night-vision and radar cameras. In Israel, border walls are taken to new extremes and densities, with plans to build reinforced concrete walls underground to prevent tunnelling at the West Bank, Gaza, and the northern borders with Lebanon and Syria. The Israeli state has long used smart technology, motion sensors and aerial surveillance at all its borders, and the southern border with Egypt has now been fortified with a 150-mile-long smart fence with observation towers, cameras, radar, motion detectors, barbed wire and twenty-four-hour monitoring – primarily in response to concerns about ‘illegal immigration’.
The US government offers perhaps the biggest market for border security tools, and for Customs and Border Protection (CBP) ‘smart tech’ is becoming increasingly important. Various Silicon Valley companies are pitching their AI drones to CBP, which has expressed interest in equipping drones with facial-recognition technology.5 However, while drones may work well for targeted surveillance, they are not well suited to monitoring wide stretches of land over longer periods of time. The US government wants to implement a ‘virtual’ border wall, and companies like Anduril and Google are promising to deliver the tech solutions. Borders are increasingly intended to be invisible.
On its website, Anduril boasts of ‘cutting-edge hardware and software products that solve complex national security challenges for America and its allies’ (Peter Thiel, who set up Palantir, is one of Anduril’s key investors).6 Like ROBORDER, the Anduril virtual wall system relies on autonomous helicopter drones operated in conjunction with sentry towers, using high-tech cameras, radar antennae, lasers and other sophisticated sensors to detect unauthorised entry. Anduril’s AI software then processes all this data, automatically flagging suspicious-looking vehicles and people to border agencies. Google’s Cloud technology will be used in tandem with Anduril’s software, and it appears that Google’s AI will be used to train the algorithm in object recognition, which will assist with detecting and categorising people and objects from images and video files. Despite protests from tech workers at Google over previous contracts with the Pentagon, it appears the company is now committed to servicing and profiting from the US border industry.
AI Border Agents, New Biometrics and Interoperable Databases
In 2018, the EU announced that it was piloting iBorderCtrl, a computerised lie-detection test for travellers seeking entry to Europe. According to the project coordinator, ‘iBorderCtrl’s system will collect data that will move beyond biometrics and on to biomarkers of deceit’.7 Graciously, the animated AI border guard customises itself to the traveller’s gender, ethnicity and language, to put applicants at ease as the software analyses thirty-eight of their facial micro-gestures to ascertain honesty and deceit:
‘What is your surname?’
‘What is your citizenship and the purpose of your trip?’
‘If you open the suitcase and show me what is inside, will it confirm that your answers were true?’
The animated border agent asks these questions while, through your laptop camera, the AI scans your face and eye movements for apparent signs of lying. At the end of the interview, you are provided with a QR code that you must present when you arrive at the physical border (iBorderCtrl was piloted at airports in Hungary, Latvia and Greece). After a customary passport check, facial scan and fingerprinting, you may proceed if the AI thinks you are telling the truth (have a nice trip!). However, if the AI border guard judges that you have lied in your interview (what really is in your suitcase?), then your lie-detection score flags you as high risk, the human at the gate is notified, and you may be subject to further inspection and questioning.
Perhaps unsurprisingly, the technology does not work very well. The notion that you can measure whether a person is lying from facial micro-gestures is not borne out by the evidence. In any event, iBorderCtrl is just one example of experimentation with new biometrics at the border – new ways of measuring the body and attempting to establish the truth of identity and risk via biometric traces. The Canadian authorities have installed border-screening ‘emotion-recognition’ kiosks at airports, and the German authorities are experimenting with so-called ‘voice-printing’ technologies to determine where asylum-seekers really come from.
These new biometric technologies, which claim to measure voices, faces, emotions and intentions, are supposed to help states to screen, filter, and adjudicate more effectively, as they attempt to restrict irregular migration and identify security threats in an increasingly mobile world. In this way, they promise to make processes of identification, exclusion and expulsion more efficient and effective. In almost every case, it is questionable whether the underlying quality these technologies claim to measure is indeed identifiable and measurable in the way that tech developers suggest. But for governments looking for a pretext upon which to identify and exclude, these fundamental questions are of little concern.
New biometric technologies have been most widely discussed in relation to facial recognition. Police forces in the UK have been especially enthusiastic about rolling out automatic facial recognition in public and quasi-public places – shopping centres, festivals, concerts, sports and community events, and political demonstrations – and have been collaborating with researchers on a live facial-recognition project that could identify people wearing masks or other face coverings.8 Several campaign groups have focused on the issue of racial bias in facial-recognition technologies – pointing out that black people are more likely to be wrongly identified, for example. This might be true, but it raises the obvious question: Would it be better if black people were more accurately identified? Complaints about racial bias seem likely to end in improvements to the tech, rather than the prevention of its use altogether. Further, it is not clear how effective arguments on racial bias are when it comes to new biometrics deployed at borders, or indeed in the context of war. Given that war and borders produce race and the racist world order, the concepts of bias and discrimination cannot do all the work we require of them.
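The bias claim is, at bottom, a claim about unequal error rates, and it is worth seeing how simple the underlying measurement is. The sketch below is purely illustrative – the audit records, group labels and figures are invented – but it shows the calculation that such audits rely on: the share of genuinely non-matching faces that the system wrongly declares a match, computed separately for each group.

```python
# Illustrative only: per-group false-match rates from an invented audit
# of facial-recognition decisions. No real system or dataset is used.
from collections import defaultdict

# Each record: (demographic_group, system_declared_match, actually_same_person)
audit_records = [
    ("group_a", True, True),
    ("group_a", True, False),   # a false match
    ("group_a", False, False),
    ("group_a", False, False),
    ("group_b", True, True),
    ("group_b", True, False),   # a false match
    ("group_b", True, False),   # another false match
    ("group_b", False, False),
]

def false_match_rates(records):
    """Share of genuinely non-matching pairs wrongly accepted, per group."""
    non_matching = defaultdict(int)
    false_matches = defaultdict(int)
    for group, declared_match, same_person in records:
        if not same_person:
            non_matching[group] += 1
            if declared_match:
                false_matches[group] += 1
    return {group: false_matches[group] / non_matching[group] for group in non_matching}

print(false_match_rates(audit_records))
# {'group_a': 0.33..., 'group_b': 0.66...} - unequal error rates across groups
```

Disparities of this kind are real and measurable; our argument is simply that measuring and narrowing them does not touch the deeper question of what the technology is for.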
This point is important, because biometrics are often trialled in sites of war and humanitarian disaster. As Privacy International reports, biometric data-collection in the name of ‘countering terrorism’ has been accelerating around the globe since 9/11, with little or no regulation or safeguards. The US military has constructed vast biometric databases in Iraq and Afghanistan, ostensibly to distinguish insurgents and terrorists from the local civilian population. Already by 2011, it was estimated that the US military had gathered the digital fingerprints, facial images and iris scans of roughly 1.5 million Afghans and 2.2 million Iraqis.9 Today in Iraq, there are over 100 mobile biometric checkpoints, where over 1 million people have their fingerprints checked every day.10
Meanwhile, following the change of regime in Afghanistan in 2021, the Taliban seized US military biometrics devices. While it was unclear how much biometric data was available to the Taliban, concerns emerged about the possibility that such devices would facilitate revenge attacks against those who had worked with US forces.11 This case raises urgent questions about the limits of demanding procedural safeguards and purpose limitation, rather than outright bans, in response to the proliferation of surveillance technologies. Without claiming that these systems are somehow more dangerous in the hands of the Taliban than the US military, the episode reminds us that it is not always possible to defend against the ways in which systems might be used by different actors in the future – especially in the context of war.
Privacy International also discusses the Israeli state’s use of cutting-edge facial-recognition technology, which in the name of counter-terrorism routinely surveils and severely restricts Palestinians’ freedom of movement, as well as biometric initiatives by various international actors in Somalia that have had dubious benefits and detrimental effects on local populations.12 The apparent consensus on the need to ‘counter terrorism’ makes it much easier for states and tech and defence companies to experiment with mass biometric databases, and to build surveillance infrastructure.
We know that biometric data must be stored, classified and made accessible, and therefore new biometric technologies cannot be understood in isolation from the massive interoperable databases we described in Chapter 6. The EU is now introducing the European Travel Information and Authorisation System, which pre-screens travellers from visa-exempt countries. Statewatch usefully reminds us that ‘this data will not just be used to assess an individual’s application, but to feed data mining and profiling algorithms’.13 In other words, our data are not only used to identify us as individuals, but to train the algorithms that will decide the class of people to which we belong: who to let through and who to flag as high-risk in the future. Algorithms are thus about much more than identifying individuals more accurately.
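To make the distinction concrete, here is a deliberately minimal sketch of the kind of risk-scoring pipeline such systems are reported to rely on. Everything in it is invented for illustration – the feature names, the data, the threshold – and it bears no relation to ETIAS’s actual design. What it shows is that the output is not ‘who is this person?’ but ‘which class of past profiles does this applicant resemble, and how risky has the state judged that class to be?’

```python
# Illustrative only: a toy risk-profiling pipeline. Features, data and
# threshold are invented; this mirrors no real border system.
from sklearn.linear_model import LogisticRegression

# Historical application records, labelled by past enforcement outcomes.
# Columns: [prior_visa_refusals, prior_overstays, flagged_associates]
past_applicants = [
    [0, 0, 0],
    [1, 0, 0],
    [2, 1, 1],
    [0, 1, 0],
    [3, 1, 2],
    [0, 0, 1],
]
past_outcomes = [0, 0, 1, 0, 1, 0]   # 1 = later recorded as an 'overstayer'

# 'Training' means fitting the model to these past traces.
model = LogisticRegression().fit(past_applicants, past_outcomes)

# A new applicant is not identified here; they are scored by how closely
# their data traces resemble the class of profiles labelled risky above.
new_applicant = [[1, 1, 0]]
risk_score = model.predict_proba(new_applicant)[0][1]

print(f"risk score: {risk_score:.2f}")
if risk_score > 0.5:   # the threshold is a policy choice, not a property of the data
    print("flag as high risk for secondary inspection")
```

Note that nothing in such code decides what counts as a ‘risky’ label, or which traces are harvested in the first place; those choices are made upstream, by the institutions described in this chapter.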
Meanwhile, in the United States, the Department of Homeland Security (DHS) will soon require everyone applying for any visa or immigration status, as well as their citizen sponsors, to provide several forms of biometric data to the US government. The plan is to collect more types of biometric data on more people – fingerprints, iris scans, voice prints, and in some cases DNA samples – and to make searching and matching these biometric traces and profiles easier and more efficient. This forms part of the Homeland Advanced Recognition Technology framework, which is the newest iteration of DHS’s automated biometric identification systems. Huge defence conglomerates have been the main beneficiaries of these contracts, including BAE and Northrop Grumman. Like much of the federal government’s data infrastructure, this new biometric identification system will be hosted on Amazon Web Services. The development of this system is especially troubling, given that the DHS ‘is known as a uniquely opaque and privacy-averse domestic law enforcement and surveillance apparatus’.14
That being said, we have to recognise that much of what we know about border technologies comes from sales pitches and press releases written by the people who hope to sell and use them. Within an overall logic of deterrence, these spectacular stories play an important role. Yet it is important to guard against assuming that these technologies function as their manufacturers intend them to – especially as it is in the interests of both buyers and sellers to conceal any failures and malfunctions. Indeed, we do the work of tech solutionists when we buy into and repeat stories of ruthlessly efficient, dystopian and all-seeing technologies. This is important because, in some circumstances, it can be strategically useful to demonstrate that technologies do not and cannot work as they claim to. Of course, there is a danger that the solution then becomes improvement of the technology itself. However, when campaigning for radically different policies, it can be useful to point out the empty promises and misguided imaginaries of the tech solutionists.
We need to be alive to the harms produced by the specific ways in which these tools do not work, as well as those produced when they do. If you are identified by a facial-recognition system as someone you are not, or marked out as lying by an AI border tool when you are telling the truth, this brings its own set of negative consequences, just as when the tool is ‘correct’. While we might not want minoritised groups to be included more effectively in training datasets for these technologies, the harms of the tools not working also produce racist outcomes. Ultimately, of course, we do not want these technologies to work for these purposes for anyone. The point is that basing our arguments solely on whether technology works or not is rarely as effective as it can seem to be, and we need to proceed with nuance and care.
Profit and Prediction
The core mission of our company always was to make the West, especially America, the strongest in the world, the strongest it’s ever been, for the sake of global peace and prosperity, and we feel like this year we really showed what that would mean.
Alex Karp, CEO of Palantir, January 2020
Immigration authorities around the world face one persistent challenge: how to identify and locate unwanted migrants. After all, ‘illegal immigrants’ are not easy to identify; they are our friends, colleagues, neighbours and classmates, and while they might be more likely to live in particular neighbourhoods or work in particular jobs, they do not lead segregated lives. For US Immigration and Customs Enforcement, this presents a problem: how to decide on ‘targets’ – which homes and establishments to raid.
Palantir’s Investigative Case Management (ICM) system offers the solution. It gathers vast amounts of data from state and federal law-enforcement agencies, various government databases (on visa and visitor entrants, for example), social media websites, utilities and banking data, both historical and live phone and text monitoring, commercial data surveillance and commercial licence-plate-reader data, providing access to over 5 billion data points for physically tracking individuals. Homeland Security Investigations then use this data to build profiles of individuals and their associations.
Meanwhile, ICE enforcement agents can access Palantir’s FALCON analytical software on mobile devices, helping them identify targets for raids and build up profiles and intelligence reports, with real-time technical support from Palantir support staff embedded within ICE facilities in northern Virginia. It was revealed in May 2019 that Palantir’s ICM system was being used to track down over 400 family members of migrant children, while both FALCON and ICM were providing crucial infrastructure in support of intelligence, surveillance and raids. These systems rely on vast amounts of data that must be stored somewhere, and Amazon Web Services supports Palantir by running its software on the Amazon cloud service.15
The US government now spends more on border and immigration control than all other federal law enforcement agencies combined. Budgets rose from $350 million in 1980 to $1.2 billion in 1990 – then to $9.1 billion in 2003 and $23.7 billion in 2018. This astronomical growth in funding has supported an increasingly militarised border force, representing a huge expansion in the capacity to detain and deport, and – most relevant for our purposes here – the expansion of high-tech bordering tools, including cameras, aircraft, motion sensors, drones, video surveillance, biometrics and software tools for managing data and identifying enforcement targets. All of this has generated enormous profits for technology and security firms, private prison providers, and global arms companies.16
While the largest contractors remain well-established defence companies – Raytheon, Lockheed Martin, Northrop Grumman, General Dynamics, Boeing – tech corporations providing digital platforms and analytics are becoming increasingly central to border and immigration enforcement: companies like IBM, Google, Amazon Web Services, Microsoft and Palantir. When a company like Palantir provides services to ICE, it equips them with the digital infrastructure to track, surveil and identify immigrants in new ways. Indeed, by identifying targets for immigration enforcement, these companies and their algorithms effectively wield sovereign power.17
Palantir gained its first contract with the US military using new tech to predict the location of IEDs in Afghanistan, before developing predictive policing technology as a contractor for the US in Iraq – technology that was subsequently used by police forces in the United States, primarily against the racialised poor.18 Palantir now has dozens of active contracts with the US federal government, worth at least $1.5 billion. During the Covid-19 pandemic, the company secured a contract with the UK’s NHS, raising widespread concerns about access to confidential patient data and integration with systems of entitlement-checking and exclusion under the ‘hostile environment’ policy – especially given that Palantir also has a customs contract with the UK government. Palantir is especially controversial because its co-founder and key investor, Peter Thiel, is a far-right libertarian who supported Donald Trump, co-wrote a book called The Diversity Myth, and continues to invest in and associate with various white nationalists and the alt-right.
Given that companies like Palantir evince no shame, only pride, in their work for the security state, it seems unlikely that they will succumb to public pressure and campaigns – although state decision-makers, who are obliged to weigh a range of different public policy considerations, are more likely to do so. However, interrogating companies like Palantir, analysing their software, contracts and political affiliations, at least makes visible the shady workings of new tech at the border. This can then open up a wider conversation about algorithms, predictive analytics and automated decision-making, helping to identify possible interventions for campaigners and tech workers.
This is important, because predictive analytics and algorithms are not the sole preserve of authoritarian right-wing governments. These tools are also employed by ostensibly liberal governments that want to ‘manage migration’ more effectively and make optimum decisions about who to grant access and rights. The Canadian authorities, for example, have been looking for artificial intelligence solutions that can assist immigration officials in deciding on humanitarian and compassionate applications, as well as pre-removal risk assessments – both of which are used as a last resort by immigrants seeking to remain in Canada and resist deportation.19 Countries including Switzerland and the UK have been trialling algorithms to select refugees for resettlement. Similar computational tools are used by ICE to make decisions on who to detain, by police and corrections departments making decisions on criminal sentencing, parole and release in various countries, and by welfare authorities in countries like the Netherlands, which are concerned about benefit and tax fraud – all of which rely on processing huge amounts of data to produce risk profiles and predict outcomes. Algorithms promise solutions to problems surrounding irregular immigration, recidivism and ‘welfare abuse’, and governments seem only too willing to sign up for them.
It is important to note that the algorithms that make decisions about national security (who should be stopped at the airport and which container should be inspected?), immigration (who should be granted a visa and who should be detained?) and policing and prisons (where should we patrol and who should we parole?) have origins in less overtly coercive applications. Similar kinds of computation underlie the modelling that provides evidence on climate change, for example, or predicts the shape of proteins in the quest to learn more about cells, genes and infectious diseases. They are unlikely to be abandoned wholesale, and it is therefore vital to pursue the aim of limiting their permitted application.
Moreover, none of us stands outside these systems, even if individually we feel secure in our status as low-risk travellers, trusted borrowers or law-abiding citizens. We may feel insulated from coercive state power, but our data – innumerable traces of our lives harvested from online profiles, financial transactions, travel histories, associations and government databases – all feed and train the algorithms that go on to make decisions concerning the treatment of other people. In short, many algorithms are used not for identifying individuals, but for the prediction of risk, and the assignment of individuals to categories, in an uncertain and highly mobile world.20
This realisation should not lead to fatalistic resignation. Instead, the prevalence of algorithms and predictive analytics reminds us that anti-racists and migrants’ rights activists need to respond to the digital character of contemporary statecraft. This became clear in the UK context when thousands of migrants on student visas were illegalised, detained and deported on the basis of a faulty voice-recognition algorithm that determined they had cheated on an English language test. Three years later, during the pandemic, thousands of eighteen-year-old school students joined together to protest their predicted A-level grades, which reinforced and exacerbated racial and class-based disparities, holding signs that read ‘Fuck the algorithm’. Around the same time, the Joint Council for the Welfare of Immigrants and the legal tech-rights group Foxglove won their legal case against the algorithm used to stream visa applications, which placed applicants in the ‘red’ high-risk group on the basis of seemingly little more than nationality (in response, the Home Office promised to redesign rather than abolish the visa-streaming tool, but this case represents a clear win in the struggle against algorithmic bordering).
To develop successful and astute campaigns and actions, we need first to build awareness and literacy in relation to digital borders. Too often, states and companies introduce systems in relative secrecy, with little public understanding or scrutiny. As part of the wider struggle against nativist and racist anti-immigration politics, we need to force governments to pause, to account for themselves, and to explain their digital systems and the contracts that govern their use. Demands for ‘transparency’ are both necessary and insufficient, but they do make it easier for us to understand what we are up against.
Where our movements are strong enough, potential strategies might include securing moratoria on the use of algorithmic decision-making tools and new biometric technologies like facial recognition.21 This can secure us some breathing space while we work to defeat the broader logics of policing and borders that these technologies aim to enact, as well as time to build coalitions with people concerned by the way these tools might be put to other uses once refined at the border. As organisers in the US have made clear, this will have to give a central role to unionised tech workers themselves, who can exert real leverage.22 Authoritarian governments and defence companies might be beyond the reach of public shame, indifferent to our demands, but the first step involves at least making our demands, loudly and clearly: ‘Fuck the algorithm’, ‘Ban facial recognition’, ‘Shut down Palantir, IBM, Amazon, Google, Microsoft’, ‘Firewall the data’ – and so on.
Algorithms Don’t Deport People . . .
Contemporary bordering practices seek to restrict the mobility of unwanted people and things, while at the same time ensuring the speedy flow of other people and things. Borders are there to monitor, regulate and filter how people and goods move across them, not necessarily to stop or slow them down. In this context, ‘security’ means controlling circulation, not preventing it.23 The world is supposed to be on the move, now more than ever; but the problem for states is their need to assert control over these mobilities, especially the sheer energy and restlessness that drives human movement. This makes technology especially important, because it offers the tools to observe, surveil, inspect, verify, fix individual identities, collect and process information, calculate risks and identify patterns.
You do not need a camp to immobilise and immiserate a person when you can identify them and strike out their right to have rights at the click of a button. Targeted bordering – which relies on gathering ever more information on people and things, harvested via more total forms of surveillance and identification technologies (biometrics) and processed by computational tools (algorithms) – allows states to exclude unwanted migrants earlier and more easily, while maintaining and facilitating the mobility of valued people, goods and services. It also reinforces a sense of conditionality and precariousness for those who, for the time being and for limited purposes, are currently able to cross borders, producing docile, obedient subjects. Borders are not only hardening, then – they are becoming more dense, more mobile, more virtual, and more pre-emptive.
Algorithms, computation and AI robot border-drones are an important part of this story. The developers of these technologies promise to replace humans – who are plagued by error, bias and limited brain power. In the process, they create new kinds of knowledge, truth and authority.24 Algorithms process more data than any human could, and determine riskiness and worthiness through opaque calculations concerning innumerable data on people, things, transactions and complex associative links. Their complexity and novelty, and the vast amounts of data they deploy, lend them a veneer of legitimacy and scientific truth. Many are sold as helping us, and especially states, to see and process more; they promise to keep us safer in an insecure and highly mobile world.
But such new technologies do not represent a total break with previous systems, and we should avoid indulging fantasies of a ‘robot takeover’. Human decision-making about who should go to prison, who should have their welfare benefits stopped, and who should be granted refugee status can be just as unfair, heartless and arbitrary as any algorithm. Indeed, it is on reams of data produced by human collection and human decision-making that algorithms, and the rules by which they operate, are trained. Emphasising the lack of transparency and accountability in relation to algorithms therefore risks the acceptance of liberal fantasies surrounding human decision-making, as though rational, reasoning human subjects can reach fair decisions in courts of law, welfare offices and parole boards – as though the people making these decisions are somehow ‘accountable’ to those over whose lives they hold such terrible power. This has never been the case.
The problem with algorithmic decision-making is not simply that it is somehow less accountable and transparent, or that there is no human in the loop. The problem is the uses to which algorithms are put. After all, machine-learning algorithms can be used to predict earthquake aftershocks or identify cancerous tumours. Thus, the problem is not the technology per se, but the formulation of the problems to which algorithms offer solutions: the idea that movement across national borders is inherently dangerous and problematic, that welfare claimants are often undeserving ‘cheats’, that locking people up helps keep the rest of us safe. Dismal ideas about risk, security and scarcity – and the deeply sedimented hierarchies that determine who is defined as a problem and whose life counts as valuable – drive the ways in which algorithms are deployed at borders.
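One way to see how little the computation itself determines: in the purely hypothetical sketch below, an identical training routine is pointed first at an invented tumour-screening dataset and then at an invented border-enforcement one. The code cannot tell the difference; everything that matters politically lies in how the problem and its labels were formulated before any algorithm runs.

```python
# Illustrative only: the same generic classifier fitted to two invented
# datasets. The computation is indifferent to what its labels mean.
from sklearn.tree import DecisionTreeClassifier

def fit_and_flag(examples, labels, new_case):
    """Fit a classifier on past examples and return its verdict on a new case."""
    model = DecisionTreeClassifier().fit(examples, labels)
    return model.predict([new_case])[0]

# Problem formulation 1: label 1 means 'tumour likely malignant'
scans = [[2.1, 0], [5.8, 1], [1.2, 0], [6.3, 1]]
scan_labels = [0, 1, 0, 1]

# Problem formulation 2: label 1 means 'detain at the border'
applicants = [[0, 0], [2, 1], [0, 1], [3, 1]]
applicant_labels = [0, 1, 0, 1]

print(fit_and_flag(scans, scan_labels, [5.0, 1]))          # medical triage
print(fit_and_flag(applicants, applicant_labels, [2, 0]))  # enforcement target
```

It is this indifference that allows the same vendors and toolkits to move between health, welfare and border contracts, as Palantir’s trajectory above illustrates.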
If algorithms make new kinds of knowledge, truth and authority possible, the solution is not to make the algorithm less biased, to open its code up to scrutiny, or to put a human in the loop, but to challenge the logics of exclusion and expulsion that license their use at the border – though that is not to say that some of those procedural demands cannot buy us time.
In short, we cannot blame everything on the AI. The source of violent borders remains the uneven geographies of late capitalism, racialised global inequalities, violent nativism, restrictive ideas about gender and sexuality, punitive law-and-order policies, and militarism and war – all of which are articulated in new ways in the context of these emergent technologies, driven by governmental enthusiasm for technological solutions and the profit motive of powerful tech and defence companies.
A Bordered Dystopia
Alberto lowers himself into the coffin. This is what he and the other drones call them, in what must once have been a wan attempt at humour. They are shipped across the dead ocean from site to site in coffins. He heaves gently. The anti-emetic hasn’t kicked in yet. The cheap box is cushioned with a reddish mulch that will protect his body during the passage, which can be rough, especially if they’re boarded by Frontex, who are known for culling undeclared Freight, or at least inflicting intentional damage.
The company insists that the mulch – also known as Infusion™ – ‘provides 100% nutrition, hydration, and infection control through the skin barrier for journeys of up to three weeks’. That’s some generous spin, of course. A third of drones die – fail, in company-speak – during the passage. The rest emerge weak and delirious, their muscles wasted from paltry oxygen rations; skin irritated from lying in their own filth. It’s only with stims that they are able to work in the weeks that follow.
There are no words in the company lexicon for what the passage does to their minds: they are drones, after all.
The only rank lower than Freight is Waste, and Alberto has been relegated to Freight ever since he was put under 360. He’ll probably fail before he comes up for reassignment. The pin in his fingertip begins to pulse blue as the sedative line activates. He will awaken in the same grim coffin.
Freight is moved from facility to facility as the company requires. He suspects the facilities are underground, but there’s no way to tell. It’s just a hunch he has because they’re always warm, and he knows the company wouldn’t waste heating on drones. Alberto doesn’t know where he has been these last few months, and he doesn’t know where he is going. Each facility is self-contained, and Freight does the same work wherever it’s deployed. The facilities have no exterior of which Alberto is aware, and 360s aren’t allowed outside anyway.
Alberto knows that this is an ocean passage because the coffins in the hangar unspool in rows. For land journeys, they are tessellated: something to do with the air supply. Humming dread washes over him, as it always does in the face of that indigo spoiled water. As Alberto submits to his dim terror, images of his Calculation swim, unwanted, into view.
He can no longer say when it happened. Freight rarely sees the sky, and has no way of marking time: the lights in the gunmetal facilities stay bright and blanking.
This – the Calculation – has brought what were once known as prison and work to an interminable convergence. Flashing up on a glass screen beside Alberto’s bed: Guilty, Freight, 360. Guilty, Freight, 360, a mantra: his designation and his destiny; the class of profiles to which he had always belonged, not helped by the fact that his childhood unit had also been relegated, for a perceived infraction of which his breeders never spoke.
By accident or design, there are no mirrored surfaces in the facility by which one might trace the ebb of time: the insistent greying or recession of a hairline; the track of new furrows on a cheek. Sedatives, stims and exo-suits are integral to the existence of every drone at the company. It’s the only way they can be made physically and mentally to bear the repetition and relentless interiority of company life. For 360s, the unstinting use of all three blanks out any linear sense of physical demise or degradation of spirit. Only in the transitions between one phase and another – waking to sedation, and vice versa – does the pain of self-awareness pierce the cloud; but only for moments.
Calculation is how most things in company life – in life – are determined: the clean and final operation of an opaque set of rules on a body of data amassed by your surveillance feeds and, some suspect, those of your breeders and theirs, stretching back across generations. Alberto’s Calculation was the result of his attendance at an unauthorised meeting which, unsurprisingly, was also attended by a mole.
He doesn’t know where he is going, but he can’t see why the company would bother shipping the 360s to anywhere other than a company facility. There was no need for the old way, of segregated prisons, when 360 was just drone life without company malfunctions. Drone or 360, everyone’s exo-suits were fitted with a slick range of monitors. But once a Calculation put you on 360, no more random sampling – your surveillance feeds went straight to corphead. The ankle geotrackers actually worked. Rather than some vague threat easily evaded with the right tools, the collars would buzz you every time you entered unauthorised space. For 360s, that was everywhere except the facility and the sedation bay.
Even before Alberto’s 360, outside was the preserve of those who could afford personal environmental regulation – either suits or adapted SUVs. Everyone else took their chances with the radiation forecast every few months, and used whatever junk they could find as protection from the caustic rain. He has heard myths of whole regulated towns and conurbations where the air is cool and the sky is made of glass. If they exist, they’re North somewhere.
And so, as the sedatives win and he slips under, one last memory loop unwinds: goodbye to his unit, whom he remembers with the ghost of a fondness of which he is no longer capable; goodbye to the corrugated-iron trailer that he shared with his match and brood on the edge of quadrant three; goodbye to the weekly hour’s rest and the freedom to spend some moments un-stimmed and un-suited, every once in a while. But this is some biological camber of his mind towards schmaltz: there had never been anything so crisp and clean as a goodbye, only a subterranean fog of frosted rage and regret in the liminal space of every sedation. Because when the Calculation came in all that time ago, it had at the same time activated his forgetting line, and the pin in his finger had begun to pulse green before he had more than an icewater blossoming moment to feel the loss of the little that had been his shatter through his gut and his legs.