9


The Future

The drone age evokes both dreams and nightmares. For some, the arrival of unmanned aircraft fulfills a dream that began in the earliest days of aviation. As drones have become cheaper and easier to obtain, the skies are now open to those who once saw flying as an unattainable goal. The spread of drones to more people unleashes human potential and carries with it the tremendous promise of advancing scientific knowledge and commerce. From environmental conservation to archaeology, drones can be put to a nearly unlimited number of peaceful purposes. It is also true that drones will allow people to achieve some goals faster and with less risk than ever before. This is not an advantage that can be easily dismissed. As many of today’s societies have tried to control risk or even banish it from daily life, the prospect of being able to fly, and even fight, without jeopardizing the life of a pilot seems almost irresistible. And if drones can also help to alleviate human suffering in war zones and crisis areas, as many defenders of the technology suggest, it is hard to see why we should be afraid of a world full of them.

Yet the nightmares about drones continue. Drones tap into a deep vein of fear about technology and its potential misuse that has been part of the public debate over aviation for at least a century. In 1907, the writer H.G. Wells reacted to the emergence of manned aircraft by depicting a future of aerial bombardment of cities and even kamikaze attacks in his novel The War in the Air. Wells saw manned aircraft as harbingers of doom that would inevitably be misused for our own destruction during wartime. Today, the fact that drones are unmanned makes it even easier for critics to depict them only as uncaring robots and to see their spread as our worst fears come to life. These fears play a disproportionately large role in the public debate, as evidenced by the connections regularly drawn by the media between drones and George Orwell’s “Big Brother,” as well as the Terminator films and other science fiction accounts of futuristic technology gone wrong. Given the depictions of drones in movies and television, it is not surprising that many people today fear that their arrival will produce a hellish future of dehumanized warfare and remote slaughter.

Drones also evoke concerns about whether the technology itself will render humans irrelevant or obsolete. Some scholars fear that an over-reliance on drones may lead us to become intolerant of human error and diminish the value of human agency, creativity, and spontaneity.1 If the skeptics are right, a world awash in drones may not necessarily be more dangerous, but it could be duller, less creative, and ultimately less free. This fear underlies some of the biggest questions surrounding drones in both the military and civilian worlds. Will military pilots be displaced from the prestigious positions they hold now? Will they be reduced to mere custodians of largely autonomous machines? And what happens to the rest of us living under the gaze of the drone’s camera? It is hard to feel like a human, with all of the freedom and richness of experience that this implies, when reduced to a pixelated dot under the gaze of a drone. It is even harder to believe that freedom at large will thrive if drones are used to watch us and regulate our behavior.

These fears have begun to mobilize new constituencies to call for a ban on “killer robots” and other related technology. In 2017, Elon Musk, the entrepreneur behind Tesla and SpaceX, joined forces with dozens of artificial intelligence (AI) industry leaders to call for a ban on autonomous weapons. Such a ban would include drones that use sophisticated algorithms to make decisions about whether and when to kill an enemy on the battlefield. For Musk and others, rendering drones and other weapons truly autonomous in this way carried more risks than benefits. They wrote:

Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.2

The noted Cambridge physicist Stephen Hawking concurred with the call to “ban killer robots” because the use of AI to make drones and other robots autonomous weapons would bring about risks which humans scarcely comprehend. He warned that, “unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many.”3

How should we understand the dreams and nightmares that drones evoke? At a minimum, it is clear that the extreme versions of both sides of the argument are overstated. Drones are not a panacea for all of the world’s problems; in a world full of drones, war, poverty, and environmental degradation will continue. But drones are also not necessarily harbingers of a nightmarish world where technology is entirely out of control. Too much of our debate resembles shadowboxing, with critics of drones attacking what might happen rather than what is actually happening. It is not that Musk, Hawking, and their colleagues are wrong, but that they are peering into the future and identifying tomorrow’s threats at the expense of those that are already apparent today. As this chapter will show, there are good reasons to be concerned with how future technological developments with drones will play out. But it is equally important not to ignore what the evidence surrounding the use of drones today tells us about how the technology affects the strategic choices of its users.

Drones and Strategic Choice

This book has suggested that drone technology is important because it has a unique array of characteristics that affect how humans use it. There is no single characteristic of drones that produces radical changes in the way that people behave. Rather, the characteristics of drones together produce subtle but noticeable shifts in the strategic choices of their users. Each characteristic plays a different role. That drones are low-cost makes them easy to buy and quick to spread among even resource-constrained actors, such as NGOs and terrorist groups. That drones are adaptable also makes them attractive to actors, such as militaries and peacekeeping forces, that must deliver payloads in highly complex or dangerous environments. That drones can be precise and yield high levels of information about what they see on the ground makes them indispensable for actors who dream of knowing more about the environment in which they operate. And finally, that drones are unmanned makes them invaluable for all actors for whom risk is defined almost exclusively by the loss of life of pilots or other personnel. For societies that have put a premium on measuring and avoiding risk in this way, drones are almost ideally designed: they offer the promise of invulnerability while opening the world beneath the camera’s gaze to precise investigation and action.

An important theme of this book is that drones are rarely introduced into situations as mere tools that can be applied to a pre-existing mission. Instead, they tend to change the boundaries of the mission itself, leading to what is often known as goal displacement. Goal displacement—or, as it is sometimes called in the military, mission creep—occurs when the objectives of a particular operation drift or expand over time in response to new opportunities and, in this case, new technologies. As this book has demonstrated, drones are particularly susceptible to this dynamic because they provide the appearance of controllable risk. If there is no pilot and hence no risk of casualties, the reasoning goes, why shouldn’t we use drones for yet another task? This reasoning explains why the United States, which initially sought drones for military action in declared war zones like Afghanistan, suddenly discovered that the technology had value for targeted killings outside declared armed conflicts. Similarly, as chapter 8 has shown, states that purchased drones originally for surveillance found themselves contemplating using the technology for limited probes of their enemies’ airspace. This type of mission creep can also be seen in responses to humanitarian disasters and in peacekeeping missions. For example, NGOs may buy drones to map the terrain and provide information to first responders in disaster zones but soon find themselves tempted to provide relief directly to vulnerable groups through aid drops. Because they are so readily available and flexible, drones are a gateway into new missions and goals that seem to get bigger all of the time.

A second major theme of this book is that the nature of drone technology affects the calculation of risk associated with it, specifically the decisions over what to do and how to do it. Industry defenders of drones have argued from a standpoint of technological neutrality that drones are just a faster or more efficient way of doing something that one always intended to do. They acknowledge that drones can be misused by actors like terrorist groups, but they argue that the fault lies wholly with the person who misused them rather than with the technology itself. This book has adopted a different argument: that the technology itself structures choices and induces changes in decision-making over time. In other words, drones, like all forms of technology, are not neutral; rather, they reorder the calculation of risk and privilege certain types of actions at the expense of others. While the users themselves remain primarily to blame for bad behavior with drones, this perspective emphasizes that their menu of choices has been altered or constrained by drone technology itself.

To understand this point, a comparison with nuclear weapons is useful. Although they are dissimilar from drones in a number of ways, and clearly more consequential for war and peace, nuclear weapons are nevertheless an example of a type of technology that began to structure the choices of those who adopted it. States with nuclear weapons found themselves thinking of crisis situations in different ways, worrying particularly about the risk of escalation and adjusting their behavior accordingly.4 In some cases, the strategic stability brought about by nuclear parity yielded mission creep (for example, by leading to deeper involvement in proxy wars like Vietnam and Afghanistan), as well as substantial changes in the risk calculations of the states themselves.5 Nuclear weapons also produced an antiseptic language to describe their operations (e.g., counterforce versus countervalue, second strike, and so on), much in the same way that drone-based targeted killings have (e.g., disposition matrix, kill box, military-aged males). Nuclear weapons did not entirely change the options available to leaders, or absolve them of ultimate responsibility for their decisions, but they altered the menu of choices available. Although drones are smaller, more diffuse, and less lethal than nuclear weapons, they have also altered the choices available to the diverse array of actors who wield them.

One of the reasons why this point has been so contested when considering drones is that there is a crucial but overlooked distinction between political decision-making and tactical or military decision-making. As chapter 3 has demonstrated, many drone pilots deny that their decision-making has been in any way altered by the fact that the technology is now unmanned and insist that they are governed by the same rules as manned aircraft. The evidence suggests that this is largely true, especially for militaries with a high level of oversight and accountability. Despite what some news accounts imply, US drone pilots are not free to simply kill people at will and are held accountable for mistakes on their watch. Moreover, the video record from drones provides incontrovertible evidence of any wrongdoing and makes avoiding accountability more difficult. Because they are located in a rule-bound bureaucratic structure, pilots have only a limited degree of discretion about how they use drones.

But at higher levels in government, where the decisions about whom and where to strike are made, political decision-making is not as constrained and rule-bound. Although these decisions are not entirely discretionary, senior decision makers have more latitude in deciding to expand the area of targeted killings and in assigning new targets to the “kill list.” This can be seen in the expansion of targeted killings first under Obama and later under Trump. It is here that the changes in risk calculations and goals produced by unmanned technology matter. Although there are only a limited number of cases of leaders equipped with robust drone fleets, the record so far suggests that leaders at the political level may become more aggressive or risk-taking when the lives of pilots are not at stake.6 This dynamic explains the expansion of the targeted killing list, the increasing number of drone fly-bys and probes at the interstate level, and even some of the risk-taking seen when rebel and terrorist organizations bait more powerful enemies like Israel. This does not always translate into looseness at the tactical or operational level, but rather into a gradual, deliberate expansion of the role, scope, and geographic range of drones once they are already deployed. In practical terms and in the short term, as chapter 4 demonstrates, this leads to relentless pressure on military pilots to fly and do more with drones, even while they remain constrained at the operational level in their decisions about whom to strike.

This dynamic also leads to pressure to learn ever more about the battlefield. One of the themes of this book is that everyone, from militaries to NGOs to terrorist organizations, seeks to harness the capabilities of drones to know more about the environment in which they operate. The use of drones generally leads to an increase in the quantity, and often the quality, of information available to an actor. But this is not a neutral or even always a positive development. As chapter 8 noted, increased levels of information can generally help decision-making, but they can also exacerbate cognitive biases and lead to overconfidence and errors in making decisions. More specifically, they can lead to overconfidence that one can control one’s environment because one can see it, and to underestimating the adaptability of the enemy or target because they appear as mere dots beneath the gaze of the drone. The world does not sit still just because it is being watched. Peacekeepers equipped with drones are now confronting the fact that militia groups are changing their behavior and adapting due to the presence of drones over refugee camps. Even the US military was surprised by the degree to which groups like the Islamic State adapted their tactics and strategies to drones and gradually found ways to use drones against them. At its core, technology like drones can elevate the acquisition of information from a means to an end, making the acquisition of more information the goal of the activity while losing sight of the reasons why that information was collected in the first place.7 This can be seen in the US military’s insatiable desire for surveillance coverage by drones; it now collects so much data on the world’s activities that it struggles to sort through that data and use it effectively.

Perhaps the greatest consequence of the emergence of drones for war and peace is their impact on our thinking. Many of the theoretical criticisms of drones have centered on their impact on notions of warrior honor, now that combat is virtual, remote, or depersonalized.8 Yet this is misleading: many forms of technology, from artillery to manned aircraft to nuclear weapons, are designed to protect one side’s fighters while killing the other’s from a distance.9 Drones are only the next step in a journey to remote, depersonalized warfare that began long ago. What is more consequential is the mode of thinking that becomes dominant and entrenched with the use of drones. This type of thinking, described by Jacques Ellul as “technique,” reduces political and moral issues to problems of technical efficiency.10 As a comprehensive but reductive mode of thinking, it holds up what machines like drones can do as the ultimate standard for measuring a resolution to a problem. This can already be seen in discussions over targeted killing; instead of asking why we are using aircraft for a task in the first place, we tend to debate whether the drone is better than the manned alternative. The answer to the latter question is often “yes,” which seems to put an end to the debate. But the underlying questions behind our actions—should we be engaged in targeted killing at all? Should we be surveilling these populations at all? To what end will all of this killing be put?—are often elided. As Ellul would suggest, “technique” tells us that we do not have a problem of militancy in the borderlands of Pakistan and Yemen, but rather a problem of efficiency and precision with our “vast killing machine” of targeted killings. Similarly, we do not have a political problem of countries generating vast flows of refugees, but rather a practical problem of how best to watch and count refugees with drones once they appear in camps. As this book has shown, this approach dehumanizes such problems and sidelines the essential moral questions that they raise. If “technique” exercises a colonizing effect on decision-making and spreads to other domains, we should expect the type of mechanical thinking surrounding drones—one in which genuine dilemmas are reduced to problems of technical competence and instrumental rationality—to dominate thinking about war and peace in the future. And that will be an even greater problem if technology continues to advance at today’s relentless pace.

Future Trends

Although today’s technology yields some conclusions about how drones will affect decision-making, the technology of tomorrow brings with it the possibility of accelerating these trends even further. Drones are unlikely to remain in their present form for long, as technologies now in the laboratory will radically alter their future development. As an analogy, if we consider today’s drones as equivalent to early manned aircraft, the next generation of technology will change them at least as much as aircraft changed when the jet age began in the late 1940s. Five technological developments, in particular, will intersect with drone technology and change drone operations in remarkable ways, with real consequences for the strategic choices surrounding their use.

Artificial Intelligence

Perhaps the greatest potential change in how drones operate will come from the rise of AI. Although there are debates about how it should be defined, AI can be broadly understood as efforts to use the processing power of computers to simulate or anticipate human behavior.11 AI is widely touted as the next major revolution in technology and indeed human development; some estimates suggest that millions of jobs are at risk of being replaced by computers whose abilities are indistinguishable from those of human beings.12 The use of AI is hardly new; some types of AI have been around in embryonic form since at least the early 1950s. More recently, rapid advances in computing and processing power, data storage, communications, and connectivity have brought some of the dreams of AI researchers into reach. In particular, researchers have made significant progress in helping computers to learn from their mistakes and think strategically, even to the point where they can defeat humans in games and other tests of reasoning. Aside from evoking fears from science fiction about computers that learn to turn on their human counterparts, the rapid development of AI has led to concerns that it may outstrip human cognition or change the way that we think, perhaps by pushing us even more toward “technique.”13

The hype around AI is substantial. Today, it is common to hear companies like Microsoft claim that AI has the potential to change the world in fundamental ways. Governments have not been far behind. In 2017, Russian President Vladimir Putin claimed that “artificial intelligence is the future, not only for Russia, but for all humankind . . . It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”14 While many defense industry leaders hail the potential of AI to revolutionize military operations, others have expressed concerns that it could lead to accidental war. Elon Musk, for example, has warned that AI should be understood as a “civilizational existential risk” and predicted that it could cause World War III.15

At present, the risk of an AI-generated war is low. Researchers are working on a number of different ways for AI to “think” and learn as humans do and have made considerable progress over the last decade, but no AI is yet self-aware and making decisions in the way depicted in Stanley Kubrick’s 2001 or the Terminator movies. Most of what AI currently does is mimetic: it follows decision rules or deduces behavioral pathways in order to produce behavior that resembles that of humans. In addition, the greatest advances in AI have come about in labs, in settings far removed from real life, and most AI-enabled drones have been slow and not as adaptive as humans.16 There are also pitfalls in getting AI-enabled drones to think in strategic ways as humans do. At present, most AI-enabled drones can follow basic commands, but they cannot quickly adjust to threats or prioritize tasks in the service of the larger mission.

What makes AI potentially important, at this stage, is that it is able to complete tasks at a speed that human beings cannot match. If the raw processing speed of AI were attached to a task like surveillance, there is a danger that killing could become so easy as to be almost automatic. It is this danger that alarmed employees at Google, who protested until the company pulled out of plans to help the Pentagon use AI to sift through and analyze its vast collection of drone footage for military uses that might include identifying targets.17 In the end, Google abandoned the controversial Project Maven and issued new guidelines committing it to avoid technology that causes “overall harm,” as well as weapons designed to injure or kill, although there are some ambiguities about whether the company would pursue peaceful technologies that could be retrofitted to that purpose.18 Moreover, even without Google, research into the use of AI for drones will continue unabated, as other companies are eager to secure generous Pentagon contracts for this technology.

In the future, research into AI will become even more important because it raises the possibility of developing drones that operate autonomously. There is an important distinction between automated systems—which respond along preprogrammed logic steps to stimuli—and autonomous systems, which can decide how to achieve goals within certain parameters.19 Today, many manned and unmanned aircraft are at least partially automated. For example, commercial airliners have multiple back-up systems and redundancies that come into play along a preprogrammed pathway as systems or parts fail on the aircraft. For many of us, the fact that commercial planes automatically adjust to stimuli, like turbulence, is a good thing. There is no reason why drones cannot be automated to respond to similar stimuli. For example, next-generation drones might be preprogrammed to adjust their altitude and speed to accommodate environmental factors like wind or weather. Alternatively, an automated drone might also be preprogrammed to avoid colliding with another drone or a missile fired at it.
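To make the distinction between automated and autonomous behavior concrete, the short Python sketch below contrasts the two modes. It is purely illustrative and invented for this purpose: the thresholds, altitudes, and function names are hypothetical and correspond to no actual flight-control software.

```python
# Illustrative sketch only; all values and names are hypothetical.

def automated_response(wind_speed_kts: float, altitude_ft: float) -> float:
    """Automated: a fixed, preprogrammed rule maps a stimulus (strong wind) to a response (climb)."""
    if wind_speed_kts > 30:              # preprogrammed threshold
        return altitude_ft + 1000        # preprogrammed correction
    return altitude_ft

def autonomous_choice(candidate_altitudes, fuel_burn, ceiling_ft=25000):
    """Autonomous: evaluate options and decide how to achieve a goal (least fuel)
    within human-set parameters (stay at or below a ceiling)."""
    allowed = [a for a in candidate_altitudes if a <= ceiling_ft]
    return min(allowed, key=fuel_burn)   # the machine chooses among the options itself

if __name__ == "__main__":
    print(automated_response(wind_speed_kts=35, altitude_ft=12000))      # -> 13000
    # hypothetical fuel model in which flying higher burns less fuel
    print(autonomous_choice([10000, 18000, 24000, 30000], fuel_burn=lambda a: 1e6 / a))  # -> 24000
```

The automated rule always returns the same response to the same stimulus, while the autonomous routine is handed a goal and a boundary and left to choose among options, which is where the questions of oversight discussed below begin.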

Automated drones have four advantages over drones steered by pilots throughout their mission. First, they would no longer require pilots to fly them throughout the entire mission, thus reducing the tedium and the associated costs of paying pilots. Today, some drones like the Global Hawk are already highly automated in that they are preprogrammed to fly between fixed waypoints, with pilots only required to watch the video feed and make adjustments as needed. Second, automated drones might also reduce the risk of crashes. Even today, Global Hawk and Reaper drones are controlled by pilots on take-off and landing, when crashes are most likely. Allowing take-off and landing to be even more automated might lower this crash rate. Third, automated responses from drones might prevent humans from overcompensating for environmental factors like wind or weather. Fourth, drones that are highly automated will respond more quickly than those controlled by humans to external threats like incoming missiles. In contests where speed matters, the few seconds gained by automating the response to the missile might be crucial and save the drone. For these reasons, it is almost inevitable that commercial and military drones will have more automated elements in the future, just as manned aircraft were gradually infused with automated systems to make them more reliable and safe.

To make a drone autonomous—that is, capable of partially directing itself, and deciding the steps it takes, toward its goal—requires a much bigger leap in terms of both technology and military doctrine. An autonomous drone could theoretically be programmed as a scout, surveying the landscape ahead of ground forces, or it could locate potential targets along set parameters (for example, military-aged men apparently holding weapons) and strike at will once they are located. The problem lies in the specification of the parameters: how can these be set without producing false positives (i.e., people misidentified as enemies) and accidents? The danger here is that AI might misinterpret the parameters or confuse the evidence determining that the parameters have been met (for example, mistaking a man holding a shovel for a man holding a weapon). For this reason, the central issue with autonomous weapons is the degree to which humans remain in the loop. A human can be present at any stage in this loop, which the Pentagon describes as the “sense-decide-act” loop. As Paul Scharre has pointed out, the fact that a human is directly involved in picking a target, or authorizing it, renders a weapons system only “semi-autonomous.”20 For example, a human might only be responsible for supervising target selection by the drone, or alternatively could have some responsibility for pulling the trigger. Only a fully autonomous weapon or drone would cast the human entirely out of the loop, making the decision entirely on its own in each specific circumstance.
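As a rough illustration of where the human sits in that loop, the sketch below separates the sense, decide, and act steps and places an authorization gate before the final step. It is a schematic, assumed example rather than a description of any fielded system; the class name, the stubbed sensor data, and the human_authorizes callback are all invented for illustration.

```python
# Schematic only: a toy sense-decide-act loop with a human authorization gate.
from dataclasses import dataclass

@dataclass
class Candidate:
    description: str
    matches_parameters: bool   # the software believes the preset criteria are met

def sense():
    """Sense: gather candidate targets from sensors (stubbed here for illustration)."""
    return [Candidate("object matching preset criteria", True)]

def decide(candidates):
    """Decide: filter the candidates against the human-set parameters."""
    return [c for c in candidates if c.matches_parameters]

def act(candidate, human_authorizes):
    """Act: in a semi-autonomous system a human authorizes each engagement;
    a fully autonomous system would skip this gate entirely."""
    if human_authorizes is not None and not human_authorizes(candidate):
        return "held: human declined or deferred"
    return "engaged"

# Semi-autonomous: a person reviews every engagement the software proposes.
for c in decide(sense()):
    print(act(c, human_authorizes=lambda cand: False))   # the human says no -> "held"
```

In this framing, stripping out the human_authorizes gate, so that act runs on the software's decision alone, is what would make the system fully autonomous in the sense discussed above.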

Not all autonomous weapons or drones would necessarily be lethal. It is not hard to imagine that the United States might choose to conduct long-range surveillance with nearly autonomous drones. In these circumstances, the worst that could happen would be that the drone might take too many pictures or videos, or perhaps invade someone’s privacy with excessive surveillance. The problem grows much bigger when these drones are lethal. For example, Israel has developed the autonomous Harop drone, which operates like a loitering munition and can dive-bomb targets on the battlefield without a human being in its decision loop. The Harop, which has stealth capability, uses its own body as a warhead and can destroy targets through kamikaze attacks within minutes. Even more powerful lethal autonomous weapons systems (LAWS) carry greater risks: they might commit fratricide by killing one’s own soldiers, be hacked by an enemy, or be subject to an accident.21 In each of these cases, there are serious questions about who should be held accountable, especially if the weapons system or drone was fully autonomous when it took these actions. For this reason, a number of military ethicists and senior policymakers have argued that LAWS present a serious challenge to the current laws of war. It is not clear that a fully autonomous drone equipped with a missile, for example, would meet the standards of discrimination and necessity required when using deadly force. It is also not clear who could be held responsible in a military court if a fully autonomous weapon goes wrong or makes costly mistakes on the battlefield.

The development of AI will gradually improve the prospects of automated and eventually autonomous drones and make these options hard to resist in a cost-sensitive budgetary environment. At present, the United States is reluctant to accept fully autonomous drones, in part due to cultural and organizational preferences to have a pilot in control.22 Although the United States is investing heavily in autonomous weapons, and believes that these weapons can ultimately be made compliant with the international law of armed conflict, it has nevertheless committed that there will always be a human “in the loop” for any decision to take a life.23 Yet other countries may make a different decision and put pressure on the United States to rethink this commitment.24 For example, Russia, China, Israel, and South Korea have all made investments in autonomous weapons.25 Of these countries, China is the most serious player because its substantial investment in AI for commercial and surveillance purposes has yielded a deep industrial base that it could harvest for autonomous unmanned systems.26 In a confrontation with the United States, China might choose to employ autonomous drones that can adapt more rapidly and attack more nimbly than their US counterparts still controlled by human pilots. If wars against adversaries like Russia and China become, as one military officer put it, “extremely lethal and fast,” US reservations about autonomous weapons could give way to necessity and operational efficiency.27 A world of competing, clashing autonomous drones is not imminent so long as current policy remains in place, but the fragility of states’ commitment to keeping a human “in the loop” remains a serious concern.

Swarming

Another potential game-changer for drones is the development of swarming. Until now, most of the military drones used by the United States and other great powers have operated either on their own or as part of a small group of other drones, largely (though not exclusively) in uncontested airspace. The bulk of communications to and from the drones was directed to the pilot located on the ground. But this is about to change. Recent advances in AI and robotics have offered new opportunities for drones to fly together and coordinate as part of a swarm. The obvious models for swarming drones lie in the animal kingdom: insects, birds, and some pack animals work together and coordinate, either explicitly or with tacit signals, to achieve common tasks. Small drones, some of which are no bigger than birds or insects, can now fly in formation and attack an enemy together in a coordinated way. As Scharre has described it:

Emerging robotic technologies will allow tomorrow’s forces to fight as a swarm, with greater mass, coordination, intelligence and speed than today’s networked forces. Low cost uninhabited systems can be built in large numbers, “flooding the zone” and overwhelming enemy defenses by their sheer numbers. Networked, cooperative autonomous systems will be capable of true swarming—cooperative behavior among distributed elements that give rise to a coherent, intelligent whole. And automation will enable greater speed in warfare, with humans struggling to keep pace with the faster reaction times of machines.28

One example of this is the Gremlin drone. In 2015, DARPA issued a wish list for next-generation technology that included disposable swarming drones that could be dropped from the back of a manned aircraft and retrieved in mid-air. By 2018, Dynetics had won a $38.6 million contract to develop Gremlins and proved in a series of test flights with a C-130 aircraft that such operations were possible.29 Gremlin drones fly in small numbers—perhaps no more than twenty at a time, though larger swarms are possible—and can be used for reconnaissance and for probing an enemy’s defenses, and each can be redeployed up to twenty times.30 The US Navy is also experimenting with swarming scout drones, known as LOCUST drones, that can harass an enemy and induce it to waste anti-air missiles trying to knock the drones down.31 The US Air Force is testing Perdix drone swarms: 100 small drones, each no larger than a robin, deployed from two F/A-18 Super Hornets.32 In 2015 alone, the United States flew eighty Perdix missions over the Alaskan border. In the words of the Pentagon’s Strategic Capabilities Office:

The Perdix are not preprogrammed, synchronized individuals. They are a distributed brain for decision-making and adapt to each other, and the environment, much like swarms in nature. Because every Perdix communicates and collaborates with every other Perdix, the swarm has no leader and can gracefully adapt to changes in drone numbers. This allows this team of small inexpensive drones to perform missions once done by large expensive ones.33

The advantages of swarming drones are numerous. They can effectively compensate for a greater enemy mass (e.g., the larger numbers of ships, tanks, or even people fielded by an enemy) by dispersing and attacking from multiple vantage points. This has the advantage of confusing enemies and making them fight on multiple fronts, exhausting both their manpower and munitions and saturating their defenses. Drones can swarm targets, confuse the enemy about where to fight, and might also be able to disable the enemy’s command and communications systems. They can also disperse if needed and split up when under fire, coming back together again when their attack resumes. Drone swarms are also remarkably resilient; their distributed intelligence allows individual drones to “trade off” being the leader of the swarm when the leading drone is taken out by enemy fire. Even when elements of the swarm are destroyed, the swarm gradually degrades in combat power rather than suddenly stopping on a dime.34 Some of the networks behind swarming drones are even capable of healing themselves to keep the swarm going.
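To give a sense of why a swarm degrades gracefully rather than failing all at once, here is a toy model in Python. It is an assumed, highly simplified illustration rather than code from any actual swarm: each surviving drone can recompute the same shared rendezvous point from the positions the swarm broadcasts, so no single drone is an indispensable leader.

```python
# Toy illustration only; the numbers and behavior are invented for this sketch.
import random

def swarm_aim_point(positions):
    """Every drone can compute the same rendezvous point (here, the centroid of the
    swarm's shared positions), so leadership can pass to any surviving member."""
    xs = [p[0] for p in positions]
    ys = [p[1] for p in positions]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# twenty drones scattered across a notional 10 km x 10 km box
drones = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(20)]

while len(drones) > 1:
    x, y = swarm_aim_point(drones)
    print(f"{len(drones):2d} drones remaining, converging on ({x:.1f}, {y:.1f})")
    drones.pop(random.randrange(len(drones)))   # simulate losing one drone to enemy fire
# the swarm's mass shrinks drone by drone, but the survivors keep coordinating
```

Removing members one at a time shrinks the swarm's mass and coverage, but the survivors keep producing a coherent aim point until almost none remain, which is the sense in which combat power degrades gradually rather than stopping on a dime.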

Drone swarms are still in the experimental stage and may not see battlefield use for some time.35 More generally, they have some of the same command and control problems as autonomous drones. In particular, drone swarms often move too fast, and in too complex a fashion, for a single person to completely control their operations. As a result, commanders are forced to develop broad parameters as instructions for the swarm, which can lead to problems if the parameters are misinterpreted. DARPA is currently working on voice and gesture recognition for swarming drones to make the swarms more responsive to changing battlefield needs.36 Swarms will also need to be resilient: electronic warfare, like jamming, is a real risk to their viability.37 DARPA has conducted tests of how drone swarms function when their communications and GPS signals are under attack, and there is some evidence that swarms can continue to function despite such countermeasures.38

The United States is arguably behind China in efforts to develop swarming drones. China has already fielded larger drone swarms than the United States for reconnaissance missions and has even managed to run a swarm of 1,000 drones for an aerial show.39 China is investing heavily in military-grade swarming drones in order to be able to swarm US radars and disrupt US command and control networks in the event of a clash in the South China Sea. The state-owned China Electronics Technology Group (CETC) has been experimenting with swarms of a hundred or more drones for some time, essentially in anticipation of a battle in which a great mass of swarming drones overwhelms an opponent.40 If China fields these swarms and resolves the command and control problems that they present, US military officials will have to develop successful anti-swarming countermeasures or fight a conflict on China’s terms: fast-moving, mobile, and lethal.

Blended Manned-Unmanned Missions

Until recently, a relatively sharp distinction has been drawn between manned and unmanned aircraft, hinging on the presence or absence of a pilot. But this distinction is breaking down as the US military invests more in manned-unmanned teaming (MUMT) capability. This capability, seen as critical for future Pentagon plans, would pair complementary systems and allow for more adaptive, flexible responses. For example, the US Army has already been experimenting with having pilots in Apache helicopters control Grey Eagle drones. This would allow the Apache pilots, flying no more than 100 km from a potential target, to quickly destroy targets spotted by the Grey Eagle drones without having to call in backup.41 A similar effort by the US Air Force would allow a pilot in a manned aircraft like an F-16 to control drones as “loyal wingmen,” deploying them as needed. Two models, the Mako and the Valkyrie, have been developed to this end.42 In time, these expendable, low-cost UCAVs might fly ahead of manned aircraft to scout terrain, confuse or overwhelm radar defenses, or even swarm an adversary. The US Navy is also developing unmanned aerial refueling drones that would extend the range of manned aircraft like the F/A-18 Super Hornet and the F-35 Joint Strike Fighter.43 The United States is experimenting with a number of modes of remote command and control that would break the link between the drone and a ground control station thousands of miles away. For example, local ground troops might control sensors in the battlefield and seamlessly pass or distribute command to different actors, depending on the operational need, or even control or direct multiple unmanned and manned systems from a single device.44 This would change the “remote” nature of drones and untether them from the ground control station that typically dictates their behavior.

Land and Sea Drones

Another evolution in drone technology that has the potential to further “drone thinking” and complicate decision-making is the expansion of drones to new domains, specifically land and sea. While drones have traditionally been designed for flight, the same principles underlying unmanned technology can be applied to ground forces and underwater vehicles. In its Unmanned Systems Integrated Roadmap, which projects developments until 2038, the Pentagon imagines a future with a proliferation of ground vehicles that are remotely controlled as today’s drones are.45 Some ground robotics models, such as the PackBot, are designed for tasks such as IED inspection and removal and have been used extensively in Afghanistan and Iraq for more than a decade.46 But other models under development are also being considered for clearing dangerous vehicle routes, moving squads from place to place, and even combat.47 In February 2018, the United States conducted its first set of tests with a ground robot shooting at targets on the battlefield.48 The United States is not the only country experimenting with ground robotics. Russia has announced that it will be fielding tank-like ground robots in Syria and elsewhere for reconnaissance and close-fire support, although there are concerns about their reliability and effectiveness.49

The maritime domain is another area in which drones and associated robotic technology will play a growing role. The Pentagon has developed a number of drones, for surface and underwater use, that would be able to detect and disarm mines. The logic here is obvious: mines are extremely dangerous for both manned ships and submarines, and disabling them with drones drastically reduces the risk of losing sailors.50 Other underwater drones will be used for ISR.51 Especially if they have long range and can remain in operation for extended periods, underwater drones are ideal for picking up signals of submarines and other evidence of incoming naval vessels. Other Autonomous Underwater Vehicles (AUVs) are used for scouting the sea floor and detecting environmental hazards. Some unmanned maritime vehicles, such as the SeaFox, can also be used for port surveillance. Other countries are also experimenting with underwater drones, though they imagine much more aggressive uses than the United States has so far. For example, China is now using underwater drones to extend its claims in the South China Sea.52 Russia has developed a nuclear-armed underwater drone, termed the Status-6 or Kanyon, which has a range of 6,200 miles and can descend to 3,200 feet underwater.53 At present, the United States is not developing a nuclear-armed underwater drone, but it may eventually choose to do so if more of its rivals continue their development of the technology. An underwater arms race with drones is not out of the question if more countries follow suit.

Miniaturization

The final future trend through which drones may revolutionize political decision-making is miniaturization. To some extent, this is a well-established trend, as drones have been getting smaller every year since the Predator made its debut. Examples such as the Raven, Black Hornet, and Switchblade already prove that soldiers on the battlefield can carry useful drones in their backpacks for rapid deployment. While the Raven and Black Hornet are largely reconnaissance drones, the Switchblade is now capable of carrying munitions and killing enemies by dive-bombing them up to 12 miles from its launch point. Other types of small quadcopter drones, already available on the commercial market, are being retrofitted by groups like the Islamic State to hold explosives and then be rammed into troops on the ground. We are already moving into a world where the drones doing the most damage, and attracting the most attention, are small drones rather than Reapers and Global Hawks.

This trend of miniaturization will continue, with drone companies going even further to produce nano drones no bigger than birds or even insects. These are potential game-changers because they will be able to conduct surveillance without even being noticed. One model, the Nano Hummingbird developed by AeroVironment, weighs less than a pound and resembles a small bird. It has two flapping wings and could move seamlessly into populated areas without notice. Another model, the DragonflEye, is even smaller—it is a backpack equipped with energy, guidance, and navigation systems attached to a living dragonfly, turning the insect into a “cyborg drone.”54 This drone uses the natural appearance of the dragonfly as camouflage and allows the insect to eat biomatter in its environment to sustain itself. In doing so, it cuts down on the energy that the drone element of the DragonflEye must supply, while allowing the insect’s motions to be controlled remotely by its human pilot.55

A number of obstacles stand in the way of nano drones being put into use, not least their astronomical cost. But if the technical problems regarding their flight capacity, durability, and cost can be resolved, nano drones will usher in a revolution in what can be seen and heard. If a nano drone could fly into a building and perch in a corner, it could record video and audio of people and convey it back to intelligence officials without anyone noticing. This would be tremendously useful to spy agencies and to soldiers in the field who are chasing terrorist leaders. For years, drones have been identifiable by their appearance and by the whirring noise they make in the skies. The number of plausible uses for the technology will skyrocket if these factors are no longer in play. If a drone can resemble a bird or insect, it will automatically be camouflaged within the natural world, and the risks of using it will plummet even further. If drones become as indistinguishable from their environment as these models suggest is possible, they will be almost irresistible for future use in surveillance, reconnaissance, and spycraft, assuming their costs can be brought down to a manageable level.

Conclusion

None of these models of drones is yet ready for widespread use, but they may be within the next decade or so. The drone age is not one in which technology will sit still. With growing popular, scientific, and commercial interest in drones and a deep commercial base for their development, it is inevitable that drone technology will continue to develop by leaps and bounds, becoming cheaper and more capable with every passing year. If this trend continues, the technology itself will certainly outpace the contemporary legal and ethical frameworks associated with its use and throw up new dilemmas for political decision-making that have not yet been imagined. Drones may allow us to see and know more about the world we live in, but it is far from obvious that they will make our choices clearer or our decisions easier.

One way to improve those decisions is to develop strong legal standards and norms for the use and sale of drones. To some extent, this is already underway with the humanitarian code of conduct under development and with some of the FAA’s regulations for domestic use in the United States. But the political constraints around drone use are considerably weaker. Within the US government, although drone pilots are constrained by the rules of engagement, those authorizing targeted killings are much less constrained in making decisions. Under the Obama administration, the process for selecting targets for drone strikes was located entirely in the executive branch, with considerable latitude for discretion and little transparency throughout the president’s first term. Facing a possible defeat in the 2012 presidential election, the Obama administration began to develop a secret drones “rulebook” to govern their use if Mitt Romney were to be elected president.56 In 2013, President Obama laid out new standards for using drones for targeted killings and tightened the standards for permitting civilian casualties.57 The Obama administration also developed tight export standards and launched a joint declaration with fifty other countries on the import and export of drones.58 But President Donald Trump swept much of this away, loosening the standards for drone use and enabling exports to a wider variety of countries.59 Today, the United States has a vast killing machine at its disposal, but little transparency or accountability for how it is used by the Trump administration.

All of this suggests that allowing countries to regulate themselves might be insufficient. Instead, we should look to the United Nations or other international organizations to regulate drones and keep their users honest. One option might be to back the development of an international regulatory mechanism, along the lines of the Convention on Certain Conventional Weapons (CCW), which could establish rules for how drones may be sold and used.60 Alternatively, the United States could back the creation of a UN investigative body on drones which would help to collect information on how drones are used and shame those who use them carelessly or cruelly. A voluntary code of conduct for how drones might be sold and used, which the Obama administration explored, might be an idea worth reviving as drones spread around the world.

In the end, it is also important to realize that the drone age may be one in which everyone can take to the skies, but this accomplishment will not come without a cost. We will continue to risk turning our nightmares into reality until we come to understand how unmanned technology subtly shapes our behavior and the decisions that we make. Drone technology might open up new vistas for human accomplishment and allow many to follow their dreams, but it also alters the menu of choices that lies in front of us. As drone technology falls into the hands of more people, it enables shifts in risks and goals that are not always articulated or even understood by those using it. That drones are unmanned, and thus less risky in terms of human life, makes them seductive; it also makes things once considered too risky or too ambitious suddenly seem less so. To avoid the worst consequences that may flow from drones, we must learn to carefully measure how unmanned technology changes our strategic choices and to think through how a range of other actors—from repressive governments to terrorist organizations—will respond with their own use of the technology. Only by anticipating what drones do to ourselves and to others can we ensure that our embrace of unmanned technology does not come at the cost of our humanity.