4


Eyes in the Sky

As World War I crept closer in August 1914, most European statesmen believed that the coming conflict would be short and glorious. In early August, the German monarch Kaiser Wilhelm II told his troops, “you will be home before the leaves have fallen from the trees.”1 The German army general staff believed that the war with France would be a rout lasting no more than four weeks.2 Despite their different motivations and strategies, French officials largely concurred with this assessment, believing that a short, decisive war would forever resolve many of the tensions between the great powers of Europe. Some senior French officials seemed almost eager for war. In 1913, one senior general declared, “give me 700,000 men and I will conquer Europe!”3 Others were more circumspect about the prospect of gains from fighting but thought that the growing tensions in Europe could not be overcome in any other way. Across the continent, a sense of nervous anticipation prevailed, though only a few foresaw the disaster that World War I would become. The British foreign secretary Sir Edward Grey was among them. Looking across St. James’s Park at dusk on August 3, 1914, Grey quietly remarked to his companion that “the lamps are going out all over Europe, we shall not see them lit again in our lifetime.”4

When the war began, events moved quickly. The German war plan was premised on the view that German forces could sweep through Luxembourg and Belgium and knock out France in less than five weeks before turning their attention to Russia. But like most war plans, this one did not survive the first brush with reality. German forces faced stiff resistance from Belgian forces and the unexpected entry of Britain into the war on the side of France and Belgium. This panicked some of the German leadership—the Kaiser, always prone to nervous fits, wailed in anguish that he had never anticipated British entry into the war—but the German forces marched relentlessly forward. By early September, they had advanced deep into northeastern France and pushed the exhausted French forces back to within 30 miles of Paris. As Paris prepared for a siege, the French government fled to Bordeaux, while hundreds of thousands of French civilians left the capital. French General Joseph Joffre began to make urgent plans for a counterattack to break the German lines. Among the most important problems he faced was a lack of reliable information about the movements of the enemy. To solve this problem, he turned to the fledgling aerial reconnaissance teams formed by the Grand Quartier Général (GQG), the command staff of the French army.

The decision to develop an aerial reconnaissance capacity only a few years after the Wright brothers achieved powered flight was a controversial one. In Britain and France, many dedicated army officers were convinced that aerial operations would never supplant ground forces in importance; in fact, General Sir Douglas Haig told one of his fellow officers in 1911 that pursuing aerial operations was a waste of time.5 Most aerial reconnaissance units were placed under the control of the army even though there was a strong cultural and organizational bias against these operations among ground forces. Senior army officials described flying as a foolish indulgence, not something that real soldiers did. It did not help that aerial operations were very dangerous. Early aerial reconnaissance involved pilots going aloft in rickety planes, landing on makeshift runways near their ground forces, and recounting orally what they had seen from the air. These reports were often “imprecise, because in the excitement of the first taste of combat, the observers’ inadequate prior training frequently led them to misidentify troop nationalities and activities.”6 Army forces sometimes mistook their own planes for enemy aircraft and shot at them, an occurrence common enough to prompt militaries to paint flags and other colored markers on wings so that ground forces could distinguish them. Bad weather frequently forced the cancellation of flights, and turbulence was particularly punishing for the thin wings that held the plane aloft. Some early planes were slow (roughly 26 miles an hour) and prone to failing in midair or crashing with little warning.7 It is not surprising that many military officials initially saw aerial operations as a distraction that they could not afford as war approached. Marshal Ferdinand Jean Marie Foch, later supreme allied commander of Western forces, concluded in 1911 that “airplanes are interesting as toys but of no military value.”8

The perception of aerial reconnaissance improved somewhat when its advocates were able to capture photographic images of enemy positions from the air. These photos were naturally more precise and detailed than a verbal report and did more to convince skeptics that there was some value in the aerial enterprise. The Germans were the most sophisticated in aerial photography at the onset of the war and had a wide range of cameras available to take pictures of enemy positions.9 The French were the first among the Allies to take aerial photography seriously and by August 1914 had developed mobile labs for processing black-and-white images taken from the air.10 By contrast, the British army treated aerial photography as a bothersome novelty and even made pilots buy their own cameras for this purpose. The earliest demonstrations of aerial photography were fraught with danger because they required the pilot to fly the plane and operate the camera simultaneously. One British commentator remarked that the operations were so dangerous that observers of aerial reconnaissance exercises “came to scoff” but “remained to pray.”11

The urgent needs of the coming war overcame the organizational barriers to the development of rudimentary air forces, starting with France in 1910.12 At the beginning of the war, Britain had approximately 150 aircraft, France 160, Germany 246, and Russia about 150.13 Not all of these aircraft were used for aerial reconnaissance; in every country there were advocates of the strategic bombing of cities to break the morale of citizens, much in the way that the noted Italian general and theorist of air power Giulio Douhet predicted.14 Others saw value in aircraft but believed that their best use was to spot artillery fire or to direct ground forces. Germany even experimented with using planes for psychological warfare by dropping leaflets over Paris in late August 1914 warning that, “The German army stands before the gates of Paris. You have no choice but to surrender.”15 Yet for the most part the technology behind the aircraft operating in 1914 was not up to these tasks: the earliest attempts at strategic bombing were embarrassing failures, as the bombers were shot down or missed their targets by a wide margin. In 1914, the actual war in the air revolved around aerial reconnaissance. Although aerial reconnaissance first appeared during colonial engagements in Libya and Iraq, it proved its worth during what came to be known as the “miracle” of the Marne.

As French forces considered options to defeat the encroaching German forces in late August 1914, the chief problem they faced was detecting the location of the German army commanded by General Alexander von Kluck. At the Battle of Mons, information provided by two British pilots allowed the French to redeploy forces and hold off the German advance, thus allowing a fighting retreat against the advancing Germans.16 Eager to cut off Paris from the main French forces, von Kluck took his forces southeast and exposed the flanks of the German First and Second Armies.17 His goal was to destroy the French forces entirely before moving against Paris.18 British Royal Flying Corps (RFC) pilots were the first to discover this eastward shift of German forces and reported it back to the French ground forces. Although the reports were clear about the general direction of German forces, French military officials were reluctant to accept their findings because they did not correspond with other intelligence about the expected German war plan.19 The chief problem was not intelligence but organizational inertia: the GQG command was committed to a certain view of Germany’s intentions, backed up by human intelligence, and the information coming from aerial reconnaissance seemed to contradict it. But additional French operations—including those by Corporal Louis Breguet, a pioneer in the design of aircraft who flew his own prototype for this mission—eventually confirmed that von Kluck’s forces were indeed shifting eastward.20 French officials began to watch more closely, and over the next few days, British and French aerial reconnaissance provided irrefutable evidence of the movement of German forces. Crucially, this information allowed the French forces to adapt and lay a trap for the Germans.

Under the command of General Joffre, the French military repositioned their forces to create a pocket where German forces would be caught and annihilated by the French 5th and 6th Armies and the British Expeditionary Force (BEF). To do this, the French needed to move the 7th Division northward from Paris to the front, but they faced a shortage of available rail lines to move the troops. The French quickly moved most troops and equipment by truck and the remaining train lines, but to supplement these efforts they also famously commissioned taxis from Paris to carry nearly 4,000 infantry a distance of 30 miles to strengthen the French lines.21 By the time the 750,000-strong German army walked into the trap, it was facing one million French and British soldiers.22 The German forces, exhausted from previous battles and days of marching across the French countryside, found themselves enveloped by Allied armies. The battle began on September 6 and lasted for four days of intense fighting that saw the use of machine guns, small arms fire, and grenades for maximum carnage. The casualties were astonishingly high: one recent estimate suggests that as many as 300,000 soldiers perished during the First Battle of the Marne, mostly from the French and German forces.23

The consequences of the First Battle of the Marne were strategic, psychological, and symbolic. The immediate strategic consequence was that the German army was forced to retreat, withdrawing 40 miles to the River Aisne.24 There the Germans settled in for another round of fighting by digging the trenches that would come to characterize World War I. The success of British and French forces at the Marne effectively set the stage for the brutal trench warfare of the Western front and guaranteed that the war would last for years. Its psychological consequence was the shattering of German optimism that the troops would indeed be home before the leaves fell. After the Marne it was clear that by assuming they could knock out the Western Allies before facing the Russians, the German command had made a grievous miscalculation. The grim fact was that the war would be conducted on both fronts, and the Central Powers would face encirclement by their enemies amid battlefields that resembled a charnel house. For Germany, this would contribute to the psychology of victimhood that coursed through its political history during the twentieth century. Many Germans saw the Marne as one of many examples when the nation was on the cusp of victory only to have it snatched away at the last possible minute by unexpected developments, even betrayals.25

The symbolic significance of the Marne went even further. The miracle of the Marne was the first time that aerial reconnaissance played a decisive role in changing fortunes on the battlefield. It would not be the last. After the Battle of the Marne, no one could dispute the value of “eyes in the sky” for understanding events on the ground. The debate increasingly turned on how best to see this way, not on whether to do it at all. In a study of the effect of aviation on the war published in 1922, Walter Raleigh concluded that “reconnaissance, or observation, can never be superseded; knowledge comes before power; and the air is first of all a place to see from.”26 The information-gathering aspect of aviation was paramount and arguably more important than bombing or other functions of aircraft. In an interview published in the New York Times as the United States entered the war in 1917, Orville Wright acknowledged this development:

I have never considered bomb-dropping as the most important function of the airplane and I have no reason to change this opinion now that we have entered into the war. The situation shows that, as a result of the flying machines’ activities, every opposing General knows precisely the strength of his enemy and precisely what he is going to do. Thus surprise attacks, which have for thousands of years determined the event of war, are no longer possible. When the United States sends enough airplanes abroad to bring down every German airplane which attempts to ascertain the disposition of the armies of the Allies—literally sweeps from the heavens every German flying machine—the war will be won because it will mean that the eyes of the German gunners have been put out.27

Wright saw that taking to the skies would transform warfare from a contest of brute force into one of information. As Raleigh noted, information precedes power and changes how power is enacted. In Wright’s view, power is enacted not by destroying the enemy entirely but rather by rendering them incapable of acting—in his words, putting their eyes out—through achieving what would today be known as air superiority and information dominance. Although this line of thinking receded amid the Cold War emphasis on strategic bombing, it has resurfaced with modern aircraft and drones over the last decade or more. With drones and other modern technology, the United States now establishes air superiority as its first priority and conducts “effects-based” operations to nullify the freedom of action of its enemies on the ground.

The emergence of aerial reconnaissance had a more subtle effect in changing what combatants saw. Having planes in the sky provided an unparalleled view of the dimensions of the battlefield and an enhanced ability to track battlefield developments over time. The rest of World War I would see aerial reconnaissance used to track enemy movements, the positions of vehicles, and trench lines, and also to create tiled photo mosaics of entire battlefields.28 Perhaps more crucially, it changed the way that we see the battlefield itself, shifting from a soldier’s view of highly personalized combat like that described in the Iliad to a vertically oriented one in which soldiers and tanks were seen more as pieces moving on a chessboard. By its nature, aerial imagery offers a different vantage point on the features of the ground by mapping their relationship to each other spatially and portraying the earth anew in scale and proportion. Aerial imagery can also change how people respond to those elements of the terrain. As Paul Virilio noted, the battle between the GQG and those who saw the coming German advance from the air was not just a bureaucratic one, but one in which the advocates of aerial reconnaissance imposed a “point of view” on others.29

Today’s aerial imagery from drones and manned aircraft does the same. It imposes a point of view, providing unprecedented levels of detail about the targets themselves, but it also changes how the features on the ground are perceived by different parts of the military. Drone pilots perceive the figures on the ground with humanizing realism—for example, it is not unusual to watch targets perform daily tasks that become increasingly mundane to the viewer, or personal activities, for days on end—but with the remove and distance that comes from seeing from the air. For military commanders, drones hold out the hope of achieving a stereoscopic view of the battlefield that produces “information dominance” and allows them to fight while protecting civilians and their own personnel. It is these intertwined hopes—to learn enough about the battlefield that the knowledge overwhelms and paralyzes the opponent, and to use those advantages in information to fight so precisely that the carnage of World War I becomes impossible—that have led the United States and other countries to invest so much in drone technology since the Persian Gulf War. But this investment carries with it the risk of goal displacement: to learn more and more about the world until the US military is overwhelmed with images and data. The burden of this investment has gradually changed the organization and culture of the US military itself, while also pointing to even more changes in the way we fight in the years ahead.

Information Dominance

The Persian Gulf War of January 1991 is widely acknowledged as the onset of what became known as the “Revolution in Military Affairs” (RMA).30 While the concept is suffused with jargon, the underlying idea is simple: a revolution in the speed of communication and information processing has changed the nature of warfare. As a result, the United States—the country best poised to take advantage of these changes—can now fight more precisely and humanely than ever before. Gone would be the days of mass-scale strategic bombing imagined during the height of the Cold War. In its place would be a lean US military that could fight from the air with a level of sophistication that made blunt force unnecessary. One early articulation of the RMA highlighted four essential elements of this transformation: (1) extremely precise, stand-off strikes, largely from the air; (2) dramatically improved command, control, and intelligence; (3) information warfare; and (4) non-lethality.31 The promise of an RMA-inflected war was that it could be fought with such superior battlefield intelligence that accidents and “friendly fire” incidents could be dramatically reduced. In the words of one US general, the goal of the RMA was to “abolish Clausewitz.”32 The United States would be able to use data collected from satellites, aerial imagery, and signals intelligence to construct a richly textured picture of the battlefield and to estimate, even predict, the strategic behavior of the enemy.

Over time, the US military’s impulse to learn as much as it could about the battlefield and to operate nimbly as a consequence has been enshrined as military doctrine. Although drones played only a small role in the Persian Gulf War, battlefield success against Saddam Hussein’s forces was cited as a proof of concept for the RMA and expanded the ambitions of the Pentagon’s elite planners to develop a superior knowledge of the battlefield to triumph over future enemies. This emphasis on the crucial role of superior battlefield information is reflected in different iterations of the Pentagon’s doctrine and operational concepts over the last two decades. In the mid-1990s, the Pentagon sought “dominant battlespace awareness” through its superiority in the information age; some Air Force planners went even further to anticipate “predictive battlespace awareness” that would allow the United States to guess what the enemy was planning to do before they did it.33 By early 2000, the Pentagon had published a strategic plan for confronting conflicts in 2020 that called on the United States to use its intelligence-gathering capacity to maximize all of the elements of its power and to achieve “full-spectrum dominance.”34 Some optimistic analysts argued that the information technology revolution would now allow the Pentagon to act more like private companies that use their dominant knowledge of the marketplace to lock in permanent advantages for themselves.35 What united all of these formulations was the assumption that the United States would possess unique advantages over its enemies now that an unprecedented level of detail about enemy actions was in reach.36 Alongside this assertion of opportunity for the United States was a conceptual shift that recast war as a competition for information.
To win battles, the United States would know more than its opponents; to win the war of ideas and perception on a global level, it would deploy that knowledge to fight precisely and to spare civilians unnecessary harm.

Early efforts at capitalizing on the RMA with drones were not wholly successful. In the Persian Gulf War, Pioneer drones were used to locate targets for the Navy’s 16-inch guns, but otherwise they had relatively limited battlefield utility.37 Throughout the 1990s, drones were deployed for a variety of reconnaissance missions in Iraq, Bosnia, and Kosovo, but they were seen as unreliable and could not always operate in poor weather. The technology lagged behind what was necessary for an efficient, precise use of airpower. In particular, the time lag between the collection of video of targets on the ground and the delivery of those images to military commanders was too long for the video to be used for effective targeting. When they were delivered, the images were also not sufficiently detailed to be useful. In some cases, these images were so narrowly cast that pilots described the view as seeing through a “soda straw.” Initially, the drive for information dominance from the Persian Gulf War onward was carried on the back of other technologies, such as manned aircraft, satellites, GPS, internet-based communication technologies, and advances in computing power. By the early 2000s, drones were able to play an enhanced role in the pursuit of Intelligence, Surveillance, and Reconnaissance (ISR) tasks because some of the technological problems, most notably the time lag for video transmissions, had been overcome.

The growing importance of drones was seen at both the strategic and the tactical level. At the strategic level, the United States began to experiment with replacing the U-2, a workhorse aircraft that had conducted high-altitude surveillance for decades, with equivalent drone models such as the Dark Star and the Global Hawk. The difference between the two was their operating environment. In development since the mid-1990s, the Dark Star was made of non-metal composites and was designed for stealth operations in a high-threat environment.38 The Global Hawk, by contrast, was much closer to a conventional U-2 aircraft and was designed for surveillance operations in uncontested airspace (fig. 4.1). By the end of the 1990s, the Dark Star was abandoned due to problems with reliability and cost, effectively conceding that the United States would lack drone surveillance capabilities in places where an adversary might try to knock them from the sky with manned aircraft or missiles. But the Global Hawk persisted and found a new life as the battlefield shifted to countries like Afghanistan and Iraq. Since the United States had air superiority in these environments and faced no risk of being shot down, the less stealthy Global Hawk could thrive. Described as “the theater commander’s around the clock, low-hanging (surveillance) satellite,” the Global Hawk can fly as high as 65,000 feet and monitor an area the size of the state of Illinois.39 Its chief advantage is endurance: with no pilot or crew who need to rest, it can loiter for twenty-four hours or more without needing to land. It takes both still images and video and flies nearly autonomously between set waypoints around the globe, controlled from a ground station in the continental United States, though it can be diverted to meet an urgent operational need.
The Global Hawk is equipped with sophisticated radar and infrared cameras; it can track moving targets and collect signals from mobile phones and other electronic devices.

image

Figure 4.1 Global Hawk drone photographed in a hangar in South Asia in 2006.

Although the Global Hawk was designed for high-altitude surveillance—taking pictures of fixed locations as the U-2 did—it was increasingly used in tactical operations. Between 2001 and 2003 in Afghanistan, the Global Hawk flew more than fifty missions and logged 1,000 combat hours.40 As the journalist William Arkin noted, it gave “commanders something they never had before: a persistent, wide-angle view of the battlefield.”41 The soda-straw view was no longer. Its infrared cameras and synthetic aperture radar were also used in Iraq to identify targets such as vehicles and groups of people through sandstorms.42 One Global Hawk drone, nicknamed Grumpy, was used intensively through twenty-one straight days of fighting in Iraq in 2003 and was responsible for providing information that led to the destruction of 300 tanks.43 The Global Hawk went through a number of technological developments and gradually absorbed more bandwidth. One estimate in 2003 suggested that each Global Hawk drone would use 1.1 gigabytes per second of bandwidth, ten times the total bandwidth used by the entire military in the Persian Gulf War.44 The Global Hawk could take a picture every few seconds, but its chief limitation was the bandwidth needed to convey the images.45 It was recording more information than ever before and, as a result, an insatiable demand for overflights was generated across all levels of the military.

Medium-altitude drones, such as the Predator and Reaper, were soon deployed in the Iraq war for a variety of reconnaissance and targeting purposes. Although more attention has been paid to the targeted killing functions of Predator and Reaper drones, these medium-range drones were just as often devoted to reconnaissance of the battlefield as to direct military action. The Pentagon developed two designations for Predators—RQ for reconnaissance and MQ for multi-mission—to reflect the different emphases of various operations. Operated by different organizations, including the CIA, the US Air Force, and some National Guard units, Predator and later Reaper drones were flown over active combat zones (such as Afghanistan and Iraq) and over unofficial battlefields, collecting still images and video. One Reaper pilot described a typical mission, working for a “customer” in the military or intelligence community:

The typical mission for a conventional RPA is that you are in an [Air Tasking Order]. You are assigned to a ground entity. Somebody looks at your feed. You will get a brief and a chat with your customer. Once that happens, you are at the mercy of your customer. Some days [the mission] is to stare at a building, some days it’s to scan a region or to escort a UN convoy. There is never a time when your commander in a squadron tells you what to do. It’s always the customer.46

Many Predator and Reaper missions involved support of ground forces or surveillance of potential targets. The geographic scope of Predator and Reaper deployments expanded over time beyond active conflict zones like Iraq and Afghanistan to include the broader Middle East, North Africa, and East Asia. Over time, the sensor equipment on medium-sized drones grew increasingly sophisticated and was able to see in as many as thirty directions at one time, with plans to develop sensors that could see in as many as sixty-five directions.47

The role that reconnaissance Predators and Reapers played in the hunt for the improvised explosive devices (IEDs) that were killing and wounding US troops amid the insurgency in Iraq shows how they could have direct tactical value. In 2006, the Pentagon spent millions of dollars setting up Task Force Odin, a specialized US Army unit designed to detect and ultimately foil IED attacks in Iraq. To do this, it equipped Predators and other medium-range drones, such as MQ-1C Gray Eagles, with highly classified “black boxes” designed to track and detect evidence of IEDs being planted to harm US troops.48 The goal was to achieve persistent surveillance of an area so that those placing the bombs could be conclusively identified. In the US military, this was called getting “left of the boom”: finding evidence of the emplacement of potential IEDs and disrupting attacks before they affected troops on the ground. Visual imagery from these black boxes—for example, unusual movements by individuals and vehicles, or even signs of bomb emplacement like displaced brush by the side of the road—was combined with other sources of intelligence—such as spies on the ground and electronic intercepts—to prevent IED attacks. In some cases, Gray Eagle drones were fitted with a laser designator that would locate, or as pilots say “sparkle,” a target with infrared on the ground and allow other, manned aircraft to fire a Hellfire missile at it. Gradually, the Pentagon authorized a shift to “attacking the network” that produced the IEDs in Iraq. This meant that the drones deployed by the United States would convey imagery and intelligence to ground forces to allow them to identify and target those responsible for making and planting the bombs, but it also meant that armed drones such as Predators would launch missiles against these individuals if needed.49 The result was a dramatic reduction in the number of IED attacks against US soldiers in Iraq, even if the tactic itself spread to other theaters.

Without much fanfare, the Gray Eagle became an essential tool for hunting and destroying insurgents in Afghanistan and Iraq. As journalist Sean Naylor revealed in 2019, Echo Company of the US Army’s 160th Special Operations Aviation Regiment, based out of Kentucky, developed a reputation for being the most lethal unit in the army due to its extensive use of Gray Eagle drones. In less than twelve months between 2014 and 2015, the company was responsible for killing 340 enemies in Afghanistan and the Iraq–Syria region. It used Gray Eagle drones, each equipped with signals intelligence equipment and four Hellfire missiles, to hunt ISIS operatives across the battlefield. Crucial to this effort was the fact that Gray Eagle drones could remain in the air for up to twenty-four hours, providing near-continuous coverage at high altitude and allowing for close combat support. In the words of one former officer, “If you’re looking to whack somebody who needs whacking, then send the Gray Eagle.”50

One of the distinctive features of this effort was that it was conducted alongside a corresponding effort to destroy the insurgent networks, rather than just the high-value targets (HVTs) themselves. This was done with imagery coming from drones, but also with intercepts of cell phones. As the wars in Afghanistan and Iraq continued, the United States capitalized on insurgents’ frequent use of cell phones to track their locations through a variety of intercept methods, with a key role played by drones equipped with sensors. These drones captured imagery and signals intercepts from cell phones and other communications and combined all of this to track the movements of targets over time. An internal NSA document from 2005 described these sensors as the war on terror’s equivalent of the “Little Boy” nuclear bomb that destroyed Hiroshima in World War II.51 Such a revolution required substantial changes in the bureaucracy and information infrastructure of the US military and intelligence establishment. To deal with the flow of data coming from drones, the United States built a database holding information on SIM cards for known insurgents, collecting metadata on who they called and deploying that data to build a picture of the network responsible for an attack. This data was known to be imperfect because many people in these theaters of war would trade SIM cards with each other, making it difficult to know exactly who carried a particular number. Yet the effort provided a better picture of the networks themselves and could be combined with signals intelligence, spies on the ground, and other sources to identify targets. The result was that the United States began to build a “counterterrorism machine” premised on the ability to sift through and merge this data for target identification.
One way that this was done was through a novel NSA program called the Real Time Regional Gateway, which scooped up data from drones and other surveillance resources and cross-referenced them to paint a picture of the enemy and their intentions. This allowed the Pentagon to “collect the whole haystack” rather than look for the needle.52 Deployed in Iraq in 2007 and in Afghanistan in 2010, the Real Time Regional Gateway also showed the degree to which the concept of information dominance became central to US thinking about war. The functionality of drones was merely being folded into a wider US military effort to know as much about the battlefield as possible.

Aside from these ISR functions, ground support—that is, using an aircraft to assist with the progress of a mission on land—became an important part of the medium-sized drone’s repertoire and vastly increased the demand for them. Only a few years after they had been introduced in Afghanistan and Iraq, ground commanders were calling for Predators and Reapers for battlefield reconnaissance so that they could avoid sending their soldiers into dangerous situations. In 2008, the US Air Force reported that it had passed 400,000 hours of Predator drone use, due in good measure to demands for ground support.53 By 2019, the Predator and Reaper together had reached more than four million flight hours.54 One official remarked that “the warfighter can’t get enough Predator time.”55 To meet demand, the US Air Force began to organize drone overflights as distinct combat air patrols (CAPs). According to a Pentagon estimate, the Predator alone increased its number of CAPs by more than 520% between 2001 and 2009.56 The problem the Pentagon faced was its supply of drones, which was limited by its budget and by the inevitable delay in getting sophisticated military technology delivered to the battlefield. By the end of the decade, Defense Secretary Robert Gates found himself under pressure from Congress to increase the number of drones on the battlefield. This dynamic—with military commanders, Pentagon leadership, and Congress responding to each other’s demands for more drones—in large measure explains the vast expansion of the US drone fleet over the last decade.

This desire for more battlefield intelligence was also seen in the extensive use of small drones for tactical reconnaissance in Iraq. Of particular importance in this respect is the Raven drone. Made by AeroVironment in California, the Raven is by numbers alone the most important drone in the Pentagon’s fleet, comprising over 7,000 of the 10,000 drones estimated to be owned by the US military in 2014.57 The Raven, which derived from an earlier drone called the Pointer, is a lightweight small drone designed for “over the hill” reconnaissance.58 Simple to assemble and small enough to be carried in a backpack by individual soldiers, it is flown using a simple screen-and-joystick interface that resembles a Nintendo Wii U controller. When later equipped with a more powerful camera and a lithium-ion battery, Raven drones were among the most popular with US troops because they provided a quick and efficient way to detect whether insurgents were hiding in out-of-sight locations or planning an ambush. Crucially, they were adaptable to new technologies, and cheap enough that they could easily be replaced if destroyed. By the end of the decade, up to sixteen Raven drones could fly in tandem to relay information back to ground stations, producing a more detailed account of the battlefield for ground commanders than was ever before available.59

The success of these drones led to a vast increase in demand for tactical drones in the US Army. The idea was that every soldier should ideally have a small, highly functional nano-drone which he or she could deploy at will. While nano-drone capabilities and costs lagged behind this ambition for a number of years, the army eventually got its wish. In 2019, the US Army awarded a $39.6 million contract to FLIR Systems, an Oregon-based company, to make Black Hornet Personal Reconnaissance Drones available to soldiers on the ground in combat zones.60 These drones are small, measuring only 6.6 inches across, and light (33 grams). Resembling a small helicopter, they can fly up to 1.24 miles away from their operator and remain aloft for 25 minutes (fig. 4.2).61 Each Black Hornet has two daytime cameras and thermal imaging capability, and can fly at night and in heavy wind.62 The drone is quiet, so those on the ground cannot hear it approach, and in theory it can operate indoors and even in caves. One of its most crucial advantages is cost: the Black Hornet runs only between $15,000 and $20,000 per unit, significantly less than a Raven or a comparable alternative.63 As they become more popular and widespread, the Black Hornet and nano-drones like it will provide ground forces with enormous tactical advantages in finding and striking enemies, but they will also begin to change the soldier’s “point of view” on the battlefield.

image

Figure 4.2 Royal Marine carrying a Black Hornet 2 Remotely Piloted Aircraft System (RPAS), 2017.

This demand for Ravens and Black Hornet drones shows the extent to which ground commanders are demanding “eyes in the sky” for reconnaissance before approaching risky targets. Some pilots have defended this as natural given the cultural changes in the way the United States fights and the value that it places on the life of its personnel. In the words of one former Reaper pilot:

The demand for information on the battlefield is crucial—and why not? If we knew where the bunkers were on Normandy, and we had the option not to drop so many bombs only to get 15% of the tunnels, why wouldn’t we do that? We could have done it without losing that many lives. Now, we don’t have the need to drop bombs like that anymore. If an RPA pilot can switch on an infrared camera, or do a pattern of life, to save the life of the guy kicking down the door of the bad guys, I’d want it. I don’t blame ground commanders for asking for this.64

Yet this growing demand began to put a strain on pilots of medium-altitude drones who believed that it was now impossible to meet all of the battlefield reconnaissance demands they faced. The US military is increasingly attached to an ISR capability that is premised on extraordinary knowledge of the battlespace; this has shifted from a desired advantage to a requirement. In some respects, as key elements of the US military became accustomed to knowing more before they acted, the tempo of operations may have slowed. At a conference in September 2016, the chairman of the Joint Chiefs of Staff General Joseph Dunford noted that the military had increased the number of ISR aircraft by 600% since 2007 but was still only meeting 30% of the requests from combatant commands for overflights.65 Such a path, he argued, was fundamentally unsustainable. As one senior Air Force commander and drone pilot put it, “what we really need to attack is how we make decisions, not how much information we have to make those decisions . . . So let’s get off that glide slope and get at what is our risk calculus, how do we determine that, and make those decisions off the information that we have, not what we want to have.”66

Despite these warnings, the demand for information continued to grow. Many drone pilots maintained that a slow and deliberative targeting process was best suited to avoiding mistakes and civilian casualties, as well as the institutional and legal penalties that flowed from them. This demand also began to affect the tempo of operations in contradictory ways. At a minimum, the speed of the demand for and supply of information on the battlefield had increased, although not equally. In the words of Lt. General Deptula:

This is a result of modern technology, it has accelerated, to use a word on top of a word, an accelerated set of capabilities which has also resulted in a greater demand for information. So there’s some good and bad associated with that. The good is the arrival of information in enough time for a commander or operator to use it. That’s very beneficial. The bad is the overwhelming mass of data that can result in slowing down operations while people wade through the vast amount of data to determine its utility.67

The danger is that the tempo of operations could slow because ground commanders and pilots could always demand another overflight to know more about the battlefield and the risks they faced. Yet one senior drone commander noted that a slower tempo and increased situational awareness allow the United States to make war less of an “ugly thing” and should be defended morally on those grounds.68 A slow war, he argued, is a precise war and one that is likely to be more humane.

Through the Global Hawk, Predator, and Raven, the United States was deploying drones to create an ISR capacity far beyond what anyone could have imagined a decade earlier. As late as the Persian Gulf War, pilots had to return to base to develop their film from reconnaissance missions before sending it for analysis and presenting it to decision-makers. The result was a time lag between when information was received and when a decision on what to do was made. This time lag was important because it could allow enemies to move or escape, lengthening what became known as the “kill chain,” or the steps and time needed to move from identifying a target to acting against it. The emergence of drones allowed that kill chain to be shortened dramatically, but it also placed additional pressure on intelligence analysts to urgently confirm the identities and activities of potential targets. While the widespread availability of drones made war slower and more deliberative, it also made it more information intensive, as analysts had to assemble an almost criminal case against those they wished to kill. This reinforced a model of combat within parts of the US military that was closer to hunting and even executing criminals than to warfighting in the conventional sense.69

To fight this way, the Pentagon sought to add more bandwidth to its existing networks and to retrofit drones for mobile communication. One reason for the growing demand for bandwidth was that communication devices and small drones like the Raven given to soldiers were becoming so complex that they were absorbing more bandwidth for data. For example, during the 1990s, the US military used 90 megahertz of bandwidth for approximately 12,000 troops. By contrast, it needed 305 megahertz for 3,500 troops in 2014.70 In 2013, the United States leased bandwidth from a Chinese satellite to sustain drone operations and other communication tasks in Afghanistan and Iraq, even though doing so exposed it to considerable risks of espionage.71 As early as 2005, the Pentagon noted that “airborne data link rates and processor speeds are in a race to enable future UA capabilities.”72 The problems were clear: poor connectivity, costly satellites, and stove-piped infrastructures that prevented the flow of information, which in turn led to poor information sharing.73 Today, the Pentagon is working with private companies to construct a networked communications infrastructure that combines high capacity and robust data links (to prevent hacking) with powerful processing systems to collect, transmit, and even analyze visual images and data. An even longer-term goal is to build smarter drones that would transmit “the results of their data to the ground for decision-making.”74 In other words, the next generation of smart drones, perhaps enabled by artificial intelligence, will do the first cut of intelligence analysis before sending their images onward.
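The bandwidth figures above imply a striking growth in demand per soldier. As a back-of-the-envelope sketch (the per-troop comparison is this author's own arithmetic on the cited numbers, not an official Pentagon metric):

```python
# Rough arithmetic on the bandwidth figures cited in the text:
# ~90 MHz for ~12,000 troops in the 1990s versus 305 MHz for
# 3,500 troops in 2014. Illustrative only.

mhz_1990s, troops_1990s = 90, 12_000
mhz_2014, troops_2014 = 305, 3_500

per_troop_1990s = mhz_1990s / troops_1990s   # 0.0075 MHz per troop
per_troop_2014 = mhz_2014 / troops_2014      # ~0.0871 MHz per troop

growth = per_troop_2014 / per_troop_1990s
print(f"Per-troop bandwidth grew roughly {growth:.0f}-fold")  # roughly 12-fold
```

In other words, even as deployed forces shrank, each soldier's slice of the spectrum grew by an order of magnitude, which is why the Pentagon resorted to measures as unusual as leasing capacity on a Chinese satellite.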

By the end of the decade, the wartime experiences in Afghanistan and Iraq had created a vast infrastructure of data collection and imagery from drones and other sources. In 2009 alone, US drones collected the equivalent of twenty-four years’ worth of video footage, more than could ever be processed by individuals from the intelligence community.75 This was only a small portion of the total data gathered by the NSA, which collected every six hours as much data as is held by the Library of Congress.76 But the amounts of data continued to increase. By 2012, the US Air Force was recording fifty days’ worth of video every twenty-four hours and flying 1,500 hours of airborne ISR missions every day.77 Another estimate in 2015 suggested that the air force’s Distributed Common Ground System (DCGS) collected 100 terabytes of data every eighty hours.78 The Air Force itself estimated that it had spent $67 billion on ISR operations since September 11 and that the number of ISR platforms increased 238% between 2008 and 2012. ISR platforms now constituted one-third of all US military aircraft.79 If this was not managed properly, as Lt. General Deptula noted, the US military could soon be “swimming in sensors and drowning in data.”80
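To give these volumes a sense of scale, the cited figures can be converted into sustained rates; a rough sketch, assuming decimal terabytes (the conversion is this author's illustration, not an Air Force calculation):

```python
# Converting the data-volume figures cited in the text into sustained
# rates, assuming decimal units (1 TB = 10**12 bytes). Illustrative only.

SECONDS_PER_HOUR = 3600

# DCGS: ~100 terabytes collected every eighty hours (2015 estimate).
dcgs_bytes = 100e12
dcgs_rate_mb_s = dcgs_bytes / (80 * SECONDS_PER_HOUR) / 1e6
print(f"DCGS intake: roughly {dcgs_rate_mb_s:.0f} MB/s sustained")  # ~347 MB/s

# Video: fifty days of footage recorded every twenty-four hours (2012),
# i.e., footage accumulates fifty times faster than it can be watched.
accumulation_ratio = (50 * 24) / 24
print(f"Footage accumulates {accumulation_ratio:.0f}x faster than real time")
```

A sustained intake in the hundreds of megabytes per second, around the clock, makes clear why no force of human analysts could keep pace with the feeds.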

One solution was to expand the ranks of the intelligence analysts employed to analyze the video feeds from drones as well as other data. In 2010, the air force reportedly hired another 2,500 intelligence analysts to cope with the volume of data it was generating.81 By 2012, the air force had 20,000 airmen operating spy planes and analyzing the resulting intelligence.82 To a much greater extent than their critics realized, drones are manpower-intensive, requiring fifty-nine people in the field doing launch and recovery, forty-five on mission control, and eighty-two analyzing the data gathered.83 Many of these analysts come from the air force and the intelligence agencies (CIA, NSA, Defense Intelligence Agency, and others), but a growing number are private contractors paid to wade through the thousands of hours of footage collected. This is one reason why drone operations have so often been likened to a machine: they form a vast, labor-intensive, complex system that collects information and finds what is useful for targeting.84 One estimate suggested that the air force alone could need 117,000 people devoted to exploiting motion imagery by 2015.85

All of these analysts face a significant challenge. The chief problem with collecting such a haul of data is separating what is useful from the substantial level of noise in the data itself. According to Deptula:

This produces another problem which is how you separate the wheat from the chaff. If you look at airborne surveillance capabilities nowadays, it can collect with an MQ-1 or MQ-9 with an electro-optical imagery ball that can look through a “soda straw,” an area of the earth around 800–1,000 feet in diameter. With wide-area surveillance pods you can now look at an area about 8km in diameter. You have multiple orders of magnitude greater data involved in that look, but you also find that 99 percent of what is collected is not actionable data—it is “chaff.” This has some impact on the issue of how you separate the elements you are interested in from the rest of the stuff. That is a growing problem.86

The shift toward information dominance had gone beyond collecting useful information about the battlefield and become an effort to collect everything that might be relevant, even if that produced real difficulties in curating and analyzing that information. The voracious appetite for information seen across the military and intelligence establishment resulted in an almost unchecked collection of data outside US territory, a fact later emphasized by Edward Snowden’s NSA revelations. Such an approach does not always yield quality intelligence. As one former drone pilot put it: “we’ve been taking the ‘Snowden approach’—collect it all and sort it out later. The problem is that different people—civilians, pilots, intel analysts—may all see different things with that data. We have tested this case with some imagery and we found that different groups saw entirely different things. Even the intelligence analysts from the military and civilian agency saw different things.”87 Less noticed in the controversy over NSA surveillance was the role played by drones in that effort and the unprecedented bureaucratic and logistical problems that emerged when converting that data haul into intelligence. For the Air Force, this effort was driving a partial cultural and organizational shift away from its primary emphasis on destroying targets and toward becoming a flying intelligence service devoted to finding and fixing those targets.

The natural culmination of the drive for information dominance was an effort by the US government to track all signs of life in a small city or town. As chapter 1 noted, the United States has been developing the Gorgon Stare, a video capture technology that offers persistent, wide-area surveillance of small towns, since 2009.88 Attached to the Reaper drone, the sensors of the Gorgon Stare can provide a wide-angle view of the battlefield and record and store video for up to thirty days. It also has the highest-resolution camera in the world, at 1.8 billion pixels.89 With the Gorgon Stare, intelligence analysts can rewind or fast-forward the video to learn where bombs appeared from or where trucks went; it offers a panoptic view of the battlefield over time unlike any produced by other sensors.90 The expansion of the Gorgon Stare and the surveillance capacities of other drones has placed even more pressure on the Pentagon: in 2012, it was estimated that the program could collect the equivalent of 53,000 full-length movies of video per day. To deal with this unprecedented flow of data, the Pentagon installed new computer systems and turned to ESPN for its techniques for processing, analyzing, and preserving video images from sports.91 The Pentagon also turned to Google, which agreed to help deploy artificial intelligence to interpret and decipher the video images in the hopes of improving intelligence analysis and targeting decisions. This project, called Project Maven, attracted considerable controversy and was canceled in May 2018 after a number of Google employees publicly objected to the idea of helping the military with drone strikes.92 With both projects, the idea was to reduce the burden on the intelligence infrastructure by culling the millions of hours of footage, but also to permit instant recall so that analysts could investigate events and people over time. It was an admission that the unrelenting quest for “information dominance” through drones and other means had stretched the capabilities of the military and subtly transformed its roles and responsibilities. It was also an indication of how this quest had begun to change how the United States fights its wars.

Precision and Humanity

In contemporary US doctrine and strategic thought, the quest for information dominance has been accompanied by an emphasis on acquiring and using precision weapons. The dream of precision in warfare is an old one. As early as World War II, US strategists were hailing new aircraft and bombs as capable of hitting targets with unprecedented levels of precision.93 But what counted as “precise” at the time was very different from now. Many World War II-era precision bombs fell within hundreds, if not thousands, of feet of their intended targets.94 Even by this standard, precision weapons made up only a tiny portion of the overall aerial attacks during World War II. In the Vietnam War, interest in precision weapons grew because US planners were eager to avoid killing civilians and adding fuel to the anti-war protests in the United States. While US planners often touted their precision bombing campaigns to a skeptical public, precision bombs accounted for less than 1% of the total bombs dropped in the Vietnam War.95 Both the technology and the policy for protecting civilians in aerial bombardment lagged behind the glowing rhetoric around precision weapons during the Vietnam War and many of the Cold War proxy wars that followed.

Despite this fact, the United States and other leading military powers never gave up on precision weapons because they offered the promise of a “surgical” form of warfare that could destroy enemies while sparing everyone else. The Pentagon invested heavily in precision weapons for decades following the end of the Vietnam War, all in the hopes of achieving a dramatic change in what was possible in targeting enemy forces and avoiding “collateral damage” in the civilian population. The search for these weapons was hardly altruistic; precision weapons could also shield the US military and public from some of the costs of war. Especially if employed with information dominance, precision weapons could spare the lives of US personnel by avoiding mistakes and friendly fire incidents while minimizing the backlash that naturally comes when bombs go astray. Yet perhaps the greatest attraction of precision weapons is that they could conform to the principle of humanity as defined under international law, which requires governments to fight as carefully as possible and to spare civilian lives whenever possible.96 Seen in this light, the seductiveness of precision weapons becomes apparent: it reconciles what you have to do in war (i.e., to fight) with what you are morally obliged to do in most normal circumstances (i.e., to save lives). Advocates of precision warfare pointed to the overlapping strategic, legal, and ethical rationales and emphasized the twin benefits of a humane approach to fighting: that it was not only strategically wise but also a moral good worth pursuing on its own terms. These two imperatives—to make war precise and in doing so to make it humane—have been intertwined in almost all US debates over airpower since the mid-1970s.97

Following the Vietnam War, US military officials incorporated this principle into military doctrine and practice, subtly transforming targeting standards and ruling out aerial strikes considered acceptable in prior decades.98 The Persian Gulf War was the first time that the technology caught up with the doctrine of precision warfare. By capitalizing on 1980s advances in satellite technology, the United States was able to employ its superior knowledge of the battlefield to generate overwhelming battlefield success against Iraqi forces. It did this through a targeted campaign to hit leadership and key infrastructure targets while avoiding the high levels of civilian casualties that accompanied Vietnam-style aerial bombardment. The Persian Gulf War also showed the extent to which the Pentagon had internalized the concern for humanity in warfare by developing mechanisms of legal oversight and accountability that governed target selection. US lawyers carefully reviewed and authorized prospective strikes to ensure that they minimized harm to the civilian population while achieving maximum “effect.” The United States touted the degree of precision in its war strategy, dazzling journalists with videos of its “smart bombs” and engaging in a sustained public relations campaign about how precise and humane its new way of fighting was. A number of prominent commentators saw the Persian Gulf War as a harbinger of a new type of warfare that was both precise and humane, in contrast to the bloody battles of attrition that ended the largest conflicts of the twentieth century.

The reality in the Persian Gulf War was more complicated than these laudatory accounts suggested. Without a doubt, it was a dress rehearsal for the new age of precision warfare, but the flaws in the production were evident. Many “smart bombs” turned out to be more propaganda than fact—some missed their targets or failed after launch. While the accuracy of precision bombs had vastly improved compared to previous decades, they remained a small percentage of the overall bombs dropped—only 9% of bombs dropped in the Persian Gulf War were classified as precision munitions.99 Some experts concluded that the greatest impact of precision weapons was psychological, demoralizing the Iraqi army and convincing it to avoid direct military engagements.100 Even so, the war was hardly bloodless. Approximately 20,000–26,000 military personnel and 3,500 civilians died during the Persian Gulf War.101 In reality, what made the war look “clean” was not that few Iraqis died—General Norman Schwarzkopf remarked in a televised interview that the actual Iraqi casualty figures did not matter—but that few US military personnel did.102 The true promise of an RMA-inflected war was that it was clean for those who waged it. Only as a secondary issue did it matter whether the war spared the enemy or civilians caught in its midst. Perhaps the greatest impact of the Persian Gulf War was on language. While the war itself did not usher in a new model of conflict, it did show how concepts of precision and humanity were beginning to change how the United States talked about warfare. The United States now talked about the importance of saving civilian lives but also employed antiseptic euphemisms like “collateral damage” designed to obscure when this did not happen.

Subsequent conflicts—the wars in Bosnia (1990–1995), Kosovo (1999), Afghanistan (2001), and Iraq (2003)—showed a maturation of the operational emphasis on achieving precision and preserving humanity in combat. The Pentagon got significantly better at fighting with precision weapons and portrayed itself as fighting wars that were restrained in purpose and scope. These concepts also influenced how these wars were presented to the public. The interventions in Bosnia and Kosovo were cast as “humanitarian” in nature because they were designed to save civilians from repression and civil war. The United States defended the principle of humanitarian war—waged from the air, largely bloodless for US personnel—against accusations that such a concept was a grotesque contradiction in terms. The emphasis on precision and humanity was gradually incorporated into official doctrine, though not always in those words. One widely cited US Air Force formulation—so-called “net-centric warfare”—called for targeting the nodes of networks to force them to collapse while avoiding population centers entirely. Within the US Air Force, another influential conceptualization of this way of fighting, by Lt. General Deptula, called for “effects-based operations”: highly targeted operations against nodes of the enemy’s defenses, disabling them to achieve “effects” without sequentially destroying all elements of their military power.103 Deptula argued that the changes underway in developing battlefield intelligence and translating it into targeted attacks meant a change in the character of warfare “analogous to the difference in the world views between Ptolemy and Copernicus.”104 While this conclusion is debatable, many senior military officials were persuaded that the deployment of precision weapons represented some kind of step-change in the way that fighting was done.105 By the Iraq war in 2003, 66% of US munitions were precision-guided.106 This embrace of precision warfare appeared to pay dividends. The rapid collapse of the Taliban in Afghanistan in 2001 and of Saddam Hussein’s forces in Iraq in 2003 lent support to the hypothesis that the United States had developed a new “way of war” that left grinding attrition behind and relied on the precise use of force against combatants through air power.107 After seeing the success of drones against Iraqi armed vehicles in 2003, the US Air Force declared that it was now the age of “mass precision.”108

The insurgencies that emerged after the overthrow of the governments in Kabul and Baghdad dampened some of the enthusiasm for the revolutionary potential of precision aerial warfare, but they did not eliminate it. Instead, air power advocates recast their arguments and maintained that counterinsurgency campaigns could use precision air power.109 In these conflicts, air power could be used to assist ground forces, to enable strikes on individuals, or to intimidate, even unnerve, insurgents.110 If precision air power could defend civilians from insurgent attacks, it could win hearts and minds in counterinsurgency campaigns. By combining information dominance with precision, air power offered the possibility of pursuing “bad guys” while saving civilian lives even in messy counterinsurgency campaigns where the enemy was embedded in the civilian population. This newfound application of precision air power to counterinsurgency began to enhance its moral hue. One advocate of precision air power described the new ethic of fighting as substantially different from traditional warfighting:

Instead, the US public standard for military action now seems to resemble the ethic that prevailed on old TV Westerns: The good guy—the one in the white hat—never killed the bad guy. He shot the gun out of his hand and arrested him. Modern air power may not solve every military problem, but thanks to the innovations of the last decade, it is the weapon in the US arsenal that comes closest to fulfilling that goal.111

Instead of attempting to destroy the enemy or force them to surrender en masse, the United States should aim to locate the “bad guy” among the general population and kill that person without affecting the rest of the population. This is a radical departure from the traditional idea that warfare involves punishing the enemy as a collective until they surrender. Rather than warfighting, this approach is closer to the ethic of the hunter, according to French philosopher Grégoire Chamayou.112 The advantage of a manhunting approach, according to one JSOC analyst, was that the United States could target key individuals of a terrorist network “without resorting to the expense and turbulence associated with deployment of major military formations.”113 While the United States attacks the target from the sky, it is invulnerable to a direct counterattack, a position that has led some to call this approach either assassination or “safe-killing.”114 This approach was a natural outgrowth of US superiority in precision-guided weapons and aerial platforms. Only the United States was capable of employing its information dominance to find that “bad guy” and employing precision munitions to eliminate him. Only the United States, it follows, was capable of removing itself and much of the civilian population from harm’s way and rendering the war humane.

As the technology for information dominance developed, the doctrinal and operational commitment to precision and humanity in warfare led the United States to apply this hunting model of fighting an enemy in an increasing number of theaters worldwide. Although the United States continued to perform traditional ground support and combat operations in theaters like Afghanistan and Iraq, it hunted “bad guys” during these wars and expanded the policy of targeted killing outside of declared war zones. The emphasis on precision and humanity gradually changed the tenor of the combat missions, as commanders demanded more drone overflights and imagery to be certain that their attacks were directed at the appropriate targets, rather than civilian bystanders. They also regularly requested overflights to ensure that they were not sending US soldiers into dangerous encounters with the enemy. Ground commanders sought to use the information dominance that the United States had acquired to reduce some of the uncertainties in confronting enemy forces and ensure fewer US personnel were killed in accidents and from so-called friendly fire.

Drones proved ideal for this approach to fighting. Without much fanfare, this hunting model, based on information dominance and dedicated to killing individuals and small groups rather than massed enemy formations, became a central operating ethos for the US Air Force drone fleet. Medium-sized drones, such as the Predator and Reaper, were equipped with sophisticated gimbal cameras well suited to long-endurance missions spent watching and accurately killing targets. Over the last decade, thousands of US pilots have been trained to fly unmanned vehicles over declared and undeclared battlefields and to find militants from an array of organizations (including al Qaeda, ISIS, the Taliban, and others) located in over a dozen countries worldwide. With many of these militants hiding in rural regions or blended into the civilian population in cities, the hunt for these targets with drones was painstaking and careful. Yet many pilots with experience in both manned and unmanned aircraft came away with the conclusion that drones had notable advantages in avoiding harm to people and property in theaters of war. As one experienced drone pilot put it:

In general the precision itself is no different than it is for a manned weapons system. My ability to hit a target is no different. Our ability to reduce casualties and property damage is. Why? For one, it is time and endurance. We can strike at our own choosing. But we also have intel analysts to help us decide the attack angle or vector to avoid civilian casualties. We have models to show how the blast damage from a Hellfire missile on surrounding buildings happens. I can choose how I come in towards the target. Or we can wait until the target moves, to not take out other people.115

Some pilots have argued that drones allow for military operations without the “urgency of time,” which allows them to be more deliberative in nature.116 As one pilot remarked, “[With RPAs] you can realize that things are not time sensitive as they appear. A lot of the time when civilian casualties happen it’s because you are rushed and don’t have the time to make the decision. Sometimes it may not be the best decision you can ever take, but that’s not different with manned aircraft.”117 The ability of intelligence analysts and commanders to watch the video feed of a drone remotely means that their missions naturally come equipped with more legal oversight. In the words of one former Reaper pilot:

If we are comparing manned and unmanned aircraft, conservatively I’d say it’s the same level of operational precision in manned and unmanned aircraft. More aggressively, I think that the RPAs are better because we have 3–4 people looking at an operation to make sure it’s right. Why people think that an RPA could not distinguish a target when a manned aircraft that is flying thousands of feet above can mystifies me. At my disposal, I have an intel coordinator with me that is capable of talking to me. I can telephone other crews. I have the ability to talk to anyone I want to. I can make a decision without the pressure of having to stay alive in the line of fire.118

Others highlighted the accuracy of drones relative to other options. Lt. General David Deptula noted that:

It is not widely known that the accuracy of a 150mm howitzer is around 1,000 feet. The accuracy of the most accurate mortar is around 250 feet. What’s the accuracy of a projectile launched by an 18 year old scared shitless coming under fire? The difference is with remotely piloted aircraft we record every single second of not just engagement time, but also flight time. Whereas the flight path and results of artillery rounds or mortar rounds or rifle rounds are not recorded. What if you put a camera on the shoulder of every soldier and Marine on the ground carrying a rifle in combat and said we are going to record everything you do. Or before you can fire we are going to have to review and approve your engagement parameters before you can fire. That’s what’s going on with RPA operations. There’s a microscope on RPA engagement that is enhancing attention paid to these operations.119

Due to these tight rules of engagement and the level of oversight, one US Air Force drone commander concluded that “we are fighting the most precise air campaign in the history of warfare.”120

The language of precision and humanity became an essential element of the Obama administration’s political defense of the use of drones worldwide. Many senior Obama administration officials touted how precise drones could be relative to other options in conventional wars. In one of the first public comments on drones made by the Obama administration, State Department legal advisor Harold Koh argued that drones “have helped to make our targeting even more precise.”121 In 2012, CIA director Leon Panetta called drones “one of the most precise weapons we have in our arsenal.”122 President Obama emphasized how drone operations produced a rate of civilian casualties lower than any other conventional option and insisted that he was choosing “the course of action least likely to result in the loss of innocent life.”123

The evidence for these claims is less clear than these accounts suggest. Some of the Obama administration’s arguments for the precision and humanity of drones relied on false comparisons—for example, comparing drones to World War II style bombardment or indiscriminate artillery shelling—rather than to realistic options.124 Concerns have been expressed over the reliability of the government’s data on deaths from drones outside war zones, with suggestions that their assertions that casualties were in the “single digits” cannot be taken at face value.125 One independent study by Micah Zenko and Amelia Mae Wolf found that drones were thirty-five times more likely to cause civilian casualties than comparable operations by manned aircraft.126 There is also evidence that President Donald Trump’s “gloves off” approach to fighting terrorism has produced more civilian casualties despite the use of precision weapons and drones.127 At a minimum, drones are not bloodless; accidents and the “fog of war” are inevitable with all forms of warfighting. It is clear that the intertwined concerns for precision and humanity have produced doctrinal changes in the US Air Force and inadvertently spurred the increase in the use of drones on the battlefield. They have led the United States to slow its operational tempo in some ground operations and to hunt for its enemies on an ever-increasing number of battlefields. But whether they have actually rendered US-led air campaigns as clean as these accounts suggest is harder to know given the secrecy surrounding their use and the lack of accurate data. At this point, a precise and humane war remains a promise of the drone age, but it may not be the reality.

Consequences

The development of information dominance and the emphasis on precision and humanity in combat have produced a series of organizational and cultural pressures on the US Air Force itself. The shift toward a hunting model inside and outside conflicts is placing stress on the air force and changing it in subtle ways. At a minimum, the US Air Force has been under considerable pressure to meet the increased demand for drone overflights. At the beginning of the decade, the United States had only a few drones and conducted almost no daily or weekly missions with them. By 2015, the US Air Force was flying approximately sixty CAPs daily, while the Army was flying sixteen and Special Operations Command an additional ten.128 For the US Air Force’s CAPs, at least 300 drones were required, sustained by thousands of personnel.129 Yet even this was not enough, as the United States estimated that it would need to deploy ninety CAPs daily by 2019 to meet the demands of military commanders across the world.130 US officials admitted that the drone pilots were at a “breaking point” due to the increased demands for drone overflights.131 The geographic range of the CAPs was wide, with overseas bases located in over a dozen far-flung locations, including Seychelles, Chad, Nigeria, Italy, Japan, Turkey, Guam, Ethiopia, and the United Arab Emirates.132 The result was that US military personnel were responsible for staffing CAPs with drones taking off at all hours of the day, necessitating shift work on a twenty-four-hour, seven-day-a-week schedule.

One organizational consequence of this growing demand for drone surveillance is that the US Air Force has struggled to train enough pilots. Under pressure to deliver drone pilots quickly, the Air Force created a streamlined training process to get trainee pilots through a shortened one-year training course, as opposed to a thirty-month course for manned aircraft. The goal is to produce cadres of specialized drone pilots who can help the Air Force meet the insatiable demands for ISR, combat support, and manhunting missions. The Air Force has also struggled to retain drone pilots because of the demands placed on them. In response, it is considering developing a second “wing” of drone pilots to provide opportunities for promotion within a favorable career trajectory and creating a series of bonuses for retention and performance, as well as performance medals. Yet the Air Force has been mired in an internal debate about whether to reward, and hence valorize, drone pilots for their performance in the same way as pilots of manned aircraft. The culture of the Air Force traditionally portrays manned aircraft as more risky and authentic than unmanned aircraft. For this reason, the Pentagon has struggled with the decision to offer medals to drone pilots—and faced an internal revolt when it did so in 2013; they were dubbed “Nintendo medals” by some pilots and regarded as inferior to those earned by traditional pilots.133

Drone operations are also scrambling the command structures within which air force pilots typically operate, because the aircraft are remotely controlled. In theory, drones are subject to the same command structure as manned aircraft, but in practice this breaks down because the video feed is conveyed to so many different screens in Washington, DC, and elsewhere. Drone feeds can be monitored by as many as 200 people, including intelligence analysts, military officers, civilian officials from the Pentagon and intelligence agencies, and government lawyers. For many pilots, the extensive remote control over drone operations can be a surreal experience that is remarkably different from flying in manned aircraft. In a manned aircraft, the pilot’s actions are unsupervised and only audio communication links the pilot back to those commanding the mission. In a Predator or Reaper drone, an unknown number of people may scrutinize the pilot’s actions in real time and criticize them or penalize them later for actions seen as excessively dangerous. Far from the media image of the pilot as unrestrained and free to fire in “push button warfare,” drone pilots often report feeling micromanaged by the number of people watching their remote feeds. The result is that pilots become remarkably careful in following their “checklist” of approved steps for the mission in order to avoid professional penalties. As one drone trainer put it:

For pilots, this means that you know your actions will be monitored and you will be called to account for your actions. The result is that you follow your checklist. Even a small deviation from the checklist which would allow you to achieve the mission is a risk. You know that you will be blamed for violating the checklist, even if the fact you did is unrelated to the mistake itself. When you fly RPAs, you know that a lot of people will be watching. Some of them are civilians. The problem is that high level military officials can more directly interfere in the targeting process. There are stories of 4-star admirals calling a pilot of an RPA saying “Attack this guy.” The problem is that this person may be in your line of command, but you are supposed to listen to the Air Operations Command. This puts pressure on the pilots. The proper response is to say “No, sir, I am sorry but I have to follow my checklist.”134

This picture of closely watched pilots stands in contrast to the media depiction of drone pilots with broad discretion to pull the trigger. While many pilots emphasize that the ultimate decision to launch a strike remains their own, the fact that commanders, lawyers, and intelligence analysts watch them may subtly affect their behavior and make them less prone to taking risks. It also reveals the double irony of remote-controlled warfare. While drones are controlled remotely by their pilots, the pilots themselves are monitored and sometimes controlled remotely by their commanders and other senior government officials with access to their video feeds on screens and laptops worldwide.

Among the dangers of senior military commanders and civilian officials becoming immersed in the video feeds of drones are micromanagement and a loss of strategic perspective. As one former Predator pilot put it, “if someone wants your feed, they can get it. From a pilot, you are helping with a piece of the puzzle that you can. But when you get the tactical side influencing and being shown to the strategic thinker, it produces a danger of micromanagement.”135 With drones, senior military and civilian officials are now capable of ordering adaptations in the flight patterns or attacks on targets in real time, but they may not have the full complement of information available to the pilot. They may also issue such orders in contravention of the mission’s objectives or unaware of the larger strategic issues at play. It is at this point that the disjuncture between less-constrained political decision-making and the constrained rules of engagement of the pilots matters. One former Predator pilot remarked that, “because I can pass the video out to everyone, losing sight of the larger strategic picture is a danger.”136 Drones allow the visual immediacy of the battlefield to be beamed into a number of different screens, but this in turn exerts a unique pressure on the Air Force not to lose sight of the purpose of the mission and to ensure that messages to the pilot are proper and conveyed through appropriate command structures. The diffusion of the battlespace—with the pilot viewing the targets on the ground and hundreds of eyes on the pilot around the world—is posing an organizational and cultural challenge for militaries more comfortable with clean and hierarchical command structures.

Conclusion

The United States is now confronting a situation in which drone operations are arguably on their way to becoming the most important component of its Air Force’s mission. This dramatic increase in the use of drones is due to two intertwined hopes: that the United States could develop “information dominance” with a panoramic view of the battlefield and that doing so would allow it to fight precisely and humanely in a way never before achieved. The irony is that these two impulses gradually led the United States to adopt a manhunting approach to warfare that now permeates its operations both inside and outside declared battlefields. While drone technology only incompletely delivers on the promise of a precise, humane way of fighting, the aspirational language around these priorities combined with the popular view that drone technology is clean and antiseptic serves to insulate this hunting approach from sustained public criticism. The lack of transparency concerning drone operations means that the average person has little ability to assess whether drone technology is as precise and humane as its advocates suggest.

Drones are also exerting an effect on the militaries that are adopting them. In part due to the size and sophistication of its drone fleet, the US Air Force has been among the first to grapple with the organizational and cultural pressures involved in making drones a centerpiece of its operations. But as other countries begin to develop larger and more powerful fleets, their militaries will transform as they convert to a model of unmanned piloting and scramble their command structures in response to remote control. It is not self-evident that they will adapt or behave as the United States has. At a minimum, it is clear that this transformation will affect their behavior by shifting their calculations of risks and giving rise to what the military calls “mission creep.” Just as the arrival of reconnaissance aircraft changed the doctrine and culture of militaries in World War I, the spread of drones is changing how military organizations see their missions and exercise control over their own pilots. Drone surveillance and combat impose a “point of view” on the battlefield, as early theorists of air power acknowledged, but they also change the field of vision for the pilots and the militaries that use them. By changing their approach to piloting, command structures, and targeting thresholds, drones may ultimately exert a more lasting impact on the military and its conduct of war than those concerned with “push button” warfare currently imagine.