[ELEVEN]
“ADVANCED” WARFARE: HOW WE MIGHT FIGHT WITH ROBOTS
Once in a while, everything about the world changes at once. This is one of those times.
CHUCK KLOSTERMAN
 
 
 
Lieutenant Colonel Bob Bateman of the U.S. Army is “advanced.”
“Advancement theory” is a school of thought that explains how old paradigms are broken by people who look at the world in a fresh way. Appropriately enough, the thesis was born not in some wood-paneled Ivy League professor’s office, but in a Pizza Hut in 1990, created by two University of South Carolina graduate students and later popularized in an Esquire magazine article by social commentator Chuck Klosterman.
Advancement theory seeks to explain not only how change occurs in fields from fashion to science, but also how brilliant people can do something that makes no sense to 99 percent of the population at the time, yet later seems like pure genius. The classic example from music is Lou Reed, the guitarist and principal singer-songwriter of the band the Velvet Underground. The band was little known during its lifetime (1965-73), but was the seed from which all of alternative music grew. If there had been no Lou Reed, there would have been no punk rock, no glam rock, no grunge, no indie rock, no emo, nor whatever genre is popular as you read this now. Yet even amid this influence, Reed would repeatedly surprise the world with moves that at the time seemed only to indicate he had gone off the deep end, but would later prove brilliant. Perhaps his greatest moment of “advancement” came in 1986, when he released the song “The Original Wrapper.” Before either hip-hop music or the terrible disease was mainstream, the white, forty-four-year-old founder of punk rock rapped about AIDS.
Examples of advanced people, or what professor James Q. Wilson calls “change-oriented personalities,” extend beyond rock music, of course. Einstein is the ultimate example in science. As a youngster, he jumped from school to school and was so lightly regarded by the scientists of the day that he could find a job only as an assistant in a patent office. During this time, however, he wrote four articles that laid the foundation for all of modern physics.
People who are “advanced” create ideas that seem almost crazy at the time, but make perfect sense once the old paradigms are swept aside. What once was odd then becomes the new “normal.” Advanced thinkers don’t just do something weird for the sake of change. They are part of the very change itself, usually working from inside the system. In the military world, for example, such figures as Billy Mitchell or J. F. C. Fuller may have been visionary in predicting the importance of air power and tanks, but they were not advanced. They were so strident in their opposition to the status quo that they never effected the changes they foresaw (Mitchell was court-martialed for insubordination, while Fuller was ostracized, his public admiration of an odd mix of fascism and Kabbalah not helping matters). Instead, the “advanced” innovators in these fields were figures like U.S. admiral William Moffett, the father of the aircraft carrier, even though he was not a flyer himself, or the German general Heinz Guderian, the inventor of the blitzkrieg, even though he had not previously commanded tanks. In the military, advanced officers are those who help make the changes they foresee actually come true.
Big, bald, and imposing, Bob Bateman seems an unlikely candidate for advancement theory. But his Vic Mackey-like exterior hides a wicked wit and a startlingly sharp intellect. Bateman grew up in semirural Ohio, far from any military base, and had no real links to the military in his family or friends. Instead, his youthful fascination with military history led him to join the army. His postings then included training in the Army Rangers, commanding a unit in the historic 7th Cavalry, being designated as one of about 150 official “Army Strategists,” and service in Iraq. He also kept his interest in history going, serving as a professor of military history at West Point and Georgetown University.
Like his exterior, this background hides a few other surprises. Bateman may be a senior army officer, but he is also a frequent blogger on current events and even has a Facebook account. He is a historian whose skill at researching the past is evidenced by No Gun Ri, his award-winning book on the Korean War. But he also looked forward in a book called Digital War: A View from the Front Lines. For this book, Bateman assembled a team of young officers to wrestle with what modern technology was doing to war, from the perspective of those in the field.
“When people think about the future of technology, they think of things like The Jetsons and all that. But it’s not going to be like that,” explains Bateman. He is not a pure proponent or cheerleader for unmanned systems. Indeed, this soldier is dubious of some of the rosier futuristic visions like Ray Kurzweil’s predictions. “Kurzweil, while an interesting technologist, is not much of a success as a cultural (or economic) anthropologist.” Bateman thinks Kurzweil misses that technology advances in fits and starts, rather than in a steady upward curve. Bateman does, however, think that something akin to the Singularity is on its way. “The Turing test [where a machine will finally be able to trick a human into thinking it is a person] is going to fall fairly soon, and that will cause some squeamish responses.”
Bateman is representative of the first generation of officers to truly ponder an idea once seen as not merely insane but even sinful within the military. After he came back from Iraq, where he served as a strategist for then Lieutenant General David Petraeus, he was assigned to the Office of Net Assessment, the Pentagon’s shop for figuring out how to master the upcoming RMA (revolution in military affairs). He is now helping to shape how the military will fight future wars, using unmanned systems.
More than technology itself, explains Bateman, it is history that is driving the U.S. military toward using more unmanned systems. “First and foremost, it’s due to an inclination extant since the Second World War that the United States will always spend money instead of lives if at all possible. Exacerbating that is a trend towards preferences for increasingly complex systems.” He sees a U.S. military that will become increasingly automated over the next two decades, but, just like his critique of Kurzweil, at uneven rates, with some services and specialties adapting quicker than others.
Bateman, though, is worried by the lack of an overarching plan for how the military might operate in such a future. There is much going on, but it is “completely bottom up right now.” As a historian, he thinks the best parallel might be to the difficulties the army had before World War II at integrating tanks into its plans and operations, especially when it was led by “leaders not able to think beyond their [World War I] war experiences, where the pace of war was at a two-and-a-half-mile-an-hour clip.”
As a result, the U.S. Army entered World War II as mostly mechanized, but without a workable plan to make the most of the new technologies. For example, unlike the Germans, it hadn’t yet worked out that tanks would fight best if coordinated together with their own onboard two-way radios, which would allow units to move together effectively in the midst of battle. “So, in 1942, the U.S. Army had to rip out the radios from Rhode Island State Police cars to equip its tanks on the way to North Africa.”

DOCTRINE: YOU BETTER GET IT RIGHT

Bateman is talking about the need for a “doctrine.” A doctrine is the central idea that guides a military, essentially its vision of how to fight wars. A military’s doctrine then shapes everything it does, from how it trains soldiers and what type of weapons it buys to the tactics it uses to fight with them in the field. Doctrines also depend on a bit of prediction about the future. In a sense, doctrine is an “outline of how we fight, based on past experience and an educated guess about likely future circumstance.”
Yogi Berra put it best: “If you don’t know where you are going, you will wind up somewhere else.” Hence the stakes for choosing the right doctrine are huge. Technologies matter greatly in war, but so do the visions that shape the institutions that use them. A telling historical example comes from that same period between the world wars that Robert Bateman referred to. The British were the first to introduce tanks, or “landships,” as their original sponsor Winston Churchill called them, near the end of World War I. But they had no doctrine at all on how to use them. At the 1917 battle of Cambrai, for example, the British tanks finally broke through the Germans’ trench lines, but there was no plan on what to do next and the offensive ended only six miles in.
Doctrines began to be developed after the war, and the British and French were widely recognized as the leaders at armored warfare. In 1927, the Germans didn’t have a single tank, while the British had put together a mechanized force consisting of tanks, trucks, and armored cars. The British, however, chose a doctrine that envisaged tanks as suitable only for either scouting ahead of the force or supporting infantry units. So they bought a mix of small and light tanks and heavy and slow tanks. They did not plan to gather tanks together for rapid, mass attacks, nor did they foresee the importance of tanks’ being able to coordinate and communicate (so, like the U.S. Army, no two-way radios). When it came to organizing them as units, the greatest premium was placed on preserving the identity of the old British army regiments that dated back centuries, not on what structures worked best for tank warfare. Finally, there was no plan to coordinate ground operations with another new technology, the airplane. The British army had little interest in what its officers described as those “infernal machines” in the air, while the leaders of the new Royal Air Force saw supporting the forces on the ground as akin to the “prostitution of the air force.”
The French made similar doctrinal choices with their revolutionary new technology. They only saw the new machines as suitable for supporting infantry. Their designs did not plan for coordination with other units, nor even for fighting other tanks. Once built, the French tanks were mainly distributed across the force in small numbers. This doctrinal choice wasn’t just because of tradition and bureaucratic politics, as in Britain, but also because the socialist French civilian government was distrustful of the professional military, fearing a coup. So it resisted any highly technical doctrine that gave professionals more sway.
Having lost the previous war, the Germans were a bit more open to change. The head of the German army during the interwar period, General Hans von Seeckt, focused on fostering an atmosphere of innovation among his officer corps. He set up fifty-seven committees to study the lessons of World War I and to develop new doctrines, based not only on what had worked in the past, but also on what could work in the future.
The force soon centered on a doctrine that would later be called the blitzkrieg, or “lightning war.” Tanks would be coordinated with air, artillery, and infantry units to create a concentrated force that could punch through enemy lines and spread shock and chaos, ultimately overwhelming the foe. This choice of doctrine influenced the Germans to build tanks that emphasized speed (German tanks were twice as fast) and reliability (the complicated French and British tanks often broke down), and that could communicate and coordinate with each other by radio. When Hitler later took power, he supported this mechanized way of warfare not only because it melded well with his vision of Nazism as the wave of the future, but also because he had a personal fear of horses.
When war returned to Europe, it seemed unlikely that the Germans would win. The French and the British had won the last war in the trenches, and seemed well prepared for this one with the newly constructed Maginot Line of fortifications. They seemed better off with the new technologies as well. Indeed, the French alone had more tanks than the Germans (3,245 to 2,574). But the Germans chose the better doctrine, and they conquered all of France in just over forty days. In short, both sides had access to roughly the same technology, but made vastly different choices about how to use it, choices that shaped history.

DOCTRINE, SCHMOCTRINE

Developing the right doctrine for using unmanned systems is thus essential to the future of the force. If the U.S. military gets it right, it will win the wars of tomorrow. If it doesn’t, it might build what one army officer called “the Maginot Line of the 21st century.”
The problem today is that there isn’t much of a doctrine being implemented, let alone a right or wrong one. Robert Bateman and his colleagues worry that the United States is in a position similar to that of the British toward the end of World War I. It has developed an exciting new technology, which may well be the future of war. And it is even using the technology in growing numbers. Indeed, the number of unmanned ground systems today in Iraq is just about the same as the number of tanks that the British had at the end of World War I. But it doesn’t yet have an overall doctrine on how to use them or how they fit together. “There is no guiding pattern, no guiding vision,” laments Bateman.
A survey of U.S. military officers backs him up. When the officers were questioned about robots’ future in war, they identified developing a strategy and doctrine as the third least important aspect to figure out (only ahead of solving interservice rivalry and allaying allies’ concerns). One commentator said the military’s process of purchasing systems, despite not having operational plans for them, “smacked of attention deficit disorder.”
Soldiers down the chain are also noticing this lack of an overall doctrine out in the field. An air force captain, who coordinates unmanned operations over Iraq, insists, “There’s got to be a better way than just to fly a Pred along a road hoping to see an IED. . . . There’s no long-term plan for what you do. It’s not ‘Let’s think this better.’ It’s just ‘Give me more.’ ” Enlisted troops make similar comments, pointing out that there are not even dedicated test ranges for these new technologies. They even joke that the SWORDS robotic machine-gun system ended up having its first field trials on a test range originally designed to help the army figure out which boots and socks to buy. One army sergeant complained that “every time we turn around they are putting some new technology in our hands,” and yet no one seems to have a master plan for where it all fits together. When his unit in Iraq was given a Raven UAV, no one instructed them on how, when, or where to use it. So his unit tried the drone out on their own, putting a sticker on it that said in Arabic, “Reward if you return to U.S. base.” A few days later, they “lost it somewhere in Iraq” and never saw the drone again. (In 2008, two U.S.-made Ravens were found hidden in Iraqi insurgent caches, which may indicate where it ended up, and that insurgents operate under a “finders keepers” ethic.)
Many others outside the military note the same lack of an overarching plan. “We don’t have the strategy or the doctrine,” says robotics pioneer Robert Finkelstein. “We are just now thinking how to use UAVs, when we should be thinking about how to use them in groups. What are the collectives of air and ground systems that might be most optimal?” “It’s a mess,” adds another scientist. “And it’s been a mess for decades.” Technology journalist Noah Shachtman comments that the plans for weaponizing robotics, a huge doctrinal step, were developed “mainly bottom-up.... With the Predator, it was almost, ‘Hey we got this thing, let’s arm it.’ ”
The robot makers concur. iRobot executives complain that the military is “behind” the technology, when it comes to developing plans for how best to use it, especially in recognizing robots’ growing smarts and autonomy. “They still think of robots as RC [remote-control] cars.” Similarly, at Foster-Miller, executives point to the lack of an overall plan for support structures as evidence of the gap. They note that there is “nothing yet on logistics to support or maintain robots.... The Army is just bootstrapping it.”
In the military’s defense, it is not just trying to figure out how to use a revolutionary new technology; it is trying to do so in the middle of a war. So it’s hard to pull back and do the kind of peacetime study and experimentation that the Germans did with tanks, when the force still faces the day-to-day challenges of battle.
Even the popularity of the new technology can end up hampering the development of doctrine to guide its uses. Explains a military scientist, “It started out with people arguing over who would get stuck with it [robotics programs], as no one wanted it. Now everyone is arguing over it, as everyone wants it.” Another complains that people are working on robot programs “in all sorts of offices, everywhere.” It sometimes leads to redundancy and waste, as well as a “not invented here” mentality among the various programs, which keeps unified doctrine from being developed. Indeed, very often I found myself in the odd position of telling military interviewees about a program at another base just like the one they were working on.
Gordon Johnson, who headed a program on unmanned systems at the U.S. Joint Forces Command, explains, “The Navy has programs, the Air Force has programs, the Army has programs. But there’s no one at the DoD [Department of Defense] level who has a clear vision of where we’re going to go with these things. How do we want them to interoperate? How do we want them to communicate with each other? How do we want them to interact with humans? Across the Department of Defense, people don’t really have the big picture. They don’t understand how close we really are to being able to implement these technologies in some sort of cohesive way into a cohesive force to achieve the desired effects.”

THE CURSE OF SUPERIORITY: INSURGENCY

Arthur C. Clarke may have been the science fiction writer behind 2001 and HAL the evil supercomputer, but one of his most militarily instructive stories is called “Superiority.” Set in a distant future, the story is written from the perspective of a captured military officer, who is now sitting in a prison cell. He tries to explain how his side lost a war even though it had the far better and newer weapons.
“We were defeated by one thing only—by the inferior science of our enemies,” the officer writes. “I repeat, by the inferior science of our enemies.” Clarke’s future officer explains that his side was seduced by the possibilities of new technology. It created a new doctrine for how it wanted war to be, rather than how it turned out. “We now realize this was our first mistake,” he writes. “I still think it was a natural one, for it seemed to us that all our existing weapons had become obsolete overnight, and we already regarded them as almost primitive.”
While his side builds around ever more complex technologies, the enemy keeps on using the same, seemingly outdated but still effective weapons and strategies. When the war comes, it doesn’t play out the way the officer’s side hopes. The side with technological superiority can’t figure out how to apply its new strengths, while the inferior side takes advantage of all its enemy’s new vulnerabilities, eventually winning the war.
Many think that this problem of “superiority” will be a central challenge to the American military in the future. Indeed, Clarke’s vision was so compelling that one air force general even published a series of similar stories on “How We Lost the High-Tech War,” written from the same fictional perspective of an American officer made prisoner after the United States loses a future war.
Anticipating what war will look like is a key aspect of picking the right doctrine. Much of war is no longer battles between equally matched state armies in open fields, but rather “irregular warfare,” that amalgam of counterinsurgency, counterterrorism, peace, stability, and support operations. None of these, as professor of strategy Jeffrey Record notes, “are part of the traditional U.S. military repertoire of capabilities.” (Record made this argument in the U.S. Army’s journal, in an article titled “Why the Strong Lose.”)
Whether in Iraq, Afghanistan, or some future failed state, it is reasonable to predict that the U.S. military will find itself embroiled in a fair number of insurgencies in the years ahead. As Army War College expert Steven Metz writes, “During the Cold War, insurgent success in China, Vietnam, Algeria, and Cuba spawned emulators. While not all of them succeeded, they did try. That is likely to happen again. By failing to prepare for counterinsurgency in Iraq and by failing to avoid it, the United States has increased the chances of facing it again in the near future.” Many even see future world wars not as localized asymmetric battles, but as global insurgencies, carried out by networks of affiliated national insurgencies and transnational terrorist movements, linking all the various conflicts together. The result, explains army secretary Francis Harvey, is that “in discussing any modernization effort, in discussing any new system for the Army, one must address its applicability to pre-insurgencies and insurgencies.”
The problem that many foresee for the United States in battling these insurgencies is the very same one as for Clarke’s fictional officer. Much of war will be shaped by “asymmetry.” But just like David facing Goliath with his sling, the advantage does not always lie with the bigger, technologically superior power. Retired marine officer T. X. Hammes notes that the only wars the United States has ever lost were against unconventional enemies using worse technology. In his opinion, this isn’t going to change anytime soon. “We continue to focus on technological solutions at the tactical and operational levels without a serious discussion of the strategic imperatives or the nature of the war we are fighting. I strongly disagree with the idea that technology provides an inherent advantage to the United States.”
Others around the globe agree. A set of Chinese military thinkers flavorfully described the military dilemmas the U.S. military will face: “On the battlefields of the future, the digitized forces may very possibly be like a great cook who is good at cooking lobsters sprinkled with butter. When faced with guerrillas who resolutely gnaw corncobs, they can only sigh in despair.” Or, as one U.S. Air Force general said of the IED challenge in Iraq, “We have made huge leaps in technology, but we’re still getting guys killed by idiotic technology—a 155mm shell with a wire strung out.”
Wrestling with such issues is another “advanced” contemporary of Robert Bateman’s, Lieutenant Colonel John Nagl. Like Bateman, Nagl is a bit of a Renaissance man. A recently retired armor officer, he served in both the Gulf War and the Iraq war and taught at West Point. Nagl is also considered one of the world’s top experts on counterinsurgency.
During his Rhodes scholarship at Oxford University, long before the issue was the hot topic it is today, the former tank commander researched how nations won (or more typically lost) against insurgencies. Capturing the difficulty that professional militaries face in such wars, his thesis was tellingly entitled Learning to Eat Soup with a Knife. Years later, when the U.S. Army realized in Iraq that it needed to relearn how to fight insurgencies, Nagl’s book became required reading among its officer corps. As a later review described its influence, “The success of DPhil papers by Oxford students is usually gauged by the amount of dust they gather on library shelves. But there is one that is so influential that General George Casey, the commander in Iraq, is said to carry it with him everywhere.” Nagl was then asked to help write the U.S. Army and Marine Corps’ new Counterinsurgency Field Manual, which became the basis for U.S. operations in Iraq from 2007 onward.
As Nagl explains, even the most advanced technology cannot resolve the political challenges that drive insurgencies. “Defeating an insurgency is not primarily a military task.... Counterinsurgency is a long, slow process that requires the integration of all elements of national power—military, diplomatic, economic, financial, intelligence, and informational—to accomplish the tasks of creating and supporting legitimate host governments that can then defeat the insurgency that afflicts them.”
By Nagl’s calculations, winning these sorts of wars is not simply about putting steel on a target. It is about creating an environment in which an insurgent force loses the popular support it needs to hide and sustain itself. Indeed, as the British philosopher Edmund Burke said back in 1775, when America’s founding fathers were planning their own asymmetric battle against a vastly superior foe, “The use of force is but temporary. It may subdue for a moment, but does not remove the necessity of subduing again....A nation is not governed which is perpetually to be conquered.”
So, while the United States may enter such battles as the technologically superior side, its unmanned systems aren’t the silver bullet, especially when so much of these wars isn’t about warfare. Explains military expert Fred Kagan, “When it comes to reorganizing or building political, economic, and social institutions, there is no substitute for human beings in large numbers.” Or, as only an enlisted U.S. Marine could put it, good troops and good tactics are “more effective than all the high-tech shit.”
Nagl found that winning these sorts of fights depends on building an intimate knowledge of the local political, economic, and social landscape. You have to know who your friends and foes are, and figure out how to persuade those standing on the sidelines to join in against the bad guys. In this effort, not all technology is useful. As one U.S. general complained of the challenges in Iraq, “Insurgents don’t show up in satellite imagery very well.” And the type of distance war that unmanned systems enable can even make the problem worse. “People sitting in air conditioned command cells in distant countries, betting the farm on UAV optics or Blue Force Tracker symbology, will never get it right. You have to ‘walk the field’ to fight the war,” argued an army officer. “After all the GBUs [guided bomb units] have been dropped and the UAVs have landed, war remains a very human business. It cannot be done long-distance or over croissants and lattes in teak-lined rooms. It is done in the dirt, over chai, conversation, and mutual understanding.”

A WIRELESS REVOLUTION TO FACE THE FACELESS INSURGENCY

With technology not a silver bullet and insurgents frequently able to flummox their American foes in places like Iraq or Afghanistan, there is a growing attitude among many analysts that technology has no place in the kind of irregular warfare that seems to be the future of conflict. They argue that the doctrine shaping how militaries fight these wars will therefore move away from new technologies, including even unmanned systems.
This kind of “all or none” attitude is just as incorrect as the claim that technology is a cure-all. While high technology may not be the “silver bullet solution” to insurgencies, that does not mean technology, and especially unmanned systems, is irrelevant in these fights. “I’m bothered by the old canard that counterinsurgency is purely a ‘human’ endeavor where technology plays a little role,” says Steven Metz, a professor at the Army War College and author of the book Perdition’s Gate: Insurgency in the 21st Century. “That may be true if we are talking only about the ‘Joint Vision’ [i.e., the Cebrowski-Rumsfeld network-centric] type of technology designed for major conventional war, but I am convinced there is the opportunity for technological breakthrough, perhaps even a revolution, if we approach the issue differently. Robotics, AI, and nonlethality are, I think, the key technologies in this realm.”
In 2007, one security analyst summed up the antitechnology position to me by declaring that “Iraq proved how technology doesn’t have a big place in any doctrine of future war.” In fact, the Iraq war has had the opposite effect for unmanned systems. It was actually the war that proved robots could be useful, which finally led them to be truly accepted. “We’ve already crossed the watershed. This was the war where people said, ‘UAVs? Yes, give me more!’ ” says strategic studies expert and Pentagon adviser Eliot Cohen.
It is interesting how quickly these attitudes changed. Lieutenant General Walter Buchanan, the U.S. Air Force commander in the Middle East, recalls the run-up to the Iraq war. “In March of 2002, [during] the mission briefings over Southern Iraq at that time, the mission commander would get up and he’d say, ‘OK, we’re going to have the F-15Cs fly here, the F-16s are going to fly here, the A-6s are going to fly here, tankers are going to be here today.’ Then they would say, ‘And oh by the way, way over here is going to be the Predator.’ We don’t go over there, and he’s not going to come over here and bother us....It was almost like nobody wanted to talk to them.”
Other commanders remember the same attitude at the time toward drones in the army, as the units planned to cross into Iraq. “For the entire U.S. Army’s V Corps, we had one UAV baseline available to the corps,” recalls the commander, General William Wallace, who went on to lead the U.S. Army’s Training and Doctrine Command. “It was a Hunter UAV.”
Attitudes changed, and so did the numbers and use of UAVs. “It wasn’t too long before...people were incorporating the Predator into the mission plan as part of your ‘gorilla package,’ ” said General Buchanan, describing what soon became standard air force strike operations in Iraq. By 2007, the air force’s drones were logging more than 250,000 flight hours a year. Similarly, General Wallace’s unit was soon using not one Hunter drone, but more than seven hundred Hunters and other types of UAVs; the entire fleet of army drones in Iraq logged another 300,000 flight hours in 2007. Indeed, when the military surveyed its commanders in the field about their views of UAVs, at every level of command, they responded that they wanted more. In 2008, the Pentagon estimated that the demand for drones had gone up 300 percent each and every year since the start of the war. Demand was so high that the air force retooled its pilot training program to churn out more drone pilots in 2009 than pilots for all its manned fighter planes combined.
The ultimate proof of the weapons’ acceptance came in the form of a bureaucratic food fight over who got to control them. Drones had once been shunned, but by 2007 the air force found itself using unmanned planes as never before. So was the army. Even worse, from the air force perspective, the army was using robotic planes in greater numbers and on a greater scale (the army flew 54 percent of all drone flights from 2006 to 2008). So the air force issued a memo in 2007 offering to be the “executive agent” for all UAVs that fly above thirty-five hundred feet, controlling not only what drones would get built, but also how they would be used. The army, of course, saw the air force’s memo not as a generous offer to take those troublesome robots off its hands, but as “a power grab.” The Pentagon ultimately took the King Solomon approach and created the Joint Center of Excellence. Its commander slot will rotate back and forth between an army and air force general.
The same sort of change also happened in military attitudes toward robots on the ground. “When I joined [Foster-Miller] we had a hard time selling them,” recalls Ed Godere. “Robots were only used for EOD and the EOD techs thought robots were for sissies.... It really didn’t take off until we went into Iraq.” Other leaders at the firm concur. Says engineer Anthony Aponick, “After five years of trying to push robots into the market, Iraq created customer pull.” Foster-Miller’s vice president Bob Quinn agrees. “The user perception changed overnight from ‘We don’t want robots’ to ‘Holy shit, we can’t do without them.’ ”
Having no real use for ground robots in 2001, the U.S. military was sending them out on more than thirty thousand missions a year by 2006. In 2007, the army and Marine Corps announced they wanted to expand these numbers even more, by buying a thousand new robots by the end of the year, and planning to buy an additional two thousand within the next five years, each of which would go out on hundreds of missions a year. In 2008, the military revised these plans. It wanted to double the amount of ground robots it had planned to buy just a year earlier.
Perhaps the person best equipped to weigh the overall change is Senator John Warner, the Virginia Republican who once had to “fire his shotgun into the heavens” in order to try to force the military to start buying robots. “For a long time, the only thing most generals could agree on was that they didn’t want any unmanned vehicles. Now everyone wants as many as they can get.”

THE WAR BEHIND THE WAR

Insurgencies are sometimes framed as an asymmetric battle between one side that depends on high-tech weapons and the other side that eschews them. This may have been true of battles in the past, where rifle- and machine-gun-wielding imperialists took on tribes armed with spears, but it just isn’t the case in modern war, including in Iraq. Instead, there is a sophisticated back-and-forth going on between the two sides in technology, the second reason why Iraq didn’t end the role of unmanned technology in war. “We adapt, they adapt,” says John Nagl. “It’s a constant competition to gain the upper hand.” Concurs one of the robot makers at Foster-Miller, “There is a huge intellectual battle going on between U.S. technology and the insurgents.”
The battle over what that general called the “idiotic technology” of IEDs aptly illustrates the technology war behind the scenes in insurgencies. When IEDs were first used, they were pretty simple and straightforward, usually homemade, jury-rigged bombs that were ignited by a detonating wire (hence the military term “improvised,” a sort of putdown). The attacks were deadly, but U.S. soldiers could avoid them by keeping an eye out for wires and then quickly track down the insurgents by following the wire to their hide-site. Soon, the insurgents’ IEDs became more sophisticated and complex, using timing devices or pressure switches. Then came passive infrared triggers, like the ones used in burglar alarms, which left no telltale wires. After this the insurgents started to use wireless triggers, such as reconfigured car door openers and cordless telephones, which allowed distance between them and their targets. The U.S. military responded with electronic jammers and the insurgents developed systems designed to fool the jammers. As the technological cat-and-mouse game went back and forth, by 2007, the U.S. military reported that the insurgents in Iraq had developed more than ninety ways of triggering IEDs.
The same kind of advancement happened with the payloads of these bombs. As IED attacks grew more common, the U.S. military began to “up-armor” its vehicles, so that they could resist the explosions of roadside bombs. The insurgents then countered with specially designed explosively formed projectiles (EFPs). These are shaped explosive charges, which send out a slug of molten metal that can burn through most armor, even a tank’s. Illustrating their technical savvy, the insurgents then spread the word on how to make these weapons in instructional DVDs and over the Internet.
In this technical war within the insurgency, robots emerged as one of the U.S.’s best weapons, and so here too has emerged a back-and-forth between the two sides. “The enemy realizes that if they can take out [the robot] they can really hurt our capabilities,” says Cliff Hudson, coordinator of the Pentagon’s Joint Robotics Program. Soon after U.S. robots hit the battlefield, insurgents began to shield their IEDs with anything that could make the robots’ job harder. They placed tiny “walls” of concrete and even garbage around the bomb, to keep the robot from getting close enough to reach the bomb with its arm. They began to place the bombs high off the ground. As time went on, they began to experiment with their own jamming. American robot operators describe the challenge of facing an enemy who is constantly observing and studying their operations. “They’re always trying to outsmart us, and we’re always trying to outsmart them,” said air force technical sergeant Ronald Wilson.
And insurgents began to specially target both the EOD teams and their robots. Indeed, in 2007, al-Furqan, the insurgents’ media outlet, released a twenty-five-minute video, available on DVD, that profiled the EOD teams’ vehicles and equipment and how best to attack them. It was entitled “The Hunters of Minesweepers.”
Attacks on robots soon reached the point that the military had to create the Joint Robotics Repair Facility, better known as the “robot hospital.” The facility repairs as many as 150 robots a month. Described Foster-Miller’s William Ribich, “Insurgents have been intensifying their attacks on robots because they know if they can disable them, soldiers will have to go out and defuse IEDs. The robot hospitals do whatever it takes to meet a four-hour turnaround time and get damaged Talons back and fully operational.”
Then came the next step in the technological back-and-forth. As one side evolves to using more and more robots, the other side is following suit. In Iraq, insurgents have been able to capture U.S. robots on occasion. And in certain instances, they turned them back on their makers. One U.S. soldier recounted arriving at a bomb scene after an IED went off, only to be flummoxed by how a bomb got there in the first place. “We figured it out by the track marks.” An American counter-IED robot had been transformed into a mobile IED.
Far from being uninterested in new technology or only able to use captured weapons, “Jihadis are also concerned about developing their own technology,” described one insurgent I interviewed in 2006. Much like their homemade bombs, the diversity of the insurgent-made delivery systems took off. They ranged from jury-rigged remote-controlled toy cars, much like the U.S. military’s MARCBOT, to a remote-controlled skateboard that one U.S. Army colonel came across in 2005. It slowly rolled toward his unit, “like the wind was pushing it. But a smart soldier noticed that the wind was going in the opposite direction.”
At Foster-Miller’s offices in Massachusetts, a photo up on the wall shows what one day may be their future competition, an insurgent’s version of a Talon robot that looks like it was built in a backyard. “It’s pretty lame, only able to drive in a straight line,” says one engineer with a laugh. That may be true for now, but experts in robotics see this back-and-forth continuing well into the future. Describes military robotics pioneer Bart Everett, “It’s basically a game of one-upsmanship. A threat is introduced, we find some means to counter it. The bad guys change the threat; we have to then change our counterstrategy. The robot is just another standoff means to that end, with the decided advantage of being very flexible when the time comes to try something different.”

“AN ASYMMETRIC SOLUTION TO AN ASYMMETRIC PROBLEM”

“Check that dude next to the white Nissan,” says marine captain Bert Lewis. It is 2006 and Lewis is watching live video from a UAV circling over Anbar province in Iraq. On the screen is a man in a white dishdasha (the garment many Arabs wear, almost akin to a robe). He is innocently standing alongside a busy street, but then starts to hide a boxy package in the dirt. “FedEx delivery,” Lewis jokes of the temerity of the likely IED bomber. “I don’t believe this dude.”
The man then runs away from the package he’s buried and darts along a nearby riverbank. He starts to think he is being followed, so he doubles back, running as hard as he can. He sneaks between houses, crosses a field, and then returns to the riverbank. After fifteen minutes of running, the man is spent. He slows down to a walk and then stops, bent over, with his hands on his knees. Lewis knows this, as he is still watching the man via the drone. “Sucking wind,” Lewis speaks into the radio. “Get the coordinates to the QRF.”
A Quick Reaction Force of marines heads out to capture the man. Just as they are about to arrive, the drone spies a small wooden boat pulling up at the riverbank. “A twofer!” exclaims Lewis. When the marines get there, the man scrambles to his feet. But with no place to run or hide, he and the boatman raise their arms and give up without a fight.
Technology is certainly not a magical cure-all in fighting irregular wars. But experiences like the capture of that “IED dude,” described by marine veteran Bing West, are showing the final reason why Iraq didn’t end the revolution of unmanned systems just as it was starting. Unmanned systems are not making war easy or perfect, as the network-centric crowd would have it, but they still are proving to be incredibly useful, even in counterinsurgency.
One of the primary challenges in fighting an insurgency is that the stakes are higher for the local foes. Not only do they know the landscape, but they usually care more about the outcome, and are willing to spend more blood on it. So weaker forces often win not by defeating technologically superior forces in battle, but simply by outlasting them, dragging the wars on long enough until the publics back home get worn out. As Lieutenant General David Barno, the former commander of U.S. forces in Afghanistan, described the Taliban’s strategy, “Americans have the watches, they have the time.”
Robotics, however, may be viewed as “an asymmetric solution to an asymmetric problem,” according to one executive at Foster-Miller. If the political leaders on one side aren’t willing to send enough troops, as seems to have happened in Iraq, “we can use robots to augment the number of boots on the ground.” If the enemy’s strategy is to wear down its foe’s stamina, by gradually bleeding away public support, robotics turns this strategy inside out. Writes army expert Steven Metz, “Robotics also hold great promise for helping to protect any American forces that become involved in counterinsurgency. The lower the American casualties, the greater the chances that the United States would stick with a counterinsurgency effort over the long period of time that success demands.”
Robots are also helpful to the task at hand, beating the enemy. As one general warns, defeating an insurgency is not just about “winning hearts and minds with teams of anthropologists, propagandists and civil-affairs officers armed with democracy-in-a-box kits and volleyball nets.” It still requires putting some people in the dirt. That is, killing insurgents doesn’t automatically lead to victory. But, as Metz puts it, “Solving root causes is certainly easier with insurgent leaders and cadre out of the way.”
The primary challenge in fighting irregular wars is the difficulty of “finding and fixing” foes, not the actual killing part. Insurgents don’t just take advantage of complex terrain (hiding out in the jungle or cities), they also do their best to mix in with the civilian population. They make it difficult for the force fighting them to figure out where they are and who they are. Here is where unmanned technologies are proving especially helpful, particularly by providing an all-seeing “eye in the sky.” Drones not only can stay over a target for lengthy periods of time (often unnoticed from the ground), but also have tremendous resolution on their cameras, allowing them to pick out details, such as what weapon someone is carrying or the make and color of the car they are driving. This ability to “dwell and stare,” as one Predator pilot described, means that the unit can get a sense of the area and “see things develop over time.” Another describes how by watching from above, units can build up a sense of what is normal or not in a neighborhood, much the way a policeman gradually gets to know his beat. “If we can work one section of a city for a week,” says Lieutenant Colonel John “Ajax” Neumann, commander of the UAV detachment in Fallujah, “we can spot the bad guys in their pickups, follow them to their safe houses and develop a full intelligence profile—all from the air. We’ve brought the roof down on some. Others we’ve kept under surveillance until they drive out on a highway, then we’ve vectored in a mounted patrol to capture them alive.”
The advantage of UAVs is not merely the dwell time, and the accuracy of their sensors, but also that they create a backlog of events that can prove incredibly useful. For example, if an insurgent enters a building, analysts can then bring up a history of what happened at that site in the past, such as if other insurgents dropped off a package at it four days back. One system, called Angel Fire, even has “TiVo-like capabilities” that watch entire neighborhoods, but allow the user to zoom in on particular areas or buildings of interest and then replay video of past events at the site.
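At its core, the “TiVo-like” replay capability described above is a time-indexed event store keyed by location: record what a sensor sees at each site, then pull up that site’s history on demand. The sketch below is a minimal, hypothetical illustration of the idea in Python; none of its names or interfaces come from Angel Fire or any actual military system.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: float   # seconds since surveillance began
    site: str          # grid cell or building identifier
    note: str          # analyst annotation, e.g., "package dropped off"

class SurveillanceLog:
    """Stores observed events by site so analysts can replay its history."""
    def __init__(self):
        self._by_site = defaultdict(list)

    def record(self, event: Event):
        self._by_site[event.site].append(event)

    def history(self, site, since=0.0):
        """Replay everything recorded at a site from a given time onward."""
        return sorted(
            (e for e in self._by_site[site] if e.timestamp >= since),
            key=lambda e: e.timestamp,
        )

# An insurgent enters a building; the analyst pulls up what happened
# there days earlier, as in the example from the text.
log = SurveillanceLog()
log.record(Event(100.0, "building-7", "truck parks outside"))
log.record(Event(4 * 86400.0, "building-7", "package dropped off"))
notes = [e.note for e in log.history("building-7")]
```

Real systems index raw video frames rather than hand-labeled events, but the query pattern — zoom in on one site, replay its past — is the same.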
An example of just how useful this technology can be came in 2006, when the army set up a high-tech, classified unit called Task Force Odin (the chief Norse god, but also short for “Observe-Detect-Identify-Neutralize”). A Sky Warrior, the army’s version of the Predator drone, was matched up with a 100-person team of intelligence analysts and a set of Apache attack helicopters (the “neutralize” part). The Odin team was able to find and kill more than 2,400 insurgents either making or planting bombs, as well as capture 141 more, all in just one year.
Soon these systems will be integrated with AI, allowing automated monitoring, akin to the way a TiVo will pick out and record TV programs that it thinks the viewer might later find of interest. The most promising may be the “Gotcha sensor,” an air force program to “provide persistent staring” at an area, where the system will automatically note any significant changes.
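The “automatically note any significant changes” idea that programs like Gotcha point toward is, in its simplest form, frame differencing: compare the current image of an area against a reference image and flag any cells that changed beyond a threshold. A toy sketch, with small made-up brightness grids standing in for real imagery (the grids and threshold are illustrative assumptions, not anything from the program itself):

```python
def changed_cells(reference, current, threshold=10):
    """Compare two equal-sized brightness grids; return the (row, col)
    coordinates whose absolute difference exceeds the threshold."""
    flagged = []
    for r, (ref_row, cur_row) in enumerate(zip(reference, current)):
        for c, (ref_px, cur_px) in enumerate(zip(ref_row, cur_row)):
            if abs(cur_px - ref_px) > threshold:
                flagged.append((r, c))
    return flagged

# A bright object appearing in one corner of an otherwise static scene:
before = [[50, 50, 50],
          [50, 50, 50]]
after  = [[50, 50, 50],
          [50, 50, 200]]
print(changed_cells(before, after))  # [(1, 2)]
```

Operational systems must also register the images to each other and filter out noise, shadows, and parallax before differencing; the comparison step itself, though, is this simple.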
Such footage can also be used as the sort of evidence needed to roll up insurgent cells. A 10th Mountain Division soldier recounts how one of their drones watched a group of pickup trucks swerve into an empty lot, fire off rockets, and then drive away before any response could be made. “We followed one pickup after it fired some rockets,” says Staff Sergeant Francisco Tataje. “The driver had a perfect ID. No incriminating stuff. We gave the interrogation team a copy of our video. They called back later to say the guy confessed.”
Finally, in insurgencies with no fixed front lines, it is especially wearing on soldiers to know that they are always under potential attack, even when back at base. Here too added eyes are now viewed as almost indispensable. Said Sergeant First Class Roger Lyon, a 10th Mountain Division intelligence specialist, “It’s a comforting sound on the battlefield, when you’re going to sleep and you hear that sound of the Predator engine, somewhere between a propeller airplane and a lawn mower, knowing it is looking out for you.”
Of course, not every challenge presented by insurgencies is solved by having robotic eyes in the sky. For one thing, the cameras watching from the drones above are akin to those at traffic stoplights. While people may be less likely to run a red light when a cop is nearby, they are more likely to do so when it’s just a camera watching them. “Situational awareness ain’t deterrence,” as one marine colonel put it. Similarly, insurgents do all they can to look like civilians. So even a great sensor can have a tough time distinguishing between the two if it is only operating from above. A truck carrying boxes of fruit looks just like a truck carrying boxes of rifles.
The reality is that a combination of the age-old methods with the new technologies seems to work best in cracking what is going on in these complex fights. For example, in 2006, Jordanian intelligence captured a mid-level al-Qaeda operative. He then indicated that Abu Musab al-Zarqawi, the leader of al-Qaeda in Iraq, was increasingly listening to the advice of a certain cleric. They passed this on to the U.S. military, which deployed a UAV to follow the cleric around 24/7. The drone eventually tailed the cleric to a farmhouse, where he turned out to be meeting with Zarqawi. The farmhouse was then taken out by a pinpoint airstrike, guided in by lasers and GPS coordinates courtesy of the drone. As U.S. Air Force captain John Bellflower put it, “While technology is not the sole answer, an old-school solution matched with modern technology can assist with the problems of today’s modern insurgencies.”

THE MOTHERSHIP HAS LANDED

As we enter what one marine officer called “an era of ‘oh gee’ technology coming to warfare,” it is becoming clear that robots are going to be a major player in the future of U.S. military doctrine, even in irregular wars and counterinsurgencies. In many ways, the most apt historic parallel to Iraq may well turn out to be World War I. Strange, exciting new technologies, which had been science fiction just years earlier, were introduced and then used in greater numbers on the battlefield. They didn’t really change the fundamentals of the war and in many ways the fighting remained frustrating. But these early models did prove useful enough that it was clear that the new technologies weren’t going away and militaries had better figure out how to use them most effectively. But much like what happened after that war, the exact shape and contours of the possible new doctrines are only slowly developing, despite the early efforts of the “advanced” thinkers wrestling with it. One air force officer joked about his force’s looming future of unmanned fighter planes, “UCAVs are the answer, but what is the question?”
Akin to the intense interwar doctrinal debates of the 1920s and 1930s over how to use tanks and airplanes, there is not yet agreement on how best to fight with the new robotic weapons. There appear to be two directions in which the doctrine might shake out, with a bit of tension between the operating concepts. The first is the idea of the “mothership,” perhaps best illustrated by the future tack the U.S. Navy is moving toward with unmanned systems at sea.
The sea is becoming a much more dangerous place for navies in the twenty-first century. Drawing comparisons to the problems traditional armies are facing with insurgencies on the land, Admiral Vern Clark, former chief of naval operations, believes that “the most significant threat to naval vessels today is the asymmetric threat.” The United States may have the largest “blue water” fleet in the world, numbering just under three hundred ships, but the overall numbers are no longer on its side. Seventy different nations now possess over seventy-five thousand antiship missiles, made all the more deadly through “faster speeds, greater stealth capabilities, and more accurate, GPS-enhanced targeting.”
The dangers are even greater in the “brown water” close to shore. Here, small, fast motorboats, like the ones that attacked the U.S.S. Cole, can hide among regular traffic and dart in and out. Relatively cheap diesel-powered submarines can silently hide among water currents and thermal layers. Then there is the problem of mines. There are more than three hundred varieties of mines available on the world market today, ranging from basic ones that detonate by simple contact to a new generation of “smart” mines, stealthy robotic systems equipped with tiny motors that allow them to shift positions, so as to create a moving minefield.
As evidenced by the intense work with robotics at places like the Office of Naval Research in Arlington and SPAWAR in San Diego, the U.S. Navy is becoming increasingly interested in using unmanned systems to face this dangerous environment. Describing the “great promise” unmanned systems hold for naval war, one report told how “we are just beginning to understand how to use and build these vehicles. The concepts of operations are in their infancy, as is the technology. The Navy must think about how to exploit the unmanned concepts and integrate them into the manned operations.”
One of the early ideas for trying to take these technologies out to sea comes in the form of the U.S. Navy’s Littoral Combat Ship (LCS) concept. Much smaller and faster than the navy ships used now, the ships are to be incredibly automated. For example, the prototype ship in the series has only forty crew members, about a fourth of what was needed before. Only one person serves as the engine crew, mainly just monitoring computers, and only two on the bridge, driving the ship not with a traditional wheel but with a joystick and computer mouse. One sailor said that piloting the ship “is like playing a very expensive video game.” Notably, the ship actually maneuvers better under autopilot than when a human operates it. “Sometimes computers are better than humans,” admits a member of the bridge crew. Besides the crew onboard, there’s also a crew onshore, sitting at computer cubicles and providing support thousands of miles away.
Less important than the automation of the ship itself is the concept of change it represents. It has a modular “plug and play” capacity, allowing various unmanned systems and the control stations to be swapped in and out, depending on the mission. If the ship is clearing sea lanes of mines, it might pack on board a set of mine-hunting robotic mini-subs, which it would carry near to shore and then drop off for their searches. If the ship was patrolling a harbor, it might carry some mini-motorboats that would scatter about inspecting any suspicious ships. Or if it needs to patrol a wider area, it might carry a few UAVs. Each of these drones is controlled by crew members, sitting at control module stations, who themselves join the team only for the time needed. The manned ship really then is a sort of moving mothership, hosting and controlling an agile network of unmanned systems that multiply its reach and power.
The mothership concept isn’t just one planned for new, specially built ships like the LCS. Older ships all the way up to aircraft carriers might be converted to this mode. Already serving as a sort of mothership for manned planes, the U.S. Navy’s current plan for aircraft carriers entails adding up to twelve unmanned planes to each carrier. This number might grow. In a 2006 war game that simulated a battle with a “near-peer competitor” that followed the mode of fighting an asymmetric war with submarines, cruise missiles, and antiship ballistic missiles (i.e., China), the navy planners hit upon a novel solution. Because the unmanned planes take up less deck space and have far greater endurance and range, they reversed the ratio, offloading all but twelve of the manned planes and loading on eighty-four unmanned planes. Their “spot on, almost visionary” idea reportedly tripled the strike power of the carrier. As UAVs shrink in size, the numbers of drones that could fly off such flattops could go up further. In 2005, one of the largest aircraft carriers in the world, the 1,092-foot-long U.S.S. Nimitz, tested out Wasp Micro Air Vehicles, tiny drones that are only thirteen inches long.
The same developments are taking place under the sea. In 2007, a U.S. Navy attack sub shot a small robotic sub out of its torpedo tubes, which then carried out a mission. The robotic mini-sub drove back to the mother submarine. A robotic arm then extended out of the tube and pulled the baby sub back into the ship, whereupon the crew downloaded its data and fueled it back up for another launch. It all sounds simple enough, but the test of a robotic underwater launch and recovery system represented “a critical next step for the U.S. Navy and opens the door for a whole new set of advanced submarine missions,” according to one report.
The challenge the U.S. Navy is facing in undersea warfare is that potential rivals like China, Iran, and North Korea have diesel subs that “can sit at the bottom in absolute quiet,” describes one engineer. When these diesel subs hide in the littoral waters close to shore, all the advantages held by America’s fleet of nuclear subs disappear. Continues the expert, “You aren’t going to risk a billion-dollar nuclear sub in the littoral.”
Unmanned systems, particularly those snuck in by a fellow submarine, “turn the asymmetry around by doing [with unmanned craft] what no human would do.” For example, sonar waves are the traditional way to find foes under the sea. But these active sensors are akin to using a flashlight in the dark. They help you find what you are looking for, but also let everyone nearby know exactly where you are. Manned submarines instead usually quietly listen for their foes, waiting for them to make a noise first. By contrast, unmanned systems can be sent out on missions and blast out their sonar, actively searching for the diesel subs hiding below, without giving away where the mothership is hiding. Having its own fleet of tiny subs also multiplies the reach of a submarine. For example, a mother submarine able to send out just a dozen tiny subs can search a grid the size of the entire Persian Gulf in just over a day. A submarine that can launch a UAV that can fly in and out of the water like the Cormorant extends its reach even farther.
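The coverage claim above can be sanity-checked with back-of-envelope arithmetic. The parameters below (active-sonar swath width, cruise speed) are illustrative assumptions, not figures from the source; the point is only that with a wide active-sonar swath, a dozen small subs sweeping in parallel plausibly cover a Persian Gulf-sized area in roughly a day:

```python
# All parameters are illustrative assumptions, not sourced figures.
AREA_KM2 = 251_000   # approximate area of the Persian Gulf
NUM_SUBS = 12
SWATH_KM = 40        # assumed active-sonar detection swath per sub
SPEED_KMH = 20       # assumed cruise speed

# Each sub sweeps a strip: swath width times distance traveled per hour.
coverage_rate = NUM_SUBS * SWATH_KM * SPEED_KMH   # km^2 per hour
hours = AREA_KM2 / coverage_rate
print(f"~{hours:.0f} hours ({hours / 24:.1f} days)")
```

Under these assumed numbers the search takes about twenty-six hours, consistent with the “just over a day” figure; halve the swath or the speed and it stretches to several days, which is why the gain from blasting active sonar without risking the mothership matters so much.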
Such capabilities will lead to new operating concepts. One naval officer talked about how the robotic mini-subs would be like the unmanned “whiskers” used in the 1990s science fiction TV show SeaQuest DSV. (Basically, imagine a crappy version of Star Trek, set underwater in a futuristic submarine instead of a spaceship, with a dolphin instead of a Vulcan as the alien crew member, and you get SeaQuest.) “They would act as ‘force multipliers,’ taking care of programmable tasks and freeing up manned warships to take on more complex ones. And they could be sent on the riskiest missions, to help keep sailors and Marines out of harm’s way.” The robotic sub could be sent in to clear minefields from below, lurk around enemy harbors, or track enemy subs as they leave port. The U.S.S. Jimmy Carter, one of the navy’s Seawolf class subs, reportedly even has tiny robotic drones that can launch underwater and tap into “the under-sea fiber-optic cables that carry most of the world’s data.”
By pushing its robotic “eyes,” “ears,” “whiskers,” and “teeth” farther away from the body, the mothership doesn’t even have to be a warship itself. For example, with foreign nations increasingly unwilling to host U.S. bases ashore, the navy is moving to a doctrinal concept of “sea basing.” These would be large container ships that act like a floating harbor. Such ships, though, are slow, ungainly, and certainly not stealthy, hence vulnerable to attack. So the navy is developing a plan to protect them called Sea Sentry. The sea base would not just provide a supply station for visiting ships and troops ashore, but would also host its own protective screen of unmanned boats, drones, and mini-subs. Similar plans are being developed for other vulnerable targets at sea, such as big merchant ships, oil tankers, and even private oil rigs.
The concept of the mothership is not limited to the sea. For example, one firm in Ohio has fitted out a propeller-powered C-130 cargo plane so that it can not only launch UAVs, but also recover them in the air. The drones fly in and out of the cargo bay in the back, turning the plane into an aircraft carrier that is actually airborne.
Such motherships will entail a significant doctrinal shift in how militaries fight. One report described its effect at sea as being as big a transformation as the shift to aircraft carriers, projecting it would be the biggest “fork in the road” for the U.S. Navy in the twenty-first century.
Naval war doctrine, for example, has long been influenced by the thinking of the American admiral Alfred Thayer Mahan (1840-1914). Mahan didn’t have a distinguished career at sea (he reputedly would get seasick even in a pond), but in 1890 he wrote a book called The Influence of Sea Power on History, which soon changed the history of war at sea.
Navies, Mahan argued, were what shaped whether a nation became great or not (an argument of obvious appeal to any sailor). In turn, the battles that mattered were the big showdowns of fleets at sea, “cataclysmic clashes of capital ships concentrated in deep blue water.” Mahan’s prescriptions for war quickly became the doctrine of the U.S. Navy, guiding Teddy Roosevelt to build a “Great White Fleet” of battleships at the turn of the twentieth century and shaping the strategy that the navy used to fight the great battles in the Pacific in World War II. Analysts still describe it as “the touchstone for U.S. naval force planning” and note how it is still cited in nearly every speech by senior admirals, even a century after its publication.
The future of war at sea, however, promises to look less and less like what Mahan envisaged. With the new asymmetric threats and unmanned responses, the U.S. Navy of the twenty-first century is not planning for confrontations that only take place between two fleets, made up of the biggest ships, concentrated together into one place. Nor will the fights only happen in the blue waters far from shore. Instead, these battles are predicted to take place closer to shore. The ships involved won’t be “concentrated” together like Mahan wanted into one fleet, but rather be made up of many tiny constellations of smaller, often unmanned systems, linked back to their host “mother” ships. These, in turn, might be much smaller than Mahan’s capital ships of the past (one navy officer, an aircraft carrier man, joked that the LCS really stood for “little crappy ship”).
With Mahan’s vision looking less and less applicable to modern wars and technology, a new “advanced” thinker on twenty-first-century naval war doctrine is coming into vogue. The only twist is that he was born just fourteen years after Mahan.
Sir Julian Stafford Corbett (1854-1922) was a British novelist turned naval historian. Notably, Corbett was a friend and ally of naval reformer Admiral John “Jackie” Fisher, who introduced such new developments as dreadnoughts, submarines, and aircraft carriers into the Royal Navy. While he and Mahan lived in the same era, Corbett took a completely different tack toward war at sea. They both saw the sea as a critical chokepoint to a nation’s survival, but Corbett thought that the idea of concentrating all your ships together in the hope of one big battle was “a kind of shibboleth” that would do more harm than good. The principle of concentration, he described, is “a truism—no one would dispute it. As a canon of practical strategy, it is untrue.”
In his masterwork on naval war doctrine, modestly titled Some Principles of Maritime Strategy, Corbett described how the idea of putting all one’s ships together into one place didn’t induce all enemies into one big battle. Only the foe that thought it would win such a big battle would enter it. Any other sensible foe would just avoid the big battle and disperse to attack the other places where the strong fleet was not (something borne out later by the Germans in World War II). Moreover, the more a fleet concentrated in one place, the harder it would be to keep its location concealed. So the only thing that Mahan’s big fleet doctrine accomplishes in an asymmetric war, Corbett felt, is to make the enemy’s job easier.
Instead, argued Corbett, the fleet should spread out and focus on protecting shipping lanes, blockading supply routes, and generally menacing the enemy at as many locales as possible. Concentrations of a few battleships weren’t the way to go. Rather, much like the British Royal Navy policed the world’s oceans during the 1700s and 1800s, it was better to have a large number of tiny constellations of mixed ships, large and small, each able to operate independently. In short, a doctrine far more apt for today’s robotic motherships.
Even more shocking at the time, but now clearly “advanced,” Corbett emphasized that the navy should not just think about operations in the blue waters in the middle of the ocean, but also about how it could play a role in supporting operations on land. Describes one biographer, “Well before it was fashionable, he stressed the interrelationship between navies and armies.” This seems much more attuned to the role of the U.S. Navy today, which must figure out not merely how to beat an enemy fleet and protect shipping lanes, but also aid the fight on the land (it carried out over half of the fifteen thousand airstrikes during the 2003 invasion of Iraq).
Mahan won the first round in the twentieth century, but Corbett’s doctrine may well come true through twenty-first-century technology. It is not shocking, then, that many current “advanced” military thinkers are huge fans of Corbett’s and articles about him are proliferating in U.S. Navy journals; amusingly, despite the fact that he was an army officer, Robert Bateman even entered a 2007 U.S. Navy writing contest with an article extolling Corbett’s vision.

SWARMING THE FUTURE

The concept of motherships comes with a certain built-in irony. It entails a dispersion, rather than a concentration, of firepower. But the power of decision is still highly centralized and concentrated. Like the spokes in a wheel, the various unmanned systems may be far more spread out, but they are always linked back to the persons sitting inside the mothership. With unmanned systems, it becomes a top-down, “point and click” model of war, where it is always clear who is in charge. General Ronald Keys, the air force chief of air combat, describes a typical scenario that might take place: “An [enemy] air defense system pops up, and I click on a UCAS icon and drag it over and click. The UCAS throttles over and jams it, blows it up, or whatever.”
This philosophy of unmanned war is very mechanical, almost Newtonian, and certainly not one in which the robots will have much autonomy. It is not, however, the only possible direction that we might see doctrines of war move in, much as there were multiple choices on how to use tanks and airplanes after World War I. Places like DARPA, ONR, and the Marine Corps Warfighting Lab are also looking at “biological systems inspiration” for how robot doctrine might take advantage of their growing autonomy. As one analyst explains, “If you look at nature’s most efficient predators, most of them don’t hunt by themselves. They hunt in packs. They hunt in groups. And the military is hoping their robots can do the same.”
The main doctrinal concept that is emerging from these programs is “swarming.” This idea takes its name from how insects like bees and ants work together in groups, but other parallels in nature are how birds flock or wolves hunt in a pack. Rather than being centrally controlled, swarms are made up of highly mobile, individually autonomous parts. They each decide what to do on their own, but somehow still manage to organize themselves into highly effective groups. After the hunt is done, they then disperse. Individually, each part is weak, but the overall effect of the swarm can be powerful.
Swarming is not just something that happens in nature. In war, it is actually akin to how the Parthians, Huns, Mongols, and other mass armies of horsemen would fight. They would spread out over vast areas until they found the foe, and then encircle them, usually wiping them out by firing huge numbers of arrows into the foe’s huddled army, until it broke and ran. Similarly, the Germans organized their U-boats into “wolfpacks” during the Battle of the Atlantic in World War II. Each submarine would individually scour the ocean for convoys of merchant ships to attack. Once one U-boat found the convoy, all the others would converge, first pecking away at the defenses, and then, as more and more U-boats arrived on the scene, eventually overwhelming them. And it’s a style of fighting that is pretty effective. In one study of historic battles going all the way back to the wars of Alexander the Great, the side using swarm tactics won 61 percent of the battles.
Notably, 40 percent of these victories were battles that took place in cities. Perhaps because of this historic success of urban swarms, this same style of fighting is increasingly used by insurgents in today’s asymmetric wars. Whether it’s the Black Hawk Down battle in Somalia (1993), the battles of Grozny in Chechnya (1994, 1996), or the battles of Baghdad (2003, 2004) and Fallujah (2004), the usual mode is that insurgents hide out in small, dispersed bands, until they think they can overwhelm some exposed unit of the enemy force. The various bands, each of which often has its own commander, then come together from various directions and try to encircle, isolate, and overwhelm the enemy unit. This echoes T. E. Lawrence’s (better known as Lawrence of Arabia) account of how his Arab raiders in World War I used their mobility, speed, and surprise to become “an influence, a thing invulnerable, intangible, without front or back, drifting about like a gas.”
Swarms are made up of independent parts, whether it’s buzzing bees or insurgents with AK-47s, that have no one central leader or controller. So the self-organization of these groupings is key to how the whole works. The beauty of the swarm, and why it is so appealing to military thinkers for unmanned war, is how it can perform incredibly complex tasks by each part’s following incredibly simple rules.
A good example of this is a flock of birds. Hundreds of birds can move together almost as if a single bird were in charge, speeding in one direction, then turning in unison and flying off in a new direction at a new speed, without any bird bumping into another. They don’t just use this for what one can think of as tactical operations, but also at the strategic level, with flocks migrating in unison over thousands of miles. As one army colonel asked, “Obviously the birds lack published doctrine and are not receiving instructions from their flight leader, so how can they accomplish the kind of self-organization necessary for flocking?”
The answer actually comes from a researcher, Craig Reynolds, who built a program for what he called “boids,” artificial birds. As an army report on the experience described, all the boids needed to do to organize themselves together as a flock was for each individual boid to follow three simple rules: “1. Separation: Don’t get too close to any object, including other boids. 2. Alignment: Try to match the speed and direction of nearby boids. 3. Cohesion: Head for the perceived center of mass of the boids in your immediate neighborhood.” This basic boid system worked so well that it was also used in the movie Batman Returns, to create the realistic-looking bat sequences.
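The three boid rules are simple enough to sketch in a few lines of code. The following is a minimal illustration, not Reynolds’s actual program: the two-dimensional world, the radii, and the weighting constants are all assumed values chosen for readability.

```python
import math

NEIGHBOR_RADIUS = 50.0    # how far a boid can "see" (assumed value)
SEPARATION_RADIUS = 10.0  # don't get closer than this (assumed value)

class Boid:
    def __init__(self, x, y, vx, vy):
        self.x, self.y = x, y
        self.vx, self.vy = vx, vy

def step(flock, dt=1.0):
    """Advance the flock one tick using the three classic rules."""
    updates = []
    for b in flock:
        neighbors = [o for o in flock if o is not b and
                     math.hypot(o.x - b.x, o.y - b.y) < NEIGHBOR_RADIUS]
        ax = ay = 0.0
        if neighbors:
            # 1. Separation: steer away from any boid that is too close.
            for o in neighbors:
                d = math.hypot(o.x - b.x, o.y - b.y)
                if 0 < d < SEPARATION_RADIUS:
                    ax -= (o.x - b.x) / d
                    ay -= (o.y - b.y) / d
            # 2. Alignment: nudge toward the neighbors' average velocity.
            ax += 0.05 * (sum(o.vx for o in neighbors) / len(neighbors) - b.vx)
            ay += 0.05 * (sum(o.vy for o in neighbors) / len(neighbors) - b.vy)
            # 3. Cohesion: head for the neighbors' center of mass.
            ax += 0.01 * (sum(o.x for o in neighbors) / len(neighbors) - b.x)
            ay += 0.01 * (sum(o.y for o in neighbors) / len(neighbors) - b.y)
        updates.append((b.vx + ax, b.vy + ay))
    # Apply all updates at once, so every boid reacts to the same snapshot.
    for b, (vx, vy) in zip(flock, updates):
        b.vx, b.vy = vx, vy
        b.x += b.vx * dt
        b.y += b.vy * dt
```

Nothing in the loop tells the flock as a whole what to do; each boid only looks at its immediate neighbors, yet the group coheres, aligns, and keeps its spacing.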
From simple rules then emerge complex behaviors. There are many other examples of how complex, self-organizing systems work outside of nature. One is how big cities like New York never run out of food, despite the fact that no one is in charge of creating a master plan for moving food into and around the city. Another is the odd phenomenon known as “the wisdom of crowds,” where a mass of relatively uninformed people tend to make smarter decisions in the aggregate than better-informed individuals do on their own. This explains how the index of the stock market beats almost every professional stock picker.
Roboticists are now using these same approaches to get relatively unsophisticated robots to carry out very sophisticated tasks. James McLurkin, Swarm Project manager at iRobot, describes how bees and ants helped inspire his team. “We don’t want to copy their behavior, but want to look at a working system that basically recruits workers to different sites.”
The only limit is that the individual parts in the swarm have to be able to stay in contact with at least some of the other parts. This allows them to relay information across the system on where each part is and where the swarm should form or head to. The U.S. military hopes to do this by building what it calls “an unassailable wireless ‘Internet in the sky.’ ” Basically, it plans to take the kind of wireless network you might use at Starbucks and make it global by beaming it off of satellites, so that a robot anywhere in the world could hook into it and share information instantaneously. Of course, others think that this will make U.S. military doctrine inherently vulnerable to computer hacking, or even worse. As one military researcher put it, “They should just go ahead and call it Skynet.”
Just as the birds and the boids follow very simple rules to carry out very complex operations, so would an unmanned swarm in war. Each system would be given a few operating orders and let loose, each robot acting on its own, but also in collaboration with all the others. The direction of the swarm could be roughly guided by giving the robots a series of objectives ranked in priority, such as a list of targets given point value rankings. Just as a bird might have preferences between eating a bug or a Saltine cracker, taking out an enemy tank might be more useful than taking out an enemy outhouse. The swarm would then follow Napoleon’s simple credo about what works best in war: “March to the sound of the guns.”
The Santa Fe Institute carried out a study on “Proliferated Autonomous Weapons,” or PRAWNs, which shows how this concept might work in robotic warfare (Lockheed Martin has a similar program on robot swarms funded by DARPA, called the “Wolves of War”). Very basic unmanned weapons would use simple sensors to find targets, an automatic targeting recognition algorithm to identify them, and easy communications like radio and infrared (as the scientists thought the military’s idea of using only the Internet would be too easy to jam) to pass on information about what the other robots in the swarm are seeing and doing. The robots would be given simple rules to follow, which mimic those birds use to flock or ants use to forage for food. As the PRAWNs spread around in an almost random search, they would broadcast to the group any enemy targets they find. Swarms would then form to attack the targets. But each individual robot would have knowledge of how many fellow robots were attacking the same target. So if there were already too many PRAWNs attacking one target, the other robot shrimpies would move on to search for new targets. Much as ants have different types working in their swarms (soldier ants and worker ants), the individual PRAWNs might also carry different weapons or sensors, allowing them to match themselves to the needs of the overall swarm.
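The PRAWN allocation rule described above, where a robot joins the highest-value target only if it isn’t already saturated with attackers, can be sketched in a few lines. This is a toy illustration under stated assumptions, not the Santa Fe Institute’s model: the function names, the per-target cap of three attackers, and the point values are all invented for the example.

```python
MAX_ATTACKERS = 3  # assumed cap on robots per target

def assign_swarm(robots, targets, cap=MAX_ATTACKERS):
    """Assign each robot to the most valuable under-subscribed target.

    robots: list of robot ids.
    targets: dict mapping target name -> point value (higher = more important).
    Returns a dict mapping target name -> list of assigned robot ids.
    """
    assignments = {t: [] for t in targets}
    for robot in robots:
        # Each robot "hears" the shared picture: which targets exist and
        # how many swarm-mates are already attacking each one.
        open_targets = [t for t in targets if len(assignments[t]) < cap]
        if not open_targets:
            continue  # every target is saturated; keep searching for new ones
        best = max(open_targets, key=lambda t: targets[t])
        assignments[best].append(robot)
    return assignments
```

Run with eight robots and two targets, say `{"tank": 10, "outhouse": 1}`, and the first three robots pile onto the tank, the next three take the outhouse once the tank is full, and the last two disperse to search for fresh targets, exactly the self-balancing behavior the study describes.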
While each PRAWN would be very simple, and almost dumb (indeed, their AI would be less than the systems already on the market today), the sum of their swarm would be far more effective than any single system. Why drive a single SWORDS or PackBot into a building, room by room, to see if an enemy is hiding there, when a soldier could let loose a swarm of tiny robots that would scramble out and automatically search on their own? Similarly, a system of basic drones using this doctrine could efficiently cover a wide geographic area. Without any controls from below, they would loiter in the sky, spreading out to cover great distances, but converge whenever one drone in the swarm finds a target. They might conduct active searches or just wait for an enemy to reveal itself by emitting radar or shooting off a rocket. This task is simple enough for a swarm, but proved incredibly difficult for the U.S. military during the “SCUD hunt” of the first Gulf War and the Israeli military during its search for Hezbollah rocket sites in 2006, as they lacked the swarm’s ability to cover wide areas efficiently. Or a swarm might be loosed on an area where the targets are already known, such as bunker complexes or communications nodes. Rather than a controller back in a mothership furiously trying to point and click at which target to hit, which has been taken out and so doesn’t need any more drones to go after it, and which targets were missed and therefore need more attention, the autonomous swarm would just figure it all out on its own.
Swarm tactics go beyond just a basic bum rush, where every system charges at the enemy from one direction. They might act as a “cloud,” arriving into battle in one mass and then splitting up to envelop the target or targets from various directions. As Clausewitz described such a tactic in guerrilla campaigns, the systems would become “a dark and menacing cloud out of which a bolt of lightning may strike at any time.” Or the swarm might work as a “vapor,” covering a wide area, but never fully congealing in one place.
The pace of the attacks can also vary, which further complicates the tactics a swarm might present an enemy with. The systems might converge on a target all at once. Describes Naval War College expert John Arquilla, “My vision of the future is a lot of small robots, capable of attacking an enemy force from all directions simultaneously. And the point would be to overload the defense of the target.” Or they might “pulse” the target, attacking, dispersing, and reattacking again and again, aiming to wear the defenses down. They might even draw inspiration from how the Indians in Hollywood westerns would attack a wagon train, circling around and around the target, firing at it from a distance, until some opening or weakness is found.
Much like being surrounded by bees, the experience of fighting against swarms may also prove incredibly frustrating and even psychologically debilitating. As Arniss Mangolds, a vice president of Foster-Miller, puts it, “When you see one robot coming down, it’s interesting and even if it has a weapon on it, maybe it’s a little scary and you give it a little respect.... But if you’re standing somewhere and see ten robots coming at you, it’s scary.”
Ten machine-gun-armed robots headed your way is fearsome enough. But with the simple rules guiding them and the simpler, cheaper robots that they require, there is no limit on the size of swarms. iRobot has already run programs with swarms sized up to ten thousand, while one DARPA researcher describes swarms that eventually could reach the size of “zillions and zillions of robots.”

MOM AGAINST THE BEES

Swarms are thus the conceptual opposite of motherships, despite both using robotics. Swarms are decentralized in control, but concentrate firepower, while motherships are centralized but disperse firepower. If you imagine a system of motherships laid out on a big operational map, it would look like a series of hubs, each with spokes coming out of them. Like checkers pieces, each of these mothership hubs could be moved around the map by a commander, much as each of their tiny robotic spokes could be pointed and clicked into place by the people sitting inside the motherships. With swarms, the map would instead look like a mesh-work of nodes. It would almost appear like drawing lines between the stars in the galaxy or drawing a “map” of all the sites on the Internet. Every tiny node would be linked together with every other node, either directly or indirectly. Where the linkages cluster together most is where the action is, but these clusters could rapidly shift and move.
Every doctrine has its advantages and disadvantages. The mothership style of operations has very specific roles for specific units, as well as central lines of communication. Chop off one limb and the task might not get done. By contrast, self-organizing entities like swarms come with built-in redundancies. Swarms are made up of a multitude of units, each acting in parallel, so that there is no one chain of command, communications link, or supply line to chop. Attacking a swarm is akin to going after bees with a sword. Similarly, swarms are constantly acting, reacting, and adapting to the situation. So they have a feature of “perpetual novelty” built in; it is really hard to predict exactly what they will do next, which can be a very good thing in war.
The disadvantages of swarm systems are almost the inverse. In war, “not all novelty is desirable,” says retired army officer Thomas Adams. Swarms may be unpredictable to the enemy, but they are also not exactly controllable, which can lead to unexpected results for your side as well. Instead of being able to “point and click” and get the immediate action desired, a swarm takes the action on its own, which may not always be exactly where and when the commander wants it. Nothing happens in a swarm directly, but rather through the complex relationships among the parts. So swarms are also almost “nonunderstandable” in how they get a task done. Adams explains, “Complex adaptive systems are a swamp of intersecting logic. Instead of A causing B, which in turn causes C, A indirectly causes everything else and everything else indirectly causes A.”
The human commander’s job won’t be the kind of detailed point and click with a swarm. Rather, it is almost like what Gandhi said when he was sitting on the side of the road and a crowd of people went by: “There go my people. I must get up and follow them, for I am their leader!” The commander’s job will be to set the right goals and objectives. They may even place a few limits on such things as the “radius of cooperation” of the units (to prevent the entire swarm from acting like kids’ soccer teams, which tend to “beehive,” with all the kids chasing the ball when a few should stay back and guard the goal). Then, other than perhaps parceling out reserves and updating the point values on each of the targets to reflect changing needs, the human commanders would, as Naval War College expert John Arquilla describes, “Basically stay the hell out of the way of the swarm.” This type of truly “decentralized decision making,” says one marine general, “flies in the face of the American way of war....But it works.”
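A “radius of cooperation” limit of the kind described above can be sketched very simply. The following illustration assumes one-dimensional positions and an invented radius value; the point is only that the anti-“beehive” rule is one line of logic: a robot converges on a reported contact only if it is already nearby, so the rest of the swarm stays on station.

```python
COOPERATION_RADIUS = 100.0  # assumed value set by the commander

def responders(robot_positions, target_position, radius=COOPERATION_RADIUS):
    """Return the ids of robots close enough to join the attack.

    robot_positions: list of 1-D positions, indexed by robot id.
    Robots outside the radius ignore the contact and keep covering
    their own patch, instead of the whole swarm chasing one target.
    """
    return [i for i, pos in enumerate(robot_positions)
            if abs(pos - target_position) <= radius]
```

With robots at positions 0, 50, 250, and 400 and a contact reported at 60, only the first two fall inside the radius and converge; the other two hold their ground, the robotic equivalent of the kids who stay back to guard the goal.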
Whether it is motherships, swarms, or some other concept of organizing for war that we haven’t yet seen, it is still unclear what doctrines the U.S. military will ultimately choose to organize its robots around. In turn, it is also unclear which one will prove to be the best. Indeed, the choices may mix and mingle. Some envision that the concepts of swarms and motherships could be blended, with the human commanders inserting themselves at the points where swarms start to cluster. It wouldn’t be the same as the direct control of the mothership’s hub and spoke system, but it would still be a flexible way to make sure the leader was influencing what’s going on at the major point of action.
Whatever doctrine prevails, it is clear that the American military is getting ready for a battlefield where it sends out fewer humans and more robots. And so, just as the technologies and modes of wars are changing, so are the theories of how to fight them. Thinking about what robot doctrine to use in warfare will not be viewed as “advanced” for much longer.