[TWENTY-TWO]
CONCLUSION: THE DUALITY OF ROBOTS AND HUMANS
For every complex problem there is an answer that is clear, simple, and wrong.
H. L. MENCKEN
 
 
 
In 2003, the American Film Institute assembled a jury of experts to name Hollywood’s hundred greatest villains and heroes of all time. For the purposes of the vote, a “villain” was defined as a character whose “wickedness of mind, selfishness of character and will to power are sometimes masked by beauty and nobility, while others may rage unmasked. They can be horribly evil or grandiosely funny, but are ultimately tragic.” A “hero” was defined as a character “who prevails in extreme circumstances and dramatizes a sense of morality, courage and purpose. Though they may be ambiguous or flawed, they often sacrifice themselves to show humanity at its best.”
Hannibal Lecter, the cunning serial killer from The Silence of the Lambs, and Atticus Finch, the principled attorney and father in To Kill a Mockingbird, headed the lists of villains and heroes. But a single character ended up on both lists. And it was a robot, Arnold Schwarzenegger’s portrayal of the Terminator.
The same year that Hollywood put together its team of experts, the U.S. government did the same. The National Science Foundation assembled hundreds of scientists to examine what would happen over the next ten to twenty years, as everything from robotics and artificial intelligence to nanotechnology and bioscience continued to advance and converge, intertwining and feeding off one another. The product of their efforts was an immense report, weighing over three pounds. They did a masterful job, exploring the impact of these developments on fields that ranged from national security to kindergarten education. And yet, ultimately, the best that the top minds in the U.S. government could conclude was that the only thing we could be certain of was uncertainty itself. “This will be an Age of Transitions, and it will last for at least a half-century.”
It is this sense of duality and uncertainty that perhaps best captures how we may have to ultimately weigh what is going on in war and politics. Revolutionary new technologies are not only being introduced to war, but used in ever greater numbers, with novel and often unexpected effects. That said, everything that seems so futuristic is playing out in a present that follows familiar historical lines. Robots are doing amazing things in Iraq and Afghanistan, and yet, as I sat down to write this sentence, the news carried the story of five American troops tragically killed by a roadside bomb.
This sense of simultaneous change and stasis is nothing new. As obvious as a great change often seems after the fact, it rarely happens in one fell swoop, where you see a complete elimination of the old. The battleship, for example, went from being the dominant beast in the jungle of war to an endangered species in the course of the first few minutes of Pearl Harbor. And yet battleships stayed in naval service for another fifty years (the last ones firing their big guns during the 1991 Gulf War, directed where to shoot by unmanned drones). Atomic weapons had an unmistakable debut, a mushroom cloud that helped end the very same world war. Yet their real impact was that which played out over the following decades, driving a heated, global competition between two superpowers, but also making sure that war stayed “cold.”
Change is also hard to tease out if you just look at the numbers. For instance, any outside observer could tell that tanks clearly mattered when they swept across France in the German blitzkrieg early in World War II. And yet only 10 percent of the German army’s units had converted to armor, meaning that this revolutionary new force still had, in historian Max Boot’s words, “more ponies than panzers.”
Given this uncertainty, how do we know whether any new technology really matters, whether it truly is changing things? More to the point, how do we know that this is the case with robotics?
The answer is simple. From little EOD #129 “dying” on the battlefield of Iraq to the all too real questions now looming in machine ethics, the revolution in robotics is forcing us to reexamine what is possible, probable, and proper in war and politics. It is forcing us to reshape, reevaluate, and reconsider what we thought we knew before. That is the essence of revolution.
Our very vocabulary illustrates the point. Right now, we refer to these systems as “unmanned” or “artificial,” calling them by what they are not, akin to how cars were once called “horseless carriages.” This is not only because we can’t yet conceptualize exactly what these technologies are and what they can do. It is also because their nonhumanity sums up their difference from all previous weapons. It is why their effect on war and politics is beginning to play out in such a new and revolutionary manner.
Because they are not human, these new technologies are being used in ways that were impossible before. Because they are not human, these new technologies have capabilities that were impossible before. And, because they are not human, these new technologies are creating new dilemmas and problems, as well as complicating old ones, in a manner and pace that was impossible before.
Robots in Iraq and Afghanistan today are sketching out the contours of what promises to be a historic revolution in warfare. The wars of the future will feature robots of a wide variety of sizes, designs, capabilities, autonomy, and intelligence. The plans, strategies, and tactics used in these future conflicts will be built from new doctrines that are just now being created, potentially involving everything from robotic motherships and swarms of autonomous drones to cubicle warriors managing war from a distance. The forces that fight these wars may well represent both governments and nonstate groups or even crazed individuals bearing a lethality once held by nations. In these battles, machines will take on greater roles, not just in executing missions, but also in planning them.
In turn, the humans still fighting will reflect changed demographics, often not matching who we have traditionally thought of as soldiers over the last five thousand years of war. They will be younger, older, trained differently, use different equipment, fight from new locales, and even have altered concepts of their own identities and roles in war. For many, their experiences of battle will be fundamentally different from those of every soldier who went to war in every generation past. The relationships that these combatants will have with their leaders and even with each other will also be altered.
The public back home will be further distanced from the human costs of war, perhaps making such wars easier to start, but maybe also harder to end, even in democracies. In turn, the technology itself might ignite new social, economic, and even religious conflicts, creating new sparks of war among those either left behind or so fearful as to lash out in anger and confusion.
Finally, these wars will feature new questions about what is legal and ethical, including even how to control our own weapons. The resulting dilemmas and debates will not only be intense, but will challenge many of the codes that have long shaped and regulated the very practice of war.
In short, the systems and stories captured in this book are just the start of a process that will be of historic importance to the story of humanity itself. Our robotic creations are creating new dimensions and dynamics for our human wars and politics that we are only now beginning to fathom.

ROBOTIC HOPES AND FEARS

Many, including nearly every roboticist I met while writing this book, hope that these new technologies will finally end our species’ bent toward war. Indeed, even that very sober and lengthy U.S. government report about the future of technology and society expressed a similar optimism, describing how “the twenty-first century could end in world peace, universal prosperity, and evolution to a higher level of compassion and accomplishment.”
Then again, it’s hard to imagine us getting rid of conflict anytime soon. And indeed, as we learn about the new temptations, questions, confusions, and even anger that our new technologies might spark, there could be even more war and deadlier conflict. As Bertrand Russell once said, “Without more kindliness in the world, technological power would mainly serve to increase men’s capacity to inflict harm on one another.” Notably enough, Russell said this back in 1924, and the events of the last century, our most technologically advanced as well as violent one, bear him out.
The fear among soldiers is the very opposite of the scientists’ hope. They worry that war is disappearing. Let me be clear here: theirs is not some selfish worry that these new technologies will somehow end violent conflict and toss them out of work (most would gladly trade their military fatigues for a Dairy Queen uniform, if it meant the end of war and suffering). Rather, they often express fears that the unmanned planes, robot guns, and AI battle managers are turning their experience of war into something else altogether. Lives may be saved in unmanned warfare, but war itself is becoming almost unrecognizable, something they are not all that comfortable with.
From Homer’s Achilles to Shakespeare’s Henry V to my grandfather in World War II, war and the life of the warrior were never simply about killing. Rather, they were about the ideals that lay behind it, and the accompanying sense of sacrifice, the acceptance that one might also have to die. Indeed, military historian Martin van Creveld argued that this willingness to sacrifice “represents the single most important factor” in modern war. “War does not begin when some people kill others; instead, it starts at the point where they themselves risk being killed in return.”
This willingness to bear the most horrible burdens, face the most terrible risks, and even make the ultimate sacrifice, for your nation or just for your buddies, has always made war defy the normal rules of logic. All the great writers on war focus on this aspect because it gives war its humanity, its sense of purpose, and its heroism. From the knights’ codes of chivalry to today’s goals of ending tyranny or terrorism, war has always had to be linked to some ideal. Of course, these ideals weren’t always followed and were rife with double standards. But they influenced how war itself was viewed, as something terrible but always linked to a higher purpose. Without these ideals, war’s often horrific costs and sacrifices would be deemed unworthy.
Robotics starts to take these ideals, so essential to the definition of war, out of the equation. In so doing, it might just make the way we have framed war, and rationalized the killing that takes place in it, fall apart. Paul Kahn, a professor at Yale Law School, describes this as the “paradox of riskless warfare.” It was tough enough to describe war as permissible or moral when combat involved deliberately choosing to inflict destruction and mayhem on one’s fellow humans. Yet as long as there was a sense of mutuality, with both sides accepting the risks involved, there was some sense of equality and fairness.
As technologies have distanced soldiers farther and farther from the fighting, the risks, and the destruction, this sense of equality and fairness becomes harder to claim. When it becomes not just a matter of distance, but actual disconnection, as Kahn describes, it “propels us beyond the ethics of warfare.”
This doesn’t mean, then, that any war is instantly evil, immoral, or purposeless. And it certainly doesn’t mean that there will be no more wars. Rather, wars using these new technologies are looking less like war as we once knew and understood it. The old definitions and codes don’t fit so well with the realities brought on by our new technologies of killing.
A parallel is what happened to the old codes of chivalry, such as those of the Japanese samurai. Having a system of understanding and ethics that had lasted for centuries, the samurai held back from using guns for as long as they could. They knew that the technology was useful, but they saw these new weapons as depersonalizing war and, in so doing, dishonoring the codes and values that defined them as warriors. And yet, eventually, the pace of history and technology moved on, and Japanese society soon had to adjust to the new ways of both the world and war (despite the best efforts of Tom Cruise in The Last Samurai). They started using guns instead of swords, and redefined both what they viewed as war and warriors’ values.
The same redefinition may well happen with unmanned systems. If we are lucky, these new technologies might even redefine our sense of the acceptable human costs of war. Machiavelli wrote that the side that lost the 1424 battle of Zagonara had suffered “a defeat renowned throughout all Italy.” He was describing war in the time of the condottieri (hired armies), however, in which battles involved more maneuvering and posturing than actual fighting. So in this “renowned” defeat that no one now remembers, only three men perished, not from actual combat wounds, but from accidentally falling off their horses.
Maybe the shift to unmanned systems, which bear the risks of war in our stead and move humans out of harm’s way, will lead to a similar redefinition of war and its human costs. But history provides counterexamples. The Japanese redefined their warrior code in the late 1800s, but they turned it into a chauvinist militarism, which just a few decades later bore such bitter fruit as the Rape of Nanking, Pearl Harbor, and the horrors of the Bataan Death March. In turn, the condottieri of Machiavelli’s time may have valued their soldiers’ lives so much that they were unwilling to risk them in open battle with each other, but they had no such qualms in their definition of war about killing, raping, or pillaging any civilians they came across. A similar concern pops up time and again with robotics. In making war less human, we may also be making it less humane.

FINAGLE’S LAW

My own worry is a bit different. I believe strongly in an adage that riffs on the better-known “Murphy’s law.” What science fiction calls “Finagle’s law” states a simple truth that I have come to believe holds for both humans and their creations, including their robots: “Anything that can go wrong, will—at the worst possible moment.”
I have experienced Finagle’s law again and again. The best example from my personal life may have been when the tuxedo store delivered a shirt sized for a four-year-old boy on the day before my wedding. But I also saw it at play time and again during the research for this book. The best example of this may have been when I was touring the U.S. Air Force’s Middle East command center and the electricity went out. Even worse, the backup power generator didn’t come on because, at that very moment, a breaker wasn’t working. In the most high-tech military facility in the world, from which all unmanned operations in Iraq and Afghanistan are coordinated, airmen were finding their way around with flashlights as they rushed to turn off the computers before their batteries died.
Finagle’s law is important, as we are experiencing one of the most amazing changes in humanity’s history, and yet we are completely unprepared. This is nothing new. Our forebears were likely just as unprepared for such groundbreaking new technologies as fire, printing presses, machine guns, and Pudding Pops. But they muddled through and figured it all out. That is, until some new technology came along and shook everything up again.
But today, in our overcrowded, interconnected world, the stakes are far higher than they’ve ever been before. Even the most ardent supporters of robotics, AI, and the Singularity warn that we have to “get it right the first time,” as there is little room for “non-recoverable errors.”
Unfortunately, we should expect errors. It is not merely Finagle’s law at work in our machines. Mistakes are not just in robot nature, but also in human nature. As the adage known as Hanlon’s razor puts it, never attribute to malice that which can be adequately explained by stupidity. And as Albert Einstein reportedly quipped, “Only two things are infinite, the universe and human stupidity, and I’m not sure about the former.”
Compounding the challenge is the fact that we have less time to react and adjust to these immense changes than ever before. Our concern shouldn’t be merely change itself. Rather, as Admiral Michael Mullen, the chairman of the Joint Chiefs of Staff, put it, “What has gripped me the most is the pace of all this change.”
In the blink of an eye, things that were just fodder for science fiction are creeping, crawling, flying, swimming, and shooting on today’s battlefields. And these machines are just the first generation of these new technologies, some of which may already be antiquated as you read these lines.
One army officer captured well what happens when you combine an incredible pace of change with a lack of preparation: “We will only be able to react, and by the time we have responded we will be even further behind the next wave of change and very quickly left in the dust of accelerating change . . . . Change is coming, it is coming faster than nearly everyone expects, and nothing can be done to stop it.”

CREATIVITY CONCERNS

This all heightens the need to start discussing the issues that arise as unmanned technologies are increasingly used in our society and, even more so, in our wars.
Part of the reason is to take some of the shock and sting out of these transitions, which will feel overwhelming to many, and might even spur some to violence. As terrorism expert Richard Clarke explained, “We need to have discussion of issues before they are on us. Violence comes if people feel surprises.” Or as the aptly named band Army of Me put it in their aptly named song “Going Through Changes”: “It’s hard to accept what you don’t understand / And it’s hard to launch without knowing how to land.”
But most of all, the reason for having these discussions is that our scientists, our military, and our political and business leaders are making decisions now that will matter for all of human society in decades to come. We are not merely building machines that will be with us for years, but setting in motion potentially irreversible research and development on what these machines can and cannot do. Even more so, we are just now creating the frameworks that will fill the current vacuum of policy, law, doctrine, and ethics. That is, how we frame an issue now will shape how we will understand and respond to the challenges that will pop up years from now.
Yet for the most part, we are deciding such important matters from a position of ignorance. “Ignorance” actually has two meanings. The first is the one we tend to think of, “the state of being uninformed.” But it can also mean the “willful neglect or refusal to acquire knowledge which one may acquire and it is his duty to have.” This latter definition may be more apt when it comes to robotics today. We fund, research, and use these new technologies more and more, especially in war. Yet we willfully refuse to acknowledge that the reality of robotics is now upon us. “We are already in A Brave New World, but just don’t want to admit it,” says one military consultant on unmanned systems. “We refuse to take our blinders off.”
The result is that leaders are ill equipped to handle all the emerging complications and dilemmas. As one worker in the military’s robotics test programs puts it, “The people higher up, who are making the decisions that matter, do not have a good understanding of this technology. They are older and more mature, but they don’t get it. People fear what they don’t know.”
Most of all, we have to start asking what exactly we want to invest our society’s collective intellect, energy, drive, and resources in. These are exciting, thrilling times, but I cannot think about them without a bit of disappointment. There is an inherent sadness in the fact that war remains one of those things that humankind is especially good at. As Eisenhower once said, “Every gun that is made, every warship launched, every rocket fired, signifies in the final sense a theft from those who hunger and are not fed, those who are cold and are not clothed. The world in arms is not spending money alone. It is spending the sweat of its laborers, the genius of its scientists and the hopes of its children.”
Humans have long been distinguished from other animals by our ability to create. Our distant ancestors learned how to tame the wild, reach the top of the food chain, and build civilization. Our more recent forebears figured out how to crack the codes of science, and even escape the bonds of gravity, taking our species beyond our home planet. Through our art, literature, poetry, music, architecture, and culture, we have fashioned awe-inspiring ways to express ourselves and our love for one another.
And now we are creating something exciting and new, a technology that might just transform humans’ role in their world, perhaps even create a new species. But this revolution is mainly driven by our inability to move beyond the conflicts that have shaped human history from the very start. Sadly, our machines may not be the only thing wired for war.