The Morality of War, Terror, and Deterrence
To warn of an evil is justified only if, along with the warning, there is a way of escape.
—Cicero, On Divination, Book II, 44 BCE
In an episode of the original Star Trek television series, titled “Arena,” an alien species called the Gorn attacks and destroys an Earth outpost at Cestus III, leading Captain Kirk and the Enterprise to give chase to avenge the unprovoked attack. Spock is not so sure about the aliens’ motives, and wonders aloud about having “regard for another sentient being,” but is interrupted by the more martial Kirk, who reminds him, “out here we’re the only policemen around.” Their moral quandary is interrupted when both ships are stopped by an advanced civilization called the Metrons, who explain, “We have analyzed you and have learned that your violent tendencies are inherent. So be it. We will control them. We will resolve your conflict in the way most suited to your limited mentalities.” Kirk and the captain of the Gorn ship—a big-brained bipedal reptile—are transported to a neutral planet where they are instructed to fight to the death, at which point the loser’s ship and crew will be destroyed. The Gorn is stronger than Kirk and is easily able to rebuff his assaults that escalate from tree branch strikes to a massive boulder impact. He tells Captain Kirk that if Kirk surrenders he will be “merciful and quick.” “Like you were at Cestus III?” Kirk rejoins. “You were intruding! You established an outpost in our space,” the Gorn counters. “You butchered helpless human beings,” Kirk protests. “We destroyed invaders, as I shall destroy you!” the Gorn retorts. Back on the ship, where the crew is watching the battle unfold on the viewing screen, Dr. McCoy wonders aloud, “Can that be true? Was Cestus III an intrusion on their space?” “It may well be possible, Doctor,” Spock reflects. “We know very little about that section of the galaxy.” “Then we could be in the wrong,” McCoy admits. “The Gorn might have been simply trying to protect themselves.”
At the climax of the episode Kirk recalls the formula for making gunpowder after seeing the various elements readily available on the planet’s surface—sulfur, charcoal, and potassium nitrate, with diamonds as the deadly projectile—elements that the Metrons had provided to see if judicious reason could triumph over brute strength. Kirk puts them all together into a lethal weapon that he fires at the Gorn as the latter closes in for the kill. Now incapacitated, the Gorn drops his stone dagger, which Kirk grabs and places at the throat of his opponent to deliver the coup de grâce. Then, at this moment of moral choice, Kirk opts for mercy. His reason drives him to take the moral perspective of his opponent. “No, I won’t kill you. Maybe you thought you were protecting yourself when you attacked the outpost.” Kirk tosses the dagger aside, at which point a Metron appears. “You surprise me, Captain. By sparing your helpless enemy who surely would have destroyed you, you demonstrated the advanced trait of mercy, something we hardly expected. We feel that there may be hope for your kind. Therefore, you will not be destroyed. Perhaps in several thousand years, your people and mine shall meet to reach an agreement. You are still half savage, but there is hope.”1
The creator of Star Trek, Gene Roddenberry, invented a genre unto itself with the creation of the magnificent starship Enterprise, whose twenty-third-century mission was to expand humanity’s horizons, both physically and morally, via extraterrestrial interactions with the starship’s interracial, transnational, trans-species mixed crew. Each episode was both intrepid space adventure and thoughtful morality play, and many episodes explored the controversial issues of the age—war and conflict, imperialism and authoritarianism, duty and loyalty, racism and sexism, and how humanity might handle them centuries hence. Roddenberry made it clear that one of his goals with the series was to smuggle onto TV allegorical moral commentaries on current events. He said that by creating “a new world with new rules, I could make statements about sex, religion, Vietnam, politics, and intercontinental missiles. Indeed, we did make them on Star Trek: We were sending messages and fortunately they all got by the network.”2 This is one method, among many, of bringing about social change, and it is revealing to note that Roddenberry was personally well acquainted with war. In 1941, as a young man of only twenty, he’d enlisted in the US Army Air Corps and flew eighty-nine missions in the South Pacific, for which he was decorated with the Distinguished Flying Cross. So he knew of what he wrote: “The strength of a civilization is not measured by its ability to fight wars, but rather by its ability to prevent them.”3
PIRATE MORALITY, OR CONFLICT AND COSTLY SIGNALING THEORY
Determining how civilizations have learned to prevent wars involves understanding the psychology of violence and deterrence—grounded in the logic of conflict and our moral emotions—that we examined in the previous chapter. Consider the following question: instead of using stealth and camouflage, why would a pirate choose to travel in a giant spotlight by flying the Jolly Roger flag with its ominous skull and crossbones, thereby signaling to potential prey that they’re being pursued by a predator? This is an example of a phenomenon called Costly Signaling Theory (CST), which posits that organisms (including people) will sometimes do things that are costly to themselves to send a signal to others.4 There are both positive and negative examples.
On the positive side, people sometimes act in certain ways not just to help those who are genetically related to them (explained by kin selection), and not just to help those who will return the favor (explained by reciprocal altruism), but to send a signal that says, in essence, “my altruistic and charitable acts demonstrate that I am so successful that I can afford to make such sacrifices for others.” That is, some altruistic acts are a form of information that carries a signal to others of trust and status—trust that I can be counted on to help others when they need it so that I can expect others to do the same for me; and status that I have the health, intelligence, and resources to afford to be so kind and generous. This type of CST explains why some people make large donations to charity or drive expensive cars or wear expensive jewelry—as signals to others. And since the signal must be genuine, the signalers tend to believe in what they are doing, so to that extent the giving is done with honest motive.
The negative side of CST may be seen in high-risk-taking behavior by young males—behavior that occasionally ends very badly indeed, with the young male accidentally taking himself out of the gene pool entirely (for which Darwin Awards are given out annually5). Risky behavior may be a male’s way of signaling to a female that his genes are so good and he is such a superior specimen that he can, for example, drink twelve beers and still drive home at more than one hundred miles per hour in total safety. He may be signaling to females that he is so genetically exceptional he can afford to risk life and limb, and would therefore be a good mate and an excellent genetic and resource provider for her and her future offspring with him. Dangerous and risky acts may also signal to other males that the danger seeker and risk taker is powerful and not to be messed with, as in Jim Croce’s warning, “You don’t tug on Superman’s cape, you don’t spit into the wind, you don’t pull the mask off the old Lone Ranger, and you don’t mess around with Jim.”
With this background in mind we can see how Costly Signaling Theory explains why pirates flew the Jolly Roger flag on their ships. It was a signal that you—the innocent merchant ship—were about to be boarded by a lawless beastly bunch of wild and unruly maniacs hell-bent on murder and mayhem. This was especially clever because, according to the economist Peter Leeson in his pirate myth–busting book The Invisible Hook, pirates were not, in fact, the criminally insane, traitorous terrorists of popular lore, in which anarchy was the rule and the rule of law was nonexistent. This piratical mythology can’t be true because ships packed full of riotous sociopaths, ruled by chaos and treachery, couldn’t possibly be successful at anything for any length of time. The truth was much less exciting and mysterious; pirate communities were “orderly and honest,” says Leeson, and had to be to meet buccaneers’ economic goal of turning a profit. “To cooperate for mutual gain—indeed, to advance their criminal organization at all—pirates needed to prevent their outlaw society from degenerating into bedlam.”6 So there is, after all, honor among thieves. As Adam Smith noted in The Wealth of Nations, “Society cannot subsist among those who are at all times ready to hurt and injure one another.… If there is any society among robbers and murderers, they must at least … abstain from robbing and murdering one another.”7
Pirate societies provide evidence for Smith’s theory that economies are the result of bottom-up spontaneous self-organized order that naturally arises from social interactions. Leeson shows how pirate communities democratically elected their captains and quartermasters, and constructed constitutions that outlined rules about drinking, smoking, gambling, sex (no boys or women allowed on board), the use of fire and candles (a shipboard fire could prove disastrous for crew and cargo), fighting and disorderly conduct (the result of high-testosterone, risk-taking men confined in tight quarters for long stretches of time), desertion, and especially shirking one’s duties during battle. Like any other society, pirates had to deal with the “free rider” problem because the equitable division of loot among inequitable efforts would inevitably lead to resentment, retaliation, and economic chaos.
Enforcement was key. Just as criminal courts required witnesses to swear on the Bible, pirate crews had to consent to the captain’s code before sailing. In the words of one observer, “All swore to ’em, upon a Hatchet for want of a Bible. When ever any enter on board of these Ships voluntarily, they are obliged to sign all their Articles of Agreement to prevent Disputes and Ranglings afterwards.” Leeson even tracked down the sharing of contractual arrangements between captains, made possible by the fact that “more than 70 percent of Anglo-American pirates active between 1716 and 1726, for example, can be connected back to one of three pirate captains.” Thus, the pirate code “emerged from piratical interactions and information sharing, not from a pirate king who centrally designed and imposed a common code on all current and future sea bandits.”8
Whence, then, did the myth of piratical lawlessness and anarchy arise? It arose from the pirates themselves, naturally, in whose best interests it was to perpetuate the myth to minimize losses and maximize profits. They flew the Jolly Roger to signal their reputation for mayhem but, in fact, the pirates didn’t actually want a fight, because fighting is costly and dangerous and might result in economic loss. Pirates just wanted booty, and they preferred a low-risk surrender to a high-risk battle. From the merchants’ perspective, the nonviolent surrender of their booty was also preferable to fighting back, because violence was costly to them too. Of course, to maintain a reputation that you are a badass, you actually have to occasionally be a badass, so pirates intermittently engaged in violence, reports of which they happily provided to newspaper editors, who duly published them in gory and exaggerated detail. As the eighteenth-century English pirate captain Sam Bellamy explained, “I scorn to do any one a Mischief, when it is not for my Advantage.” Leeson concludes, “By signaling pirates’ identity to potential targets, the Jolly Roger prevented bloody battle that would needlessly injure or kill not only pirates, but also innocent merchant seamen.”9
The Jolly Roger effect also helps to explain why Somali pirates today typically receive ransom payoffs instead of violent resistance from shipping crews and their owners. It is in everyone’s economic interest to negotiate the transactions as quickly and peacefully as possible. In Tom Hanks’s film Captain Phillips—the true-life story of the 2009 Somali hijacking of a container ship that resulted in the death of most of the Somali pirates when the US Navy came to the rescue—viewers were baffled as to why the ship’s owners didn’t just issue firearms to the captain and his crew to fight the pirates off. The answer is obvious once you run through the cost-benefit calculations. It’s cheaper to just pay off the pirates than risk the lives of a crew that’s not combat-trained. Those who are, such as the US Navy, may do so, but at an unrealistic financial cost given that they can’t patrol every shipping lane in such a vast expanse. (In fact, the number of successful pirate hijackings dropped to zero in 2013, but at a cost considerably greater than the ransoms paid from 2005 to 2012, which totaled $376 million. In 2013, shipping companies shelled out an additional $423 million on armed security guards over the $531 million from the year before [totaling $954 million], plus incurred an additional $1.53 billion in fuel costs to cruise at the higher speed of 18 knots.10) By killing or capturing every one of the pirates who commandeered Captain Phillips’s ship, the US Navy was sending a powerful signal to Somali pirates to stop boarding US cargo ships—or else. However, unless deterrence signals are consistent and long-term, pirates will run through the same set of calculations, see that the “or else” was a bluff, and treat the risk as a cost-of-doing-business expense.
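To make the cost-benefit logic concrete, here is a minimal back-of-the-envelope sketch using only the figures quoted above (the numbers come from the text; they are not an independent estimate):

```python
# Rough cost-benefit sketch of Somali piracy countermeasures,
# using only the figures quoted in the text (US dollars).

ransoms_2005_2012 = 376e6     # total ransoms paid, 2005-2012
guards_2012 = 531e6           # spent on armed security guards in 2012
guards_2013_extra = 423e6     # additional spent on guards in 2013
fuel_2013_extra = 1.53e9      # extra fuel to cruise at 18 knots in 2013

guards_2013_total = guards_2012 + guards_2013_extra      # ~$954 million
prevention_2013 = guards_2013_total + fuel_2013_extra    # ~$2.48 billion

print(f"Guards in 2013:      ${guards_2013_total / 1e6:,.0f} million")
print(f"Prevention in 2013:  ${prevention_2013 / 1e9:.2f} billion")
print(f"Ransoms, 2005-2012:  ${ransoms_2005_2012 / 1e6:,.0f} million")
print(f"One year of prevention cost roughly "
      f"{prevention_2013 / ransoms_2005_2012:.1f}x eight years of ransoms.")
```

The sketch simply restates the point in the text: the 2013 drop to zero hijackings came at several times the cost of the ransoms it replaced, which is why the deterrence signal pays off only if it is sustained.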
The long-term solution lies elsewhere—in Somalia itself. Markets operating in a lawless society are more like black markets than free markets, and since the Somali government has lost control of its people, Somali pirates are essentially free to take the law into their own hands. Until Somalia establishes a rule of law and a lawful free market in which its citizens can find gainful employment, lawless black market piracy will remain profitable. The pirates themselves may have their own set of rules by which they organize themselves into mini Leviathans of order, but on the high seas anarchy reigns.
THE EVOLUTIONARY LOGIC OF DETERRENCE
This brings us back to the prisoner’s dilemma and the Hobbesian trap discussed in the previous chapter. Pinker calls this the “other guy problem.” The other guy may be nice, but you know that he also wants to “win” (continuing the sports analogy), so he might be tempted to defect (cheat), especially if he thinks that you too want to win and might be tempted to cheat. And he knows that you know that he is running through the same game matrix as you are, and you know that he knows that you know …
In international relations the “other guy” is another nation or state, and if they have nuclear weapons, and so do you, it can lead to an arms race resulting in something like a Nash equilibrium, such as the one that kept the United States and the Soviet Union in a Cold War nuclear freeze called a “balance of terror” or Mutual Assured Destruction (MAD) for nearly half a century. Let’s look at how this worked, and what more we can do to further reduce the risks of nuclear war based on what we know about human nature and the logic of deterrence. This exercise will serve as another example of both moral progress and how we can apply science and reason to solving a serious threat to our security.
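To see the logic in miniature, here is a sketch of the two situations as simple 2 × 2 games. The payoff numbers are invented purely for illustration; nothing here comes from an actual strategic study. The point is structural: in the Hobbesian trap a first strike "wins," so mutual striking is the only stable outcome, whereas under MAD assured retaliation removes any gain from striking first, and mutual restraint becomes an equilibrium.

```python
from itertools import product

def pure_nash(payoffs, actions):
    """Return the pure-strategy Nash equilibria of a two-player game.
    payoffs[(a, b)] = (payoff to player 1, payoff to player 2)."""
    equilibria = []
    for a, b in product(actions, repeat=2):
        p1, p2 = payoffs[(a, b)]
        best_1 = all(payoffs[(alt, b)][0] <= p1 for alt in actions)  # no better unilateral move for 1
        best_2 = all(payoffs[(a, alt)][1] <= p2 for alt in actions)  # no better unilateral move for 2
        if best_1 and best_2:
            equilibria.append((a, b))
    return equilibria

actions = ("strike first", "refrain")

# Hobbesian trap: no secure second-strike capability, so a first strike "wins."
hobbesian_trap = {
    ("strike first", "strike first"): (-5, -5),   # both strike, both badly hurt
    ("strike first", "refrain"):      (3, -10),   # striker gains, victim is devastated
    ("refrain", "strike first"):      (-10, 3),
    ("refrain", "refrain"):           (1, 1),     # uneasy peace
}

# MAD: retaliation is assured, so any first strike ends in mutual annihilation.
mad = {
    ("strike first", "strike first"): (-100, -100),
    ("strike first", "refrain"):      (-100, -100),
    ("refrain", "strike first"):      (-100, -100),
    ("refrain", "refrain"):           (1, 1),
}

print(pure_nash(hobbesian_trap, actions))  # only ('strike first', 'strike first')
print(pure_nash(mad, actions))             # ('refrain', 'refrain') is now an equilibrium
# (the all-out exchange also survives as a knife-edge equilibrium, since once
#  both sides have launched, neither can gain by holding back)
```

That structural change, the removal of any payoff for going first, is what McNamara describes in the passage quoted below.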
When I was an undergraduate at Pepperdine University in 1974, the father of the hydrogen bomb—Edward Teller—spoke at our campus in conjunction with the awarding of an honorary doctorate. His message was that deterrence works, even though at the time I remember thinking—like so many politicos were saying—“yeah, but a single slipup is all it takes.” Popular films such as Fail-Safe and Dr. Strangelove reinforced the point. But the slipup never came. MAD has worked because neither side has anything to gain by initiating a first strike against the other nation—the retaliatory capability of both is such that a first strike would most likely lead to the utter annihilation of both countries (along with much of the rest of the world). “It’s not mad!” proclaimed Secretary of Defense Robert S. McNamara. “Mutual Assured Destruction is the foundation of deterrence. Nuclear weapons have no military utility whatsoever, excepting only to deter one’s opponent from their use. Which means you should never, never, never initiate their use against a nuclear-equipped opponent. If you do, it’s suicide.”11
The logic of deterrence was first articulated in 1946 by the American military strategist Bernard Brodie in his appropriately titled book The Absolute Weapon, in which he noted the break in history that atomic weapons brought with their development: “Thus far the chief purpose of our military establishment has been to win wars. From now on, its chief purpose must be to avert them. It can have almost no other purpose.”12 As Dr. Strangelove explained in Stanley Kubrick’s classic Cold War film (in the famous war room scene where fighting is not allowed): “Deterrence is the art of producing in the mind of the enemy the fear to attack.” Said enemy, of course, must know that you have at the ready such destructive devices, and that is why “The whole point of a doomsday machine is lost if you keep it a secret!”13
Dr. Strangelove was a black comedy that parodied MAD by showing what can happen when things go terribly wrong, in this case when General Jack D. Ripper becomes unhinged at the thought of “Communist infiltration, Communist indoctrination, Communist subversion, and the international Communist conspiracy to sap and impurify all of our precious bodily fluids”; thus he orders a nuclear first strike against the Soviet Union. Given this unfortunate incident and knowing that the Russians know about it and will therefore retaliate, General “Buck” Turgidson pleads with the president to go all out and launch a full first strike. “Mr. President, I’m not saying we wouldn’t get our hair mussed, but I do say no more than ten to twenty million killed, tops, uh, depending on the breaks.”14
He wasn’t far off from real projected casualties (Kubrick was a student of Cold War strategy), as computed by Robert McNamara: “What kind and amount of destruction must we be able to inflict upon the attacker in the retaliation to ensure that he would indeed be deterred from initiating such an attack? In the case of the Soviet Union, I would judge that a capability on our part to destroy, say, one-fifth to one-fourth of their population and one-half of her industrial capacity would serve as an effective deterrent.”15 When he spoke these words in 1968, the population of the Soviet Union was about 240 million, which translates to 48 million to 60 million dead. If that doesn’t make you shudder, Mao Zedong once said that he was willing to sacrifice 50 percent of the Chinese population, which at the time was about 600 million. “We have so many people. We can afford to lose a few. What difference does it make?”16
The difference was fully appreciated by Harold Agnew, who was something of a real-life Dr. Strangelove. Director of the Los Alamos National Laboratory for a decade, prior to that he worked on the Manhattan Project at Los Alamos, building the first atomic bombs—“Fat Man” and “Little Boy”—and he flew in a B-29 plane parallel to the Enola Gay bomber to observe and measure the yield of the explosion over Hiroshima; he even sneaked onto the plane his own 16-millimeter movie camera and captured the only footage of the explosion that killed eighty thousand people. Deterrence is what Agnew had in mind when he said that he would require every world leader to witness an atomic blast every five years while standing in his underwear “so he feels the heat and understands just what he’s screwing around with because we’re fast approaching an era where there aren’t any of us left that have ever seen a megaton bomb go off. And once you’ve seen one, it’s rather sobering.”17
A 1979 report from the Office of Technology Assessment for the US Congress, titled The Effects of Nuclear War, estimated that 155 million to 165 million Americans would die in an all-out Soviet first strike (unless people used existing shelters near their homes, reducing fatalities to 110 to 120 million). The population of the United States at the time was 225 million, so the estimated percent that would be killed ranged from 49 percent to 73 percent. Staggering. The report then lays out a scenario for what would happen to one city the size of Detroit if it were hit by a 1-megaton (Mt) nuclear bomb. By comparison, Little Boy—the atomic bomb dropped on Hiroshima—had a yield of 16 kilotons. A 1-Mt bomb is a thousand kilotons, or the equivalent of 62.5 Little Boy bombs.
A 1-Mt [megaton] explosion on the surface leaves a crater about 1,000 feet in diameter and 200 feet deep, surrounded by a rim of highly radioactive soil about twice this diameter thrown out of the crater. Out to a distance of 0.6 miles from the center there will be nothing recognizable remaining.… Of the 70,000 people in this area during nonworking hours, there will be virtually no survivors.… Individual residences in this region will be totally destroyed, with only foundations and basements remaining.… Whether fallout comes from the stem or the cap of the mushroom is a major concern in the general vicinity of the detonation because of the time element and its effect on general emergency operations.… The near half-million injured present a medical task of incredible magnitude. Hospitals and beds within 4 miles of the blast would be totally destroyed. Another 15 percent in the 4- to 8-mile distance range will be severely damaged, leaving 5,000 beds remaining outside the region of significant damage. Since this is only 1 percent of the number injured, these beds are incapable of providing significant medical assistance.… Burn victims will number in the tens of thousands; yet in 1977 there were only 85 specialized burn centers, with probably 1,000 to 2,000 beds, in the entire United States.
The report goes on like this for pages. Multiply those effects by 250 (the number of American cities believed to be targeted by the Soviet Union) and you get the report’s stark conclusion: “The effects on U.S. society would be catastrophic.”18 The catastrophe would be no less so for the Soviet Union and its allies. A 1957 report by the Strategic Air Command (SAC) estimated that around 360 million casualties (killed and wounded) would be inflicted on both sides in the first week of a nuclear exchange with the Soviet bloc.19 Such figures are so unimaginable that it is hard to wrap our minds around them.
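For readers who want to check the arithmetic behind the figures above (the 49 to 73 percent fatality range and the 62.5 Little Boy equivalents), here is a minimal sketch using only the numbers quoted in the text:

```python
# Quick check of the arithmetic behind the figures quoted above.

little_boy_kt = 16          # yield of the Hiroshima bomb, in kilotons
one_megaton_kt = 1_000      # 1 Mt = 1,000 kilotons
print(one_megaton_kt / little_boy_kt)       # 62.5 Little Boy equivalents

us_population_1979 = 225e6
deaths_with_shelters = 110e6                # low estimate, using existing shelters
deaths_without_shelters = 165e6             # high estimate, no shelters
print(f"{deaths_with_shelters / us_population_1979:.0%} to "
      f"{deaths_without_shelters / us_population_1979:.0%} of the population killed")
# prints "49% to 73% of the population killed"
```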
Figure 2-1 shows a Civil Defense Air Raid card from the 1950s, instructing citizens to “duck and cover” in the event of a nuclear attack.20 As a child growing up in the early 1960s I recall our periodic Friday morning drills at Montrose Elementary School, trusting my teacher’s assurance that our flimsy wooden desks would protect us from a thermonuclear blast over Los Angeles.
Figure 2-1. Civil Defense Air Raid card circa 1950s
Deterrence has worked so far—no nuclear weapon has been detonated in a conflict of any kind since August 1945—but it would be foolish to think of deterrence as a permanent solution.21 As long ago as 1795, in an essay titled Perpetual Peace, Immanuel Kant worked out what such deterrence ultimately leads to: “A war, therefore, which might cause the destruction of both parties at once … would permit the conclusion of a perpetual peace only upon the vast burial-ground of the human species.”22 (Kant’s book title came from an innkeeper’s sign featuring a cemetery—not the type of perpetual peace most of us strive for.) Deterrence acts as only a temporary solution to the Hobbesian temptation to strike first, allowing both Leviathans to go about their business in relative peace, settling for small proxy wars in swampy Third World countries.
In addition to the immediate deaths due to the explosion, heat, and radiation, there are possible long-term effects explored by the astronomer Carl Sagan and the atmospheric scientist Richard Turco in their book titled A Path Where No Man Thought23 (based on a technical paper in Science24)—in which the smoke, soot, and debris from the fires caused by an all-out thermonuclear war would render the planet nearly uninhabitable by blocking the sun’s radiation and triggering another ice age. They called this scenario “nuclear winter,” which most scientists have since dismissed as highly unlikely, arguing that at most it would lead to a “nuclear autumn” instead of winter.25 As one critic noted, millions of people would die from hunger due to the disruption of the international food supply system rather than from climate change.26 Well, that’s a relief—only millions instead of billions. Regardless of the details of that particular debate, in terms of tracking moral progress toward a nuclear-free world, Sagan and Turco outlined a realistic proposal to reduce the world’s nuclear stockpile to a level of Minimum Sufficiency Deterrence (MSD)—large enough to deter a nuclear first strike, but small enough so that if a mistake or a madman detonated a weapon it would not result in an all-out nuclear winter (or autumn).
It would appear that we are now well on our way to MSD, as evidenced in figure 2-2, showing a dramatic decline in nuclear stockpiles from a peak in 1986 of approximately 70,000, to about 16,400 to 17,200 total nuclear warheads in 2014.27 That’s still a long way from the figure of about 1,000 weapons that Sagan and Turco estimated for MSD,28 but at the rate we’re going we could get there by 2025. And since the end of the Cold War, it has become strategically less necessary and economically less desirable to retain so many nuclear weapons, resulting in a dramatic decline in stockpiles between the United States (7,315) and Russia (8,000), who together account for 93.4 percent of the total. Even more encouragingly, there are only about 4,200 operationally active nuclear warheads held by Russia (1,600), the United States (1,920), France (290), and the United Kingdom (160), making the world safer from being blown to smithereens by tens of thousands of nuclear warheads than it has been since 1945.29
Figure 2-2. The Decline in Global Nuclear Stockpiles
Total nuclear warheads (16,400 to 17,200) and operationally active nuclear warheads (about 4,200) are the lowest since the 1950s.30
Can the nuclear stockpile get to zero? To find out I audited a class called “Perspectives on War and Peace” at Claremont Graduate University taught by the political scientist Jacek Kugler. His answer is no, for at least seven reasons: (1) Credible deterrence among nations that trust each other is stable and predictable. (2) Unstable or unpredictable states such as North Korea, which periodically rattle their nuclear swords inside their silo sheaths, require threat of retaliation. (3) There still exist rogue states such as Iran that need the threat of retaliation because they threaten to join the nuclear club but are not so keen on joining the club of nations. (4) States waging conventional wars that might escalate to using weapons of mass destruction require the threat of retaliation to keep them in check. (5) Nonstate entities such as terrorist groups that we either do not trust or do not know enough about also require the threat of retaliation. (6) There may be a taboo against using nuclear weapons, but as yet there is no taboo against owning them. (7) The nuclear genie of how to make an atomic bomb is out of the bottle, so there is always the chance that other nations or terrorists will obtain nuclear weapons and thereby destabilize deterrence as well as increase the probability of an accidental detonation.
Kugler thinks we can get to “regional zero”—nuclear-free zones such as South America and Australia—so long as the major global nuclear powers (the United States, Russia, China, and perhaps the EU and India) provide a secure nonveto response to any preemptive use of nuclear weapons by potential rogue states or terrorist groups. But because of the trust problem, he says, global zero is unattainable. Kugler worries that a Middle East nuclear exchange or a nuclear terrorist attack on Israel is likely if current conditions persist. A major danger, he says, is that fissile material may well be available on the black market at a price that rogue states and terrorists can afford.
Analysts of various stripes make a compelling case that nuclear safety is an illusion and that we’ve come perilously close to a Dr. Strangelove ending of the world as we know it. Scientists with the Federation of American Scientists and the Bulletin of the Atomic Scientists sponsor the “Doomsday Clock,” which seems to be perpetually set a few minutes to midnight and Armageddon. Popular authors such as Richard Rhodes in his nuclear tetralogy (The Making of the Atomic Bomb, Dark Sun, Arsenals of Folly, and The Twilight of the Bombs31) and Eric Schlosser in Command and Control32 leave readers with vertigo knowing how many close calls there have been: the jettisoning of a Mark IV atomic bomb in British Columbia in 1950, the crash of a B-52 carrying two Mark 39 nuclear bombs in North Carolina, the Cuban Missile Crisis, the Able Archer 83 Exercise in Western Europe that the Soviets misread as the buildup to a nuclear strike against them, and the Titan II missile explosion in Damascus, Arkansas, that narrowly avoided wiping the town off the map. As Rhodes noted in reflecting on a career spent researching and writing about nuclear weapons:
You know one of the sad things about nuclear weapons from the beginning is that they’ve been called weapons. They’re vast destructive forces encompassed in this small, portable mechanism. They have no earthly use that I can see except to destroy whole cities full of human beings. This sort of mentality [has policymakers] thinking that nuclear weapons are like guns. There’s a reason why no one has exploded one in anger since 1945. The risk is too great.33
According to a report published by the Global Zero US Nuclear Policy Commission chaired by the former Joint Chiefs of Staff general James E. Cartwright, the United States and Russia could maintain deterrence and still reduce their nuclear arsenals to nine hundred weapons each, ensuring that only half deploy at any one time and all with twenty-four to seventy-two-hour launch lag times to allow for fail-safe measures to prevent accidental strikes.34 The global zero plan has been endorsed by such senior politicians as President Barack Obama, President Dmitry Medvedev of Russia, Prime Minister David Cameron of the United Kingdom, Prime Minister Manmohan Singh of India, Prime Minister Yoshihiko Noda of Japan, and Secretary-General Ban Ki-moon of the United Nations.35 Of course, endorsement is one thing, action is another, but the global zero movement is gaining momentum.36
It is notable that of the 194 countries in the world, only 9 have nuclear weapons. That means 185 countries (95 percent) manage just fine without nukes. Some may want them but can’t afford to produce the fissile (and other) materials, but it is noteworthy that since 1964 more nations have started and abandoned nuclear weapons programs than started and completed them, including Italy, West Germany, Switzerland, Sweden, Australia, South Korea, Taiwan, Brazil, Iraq, Algeria, Romania, South Africa, and Libya.37 There are many good reasons not to own nuclear weapons, one of which is that they are very expensive. During the Cold War the United States and the Soviet Union spent an almost unfathomable $5.5 trillion to build 125,000 nuclear weapons, and the United States still spends $35 billion a year on its nuclear program.38 Turning plowshares into swords is expensive. It also makes you a target. A 2012 study by the political scientist David Sobek and his colleagues tested the “conventional wisdom” hypothesis that possession of nuclear weapons confers many benefits on its owner by conducting a cross-national analysis of the relationship between a state initiating a nuclear weapons program and the onset of militarized conflict. They found that between 1945 and 2001 “the closer a state gets to acquiring nuclear weapons, the greater the risk it will be attacked.” Why? “When a state initiates a nuclear weapons program, it signals its intent to fundamentally alter its bargaining environment. States that once had an advantage will now be disadvantaged.” Once a state gets itself a nuke or two, the risks of being the target of a first strike go down, but not below the level they were at before the state set out to join the nuclear club.39 In other words, either way it’s better not to possess nuclear weapons.
That was the opinion of the preeminent Cold Warrior himself—the cowboy president Ronald Reagan—who called for the abolishment of “all nuclear weapons.” According to Jack Matlock, the US ambassador to the Soviet Union in the late 1980s, President Reagan considered nuclear weapons to be “totally irrational, totally inhumane, good for nothing but killing, possibly destructive of life on earth and civilization.” Kenneth Adelman, head of the US Arms Control and Disarmament Agency under Reagan, said that his boss would often “pop out with ‘Let’s abolish all nuclear weapons.’” As Adelman recalled, “I was surprised that for an anti-Communist hawk, how antinuclear he was. He would make comments that seemed to me to come from the far left rather than from the far right. He hated nuclear weapons.” The whole point of the Strategic Defense Initiative (SDI, also known as “Star Wars”), in fact, was to eliminate the need for MAD. Matlock paraphrased what Reagan told him on the matter: “How can you tell me, the president of the United States, that the only way I can defend my people is by threatening other people and maybe civilization itself? That is unacceptable.”40
Not everyone was on board with Reagan’s vision of a nuclear-free world. Reagan’s secretary of state George Shultz recalled an incident in which he got “handbagged” by British prime minister Margaret Thatcher when she found out that his boss had suggested to Soviet leader Mikhail Gorbachev that they abolish nuclear weapons:
When we came back from [the 1986 U.S.-Soviet summit in] Reykjavik with an agreement that it was desirable to get rid of all nuclear weapons, she came over to Washington, and I was summoned to the British Embassy. It was then that I discovered the meaning of the British expression to be “handbagged.” She said: “George, how could you sit there and allow the president to agree to abolish nuclear weapons?” To which I said: “Margaret, he’s the president.” She replied, “But you’re supposed to be the one with the feet on the ground.” My answer was, “But, Margaret, I agreed with him.”41
Today, not only is George Shultz a nuclear abolitionist, but so too are his Cold Warrior colleagues former secretary of state Henry Kissinger, former senator Sam Nunn, and former secretary of defense William Perry. The four of them went on record calling for “a world free of nuclear weapons” in, of all places, the Wall Street Journal.42 There (and elsewhere) they outlined the realpolitik difficulties of achieving nuclear zero, which they equated to climbing a mountain: “From the vantage point of our troubled world today, we can’t even see the top of the mountain, and it is tempting and easy to say we can’t get there from here. But the risks from continuing to go down the mountain or standing pat are too real to ignore. We must chart a course to higher ground where the mountaintop becomes more visible.”43
Some theorists think that the path to peace is more deterrence. The late political scientist Kenneth Waltz, for example, thought that a nuclear Iran would bring stability to the Middle East because “in no other region of the world does a lone, unchecked nuclear state exist. It is Israel’s nuclear arsenal, not Iran’s desire for one, that has contributed most to the current crisis. Power, after all, begs to be balanced.”44 Except for when it doesn’t, as in the post-1991 period after the collapse of the Soviet Union and the unipolar dominance of the United States—no other medium-size power rose to fill the vacuum, no rising power started wars of conquest to consolidate more power, and the only other candidate, China, has remained war-free for almost four decades. Plus, as Jacek Kugler points out, the Islamic Republic of Iran doesn’t play by the rules of the international system; it has no formal diplomatic relations with the United States or Israel, thereby making communication in the event of an emergency problematic; and it is so close to Israel that it lowers the warning time of a missile launch to minutes, thereby limiting the effectiveness of countermeasures such as anti-ballistic missiles, and makes sneaking a dirty bomb into the country more likely.45
To that I would add: Iran has a history of training terrorist groups such as Hamas and Hezbollah, both of whom are opposed to the United States and Israel, and its leaders have repeatedly and clearly expressed their anti-Semitic views, as happened in 2005 when the new president, Mahmoud Ahmadinejad, said that Israel must be “wiped off the map,” a view he expressed before an audience of about four thousand students at a program starkly titled “The World Without Zionism.”46 Given what unfolded after another state leader proclaimed on numerous occasions in the 1930s that he wanted to rid the world of Jews—and almost did—we can hardly fault Israel for not fully embracing the image of an Allahu Akbar–bellowing imam with his finger on a nuclear trigger.
As the political scientist Christopher Fettweis notes in his book Dangerous Times?, despite the popularity of such intuitive notions as the “balance of power”—based on a small number of nongeneralizable cases from the past that are in any case no longer applicable to the present—“clashes of civilization” like the world wars of the twentieth century are extremely unlikely to happen in the highly interdependent world of the twenty-first century. In fact, he shows, never in history has such a high percentage of the world’s population lived in peace, conflicts of all forms have been steadily dropping since the early 1990s, and even terrorism can bring states together in international cooperation to combat a common enemy.47
Fat Man Morality and Little Boy Brawls
In addition to all this moral deliberation over the possible future use of nuclear weapons, there is an ongoing debate about the past use of the only atomic bombs dropped on cities—Little Boy, which obliterated Hiroshima, and Fat Man, which annihilated Nagasaki. Over the past couple of decades a cadre of critics have put forth the claim that neither bomb was necessary to bring about the end of the Second World War and thus their use was immoral, illegal, or even a crime against humanity. In 1946 the Federal Council of Churches issued a statement declaring, “As American Christians, we are deeply penitent for the irresponsible use already made of the atomic bomb. We are agreed that, whatever be one’s judgment of the war in principle, the surprise bombings of Hiroshima and Nagasaki are morally indefensible.”48 In 1967 the linguist and contrarian politico Noam Chomsky called the two bombings “the most unspeakable crimes in history.”49
More recently, in an otherwise deeply insightful history of genocide titled Worse Than War, the historian Daniel Goldhagen opens his analysis by calling US president Harry Truman “a mass murderer” because in ordering the use of atomic weapons he “chose to snuff out the lives of approximately 300,000 men, women and children.” Goldhagen proceeds to opine that “it is hard to understand how any right-thinking person could fail to call slaughtering unthreatening Japanese mass murder.”50 In morally equating Harry Truman with Adolf Hitler, Joseph Stalin, Mao Zedong, and Pol Pot, Goldhagen allows himself to be constrained by the categorical thinking that prevents one from discerning the different kinds, levels, and motives of genocide (although he does this for other mass killings). If one defines “genocide” broadly enough, as when Goldhagen equates it with “mass murder” (without ever defining what, exactly, that means), then nearly every act of killing large numbers of people could be considered genocidal because there are only two categories—mass murder and non-mass murder. The virtue of continuous thinking allows us to distinguish the differences between types of mass killings (some scholars define genocide as one-sided killing by armed people of unarmed people), their context (during a state war, civil war, “ethnic cleansing”), motivations (termination of hostilities or extermination of a people), and quantities (hundreds to millions) along a sliding scale. The Polish jurist Raphael Lemkin, who coined the term genocide in 1944, defined it as “a conspiracy to exterminate national, religious or racial groups.”51 In 1946 the UN General Assembly defined genocide as “a denial of the right of existence of entire human groups.”52 More recently, in 1994 the highly respected philosopher Steven Katz defined genocide as “the actualization of the intent, however successfully carried out, to murder in its totality any national, ethnic, racial, religious, political, social, gender or economic group.”53
By these definitions, the dropping of Fat Man and Little Boy was not an act of genocide, and the difference between Truman and the others is in the context and motivation of the act, apparent in the subtitle of Goldhagen’s book, Genocide, Eliminationism, and the Ongoing Assault on Humanity. In their genocidal actions against targeted people, Hitler, Stalin, Mao, and Pol Pot had as their objective the total elimination of a group. The killing would only stop when every last pursued person was exterminated (or if the perpetrators were stopped or defeated). Truman’s goal in dropping the bombs was to end the war, not to eliminate the Japanese people. If eliminationism was the goal, then why did the United States lead the rebuilding of both Japan and (West) Germany after the war (the latter through the Marshall Plan) such that within twenty years both were world economic powers?54 This would seem to be the very opposite of an eliminationist program.
As to the morality of dropping the bombs, by the definition of morality in this book—the survival and flourishing of sentient beings—not only did Fat Man and Little Boy end the war and stop the killings, they also saved lives, possibly millions of lives, both Japanese and American. My father was possibly one such survivor. During the Second World War he was aboard the USS Wren, a navy destroyer assigned to escort aircraft carriers and other large capital ships to protect them from Japanese submarines and especially to shoot down kamikaze planes. He was part of the larger fleet that was working its way toward Japan for a planned invasion of the home islands. He told me that everyone on board dreaded that day because they had heard of the horrific carnage resulting from the invasion of just two tiny islands held by the Japanese—Iwo Jima and Okinawa. During the invasion of Iwo Jima there were approximately 26,000 American casualties, which included 6,821 dead in the 36-day battle. How fiercely did the Japanese defend that little volcanic rock 700 miles from Japan? Of the 22,060 Japanese soldiers assigned to fight to the bitter end, only 216 were captured during the battle.55 The subsequent battle for Okinawa, only 340 miles from the Japanese mainland, was fought even more ferociously, resulting in a staggering body count of 240,931 dead, including 77,166 Japanese soldiers and 14,009 American soldiers, plus 149,193 Japanese civilians living on the island who either died fighting or committed suicide rather than let themselves be captured.56 No wonder, as my father told me, when the atomic bombs were dropped there was an emotional relief among the crew.57 With an estimated 2.3 million Japanese soldiers and 28 million Japanese civilian militia prepared to defend their island nation to the end,58 it was clear to all what an invasion of the Japanese mainland would mean.
It is from these cold, hard facts that Truman’s advisers estimated that between 250,000 and 1 million American lives would be lost in an invasion of Japan.59 General Douglas MacArthur estimated that there could be a 22:1 ratio of Japanese to American deaths, which translates to a minimum death toll of 5.5 million Japanese.60 By comparison (cold though it may sound), the body count from both atomic bombs—about 200,000 to 300,000 total (Hiroshima: 90,000 to 166,000 deaths, Nagasaki: 60,000 to 80,000 deaths61)—was a bargain. In any case, if Truman hadn’t ordered the bombs dropped, General Curtis LeMay and his fleet of B-29 bombers would have continued pummeling Tokyo and other Japanese cities into rubble before the invasion, and the death toll from conventional bombing would have been just as high as that produced by the two atomic bombs, if not higher, given the fact that previous mass bombing raids had produced Hiroshima-level death rates, plus the likelihood that more than just two cities would have been destroyed before the Japanese surrendered. For example, Little Boy was the energy equivalent of 16,000 tons of TNT. By comparison, the US Strategic Bombing Survey estimated that this was the equivalent of 220 B-29s carrying 1,200 tons of incendiary bombs, 400 tons of high-explosive bombs, and 500 tons of antipersonnel fragmentation bombs, with an equivalent number of casualties.62 In fact, on the night of March 9–10, 1945, 279 B-29s dropped 1,665 tons of bombs on Tokyo, leveling 15.8 square miles of the city, killing 88,000 people, injuring another 41,000, and leaving another 1 million homeless.63
On balance, then, dropping the atomic bombs was the least destructive of the options on the table. Although we wouldn’t want to call it a moral act, it was in the context of the time the least immoral act by the criteria of lives saved. That said, we should also recognize that the several hundred thousand killed is still a colossal loss of life, and the fact that the invisible killer of radiation continued its effects long after the bombings should dissuade us from ever using such weapons again. Along that sliding scale of evil, in the context of one of the worst wars in human history that included the singularly destructive Holocaust of 6 million murdered, it was not, pace Chomsky, the most unspeakable crime in history—not even close—but it was an event in the annals of humanity never to be forgotten and, hopefully, never to be repeated.
A Path to Nuclear Zero
The abolition of nuclear weapons is an exceptionally complex and difficult puzzle that has been studied extensively by scholars and scientists for more than half a century. The problems and permutations of getting from here to there are legion, and there is no single surefire pathway to zero, but the following steps, proposed by various experts and organizations, seem like reasonable and realistic long-term goals.64
1. Continue nuclear stockpile reduction. Following the trend lines in figure 2-2, work to reduce the global stockpile of nuclear weapons from more than 10,000 now to 1,000 by 2020, and to less than 100 by 2030. This is enough nuclear firepower to maintain Minimum Sufficiency Deterrence to keep the peace among nuclear states, and yet in the event of a mistake or a madman a nuclear war will not result in the annihilation of civilization.65 The global zero campaign calls for “the phased, verified, proportional reduction of all nuclear arsenals to zero total warheads by 2030,” noting, for those who find this unrealistic, that “the United States and Russia retired and destroyed twice as many nuclear warheads (40,000+) as this action plan proposes (20,000+) over the next twenty years (2009–2030).”66 That is encouraging, but it is far easier to go from 70,000 to 10,000 than it is from 10,000 to 1,000, and even harder to go from 1,000 to 0 because of the security dilemma that will always exist until and unless the rest of the steps below are met.
2. No first use. Make all “first strike” strategies illegal by international law. Nuclear weapons should only be used defensively, in a retaliatory function. Any nation that violates the law and initiates a first strike will be subject to global condemnation, economic sanctions, nuclear retaliation, and possible invasion, toppling their government and putting their leaders on trial for crimes against humanity. China and India have both signed on to No First Use (NFU), but NATO, Russia, and the United States have not. Russian military doctrine calls for the right to use nukes “in response to a large-scale conventional aggression.”67 France, Pakistan, the United Kingdom, and the United States have stated that they will only use nukes defensively, although Pakistan said it would strike India even if the latter did not use nuclear weapons first,68 and the United Kingdom said it would use nuclear weapons against “rogue states” such as Iraq were the latter to use weapons of mass destruction against British troops in the field.69 For its part the United States reiterated its long-term policy that “The fundamental role of US nuclear weapons, which will continue as long as nuclear weapons exist, is to deter nuclear attack on the United States, our allies, and partners,” adding that it “will not use or threaten to use nuclear weapons against non-nuclear weapons states that are party to the NPT [Non-Proliferation Treaty] and in compliance with their nuclear non-proliferation obligations.”70
3. Form a nuclear great power pact. Such an alliance would hold strong against small powers and terrorists who either have nuclear weapons or are trying to obtain them with an intent to use them. Jacek Kugler has outlined a model for reducing nuclear stockpiles worldwide while maintaining deterrence that would include, in addition to the no first use policy, a proviso that “nuclear great powers must guarantee that any first nuclear strike by small nuclear powers will face automatic nuclear devastating retaliation from a member of the nuclear club.” Kugler and his colleague Kyungkook Kang propose a “Nuclear Security Council” composed of the four great powers—the United States, China, Russia, and the European Union (France and the United Kingdom)—that have enough nuclear weapons for a second strike should any of the smaller nuclear powers, or a terrorist group, launch a first strike against them or anyone else.71 For example, if North Korea attacked either South Korea or Japan, the United States would retaliate. Or if Iran acquired nuclear weapons and used them against Israel, the United States (and maybe the European Union, or maybe not) would counterstrike.
4. Shift the taboo from using nuclear weapons to owning nuclear weapons. This is what the Norwegian Nobel Committee had in mind in 2009 when it awarded President Barack Obama the Peace Prize: “The Committee has attached special importance to Obama’s vision of and work for a world without nuclear weapons.”72 Taboos are effective psychological mechanisms for deterring all sorts of human behaviors, and they worked well in keeping poison gas from being used in the Second World War, although states had used it at other times (England and Germany in the First World War), and sometimes even on their own people (Saddam Hussein on the Kurds in Iraq). But across the board and over time, the taboo against using chemical and biological weapons has grown stronger, and their use is considered by most nations and international law to be a crime against humanity (it was one of the crimes for which Saddam Hussein was tried73), although the taboo was not instantaneous.
Nukes began as sexy weapons, as evidenced by the fact that the bikini bathing suit was so named by its French designer, Louis Réard, because he hoped its revealing design would create an explosive reaction not unlike that of the two atomic bombs detonated earlier that summer of 1946 on the atoll of Bikini in the South Pacific.74 As the political scientist Nina Tannenwald writes in her history of the origin of the nuclear taboo, throughout the 1950s everyone accepted nuclear weapons as conventional and articulated “a view with a long tradition in the history of weapons and warfare: a weapon once introduced inevitably comes to be widely accepted as legitimate.” But that didn’t happen. Instead, “nuclear weapons have come to be defined as abhorrent and unacceptable weapons of mass destruction, with a taboo on their use. This taboo is associated with a widespread revulsion toward nuclear weapons and broadly held inhibitions on their use. The opprobrium has come to apply to all nuclear weapons, not just to large bombs or to certain types or uses of nuclear weapons.” The taboo developed as a result of three forces, Tannenwald argues: “a global grassroots antinuclear weapons movement, the role of Cold War power politics, and the ongoing efforts of nonnuclear states to delegitimize nuclear weapons.”75
The psychology behind the taboo against chemical and biological weapons transfers readily to that of nuclear weapons. Deadly heat and radiation—like poison gas and lethal diseases—are invisible killers that are indiscriminate in the carnage they wreak. This is a psychological break from traditional warfare between two armies bearing spears, swords, firearms, grenades, or even cannons and rocket launchers. The moral emotion of moralistic punishment that evolved to deter free riders and bullies brings no satisfaction if one’s enemy simply disappears in a white flash on some other continent.76 Also, the revulsion people feel toward nuclear weapons may be linked in the brain to the emotion of disgust that psychologists have identified as being associated with invisible disease contagions, toxic poisons, and revolting materials (such as vomit and feces) that carry them—reactions that evolved to direct organisms away from these substances for survival reasons.77
5. Nuclear weapons should no longer be seen as a deterrence solution. Former foreign minister of Australia Gareth Evans, who is also chair of the International Commission on Nuclear Non-Proliferation and Disarmament, makes a convincing case that nuclear weapons no longer make sense for deterrence. It is not at all clear, Evans argues, that it was nuclear weapons that kept the great powers at a standoff throughout the Cold War, given the fact that prior to their invention in the mid-1940s, the great powers fought in spite of the existence of weapons of mass destruction. “Concern about being on the receiving end of the extreme destructive power of nuclear weapons may simply not be, in itself, as decisive for decision-makers as usually presumed,” says Evans. Instead, the long peace since 1945 may be the result of “a realisation, after the experience of World War II and in the light of all the rapid technological advances that followed it, that the damage that would be inflicted by any war would be unbelievably horrific, and far outweighing, in today’s economically interdependent world, any conceivable benefit to be derived.”78
6. Evolution instead of revolution. All of these changes should be implemented gradually and incrementally with “trust but verify” strategy and with as much transparency as possible. Gareth Evans proposes a two-stage process: minimization, then elimination, “with some inevitable discontinuity between them.” He uses the target date of 2025 for the achievement of a minimization objective set by the International Commission on Nuclear Non-Proliferation and Disarmament that “would involve reducing the global stockpile of all existing warheads to no more than 2,000 (a maximum of 500 each for the United States and Russia and 1,000 for the other nuclear-armed states combined), with all states being committed by then to ‘No First Use’—and with these doctrinal declarations being given real credibility by dramatically reduced weapons deployments and launch readiness.”79
7. Reduce spending on nuclear weapons and research. Nuclear weapons are indefensibly costly, with estimates coming in at more than $100 billion a year for maintenance by the nine nuclear states, and $1 trillion projected over the next decade.80 Building budgets now that wind down monies allocated for all nuclear weapons–related agencies over the next twenty years will drive nations to consider other solutions to the problems that nuclear weapons have historically been created to solve.
8. Revise twentieth-century nuclear plans and policies for the twenty-first century. In their manifesto “A World Free of Nuclear Weapons,” the aforementioned Shultz, Perry, Kissinger, and Nunn proposed that we “discard any existing operational plans for massive attacks that still remain from the Cold War days”; “increase the warning and decision times for the launch of all nuclear-armed ballistic missiles, thereby reducing the risks of accidental or unauthorized attacks”; “undertake negotiations toward developing cooperative multilateral ballistic-missile defense and early warning systems”; and “dramatically accelerate work to provide the highest possible standards of security for nuclear weapons … to prevent terrorists from acquiring a nuclear bomb.”81
9. Economic interdependency. The more two countries trade with one another the less likely they are to fight. It’s not a perfect correlation—there are exceptions—but countries that are economically interdependent are less likely to allow political tensions to escalate to the point of conflict. Wars are expensive—economic duties, sanctions, embargoes, and blockades are costly; and business often suffers on both sides of a conflict (except for weapons manufacturers, of course). In democracies, for better or for worse, politicians are more beholden to monied interests, who generally prefer to keep their transaction costs as low as possible, and those go way up in wartime. Thus, the sooner nations such as North Korea and Iran can be brought into economic trading blocs that make them codependent with the nuclear great powers, the less likely they are to feel the need to develop nuclear weapons in the first place, much less use them.
10. Democratic governance. The more democratic two nations are, the less likely they are to fight. As with economic interdependency, democratic peace is a general trend, not a law of nature, but the effect is found in the transparency of a political system that includes checks and balances on power and the ability to change leaders so that a liar, lunatic, or lord wannabe obsessed with power, intent on revenge, or fixated on a nation’s racial purity or precious bodily fluids will not be allowed to escalate tensions to the point of launching nuclear-tipped missiles.
With nuclear weapons there is no easy way out of the security dilemma. Even though Reagan said he wanted to go to zero, in Iceland he refused offers from Gorbachev to reduce nuclear weapons drastically precisely because he did not trust the Russians and was only willing to trust and verify—which is a form of distrust. I am hopeful that we can get to zero before we annihilate ourselves, but it’s going to be a long row to hoe. One place to begin is to apply the principle of interchangeable perspectives when negotiating—for example, a spin-down of arms, which is what Gareth Evans recommends as the first step: “In each case the key to progress, as in all diplomacy, is to try to understand the interests and perspectives of the other side, and to find ways of accommodating them by all means short of putting at real risk genuinely vital interests of one’s own.”
For example, Evans cites the United States’ failure to give Russia “an acceptable response to its concerns … about BMD [Ballistic Missile Defense] and new long-range conventional weapons systems” in Europe, which, he says, seriously diminished “its second-strike retaliatory capability.”82 Encouragingly, in 2013, the United States took an important step by canceling part of its BMD program in Europe, even though it was done for budgetary reasons and to bolster BMD in Asia over concerns about North Korea.83 Or, to keep China in its current “minimal deterrence” posture instead of ramping up into MAD deterrence, Evans writes, the United States should acknowledge “that its nuclear relationship with China is one of ‘mutual vulnerability,’ meaning in practice that the United States ‘should plan and posture its force and base its own policy on the assumption that an attempted US disarming first strike, combined with US missile defences, could not reliably deny a Chinese nuclear retaliatory strike on the United States.’”84 For example, the United States already has a strong enough BMD system in place in the Pacific region to counter any North Korean missile launches, so any further development could be perceived as threatening by the Chinese.
There are dozens of such scenarios that are played out in search of what—in the spirit of creative acronyms so common in this field—we might call a Minimally Dangerous Pathway to Zero (MDPZ). I do not believe that the deterrence trap is one from which we can never extricate ourselves, and the remaining threats should direct us to work toward nuclear zero sooner rather than later. In the meantime, minimum is the best we can hope for given the complexities of international relations, but given enough time, as Shakespeare poetically observed,
Time’s glory is to calm contending kings,
To unmask falsehood and bring truth to light,
To stamp the seal of time in aged things,
To wake the morn and sentinel the night …
To slay the tiger that doth live by slaughter …
To cheer the ploughman with increased crops,
And waste huge stones with little water-drops.85
WHAT ABOUT TERRORISM?
All this game-theoretic computation assumes that humans are rational actors. As the international relations scholar Hedley Bull noted, “mutual nuclear deterrence … does not make nuclear war impossible, but simply renders it irrational,” but then added that a rational strategist is one “who on further acquaintance reveals himself as a university professor of unusual intellectual subtlety.”86
Are terrorists rational actors? How rational is it for a Muslim terrorist to look forward to martyrdom and a reward of seventy-two virgins in heaven? (That’s if you’re a man, of course; a female terrorist has no equivalent consolation.) At least the godless Communists didn’t harbor such delusions. MAD deters you from launching a first strike if you think your target has retaliatory capability and you don’t want to die—the drive to stay alive and all that. But if your religion has convinced you that you’re not really going to die, and that the next life is spectacularly better than this life, and that you’ll be a hero among those whom you’ve left behind … it changes the calculation. As the nuclear zero advocate Sam Nunn said, “I’m much more concerned about a terrorist without a return address that cannot be deterred than I am about deliberate war between nuclear powers. You can’t deter a group who is willing to commit suicide.”87 Nevertheless, I lean toward optimism because of terrorism’s dismal record of achieving its stated goals through violence. Despite the seemingly constant barrage of media stories of suicide bombers blowing themselves up, the long-term trends in social change over the past half century are in the direction of less violence and more moral action, even with terrorism.
Terrorism is a form of asymmetrical warfare waged by nonstate actors against innocent, noncombatant civilians. As its name suggests, it works by evoking terror. This preys on our alarmist emotions, which in turn confound our reasoning, making clear thinking about terrorism well nigh impossible. As such, I suggest that there are at least seven myths that need to be debunked if we are to understand the causes of terrorism and continue to reduce its frequency and effectiveness.
1. Terrorists are pure evil. This first myth took root in September 2001 when President George W. Bush announced, “We will rid the world of the evil-doers” because they hate us for “our freedom of religion, our freedom of speech, our freedom to vote and assemble and disagree with each other.”88 This sentiment embodies what the social psychologist Roy Baumeister calls “the myth of pure evil” (more on this in chapter 9, on moral regress), which holds that perpetrators of violence act only to commit senseless injury and pointless death for no rational reason. The “terrorists-as-evil-doers” myth is busted through the scientific study of violence, of which at least four types motivate terrorists: instrumental, dominance/honor, revenge, and ideology.
In a study of fifty-two cases of Islamic extremists who have targeted the United States, for instance, the political scientist John Mueller concluded that terrorist motives include instrumental violence and revenge: “a simmering, and more commonly boiling, outrage at U.S. foreign policy—the wars in Iraq and Afghanistan, in particular, and the country’s support for Israel in the Palestinian conflict.” Ideology in the form of religion “was a part of the consideration for most,” Mueller suggests, “but not because they wished to spread Sharia law or to establish caliphates (few of the culprits would be able to spell either word). Rather they wanted to protect their co-religionists against what was commonly seen to be a concentrated war upon them in the Middle East by the U.S. government.”89 As for dominance and honor as drivers of violence, through his extensive ethnography of terrorist cells the anthropologist Scott Atran has demonstrated that suicide bombers (and their families) are showered with status and honor in this life (and, secondarily, the promise of virgins in the next life), and that most “belong to loose, homegrown networks of family and friends who die not just for a cause, but for each other.” Most terrorists are in their late teens or early twenties, especially students and immigrants “who are especially prone to movements that promise a meaningful cause, camaraderie, adventure, and glory.”90 All of these motives are on display in the 2013 documentary film by Jeremy Scahill called Dirty Wars, a sobering look at the effects of US drone attacks and assassinations in foreign countries such as Somalia and Yemen—countries with whom the United States is not at war—in which we see citizens swearing revenge against Americans for these violations of their honor and ideology.91
2. Terrorists are organized. This myth depicts terrorists as part of a vast global network of top-down, centrally controlled conspiracies against the West. But as Atran shows, terrorism is “a decentralized, self-organizing, and constantly evolving complex of social networks,” often organized through social groups and sports organizations such as soccer clubs.92
3. Terrorists are diabolical geniuses. This myth began with the 9/11 Commission report that described the terrorists as “sophisticated, patient, disciplined, and lethal.”93 But according to the political scientist Max Abrahms, after the decapitation of the leadership of the top terrorist organizations, “terrorists targeting the American homeland have been neither sophisticated nor masterminds, but incompetent fools.”94 Examples abound: The 2001 airplane shoe bomber Richard Reid was unable to ignite the fuse because it was wet from the rain and his own foot perspiration; the 2009 underwear bomber Umar Farouk Abdulmutallab succeeded only in setting his pants ablaze, burning his hands, inner thighs, and genitals, and getting himself arrested; the 2010 Times Square bomber Faisal Shahzad managed merely to torch the inside of his 1993 Nissan Pathfinder; the 2012 model airplane bomber Rezwan Ferdaus purchased C-4 explosives for his rig from FBI agents who promptly arrested him; and the 2013 Boston Marathon bombers were equipped with only one gun for defense and had no money and no exit strategy beyond hijacking a car with no gas in it that Dzhokhar Tsarnaev used to run over his brother Tamerlan, followed by a failed suicide attempt inside a land-based boat. Evidently terrorism is a race to the bottom.
4. Terrorists are poor and uneducated. This myth appeals to many in the West who like to think that if we throw enough money at a problem it will go away, or if only everyone went to college they’d be like us. The economist Alan Krueger, in his book What Makes a Terrorist, writes: “Instead of being drawn from the ranks of the poor, numerous academic and government studies find that terrorists tend to be drawn from well-educated, middle-class or high-income families. Among those who have seriously and impartially studied the issue, there is not much question that poverty has little to do with terrorism.”95
5. Terrorism is a deadly problem. Compared to homicides in America, deaths from terrorism are statistical noise, barely a blip on a graph next to the roughly 13,700 homicides a year. Setting aside the 3,000 deaths on 9/11, the number of people killed by terrorists in the 38 years before that attack comes to 340, and the number killed after 9/11, up to and including the Boston Marathon bombing, is 33, a figure that includes the 13 people killed in the Fort Hood massacre by Nidal Hasan in 2009.96 That’s a total of 373 killed, or about 7.8 per year (the arithmetic is spelled out in the brief sketch following these seven myths). Even if we include the 3,000 people who perished on 9/11, the average annual toll comes to 70.3, against roughly 13,700 homicides each year. No comparison.
6. Terrorists will obtain and use a nuclear weapon or a dirty bomb. Osama bin Laden said he wanted to use such weapons if he could get them, and Secretary of Homeland Security Tom Ridge pressed the point in calling for more support for his agency: “Weapons of mass destruction, including those containing chemical, biological or radiological agents or materials, cannot be discounted.”97 But as Michael Levi of the Council on Foreign Relations reminds us, “Politicians love to scare the wits out of people, and nothing suits that purpose better than talking about nuclear terrorism. From President Bush warning in 2002 that the ‘smoking gun’ might be a mushroom cloud, to John Kerry in 2004 conjuring ‘shadowy figures’ with a ‘finger on the nuclear button’ and Mitt Romney invoking the specter of ‘radical nuclear jihad’ last spring, the pattern is impossible to miss.”98 But most experts agree that acquiring the necessary materials and knowledge for building either weapon is far beyond the reach of most (if not all) terrorists. George Harper’s delightful 1979 article in Analog titled “Build Your Own A-Bomb and Wake Up the Neighborhood” is revealing in showing just how difficult it is to actually make a bomb:
As a terrorist one of the best methods for your purposes is the gaseous diffusion approach. This was the one used for the earliest A-bombs, and in many respects it is the most reliable and requires the least sophisticated technology. It is, however, a bit expensive and does require certain chemicals apt to raise a few eyebrows. You have to start with something on the order of a dozen miles of special glass-lined steel tubing and about sixty tons of hydrofluoric acid which can be employed to create the compound uranium-hexafluoride. Once your uranium has been converted into hexafluoride it can be blown up against a number of special low-porosity membranes. The molecules of uranium hexafluoride which contain an atom of U-238 are somewhat heavier than those containing an atom of U-235. As the gas is blown across the membranes more of the heavier molecules are trapped than the light ones. The area on the other side of the membrane is thus further enriched with the U-235 containing material; possibly by as much as ½% per pass. Repeat this enough times and you wind up with uranium hexafluoride containing virtually 100% core atoms of U-235. You then separate the fluorine from the uranium and arrive at a nice little pile of domesticated U-235. From there it’s all downhill.99
In his book On Nuclear Terrorism, Levi invokes what he calls “Murphy’s Law of Nuclear Terrorism: What Can Go Wrong Might Go Wrong,” and recounts numerous terrorist attacks that failed because of the perpetrators’ sheer incompetence at building and detonating even the simplest of chemical weapons.100 In this context it is important to note that no dirty bomb has ever been successfully deployed by anyone anywhere with resulting casualties, and that according to the US Nuclear Regulatory Commission—which tracks fissile materials—“most reports of lost or stolen material involve small or short-lived radioactive sources that are not useful for a RDD [radiological disbursal device, or dirty bomb]. Past experience suggests there has not been a pattern of collecting such sources for the purpose of assembling a RDD. It is important to note that the radioactivity of the combined total of all unrecovered sources over the past 5 years would not reach the threshold for one high-risk radioactive source.”101 In short, the chances of terrorists successfully building and launching a nuclear device of any sort are so low that we would be far better off investing our limited resources in defusing the threat of terrorism in other ways.
7. Terrorism works. In a study of forty-two foreign terrorist organizations active for several decades, Max Abrahms concluded that only two achieved their stated goals—Hezbollah achieved control over southern Lebanon in 1984 and 2000, and the Tamil Tigers took over parts of Sri Lanka in 1990, which they then lost in 2009. That is a success rate of less than 5 percent.102 In a subsequent study, Abrahms and his colleague Matthew Gottfried found that when terrorists kill civilians or take captives it significantly lowers the likelihood of bargaining success with states, because violence begets violence and public sentiments turn against the perpetrators of violence. Further, they found that when terrorists did get what they wanted, it was more likely to be money or the release of political prisoners, not political objectives. They also found that liberal democracies are more resilient to terrorism, despite the perception that their commitment to civil liberties leads democracies to shy away from harsh countermeasures against terrorists. Finally, in terms of the overall effectiveness of terrorism as a means to an end, in an analysis of 457 terrorist campaigns since 1968 the political scientist Audrey Cronin found that not one terrorist group had conquered a state and that a full 94 percent had failed to gain even one of their strategic political goals. And the number of terrorist groups that accomplished all of their objectives? Zero. Cronin’s book is titled How Terrorism Ends. It ends swiftly (groups survive only five to nine years on average) and badly (typically with the deaths of their leaders).104
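For readers who want to check the back-of-envelope numbers in myths 5 and 7, here is a minimal sketch in Python. The 48-year window is an assumption inferred from the stated per-year averages rather than a figure given in the text; everything else uses only the numbers quoted above.

```python
# Back-of-envelope check of the figures quoted in myths 5 and 7 above.
# Assumption: a 48-year observation window, inferred so that the stated
# averages (7.8 and 70.3 deaths per year) are reproduced; the text itself
# does not state the window explicitly.

# Myth 5: terrorism deaths vs. homicides in America
deaths_before_911 = 340     # killed by terrorists in the 38 years before 9/11
deaths_after_911 = 33       # killed after 9/11, through the Boston bombing
deaths_on_911 = 3000        # approximate toll of the 9/11 attacks
homicides_per_year = 13700  # approximate annual US homicides
years = 48                  # assumed window (see note above)

excl = (deaths_before_911 + deaths_after_911) / years
incl = (deaths_before_911 + deaths_after_911 + deaths_on_911) / years
print(f"Terrorism deaths/year excluding 9/11: {excl:.1f}")   # ~7.8
print(f"Terrorism deaths/year including 9/11: {incl:.1f}")   # ~70.3
print(f"Homicides outnumber them roughly {homicides_per_year / incl:.0f} to 1")

# Myth 7: Abrahms's success rate for terrorist organizations
groups_studied = 42
goals_achieved = 2          # Hezbollah and the Tamil Tigers
print(f"Success rate: {goals_achieved / groups_studied:.1%}")  # ~4.8%, under 5 percent
```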
A rejoinder I often hear when recounting these studies is that terrorism has worked in the sense of terrorizing governments into expending enormous resources on combating its threat and, along the way, sacrificing our freedom and privacy. It’s a valid point. The United States alone has spent upwards of $6 trillion since 9/11 on two wars and a bloated bureaucracy in response to the loss of 3,000 lives,105 less than a tenth of the number of people who die annually on American highways. The explosive revelations by Edward Snowden about the National Security Agency’s surveillance programs launched a national conversation about the balance between privacy and transparency, freedom and security. As Snowden told the 2014 TED audience in Vancouver via video link from an undisclosed location in Moscow:
Terrorism provokes an emotional response that allows people to rationalize and authorize programs they wouldn’t have otherwise. The U.S. asked for this authority in the 1990s; it asked the FBI to make the case in Congress, and they said no, it’s not worth the risk to the economy, it would do too much damage to society to justify gains. But in the post 9/11 era, they used secrecy and justification of terrorism to start programs in secret without asking Congress or the American people. Government behind closed doors is what we must guard against. We don’t have to give up privacy to have good government, we don’t have to give up liberty to have security.106
That balance between liberty and security is one all governments contend with in many areas of society.107 We must always be vigilant, of course, but these seven myths point to the unavoidable conclusion that, over the course of history, terrorism has failed utterly to achieve its goals or to divert civilization from its path toward greater justice and freedom, unless we fall victim to fear itself.
VIOLENT VS. NONVIOLENT CHANGE
Violence as a means of attaining political change is a problematic strategy. What about nonviolent social change? In a classic work in political philosophy published in 1970 titled Exit, Voice, and Loyalty, the Harvard economist Albert Hirschman observed that when organizations such as firms and nations begin to stagnate and decline, members and interested parties can employ one of two nonviolent strategies to turn things around: voice their opinions by making suggestions, proposing changes, filing grievances, or protesting; or exit and start a new organization that incorporates their ideas for change.108 In response to political repression, for example, citizens of a nation can either protest (voice) or emigrate (exit); employees or customers of a company can either file a complaint or take their business elsewhere. In both cases, people can vote with their voices and their feet (and dollars). Loyalty keeps exit in check so that nations and companies are not constantly failing or going bankrupt. A certain amount of stability is required for progress and profits, so in attenuating the exit strategy loyalty enables voice to be a more effective (and nonviolent) means of bringing about change. When people feel that their voices are heard—and can see real change made—they are less likely to exit. Conversely, when voices are not heard—as when nations silence political dissenters by locking them up or executing them—exit becomes the only viable strategy for change, and that can lead to violence.
In terms of marking moral progress, which strategy is better, voice or exit? It depends on how the change is brought about—through nonviolent resistance or violent response. Historically, political regime change has often come about by butchery and bloodshed. Regicide, for example, was a common method of regime change throughout most of European history. In a study of 1,513 monarchs in 45 monarchies across Europe between AD 600 and 1800, for example, the criminologist Manuel Eisner found that about 15 percent (227) were assassinated, corresponding to a homicide rate of about 1,000 per 100,000 ruler-years—10 times the background rate of homicide during those centuries.109 Mao Zedong was being a realist when he proclaimed in 1938, “Political power grows out of the barrel of a gun.”110 But that is changing.
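To see how a count of assassinated monarchs becomes a rate per 100,000 ruler-years, here is a minimal illustrative sketch. The 15-year average reign is a round figure assumed here for the example only; it is not taken from Eisner’s study.

```python
# Illustrative conversion of an assassination count into a rate per
# 100,000 ruler-years. The 15-year average reign is an assumed value for
# this example only, not a figure from Eisner's study.

monarchs = 1513
assassinated = 227
avg_reign_years = 15                      # assumed for illustration

ruler_years = monarchs * avg_reign_years  # ~22,700 ruler-years
rate_per_100k = assassinated / ruler_years * 100_000

print(f"Share of monarchs assassinated: {assassinated / monarchs:.0%}")   # ~15%
print(f"Regicide rate: ~{rate_per_100k:,.0f} per 100,000 ruler-years")    # ~1,000
```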
Like the many other forms of moral progress tracked in this book, nonviolent resistance has now overtaken violent response. The political scientists Erica Chenoweth and Maria Stephan entered all forms of both nonviolent and violent revolutions and reforms since 1900 into a database and then crunched the numbers.111 Results: “From 1900 to 2006, nonviolent campaigns worldwide were twice as likely to succeed outright as violent insurgencies.” Chenoweth added that “this trend has been increasing over time—in the last 50 years civil resistance has become increasingly frequent and effective, whereas violent insurgencies have become increasingly rare and unsuccessful. This is true even in extremely repressive, authoritarian conditions where we might expect nonviolent resistance to fail.” Why does nonviolence trump violence in the long run as a means to an end? “People power,” Chenoweth says. How many people? According to her data, “no campaigns failed once they’d achieved the active and sustained participation of just 3.5 percent of the population—and lots of them succeeded with far less than that.” Further, she notes, “Every single campaign that did surpass that 3.5 percent threshold was a nonviolent one. In fact, campaigns that relied solely on nonviolent methods were on average four times larger than the average violent campaign. And they were often much more representative in terms of gender, age, race, political party, class, and urban-rural distinctions.”112
How does this nonviolent strategy translate into political change? If your movement is based on violence, you are necessarily going to be limiting yourself to mostly young, strong, violence-prone males who have a propensity for boozing and brawling, whereas, Chenoweth explains, “Civil resistance allows people of all different levels of physical ability to participate—including the elderly, people with disabilities, women, children, and virtually anyone else who wants to.” It’s a faster track to the magic 3.5 percent number when you’re more inclusive and participation barriers are low. Plus, you don’t need expensive guns and weapons systems. Civil disobedience often takes the form of strikes, boycotts, stay-at-home demonstrations, banging on pots and pans and other noise generators, and—like a scene out of the 1951 film The Day the Earth Stood Still—shutting off the electricity at a designated time of the day. A diffuse group of isolated individuals scattered about a city employing such measures is difficult for oppressive regimes to stop. Plus, by including the mainstream instead of the marginalized in your movement, your shock troops are more likely to know people on the other side. In the case of Serbia and its dictator Slobodan Milosevic, Chenoweth notes that “once it became clear that hundreds of thousands of Serbs were descending on Belgrade to demand that Milosevic leave office, policemen ignored the order to shoot on demonstrators. When asked why he did so, one of them said: ‘I knew my kids were in the crowd.’”113
There is one more benefit to nonviolent resistance: what you’re left with afterward. Nonviolent campaigns of change are far more likely to result in democratic institutions than are violent insurgencies, and they are 15 percent less likely to relapse into civil war. “The data are clear,” Chenoweth concludes: “When people rely on civil resistance, their size grows. And when large numbers of people withdraw their cooperation from an oppressive system, the odds are ever in their favor.”114 Figure 2-3 and figure 2-4 show these remarkable trends.
Figure 2-3. Nonviolent Campaigns for Political Change
Success rate of campaigns for political change since the 1940s comparing violent and nonviolent methods reveals that violence is a failed strategy and nonviolence is the method of choice.115
Figure 2-4. Progress in Nonviolent Campaigns for Political Change
The percentage of successful campaigns for political change comparing violent and nonviolent methods.116
WAR, VIOLENCE, AND MORAL PROGRESS
As with the many myths surrounding terrorism, there are myths about the origins and causes of war that have clouded our thinking, starting with the myth that humans are by nature relatively nonviolent and that prestate peoples were peaceful and lived in relative harmony with each other and their environment. A convergence of evidence from multiple lines of scientific inquiry, however, tells us that this view of human prehistory is at the very least misleading and very probably wrong. The reason has less to do with the character of human nature being either pacifist or bellicose and more to do with the logic of how organisms respond to free riders, bullies, challenges, and threats to our survival and flourishing. That is, the data I review below are meant less to settle the long-running debate over whether humans in a state of nature were noble savages or locked in a war of all against all than to build on the logic of our moral emotions and how they direct us to respond one way or another to other sentient beings who, in turn, respond to our actions accordingly.
The ecologist Bobbi Low used data from the Standard Cross-Cultural Sample to analyze 186 Hunting-Fishing-Gathering (HFG) societies around the world to show that people in traditional societies are not living in balanced eco-harmony with nature. In fact, she found that their use of the environment is constrained by limited ecological resources and not by attitudes (such as sacred prohibitions against harming Mother Earth), and that their relatively low environmental impact is the result of low population density, inefficient technology, and the lack of profitable markets, not of any conscious effort at conservation. Low also found that in 32 percent of HFG societies, not only were they not practicing conservation, but environmental degradation was severe.117
In his book Sick Societies: Challenging the Myth of Primitive Harmony, the anthropologist Robert Edgerton surveyed the ethnographic record of traditional societies that have not been exposed to Western civilization and found clear evidence of drug addiction, abuse of women and children, bodily mutilation, economic exploitation of the group by political leaders, suicide, and mental illness.118
In War Before Civilization: The Myth of the Peaceful Savage, the archaeologist Lawrence Keeley tested the hypothesis that prehistoric warfare was rare, harmless, and little more than ritualized sport. Surveying primitive and civilized societies, he found that prehistoric war was—relative to population densities and fighting technologies—at least as frequent (as measured in years at war versus peace), as deadly (as measured by percentage of conflict deaths), and as ruthless (as measured by killing and maiming of noncombatant women and children) as modern warfare. One prehistoric mass grave in South Dakota, for example, yielded the remains of five hundred scalped and mutilated men, women, and children, and this happened about a century and a half before Europeans arrived on the continent. Overall, says Keeley, “Fossil evidence of human warfare dates back at least 200,000 years, and it is estimated that as many as 20–30% of ancestral men died from intergroup violence.”119
Archaeologist Steven LeBlanc’s Constant Battles: The Myth of the Peaceful, Noble Savage documented his title’s description of the state of nature with examples such as a ten-thousand-year-old grave site along the Nile River containing “the remains of fifty-nine people, at least twenty-four of whom showed direct evidence of violent death, including stone points from arrows or spears within the body cavity, and many contained several points. There were six multiple burials, and almost all those individuals had points in them, indicating that the people in each mass grave were killed in a single event and then buried together.” At another site, in Utah, ninety-seven bodies were unearthed, revealing that “six had stone spear heads in them … several breast bones shot through with arrows and many broken heads and arms.… Individuals of all ages and both sexes were killed, and individuals were shot with atlatl darts, stabbed, and bludgeoned, suggesting that fighting was at close quarters.” Another half dozen archaeological sites in Mexico, Fiji, Spain, and other parts of Europe showed human bones broken open lengthwise and cooked, and a pre-Columbian Native American coprolite was laced with the human muscle protein myoglobin, all of which adds up to the fact that humans once ate one another.120 LeBlanc identified ten societies that did not show “constant battles” between groups, but he noted that “some of these same ‘peaceful’ societies have extremely high homicide rates. Among the Copper Eskimo and the New Guinea Gebusi, for example, a third of all adult deaths were from homicide.” So, he asked rhetorically, “Which killing is considered a homicide and which killing is an act of warfare? Such questions and answers become somewhat fuzzy. So some of this so-called peacefulness is more dependent on the definition of homicide and warfare than on reality.”121
A visual reminder of the reality of what life was often like for many of our ancestors may be seen in figure 2-5, featuring two skulls of people who died violent deaths some 8,500 to 10,700 years ago in northern Europe, putting a face on the numbers of our violent past.
Figure 2-5. The Face of Violent Death
Two people who died violently are part of an exhibition of what life was like in northern Europe from 6,500 to 8,700 BCE, from the National Museum of Denmark collection in Copenhagen. A blow to the head that shattered the skull and left a gaping hole and an arrow point in the sternum ensured that the man on the left succumbed, and a dagger in the chest and an arrow through the face terminated the life of the man on the right.122 Although undoubtedly traditional societies varied greatly in their rates of violence—as do modern societies—in general, if you were a man living at that time, there was about a one in four chance that you would die violently.
Like historic societies, prehistoric societies varied considerably in their rates of violence, but statistically speaking, the difference in the chances of dying violently in a prestate society versus a state society is as unmistakable as it is terrifying. Compared to modern humans, prehistoric people were far more murderous in terms of the percentage of the population slaughtered in combat and offed by one another, as Steven Pinker explained to me in an interview in which I asked him to summarize the massive datasets he compiled for his book The Better Angels of Our Nature. “Violent deaths of all kinds have declined, from around 500 per 100,000 people per year in pre-state societies to around 50 in the Middle Ages, to around 6 to 8 today worldwide, and fewer than 1 in most of Europe.” What about gun-toting Americans and our inordinate rate of homicides (currently about 5 per 100,000 per year)? In 2005, Pinker computes, a grand total of 0.008 percent, or eight thousandths of 1 percent, of all Americans died of domestic homicides and in two foreign wars combined. In the world as a whole that year, in fact, the rate of violence from war, terrorism, genocide, and killings by warlords and militias was 0.0003 of the total population of 6.5 billion, or three hundredths of 1 percent.123
What about wars? Surely more people have died due to state-sponsored conflicts than in prestate battles? This isn’t the case if you compute it as a percentage of the population killed, says Pinker: “On average, non-state societies kill around 15 percent of their people in wars, whereas today’s states kill a few hundredths of a percent.” Pinker calculates that even in the murderous twentieth century, about 40 million people died in direct battle deaths (including civilians caught in the crossfire) out of the approximately six billion people who lived, or 0.7 percent. Even if we include war-related deaths of citizens from disease, famine, and genocide, that brings the death toll up to 180 million deaths, or about 3 percent. But what about the two world wars and the Holocaust, Stalin’s gulags, and Mao’s purges? “A very pessimistic estimate of the human damage from all wars, genocides, and war-induced and man-made famines in the 20th century would be 60 per 100,000 per year—still an order of magnitude less than tribal warfare. And of course those numbers are dominated by 1914–1950 in Europe and 1920–1980 in East Asia, both of which have since calmed down.”124 In a moment of comic relief in reviewing such grave matters, when Pinker appeared on The Colbert Report, the comedian Stephen Colbert wondered how he could say that violence is declining when the twentieth century was the most violent in human history. Pinker responded with a wry smile, “a century lasts for a hundred years, and the last 55 years of the twentieth century had unusually low rates of death in warfare so after that spike of wars between 1914 and 1918 and ’39 to ’45, the rate of killing in war went down.”125 The second half of the twentieth century, continuing into the twenty-first century—the long peace, as it is called—is the real mystery to be explained.
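The percentage arithmetic behind Pinker’s twentieth-century comparison is simple enough to verify directly; here is a minimal sketch using only the figures quoted in the paragraph above.

```python
# Percentage arithmetic behind the twentieth-century comparison above,
# using only the figures quoted in the text.

people_lived = 6_000_000_000        # approx. people who lived in the 20th century
battle_deaths = 40_000_000          # direct battle deaths, incl. civilians in the crossfire
war_related_deaths = 180_000_000    # adding war-related disease, famine, and genocide

print(f"Direct battle deaths: {battle_deaths / people_lived:.1%}")        # ~0.7%
print(f"All war-related deaths: {war_related_deaths / people_lived:.1%}")  # ~3.0%
print("Quoted average for non-state societies: ~15% killed in wars")
```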
Figure 2-6 presents the aggregated data compiled by Pinker from multiple sources for the percentage of deaths in warfare for prehistoric people vs. modern hunter-gatherers vs. modern hunter-horticulturalists and other tribal groups vs. modern states. The difference is visually striking and unambiguous in its conclusion because so many datasets all point in the same direction. While one might be skeptical of how the numbers were computed for any one dataset, it is highly unlikely that all of these studies could be wrong so consistently.
The reason for computing the rates of death as a percentage of the population or as a number per hundred thousand—instead of the raw numbers of total killed—is threefold: (1) this is customary among scholars of war and violence; (2) raw numbers will increase over time as a result of larger populations, larger armed forces, and improved technologies for killing, thereby distorting what we really want to know, which is (3) the chances that a given individual (you or I) will die violently. This takes us back to the first principle of moral consideration of this book: the survival and flourishing of individual sentient beings. The violent death of an individual is what I’m focusing on here because it is the individual who suffers the ultimate loss—not a group, race, nation, or statistical collective. Although big Leviathans can put up big numbers in both armies and war deaths, if you had to choose a time in history in which it was safest for you personally to survive and thrive, by these criteria alone there is no time like the present.
Given the implications that these data have for one’s view of human nature and the causes and future of war and violence, the scientific debate has turned ideological, even tribal. On one side are the “peace and harmony mafia”127—those who hold that war is a recent learned cultural phenomenon and that humans are by nature peaceful, a view they defend with vigor and even ferocity. On the other side are what the peace and harmony mafia pejoratively call the “Harvard Hawks” (an invidious smear meant to imply that they favor war over peace)128—Richard Wrangham, Steven LeBlanc, Edward O. Wilson, and Steven Pinker—who contend that war is the outcome of the logic of evolutionary dynamics. The “evolution wars” (as the “anthropologists of peace” call it) have been going on since the 1970s, which I have documented in a previous book.129
Figure 2-6. The Decline of War Deaths from Prehistoric Bands to Modern States
The percentage of deaths in warfare for prehistoric people vs. modern hunter-gatherers vs. hunter-horticulturalists and other tribal groups vs. modern states. This long-term precipitous fall in violence occurred even while the quantity and efficiency of deadly weapons evolved into the killing tools they are today.126
In the latest round of debates over the nature of human nature, the extensive datasets compiled by others and coalesced by Pinker have been challenged in both scholarly and popular publications. In a 2013 edited volume titled War, Peace, and Human Nature, Brian Ferguson claims that the datasets in “Pinker’s List” greatly exaggerate prehistoric war mortality; unfortunately, he then confuses frequency (rates) with tendency (inevitability) when he challenges “the idea that deadly intergroup violence has been common enough in our species’ evolutionary history to act as a selection force shaping human psychological tendencies toward either external violence or internal cooperation.”130 But Pinker is not arguing that the rate of violence in the past acts as a selective force in human evolution; quite the opposite, in fact—the logic of game theoretic interactions (like the Prisoner’s Dilemma matrices I outlined in chapter 1) between players means that a certain amount of defection (in games) or violence (in life) is inevitable when there is no outside governing body (an association in sports, a government in society) to tilt the matrix toward more cooperative and peaceful choices via both rewards and punishments. Over the millennia we have learned how to adjust the conditions of the matrix of life to make people interact in less violent and more peaceable and cooperative ways, and it is those adjustments that have led to a decline of violence and to moral progress.131
In a 2013 paper in the prestigious journal Science, Douglas Fry and Patrik Söderberg disputed the theory that war is prevalent in mobile foraging band societies (MFBS) by claiming that in a sample of 148 episodes from 21 MFBS “more than half of the lethal aggression events were perpetrated by lone individuals, and almost two-thirds resulted from accidents, interfamilial disputes, within-group executions, or interpersonal motives such as competition over a particular woman.” From this they conclude, “most incidents of lethal aggression among MFBS may be classified as homicides, a few others as feuds, and a minority as war.”132 Well, that’s comforting! So the skulls in figure 2-5 were of guys who either got bludgeoned to death by friends within their tribes rather than by enemies from other tribes, or they accidentally, savagely shot themselves in the face or brutally stabbed themselves in the chest with arrows and daggers. Addressing the point about what the data actually indicate, Fry and Söderberg target Samuel Bowles (whose data are included in figure 2-6), accusing him of claiming that “war is prevalent in MFBS” and that “war has been pervasive during human evolution.” Bowles responded, saying, “I did not make the two claims that the authors attribute to me for the simple reason that I was answering a different question.” The question Bowles was trying to answer is, “Did warfare among ancestral hunter-gatherer groups affect the evolution of human social behaviors?” To that end, Bowles explains, “I needed data on the fraction of all deaths that were due to intergroup conflict, not the evidence that Fry and Söderberg present, namely data on whether ‘war’ is ‘prevalent’ or ‘pervasive’ or the major source of violent deaths.”133
Here again we see how categorical and binary thinking cloud the issue. Forcing a continuum of violence into a category of “prevalent” or “pervasive” misses the point of what we’re interested in knowing here: whatever the rate of violence in the past—by whatever the means and whatever the cause—was it enough to affect human evolution? If you insist that the rate must be high enough to be called “prevalent” or “pervasive,” then you have to operationally define these terms with a quantity, including the term “war” that by today’s definition has no meaning for the type of intergroup conflicts that happened during the Late Pleistocene epoch in which our species came of age. As Bowles explains, “In my models of the evolution of human behaviour, the appropriate usage of the term [war] is ‘events in which coalitions of members of a group seek to inflict bodily harm on one or more members of another group’; and I have included ‘ambushes, revenge murders and other kinds of hostilities’ analogizing human intergroup conflict during the Late Pleistocene to ‘boundary conflicts among chimpanzees’ rather than ‘pitched battles of modern warfare.’”134
Modern urban gangs, for example, engage in violent intergroup conflict that cumulatively can rack up significant body counts. Think of the ongoing Mexican drug wars among rival cartels in which more than one hundred thousand people have been killed and more than a million displaced since 2006.135 Yet scholars would not classify these events as “wars” because the motives are more commonly related to honor, revenge, feuding, or turf conflicts. But as Bowles points out, urban gangs meet the criteria for what constitutes an MFBS: small group size, fluctuating group membership, multilocal residence, and a type of egalitarianism with no authority to order others to fight. Bowles carefully examined the data that Fry and Söderberg published in their paper and noted that “motives such as ‘revenge’ or killing ‘over a particular man’ or the fact that a killing was ‘interpersonal’ mean that the event did not fall under the heading of ‘war.’” And yet, Bowles concludes, “From the standpoint of evolutionary biology these aspects of the killing are irrelevant: what matters for the dynamics of population composition is that members of a group (more than one) cooperated in killing a member of another group, for any reason whatever.”136 To that end, the data in figure 2-6 showing the dramatic decline in violence over time remain indisputable, and that is real moral progress however one defines any particular kind of violence.
It’s a point well made in The Arc of War by the political scientists Jack Levy and William Thompson, who begin by adopting a continuum rather than a categorical style of reasoning: “War is a persistent feature of world politics, but it is not a constant. It varies over time and space in frequency, duration, severity, causes, consequences, and other dimensions. War is a social practice adopted to achieve specific purposes, but those practices vary with changing political, economic, and social environments and with the goals and constraints induced by those environments.”137 When nuanced in this continuous rather than categorical manner, we can see both how and when rates of warfare change. By defining war as “sustained, coordinated violence between political organizations,”138 however, Levy and Thompson have defined away prehistoric group conflicts that don’t at all resemble political organizations of today. As such, “war” cannot even begin until there are political organizations of a substantive size, which necessarily means that what we think of as war, by definition, was impossible before civilization began.
Nevertheless, Levy and Thompson acknowledge that the rudimentary foundations for war as they define it were already there in our earliest ancestors—even suggesting that “border skirmishes” with Neanderthals in northern Europe may account for the latter’s extinction some thirty-five thousand years ago—including “the observation that hunting and homicide skills made suitable weaponry, tactics, and rudimentary military organization available” and that “group segmentation helped define group identities and enemies, thereby also facilitating the potential for organizing politically and militarily.”139 Thus they endorse “an early if infrequent start for warfare among hunter-gatherers,” which then increased over time in lethality with improved weapons and increased population sizes, and this continued throughout the history of civilization as states increased in size until nations fought nations, leading to an increase in the total number of deaths but a decrease in the total number of conflicts.
Working in prehistoric South America, archaeologists Elizabeth Arkush and Charles Stanish use a convergence-of-evidence approach from many different sources to compile extensive evidence showing indisputably that “Late Andean prehistory was profoundly shaped by warfare.” And by “warfare” they don’t just mean “ritual battle” in which “real” violence was rare. Archaeological remains of defensive walls and fortifications match the records of Spanish conquistadors who reported, for example, that they “encountered huge Inca armies supported by a superb logistical framework of roads, supply depots, secondary centers, and forts.” Spanish chroniclers and Incan oral histories also make clear that “military might was a cornerstone of imperial power. The empire had emerged from military victories over some groups, the peaceful submission of others persuaded by the threat of military reprisals, and the violent suppression of several rebellions. Inca histories also describe a period of frequent warfare before the empire arose in which local war leaders battled each other for plunder or political dominance.”140
Archaeologist George Milner includes a photograph of an antler point embedded in a lumbar vertebra from prehistoric Kentucky as a visual example of the numerous data points indicating that such figures, if anything, “surely underestimate warfare casualties.” But even if the estimates are exaggerated, “even low frequencies of deaths from intergroup conflicts added just one more element of uncertainty to lives already full of uncertainties. Warfare is likely to have had a broader impact than the immediate loss of life alone. Sudden and unexpected deaths of people who played critical roles in the survival of small groups increased the risk of death for remaining household and community members.”141 Time travel fantasists and postmodernists who bewail modern society and long for simpler times would do themselves a favor by examining this evidence a little more closely.
In his 1996 book Demonic Males, Richard Wrangham traced the origins of patriarchy and violence all the way back to our hominid origins millions of years before the Neolithic Revolution.142 In 2012 and 2013 Wrangham published two papers with his graduate student Luke Glowacki that painted a much more nuanced portrait of hunter-gatherers (HG) as highly risk-averse when it comes to violence and war. Most rational agents don’t want to get maimed or killed, so they only risk going to war when, Wrangham and Glowacki write, “cultural systems of reward, punishment, and coercion rather than evolved adaptations to greater risk-taking” are in place. These cultural systems include the “teaching of specific war skills, apprenticeship, games and contests, pain endurance tests, other endurance tests, and the use of legends and stories.”143 Cultural systems also shower would-be warriors with promises of honor and glory—for themselves and their families—and since death may preclude fallen warriors from cashing in the promises, seeing your late comrade’s family so rewarded acts as a social signal to help individuals overcome their natural risk aversion to dangerous and deadly conflict. Wrangham and Glowacki call this the “cultural rewards war-risk hypothesis” and predict that the greater the risk—the higher the probability of being maimed or killed in battle—the greater the number of benefits that accrue to participating individuals. An assessment of the ethnographic literature on simple warfare among small-scale societies found just that.144
Far from portraying humans as innately violent and warlike, Wrangham admits that “whether humans have evolved specific psychological adaptations for war is uncertain.”145 Instead, a review of all the available literature on group conflicts in both humans and chimpanzees shows that, like their chimp cousins, hunter-gatherers follow a game theoretic strategy of an imbalance of power: if we outnumber them, invade; if they outnumber us, evade. As Lawrence Keeley concluded from his extensive studies of war and conflict in HG bands: “The most elementary form of warfare is a raid (or type of raid) in which a small group of men endeavour to enter enemy territory undetected in order to ambush and kill an unsuspecting isolated individual, and to then withdraw rapidly without suffering any casualties.”146
Apocalypto, Mel Gibson’s film about the collapse of the Mayan civilization, portrays a visually striking example of a typical prehistoric raid in which one tribe of Mesoamericans launches an early morning attack on the hero’s village while its inhabitants are asleep, setting fire to their huts and delivering devastating blows before they can organize a concerted counterstrike. By the time the defending warriors come out of their slumberous state it is all over but the shouting. The writers of the script did their homework and offered a more realistic picture of what life was like before civilization than, say, Kevin Costner’s Dances with Wolves. According to archaeologist Michael Coe, “Maya civilization in the Central Area reached its full glory in the early eighth century, but it must have contained the seeds of its own destruction, for in the century and a half that followed, all its magnificent cities had fallen into decline and ultimately suffered abandonment. This was surely one of the most profound social and demographic catastrophes of all human history.”147 And it unfolded long before the arrival of European guns, germs, and steel at the end of the fifteenth century, which is why Gibson opened the film with a quote from the historian Will Durant: “A great civilization is not conquered from without until it has destroyed itself from within.”148
* * *
The military historian John Keegan once reflected, “War, it seems to me, after a lifetime of reading about the subject, mingling with men of war, visiting the sites of war and observing its effects, may well be ceasing to commend itself to human beings as a desirable, or productive, let alone rational, means of reconciling their discontents.”149 Joshua Goldstein’s 2011 book Winning the War on War compiled massive datasets to support that conclusion when he wrote, “We have avoided nuclear wars, left behind world war, nearly extinguished interstate war, and reduced civil wars to fewer countries with fewer casualties.”150 The political scientist Richard Ned Lebow drew similar conclusions in his 2010 book Why Nations Fight, delineating four motives behind wars of the past 350 years, all of which are in decline: fear, interest, status/standing, and revenge.151 According to Lebow, none of these motives is any longer effectively served by going to war, and more and more nations’ leaders are finding ways to avoid conflict when these motives arise, especially the one that he identifies as the most common motive, status and standing. “I contend that standing has been the most common cause of war historically and that war has declined in large part because it no longer confers standing.” Lebow makes the compelling point that the mechanistic and unheroic destructiveness of the two world wars effectively ended the notion of bravery and heroism that wars allegedly bestowed on their survivors, which gave individuals and states higher status and standing among their peers. Referencing the First World War, for example, Lebow writes, “It is worth considering the counterfactual that opposition to war would not have been nearly so pronounced if the war had been more like its Napoleonic predecessor, a war of maneuver that encouraged individual, recognizable acts of bravery that might have more than minor tactical consequences. War deprived of its heroic and romantic associations, and considered instead an irrational source of slaughter, destruction and suffering, was no longer able to win honor for its combatants or standing for the states that sent them to their deaths.”152
A comprehensive 2014 report by a team of social scientists at Simon Fraser University tested the “declinist” hypothesis by reviewing all of the available data, concluding, “There are now compelling reasons for believing that the historical decline in violence is both real and remarkably large—and also that the future may well be less violent than the past.” About that future, they note that “there are ample grounds for cautious optimism but absolutely none for complacency.”153
The underlying goal in the study of the nature and causes of violence and war—whatever the blend of biology, culture, and circumstance turns out to be—is to attenuate them. (See, for example, the group Vision of Humanity, which tracks trends toward or away from peace and ranks countries in a peace index.154) Because the stakes are so high, emotions in those who conduct such studies run deep. Samuel Bowles said it best in a casual remark to me: “It seems to be a highly ideologically charged debate, which is unfortunate, because finding that war was frequent in the past, or that out-group hostility might have a genetic basis says something about our legacy, not our destiny.”155
Science is—and ought to be—concerned with understanding both our legacy and our destiny, for as Cicero noted in the epigraph to this chapter, warnings of evil are only justified if there is a way of escape. In the next chapter we will consider how science and reason are not only two of the main drivers of moral progress, but they also show us the way of escape from the traps we have set for ourselves.