Fitting In: Some Thoughts on Scholarship, Sources, and Methods
Warning: This afterword is intended for academics. I want to describe to them how this book fits in with the existing literature on decision-making and prediction, as well as to explain how the findings from related fields have informed my own work. The title is slightly tongue-in-cheek, since fitting in is something I have never done well. Because this book does not resemble a traditional work of history, I need to explain my particular methodological approach and the sources I employ. If you are not an academic, you might want to avoid this afterword altogether. Alternatively, if you are not a scholar but you suffer from severe insomnia, then please read on. This chapter might just be the cure you’ve been searching for.
Though most of us long to know the future, especially in troubled times, behavioral scientists have lately been shattering our crystal balls. The noted psychologist Philip Tetlock has been widely cited for revealing that the more renowned the expert, the less accurate his predictions are likely to be.1 The Harvard psychologist Daniel Gilbert tells us that we cannot even predict what will bring us joy, since our expectations are almost always off.2 And the gleefully irreverent market trader Nassim Taleb argues that the massive impact of black swans—improbable but surprisingly frequent anomalies—makes most efforts at prediction fruitless.3 Most notable of all, the economist Dan Ariely has exposed the flawed models for predicting our behavior in everything from the products we buy to the daily choices we make.4 Of course, they’re all right. We are abysmal at prediction. But the skeptics have missed a crucial point: we have no other choice.
National leaders are always in need of thoughtful approaches to prediction, especially when lives are on the line in matters of war and peace. We therefore need some sense of what scholars from a range of disciplines have learned about predictions in general and enemy assessments in particular. One of the most recent observers to find fault with the prediction business is Nate Silver, the election guru whose work I described in chapter 9.5 Silver is only the latest thinker to tackle the question of how we can enhance our predictive prowess. Much of this work has involved, in one form or another, the problem of discerning signals amid noise. Writing in 1973, the economist Michael Spence asked how an employer can distinguish potentially good employees from bad ones before hiring anyone.6 Spence proposed that good and bad employees signal their quality to employers by dint of their educational credentials. More recently, the sociologist Diego Gambetta has used signaling theory to understand criminal networks.7 Gambetta observes, for example, that a mafia must be especially prudent before admitting new members into its organization. If candidates are not carefully vetted, the mafia might enlist an undercover police officer. By asking new recruits to commit a murder, the mafia imposes a substantial cost upon the undercover agent, who, presumably, would be unwilling to pay that price. A genuine would-be mobster, in contrast, can signal his commitment with a single shot. In Colombia, youth gangs have even been known to require that prospective members first kill one of their closest relatives to prove their sincerity. Imposing high costs upon ourselves is one way of signaling what we value.
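The underlying logic of costly signaling can be stated compactly. A signal separates genuine candidates from impostors when its cost falls below the benefit of acceptance for the genuine type but above it for the impostor: if B is the value of admission and C is the cost of the signal for each type, the signal works when

C(genuine) < B < C(impostor).

The murder requirement is engineered precisely so that the undercover officer’s cost exceeds any benefit of infiltration, while the committed recruit’s does not.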
Another economist who focused on asymmetric information, George Akerlof, examined the market for used cars. In 1970, he suggested that because used-car buyers have no easy means of knowing the quality of a particular car, they will pay only what they believe to be the price of an average used car of a given model and year. As a result, owners of high-quality used cars (ones that were barely driven and well maintained) will refuse to sell because they will not get the price they deserve, thereby reducing the overall average quality of used cars on the market. Although assessing enemy behavior and buying used cars are dramatically different realms, Akerlof’s notions do suggest the dangers that result from assuming that what the seller offers (or what the enemy possesses) is of low quality. In chapter 8 we saw what happens when statesmen and their advisers project negative qualities onto their opponents without an accurate understanding of the other side.
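A stylized example, with hypothetical figures, makes the logic concrete. Suppose half the used cars of a given model are sound vehicles worth $10,000 and half are lemons worth $4,000, and only the sellers know which is which. Rational buyers will offer roughly the average, $7,000. At that price the owners of sound cars withdraw from the market, lemons come to dominate it, and buyers, anticipating as much, lower their offers still further. The mere inability to verify quality can unravel an entire market.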
One obvious difference between these studies in economics and sociology on the one hand and the history of international relations on the other is that much of the time foreign leaders do not want to signal their true commitments. The strategic empath must therefore locate ways of identifying an adversary’s drivers amidst conflicting signals. Nevertheless, the idea of costs is useful. At meaningful pattern breaks, statesmen make choices with significant costs to themselves and with likely long-term implications. These actions can be valuable signals to foreign statesmen, even though they are unintentionally transmitted.8
The natural sciences have also aided our understanding of prediction through the development of information theory. John Archibald Wheeler, the physicist credited with coining the term “black hole,” is also famous for crafting the catchphrase “it from bit.” Wheeler was a leading light in the theory of nuclear fission, having studied under Niels Bohr and later having taught Richard Feynman. When Wheeler uttered his pithy slogan in 1989, he meant that all matter as well as energy, the whole of our universe, emerged from information. The bit is not a particle but the irreducible unit of information: a single yes-or-no answer that cannot be split further. Everything, in the end, reduces to information. But it was Claude Shannon, the father of information theory, who truly brought about the information turn in scientific study.
Shannon recognized that not all information is created equal. To test this, he pulled a Raymond Chandler detective novel from his bookcase and read a random passage to his wife, Betty: “A small oblong reading lamp on the—.” He asked her to guess which word came next. She failed at first, but once he told her that the first letter was d, the rest was easy. More than mere pattern recognition was at work here. Shannon wanted to show that the information that counted most came before the missing letters, whereas the letters following the d in “desk,” being predictable, carried far less. For Shannon, information equaled surprise. The binary digits, or “bits,” as they came to be known, that mattered in any message were the ones that gave us something new or unexpected.9 It was a valuable insight, and one with applicability across fields. The historical cases in this book bore this out. Each case examined the particular bits of surprising information on which leaders focused and why that focus helped or hindered them.
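Shannon’s equation gives this idea precise form: the information carried by a symbol is the negative logarithm of its probability, measured in bits.

I(x) = −log₂ p(x)

A letter the reader can predict with near certainty, like those following the d in “desk,” has a probability close to 1 and therefore carries close to zero bits; a letter that could be any of thirty-two equally likely alternatives carries five. In this precise sense, surprise is information.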
Evolutionary biology, specifically the literature on theory of mind, is equally important for historians of decision-making. In the classic experiment on theory of mind, researchers placed a piece of candy in front of two little girls. We’ll call them Sally and Jane. The researchers then covered the candy with a box so it could not be seen. While Jane was out of the room, the researchers removed the candy from under the box and hid it elsewhere, leaving the box in place. When Jane returned, they asked Sally where Jane thought the candy was located. Before the age of four, most children think that Jane, the girl who did not see the candy being removed, will somehow know that the candy is no longer under the box. Most children believe that everyone else knows what they themselves know. It turns out that only after children reach the age of four do they discover that each of us has a distinct perspective on the world, shaped by access to different information. Before that age, children do not possess “theory of mind.” They cannot imagine that someone else does not possess the same knowledge or perspective that they themselves do.
Compelling as they are, these theories have real limits when it comes to understanding the kinds of complex decisions that statesmen face. Although theory-of-mind research shows how we develop a kind of mental empathy—the ability to see things from another’s point of view—this work was initially centered on primates and very small children. That said, there does exist work of relevance to statecraft. In one paper, for example, Alison Gopnik and her coauthors describe the differences between two common ways in which we predict the actions of others.10 Most people, it seems, assume that past behavior is the best indicator of future behavior. If someone lied in the past, for example, that person can be expected to lie again. But others take a different view. They do not discount past behavior, but they place greater weight on the current context, asking how the present situation is likely to affect another’s actions. In my own study of statecraft, I find that the leaders who succeeded most at anticipating enemy actions incorporated analysis of both prior patterns and current context, but they heavily weighted the information gleaned at certain moments.
One other scientific contribution bears indirectly, though significantly, on this book. Ray Kurzweil, the inventor who pioneered speech recognition software (and who is now a director of engineering at Google), has advanced a theory of how our brains function. In his 2012 book, How to Create a Mind, Kurzweil proposes the pattern recognition theory of mind to explain how the neocortex functions.11 Kurzweil argues that the primary purpose of our brains is in fact to predict the future through pattern recognition. Whether we are trying to anticipate threats, locate food sources, catch a ball, or catch a train, our brains are constantly performing complex calculations of probability.
Kurzweil asserts that the neocortex—the brain’s large outer layer, where most such calculations are conducted—is composed of layers upon layers of hierarchical pattern recognizers. These pattern recognizers, he maintains, are constantly at work making and adjusting predictions. He offers the simple sentence:
Consider that we see what we expect to—
Most people will automatically complete that sentence based on their brain’s recognition of a familiar pattern of words. Yet the pattern recognizers extend far deeper than that. To recognize the word “apple,” for example, Kurzweil notes that our brains not only anticipate the letter “e” after having read a-p-p-l; they must also recognize the letter “a” itself by identifying familiar curves and line strokes. Even when an image of an object is smudged or partially obscured, our brains are often able to complete the pattern and recognize the letter, or word, or familiar face. Kurzweil believes that the brain’s most basic and indeed vital function is pattern recognition.
This ability is exceptionally advanced in mammals and especially in humans. It is an area where, for the moment, we retain a limited advantage over computers, though the technology for pattern recognition is rapidly improving, as evidenced by the Apple iPhone’s Siri speech recognition software. For a quick example of your own brain’s gifts in this arena, try to place an ad on the website Craigslist. At the time of this writing, in order to prove that you are a human and not a nefarious robot, Craigslist requires users to input a random string of letters or numbers presented on the screen. The image, however, is intentionally blurred. Most likely, you will have no difficulty identifying the symbols correctly. Robots, in contrast, will be baffled, unable to make sense of these distorted shapes. For fun, try the option for an audio clue. Instead of typing the characters shown in the image, listen to the spoken representation of those letters and numbers. You will hear them spoken in a highly distorted manner amidst background noise. The word “three,” for example, might be elongated, stressed, or intoned in a very odd way. “Thaaaaaaaa-reeeeeeee.” It sounds like the speaker is either drunk, on drugs, or just being silly. The point is that a computer program attempting to access the site cannot recognize the numbers and letters when they do not appear in their usual patterns. Our brains possess an amazing ability to detect patterns even under extremely confusing conditions. But before you start feeling too smug, note that Kurzweil predicts we have only until the year 2029, when computers will rival humans in this and other regards. So enjoy it while it lasts.
Let me sum up this section: Kurzweil’s theory suggests that pattern recognition is the brain’s most crucial function, and our sophisticated development of this ability is what gives human beings the edge over other animals and, for now, over computers as well. I suggest that the best strategic empaths are those who focus not only on enemy patterns but also on meaningful pattern breaks, and who correctly interpret what they mean. Next, Claude Shannon’s information theory shows that new and surprising information is more valuable than routine data. I observe that pattern breaks are, in fact, markers of new and surprising information, possessed of greater value to leaders than the enemy’s routine actions. Finally, the theory of mind scholarship provides ways of thinking about how we mentalize, or enter another’s mind, which I employed throughout this book, but especially when scrutinizing how Stalin tried to think like Hitler.
Before we grow too enamored of all these theories, we should remember that theories are not always right. Often their proponents, in their well-intentioned enthusiasm, exaggerate the scope and significance of their discoveries. This is particularly true of some recent works in social science—studies that bear directly on the nature of prediction.
One striking feature of much recent social science scholarship on prediction is its tendency to expose alleged human silliness. Across fields as diverse as behavioral economics, cognitive psychology, and even the science of happiness or intuition, studies consistently show how poor we are at rational decision-making, particularly when those choices involve our expectations of the future. Yet too often these studies draw sweeping conclusions about human nature from exceedingly limited data. In the process, they typically imply that their subjects in the lab will respond the same way in real life. Before we can apply the lessons of cognitive science to history, we must first be clear on the limits of those exciting new fields. We should temper our enthusiasm and must not be seduced by science.
Consider one daring experiment by the behavioral economist Dan Ariely. Ariely recruited male students at the University of California, Berkeley, to answer intimate questions about what they thought they might do in unusual sexual situations. After the subjects had completed the questionnaires, he then asked them to watch sexually arousing videos while masturbating—in the privacy of their dorm rooms, of course. The young men were then asked the same intimate questions, only this time their answers on average were strikingly different. Things that the subjects had previously thought they would not find appealing, such as having sex with a very fat person or slipping a date a drug to increase the chance of having sex with her, now seemed much more plausible in their aroused state. Ariely concluded from these results that people are not themselves when their emotions take control. “In every case, the participants in our experiment got it wrong,” Ariely explains. “Even the most brilliant and rational person, in the heat of passion seems to be absolutely and completely divorced from the person he thought he was.”12
Ariely is one of America’s most intriguing and innovative investigators of behavioral psychology. His research has advanced our understanding of how poorly we all know ourselves. And yet there is a vast difference between what we imagine we would do in a situation and what we would actually do if we found ourselves in it. In other words, just because a young man in an aroused state says that he would drug his date does not guarantee that he truly would do it. He might feel very differently if the context changed from masturbating alone in his dorm room to sitting with a woman on an actual date. Can we be so certain that he really would slip the drug from his pocket into her drink? Or would he truly have sex with a very overweight person if she were there before him? Would he have sex with a sixty-year-old woman or a twelve-year-old girl, or act out any of Ariely’s other scenarios, if he were presented with the opportunity in real life? Life is not only different from the lab; it also has a funny way of being rather different from the fantasy.
A great many recent studies suffer from a similar shortcoming. They suggest profound real-world implications from remarkably limited laboratory findings. In his wide-ranging book on cognitive psychology, Nobel Laureate Daniel Kahneman describes the priming experiments conducted by Kathleen Vohs in which subjects were shown stacks of Monopoly money on a desk or computers with screen savers displaying dollar bills floating in water. With these symbols priming their subconscious minds, the subjects were given difficult tests. The true test, however, came when one of the experimenters “accidentally” dropped a bunch of pencils on the floor. Apparently, those who were primed to think about money helped the experimenter pick up fewer pencils than those who were not primed. Kahneman asserts that the implications of this and many similar studies are profound. They suggest that “. . . living in a culture that surrounds us with reminders of money may shape our behavior and our attitudes in ways that we do not know about and of which we may not be proud.”13
If the implications of such studies were that American society is more selfish than other societies, then we would have to explain why Americans typically donate more of their time and income to charities than do the citizens of nearly any other nation.14 We would also need to explain why some of the wealthiest Americans, such as Bill Gates, Warren Buffett, Mark Zuckerberg, and a host of other billionaires, have pledged to donate at least half of their wealth within their lifetimes.15 Surely these people were thinking hard about their money before they chose to give it away. We simply cannot draw sweeping conclusions from snapshots of data.
I want to mention one other curious study from psychology, because its underlying assumption has much to do with how we behave during pattern breaks. Gerd Gigerenzer is the highly sensible Director of the Max Planck Institute for Human Development and an expert on both risk and intuition. Some of his work, which he related in a book titled Gut Feelings, was popularized in Malcolm Gladwell’s Blink. Gigerenzer has never been shy to point out perceived weaknesses and shallow logic in his own field. He has written cogently on the flaws embedded in Daniel Kahneman’s and Amos Tversky’s heuristics and biases project.16 Yet even Gigerenzer has occasionally fallen into the “how silly are we?” camp, though he certainly did not take the following topic lightly. Unfortunately, this particular study suggests that Americans behaved irrationally after 9/11, when their reactions may have been perfectly sound.
Gigerenzer found that American fatalities from road accidents increased after 9/11.17 Because many Americans were afraid to fly in the year following the attacks, they drove instead. Presumably, the increased number of drivers led to more collisions, causing roughly 1,500 more deaths than usual. Gigerenzer’s main aim is prudent and wise: governments should anticipate likely shifts in behavior following terrorist attacks and should take steps to reduce indirect damage, such as the additional road accidents that result from changed behavior. But the underlying assumption is that many Americans cannot think rationally about probability. Gigerenzer implies that the decision not to fly after 9/11 was based on irrational fears. Had they continued to fly instead of drive, fewer Americans would have died.
The problem with such reasoning, as you’ve likely already guessed, is that it ignores the pattern-break problem. A statistician might argue that, despite the 9/11 hijackings, the odds of dying in a plane crash were still extremely low. But those odds were based on a prior pattern—prior, that is, to a meaningful and dramatic pattern break. After 9/11, Americans had to wonder whether other terrorist plots using airplanes were still to come. If the terrorists could defeat our security checks once, could they do it again? Given that these were the acts of an organization and not of a single, crazed individual, and given that the leader of that organization vowed to strike America again, it was wise to adopt a wait-and-see approach. The past odds of flying safely no longer mattered in light of a potentially ongoing threat. With no means of determining how great that threat would be, driving was a perfectly rational alternative, even knowing that one’s odds of dying in a car crash might rise, for until a new pattern was established (or the prior one returned), the odds of dying in a hijacked plane might be even higher.
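To put the point in terms of simple probability, with hypothetical numbers: if pre-9/11 data implied a fatality risk of, say, one in ten million per flight, that figure was an estimate drawn from a stable underlying process. After the attacks, the relevant question was no longer “What were the historical odds?” but “Has the process that generated those odds changed?” If further coordinated hijackings were plausible, the true risk of flying was simply unknown, and no confident comparison with driving’s well-established risk could be made.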
In his article, Gigerenzer did observe that following the Madrid train bombings in 2004, Spaniards reduced their ridership on trains, but those rates returned to normal within a few months. Gigerenzer speculates that one reason might have been that the Spanish are more accustomed than Americans to dealing with terrorist attacks. In other words, the Madrid train bombings represented less of a pattern break than did 9/11.
Another way of thinking about this problem is to compare it with the horrific movie theater shootings in Aurora, Colorado, on July 20, 2012, in which a lone gunman shot twelve people to death and wounded fifty-eight others. As frightening as this incident was, it would not have made sense for Americans across the country, or even in Aurora, to have stopped attending films in theaters. The incident marked no new breach in security and no innovation in killing techniques. The same risk had long been present. The assailant operated alone, not as part of an international terrorist network. While there is always the chance of copycat attacks, it remained valid to consider the odds of being murdered in a movie theater based on the pattern of past killings in theaters or in public spaces in general. The Aurora attack did not represent a meaningful break in the pattern of American gun violence. Like the Spaniards with the Madrid train bombings, Americans have sadly become accustomed to episodes like these.
Judging probability is an excellent way of assessing risk only when we focus on the right data and recognize when the old odds no longer matter. The famed English economist John Maynard Keynes is often quoted for his snappy remark, “When the facts change, I change my mind. And what do you do, sir?” I would offer a variation of Keynes’s quip.
When the pattern breaks, I change my behavior.
How about you?
My goal in this discussion is not to disparage the work of behavioral scientists. On the contrary, their work can help us challenge the assumptions we have too long taken for granted. My aim instead is to caution us against carrying the implications of such studies too far. The experiments of behavioral scientists can help guide our thinking about how we think, as long as we remain cognizant of the gulf between labs and real life.18 And here is where I believe historians can add true value.
Although history holds great potential for understanding how we think, historians typically focus their studies on how one or two particular individuals in a narrow time period thought. For example, the historian might derive deep insight into the thinking of key historical figures, such as Abraham Lincoln or John Brown. Alternatively, historians might trace a particular historical event across time, such as the slave trade or the abolition movement, scrutinizing its many causes and consequences. As a result, they might comprehend how large groups of people thought about a particular subject over time. Rarely, however, do historians attempt to investigate types of thinking across both time and space—meaning at various historic moments in various regions of the world. That is what this study of historical decision-making aimed to do.
In contrast, the subfield of political science known as international relations often examines disparate cases of conflict across time and space, but it does so with definite theories it seeks to prove. Beginning with the assumption that nations relate to each other according to fixed laws of behavior, international relations scholars aim to advance, refine, or refute existing theories. When such theories are actually grounded in richly corroborated historical sources, awareness of these theories can be highly useful to the historian because they can alert us to common patterns in international conflict as well as cooperation.
This book does not advance a theory of how states behave—at least not in the traditional sense. My argument is both generalizable and parsimonious, but it is not predictive. It does not suggest that if x and y occur, then z will result. Instead, this book makes observations about how particular cases of twentieth-century conflict unfolded. It draws modest conclusions about how certain leaders have thought about their enemies, and it does this by probing a handful of key clashes.
Fostering a sense of the enemy typically involves gathering information on two things: intentions and capabilities. By examining these two elements of power, experts believe they can comprehend or even anticipate an adversary’s behavior. This categorization is, however, far too narrow. A more inclusive categorization focuses instead on drivers and constraints.
The first step in strategic empathy involves a cold assessment of constraints. We look first not at what the other side might want to do but at what it is able to do based on context. Capabilities are not constraints. Capabilities are what enable us to achieve our wants, but constraints are what render those capabilities useless. The worst strategic empaths think about capabilities in mainly military terms. They count missiles and tanks, factor in firepower, and dissect strategic doctrine for clues to enemy intentions. If China today builds an aircraft carrier, it must be planning to challenge America on the high seas . . . or so the thinking goes. But military capabilities, just like intentions, are often constrained by nonmilitary factors, such as financial, political, organizational, environmental, or cultural impediments to action. Even something as ineffable as the Zeitgeist can be a powerful constraint, as Egyptian President Hosni Mubarak and Libyan leader Muammar Gaddafi recently discovered, much to their regret. The best strategic empaths seek out the less obvious, underlying constraints on their enemy’s behavior as well as their own.
Once the underlying constraints are grasped and it is clear that the enemy actually has room to maneuver, strategic empaths then turn to exploring the enemy’s key drivers. (In reality, of course, most leaders cannot set the order in which they assess these factors. Typically that analysis occurs in tandem or in whichever order circumstances allow.) If intentions are the things we want to do, drivers are what shape those wants. We can be driven by an ideological worldview, such as communist, capitalist, or racialist dogma. We can be driven by psychological makeups, with all the myriad complexes and schemas they entail. Or we can be driven by religious and cultural imperatives: to conquer the infidels, to convert the heathens, or to Russify, Francofy, or democratize the Other. Political scientists have produced a vast literature on enemy intentions, each scholar offering an ever more nuanced explication of how states signal their intentions and how other states perceive them. Yet intentions are best anticipated, and strategic empathy is best achieved, when the underlying drivers are clearly understood.
International relations has a long tradition of scholarship on recognizing enemy intentions. Because the discipline is frequently concerned with how states manage threats in foreign affairs, it has produced numerous studies dealing specifically with the failure to predict correctly. More often than not, states are caught off guard when prior trends are broken. Blame then falls first upon the spies. A body of literature on intelligence failures has recently cropped up. These studies deal in part with assessing enemy intentions, but they are largely America-centric, spurred by the failures to predict the 9/11 attacks and to correctly estimate Iraq’s alleged weapons of mass destruction. These works include Richard Betts’s Enemies of Intelligence,19 Robert Jervis’s Why Intelligence Fails,20 and Joshua Rovner’s Fixing the Facts.21 One work focused primarily on assessing military threats is Daryl Press’s Calculating Credibility, in which the author argues, unconvincingly in my view, that leaders do not concern themselves with an enemy’s past behavior when determining the extent of that enemy’s likely threat.22 A more recent study of assessing enemy intentions, one that expands its scope to cover statecraft as well as intelligence agencies, is the doctoral dissertation and now book by Keren Yarhi-Milo, Knowing the Adversary.23 Yarhi-Milo concludes: “Decision-makers’ own explicit or implicit theories or beliefs about how the world operates and their expectations significantly affect both the selection and interpretation of signals.”24 In other words, our beliefs affect how we think.
A Sense of the Enemy is not part of this political science canon. Rather than focusing only on failure, it also studies success. Instead of positing theories, it seeks explanations for why events unfolded as they did. It moves beyond the America-centric or Anglocentric story by concentrating on German, Russian, Indian, and Vietnamese leaders as well as American statesmen. And it aims not primarily to improve intelligence work but instead to understand how one aspect of statecraft contributes to shaping historical outcomes.
In addition to the many political scientists who have tackled the problem of enemy assessment, most notable among them Alexander George,25 historians have also specifically sought explanations for how opponents understand each other. In Knowing One’s Enemies: Intelligence Assessment Before the Two World Wars,26 a cast of distinguished historians investigates the faulty intelligence estimates of all the combatant states before each world war. Again, this work centers on the intelligence assessments rather than the statesmen. It focuses on failures, not successes, and it does not ask the question: What enabled statesmen to think like their enemies?
In 1986, Ernest May, the editor of Knowing One’s Enemies, teamed up with a fellow Harvard scholar, Richard Neustadt, to assist policymakers with a book called Thinking in Time. The authors drew upon recent American history at the highest levels of decision-making, mainly over the four decades from Roosevelt to Reagan. By examining a series of case studies and analyzing what went right and wrong—but mostly what went wrong—they hoped to provide sensible guidelines that would help dedicated public servants perform better. Their conclusions could hardly be faulted: challenge your assumptions, be wary of historical analogies, distinguish what is certain from what is presumed, and read as much history as you can. Within their narratives, they also revisited the question of how we can know our enemies. They urged policymakers to engage in “placement”: the act of placing individuals in their historical context in order to determine which major events, both public and personal, shaped their worldviews. Placement, they argue, can offer clues to another person’s views, including that person’s opinions of others. Sensibly, the authors conceded that placing others in historical context cannot guarantee correct predictions about their actions. I could not agree more, but the question remains how to know which information matters.
At one point the authors give examples of the key bits of information that would have helped predict particular decision-makers’ actions during the lead-up to American escalation in Vietnam. They argue, for example, that to understand Defense Secretary Robert McNamara, “. . . It appears worth knowing that he made his way at Ford and built his reputation there from a base in statistical control.” Regarding the Secretary of State, Dean Rusk, the authors observe that it helps to know that Rusk had served in the Army in World War II and that General George C. Marshall was his hero and role model. “Each piece of information from the rest of personal history,” they maintain, “enriches or enlivens guesses drawn from conjunctions of age and job.”27
The crucial phrase here is “it appears worth knowing.” It only appears worth knowing these facts in retrospect. It is far harder to know at the time which of the countless bits of data about a person’s life will be most salient in shaping his actions in a given moment. Placing others in their historical context is essential to learning about them, but it cannot by itself reveal a person’s underlying drivers. The basic problem with Neustadt and May’s notion of placement is not that it suffers from “hindsight bias,” the tendency to view outcomes as inevitable and predictable. While it is unreasonable for historians to look back at events and assume that their outcomes were foreseeable, it is perfectly legitimate for scholars to analyze what occurred and identify which information would have been useful at the time, regardless of the ultimate outcome. The real problem with “placement” is that, as a guide for policymakers, it is too diffuse. It leaves one with too much information and no guide to identifying the right chunk.
I did not begin this study with a hypothesis about pattern breaks. I started only with a question: What produces strategic empathy? When leaders do get it right, is their success random—a product of pure luck—or could there be a signal amidst the noise? After analyzing in depth the cases described in this book, I found one such signal: an enemy’s underlying drivers and constraints became apparent at times of pattern breaks.
All decision-makers need heuristics for cutting through the mass of data about their opponents to distinguish what moves those opponents to act. This book focuses on pattern breaks as a heuristic for exposing hidden drivers. It is not intended primarily as a guidebook for policymakers, though if it proves of use to them, that can only be to the good. It is instead a study of history, yet it differs from standard histories in two main ways.
The first unorthodox aspect of this book is that, unlike traditional diplomatic histories, this study strives to incorporate the useful recent findings on decision-making gleaned by other fields, especially cognitive neuroscience, information theory, and psychology. Like many historians, I am skeptical of sweeping generalizations drawn from limited data. Yet we cannot allow skepticism to produce obscurantism. We simply know much more today about how the mind works than we did in the 1980s, when Neustadt and May were writing. If one goal of historians is to understand how their subjects were thinking—and that is certainly a major goal of this book—then we have a responsibility to be informed by advances in knowledge of the decision-making process. Naturally, we can only be informed by this knowledge; we cannot apply it indiscriminately to the figures we want to understand. Since we cannot place Hitler or Stalin inside an fMRI machine any more than we could place them on the psychologist’s proverbial couch, we dare not draw definitive conclusions about how those figures thought. But we can use the historian’s craft of combing the extant records, diaries, memoirs, and private and official papers, and combine those concrete insights with an understanding of how most people process information. Again, we can never reach total certainty about how historical actors thought, but we can sometimes get pretty close. The rest is left to our own judgment about what is reasonable and likely.
This is, in fact, the approach taken by Christopher R. Browning in his thoughtful historical study of Holocaust murderers, Ordinary Men. In attempting to understand how these seemingly average individuals came to act as cold-blooded killers of innocent Jewish men, women, and children, Browning cautiously applies insights drawn from psychology. Most notably, he considers the lessons from two separate and equally shocking experiments conducted by Stanley Milgram and Philip Zimbardo, both of which explored questions of conformity and deference to authority. Yet Browning remains judicious in his application of these findings and modest in his conclusions. Here is how Browning explains both the limits and benefits of psychology to history:
Was the massacre at Józefów a kind of radical Milgram experiment that took place in a Polish forest with real killers and victims rather than in a social psychology laboratory with naïve subjects and actor/victims? Are the actions of Reserve Police Battalion 101 explained by Milgram’s observations and conclusions? There are some difficulties in explaining Józefów as a case of deference to authority, for none of Milgram’s experimental variations exactly paralleled the historical situation at Józefów, and the relevant differences constitute too many variables to draw firm conclusions in any scientific sense. Nonetheless, many of Milgram’s insights find graphic confirmation in the behavior and testimony of the men of Reserve Police Battalion 101.28
I believe that precisely this type of cautious yet open-minded approach is not only sensible but invaluable for studies of decision-making. Today, the historian has the benefit of more than just psychology for understanding how people think. In the twenty years since Browning wrote Ordinary Men, we have made astonishing advances in cognitive neuroscience and other related fields, all of which have expanded our knowledge of how the human brain functions.
The second curious aspect of this book is that it combines two forms of scholarship: original, primary-source historical research and interpretive essays reflecting how historical actors thought about their enemies. To accomplish this, the book draws on a wide range of English, German, Russian, and Vietnamese primary sources, many of which are published archival records, as well as the substantial secondary literature on relevant topics. Like all historians, I plumb the extant record, watch for corroborating evidence, try to ascertain causes and verify claims. But I also apply a conceptual framework to my analysis by focusing on the effects of pattern breaks on the way that leaders thought.
For the chapter on Gandhi’s assessments of the British, I draw heavily on the Collected Works of Mahatma Gandhi. I also use the memoirs of one who was close to the Mahatma during the events in question, as well as British newspaper accounts and the Hansard transcripts of debates in the House of Commons.
For the section concerning Germany in the 1920s, I use materials as diverse as Reichstag session transcriptions, records of Cabinet meetings, newspaper accounts, and the diaries and memoirs of leading decision-makers. I harness all of the standard resources available to diplomatic historians, such as the Foreign Relations of the United States series, Documents on German Foreign Policy, and British Documents on Foreign Affairs, including the more selective Confidential Print series.
For the section on Roosevelt’s and Stalin’s attempts to think like Hitler, I rely on sources similar to those noted above, as well as the reports that Under Secretary of State Sumner Welles sent privately to President Roosevelt during his 1940 mission to meet Hitler and Mussolini. I tap the Franklin D. Roosevelt President’s Secretary’s Files, particularly Parts I and II, with their records of correspondence between the President and the American Ambassador to Germany, William E. Dodd. I also consult the published Soviet archival documents concerning Stalin’s intelligence on the eve of Operation Barbarossa, including materials from the Stalin Digital Archive.
In the section on North Vietnamese statesmen I employ a variety of newly available sources. The Vietnamese state has recently released a massive collection of Politburo and Central Committee directives, cables, and speeches (called the Van Kien Dang), providing the first official glimpse into Hanoi’s decision-making over several decades. This section also makes use of official Vietnamese histories of its military and diplomatic corps. These include histories of the Foreign Ministry, the People’s Army, the People’s Navy, the Sapper Forces, the Central Office of South Vietnam, histories of combat operations, histories of the Tonkin Gulf incident, memoirs of prominent military and diplomatic officials, records of the secret negotiations with the Johnson administration, and some Vietnamese newspapers.
I have engaged all of these sources in an effort to gain purchase on how strategic empathy shaped matters of war and peace. I selected these cases both for their significance to twentieth-century international history and for their capacity to illuminate strategic empathy’s impact.
One of my goals in this book, as in my previous work, is to use history to help us understand how people think. Whereas the cognitive sciences can suggest much about our decision-making process in the lab, the study of historical decision-making can provide us with real-life subjects under genuine pressures. Historians must examine how people behaved not in the confines of controlled procedures but in the real world, where so much is beyond anyone’s control. If we want to understand how people think, it makes sense to probe historical cases for clues. In this way, studies of historical decision-making can greatly complement the cognitive sciences.
Ultimately, history must never be a mere recounting of facts strung together into a story about the past. Instead, it must be used to advance our understanding of why events occurred and why individuals acted as they did. Viewed in this way, historical scholarship holds enormous practical value for anyone who seeks to comprehend the world around them. As the international historian Marc Trachtenberg put it, the aim of historical analysis is to bring forth the logic underlying the course of events. “In working out that logic,” he writes, “you have to draw on your whole understanding of why states behave the way they do and why they sometimes go to war with each other.”29 Part of our understanding—our whole understanding—must come not only from assessing the structure of state relations at the systemic level and from analyzing the domestic-level and organizational politics affecting state behavior, but also from a study of how individual leaders thought about the other side.