Do leaders matter? In International Relations (IR), this question has been answered differently in different time periods. The “Great Man Theory,” which assumed that little besides leadership mattered in explanations of foreign policy, was in vogue prior to World War II. During the Cold War, Great Man approaches fell into disfavor, and the most important elements in understanding at least superpower behavior seemed to be defined at the level of state or system attributes. After the Cold War, crises such as those involving Iraq and North Korea inclined specialists to look once again at leader characteristics to help understand the foreign policy of these nations. In addition, a new, cognitivist paradigm emerged that built upon advances in the study of human psychology. Cognitivism began to produce subfields at the intersection with other disciplines, such as behavioral economics. This paradigm also advanced understanding of how the individual characteristics of leaders might influence foreign policy decisionmaking (for good overviews, see Rosati, 2000 and Hafner-Burton et al., 2017).
More recently, as we have seen, the election of Donald Trump—with his distinct personality traits that are atypical of U.S. Presidents (McAdams, 2016)—vaulted interest in political psychology beyond the academy and into the public consciousness. For example, it has become routine for journalists and commentators to reach for psychological explanations for various policy decisions made by the Trump administration, including those concerning foreign policy. In fact, the extent to which the global media cycle revolves around parsing Trump’s tweets for evidence of policy insights or his psychological state suggests that, if anything, we may have become overly fixated on Trump’s psychology. And yet, the apparent influence of “strong man” leaders such as Xi Jinping, Vladimir Putin, Recep Tayyip Erdoğan, and Kim Jong-un on contemporary global politics suggests we ignore the role of individual factors at our peril. These developments underscore, in a new way, the importance of developing an informed understanding of how the political psychology of world leaders connects to foreign policy choices, and of the degree to which we might foreground those connections in various situations.
While the academy has traditionally been tentative about the value of leader analysis, governments are much less so. An office of leadership analysis was created in the CIA in the 1970s and continues to offer analysis and briefings about world leaders to presidents and high-level diplomats to this day. For example, an advertisement recruiting a “Leadership Analyst” posted by the CIA in 2018 describes a role that supports policymakers by “producing and delivering written and oral assessments of foreign leaders and key decision-makers.” Such analysis, the advertisement continues, “will help US policymakers understand their foreign counterparts by examining worldviews, national ambitions and constraints, and the social context for these leaders” (CIA, 2018). Of course, whether they are based in the United States or anywhere else, “Policymakers desperately want to understand just what kinds of adversaries they are facing” (Omestad, 1994). Strategies of deterrence and negotiation depend significantly upon an understanding of the other’s worldview. Communication between nations can also be affected in important ways by leadership idiosyncrasies.
The desperate desire of policymakers to understand their counterparts in other nations is not without foundation. However, a better initial question to ask might be, When do leaders matter (table 2.1)? Surely not every foreign policy decision carries the imprint of the leader’s distinctive personal characteristics and perceptions. A related question might be, Which leaders matter? Government personnel other than the top leader may leave more of an impression on a particular foreign policy than the chief executive. It is to these questions that we now turn.
Under what conditions might it be more fruitful to examine leader characteristics? A variety of hypotheses come to mind.
First, regime type may play a role in answering this question. Different regime types offer different levels of constraint on the leader’s control of policy. It might be more imperative to assess leader characteristics in one-man dictatorships, such as Kim Jong-un’s North Korea, than it would be to examine them in some long-established parliamentary democracies. Jerrold Post, one of the founders of the CIA’s Office of Leadership Analysis in the 1970s, argues that the need to assess leader characteristics “is perhaps most important in cases where you have a leader who dominates the society, who can act virtually without constraint” (quoted in Carey, 2011). Nevertheless, it must be kept in mind that there is no regime type that precludes a leader’s personal influence on policy altogether, as we have seen with Donald Trump and the United States.
Second, it matters whether a leader is interested in foreign policy. Leaders uninterested in foreign policy may delegate a large measure of authority to subordinates, in which case it would be vital to identify and examine their characteristics as well. For example, after World War II, Francisco Franco openly commented on his lack of interest in foreign affairs, delegating most decisionmaking power to his foreign minister. Nevertheless, over the years his foreign minister began to make choices that did not sit well with Franco, and eventually the minister was dismissed. Even a disinterested leader can become interested if the context is right. Leaders who have an emotional response to the issues under discussion because of prior experience or memory are also likely to leave more of a personal imprint on foreign policy. When dealing with Saddam Hussein’s Iraq, it mattered that President George W. Bush knew Hussein had tried to assassinate his father, George H. W. Bush (Houghton, 2008).
Part of that context may provide us a third scope condition: crisis situations will invariably be handled at the highest levels of government power, and almost by definition top leaders will be involved regardless of their general level of interest in foreign affairs. However, an important caveat must be mentioned here. If the crisis is so extreme that the country’s survival is at stake, a leader may try to keep his or her psychological predispositions in check in order to avoid making any unnecessary mistakes. But for every example of such restraint (John F. Kennedy and the Cuban missile crisis), we can find numerous examples of how crisis situations brought a leader’s personality and predispositions to the fore in a very strong way (Richard Nixon and Watergate).
A related context that may allow a leader’s personal characteristics to play more of a role in decisionmaking is in ambiguous or uncertain situations, our fourth contextual variable. When advisors are unable to “read” a situation because information is sparse or contradictory, a leader may be called upon to exercise his or her judgment so that a basis for foreign policy decisionmaking is laid. One subcategory of these types of situations is that involving long-range planning, where sweeping strategic doctrines or approaches to particular problems are decided for an uncertain and unpredictable future.
Margaret Hermann has proffered a fifth contextual variable, namely, the degree to which a leader has had diplomatic training (1984). Hermann argues that leaders with prior training have learned to subordinate their personal characteristics to the diplomatic requirements of the situation at hand. Untrained leaders, especially those with what she has termed “insensitive” orientations to the international context, are likely to rely more on their personal worldviews in any foreign policy response. Again, an interesting pair of cases is George H. W. Bush, who spent many years in diplomatic service, and his son George W. Bush, who had no diplomatic training before becoming president. Elizabeth Saunders, for example, examines the seminal foreign policy decisions of these two presidents—the invasions of Iraq in 1991 and 2003, respectively—to illustrate how foreign policy experience “influences the assessment and mitigation of risks in war” (Saunders, 2017, S220). She argues that experienced leaders will be more adept at effectively monitoring experienced advisers, be more capable of meaningfully delegating to experienced advisers, and be more successful in diversifying the advice they are given.
Expertise in a particular issue area or region of the world may also signal that a particular leader, even if he or she is not the top leader, may leave a personal imprint on the policy eventually chosen. It is not uncommon in the post-Vietnam era for U.S. presidents to defer to military leaders when conflict is being discussed as an option. Indeed, in a number of cases it is the military leadership that makes the strongest case against intervention options being weighed by the president. Larger-than-life figures such as Henry Kissinger may dominate foreign policymaking, even though they do not occupy the top leadership position. Patterns of deference to acknowledged experts must be tracked by the analyst in order to identify which leaders bear further examination in any particular case, and this constitutes a sixth condition to consider.
A seventh variable concerns the style of leadership: Does the leader like to delegate information processing and decision tasks? Or does the leader prefer to sort through the intelligence himself or herself, providing a much more hands-on style of leadership? There are pros and cons to each style, but clearly the hands-on style of leadership lends itself to a much more prominent effect of the leader’s personality on decisionmaking, such as was the case with Jimmy Carter. Carter was a micro-manager, so determined to read every policy paper produced by the executive branch—instead of delegating that task to his staff—that he undertook a speed reading course. He admitted, “I was sometimes accused of ‘micromanaging’ the affairs of government and being excessively autocratic, and I must admit that my critics probably had a valid point” (quoted in Plant, 2012). More recently, Australian Prime Minister Kevin Rudd’s inveterate micromanaging contributed to his own party voting to replace him as party leader—and therefore as Prime Minister—in 2010 (Gyngell, 2017).
Finally, a fuller exploration of the eighth contextual variable must wait until the next chapter, when we discuss group interactions. Groups, whether small or large, tend to evolve into contexts in which particular individuals play a given role on a fairly consistent basis. For example, one person may play the devil’s advocate role, while another views himself as a loyal “mind-guard.” Still others may view themselves as advocates of particular policies, or as the group’s diplomats, frequently brokering agreements. Examination of the top leadership should therefore study leaders not only in isolation, but also in group settings.
Before we can understand FPA scholarship on leaders, we must first adopt a language based in psychology that allows us to name and relate components of an individual’s mental framework. It must be acknowledged at the outset that there are many schools within the field of psychology, and many of the terms we will use here have subtle or not-so-subtle differences in definition and interpretation between these schools. Nevertheless, to effect the kind of analysis desired in FPA, we must start somewhere.
Figure 2.1 outlines the key concepts that we will be exploring in this chapter.
It is through our senses that our minds make contact with the world around us. Some psychologists have posited a mental capacity for the brief storage of sensory information as it is processed, usually a quarter of a second in duration. However, our senses take in vastly more information than the mind is ever capable of processing. If we label those sensory inputs perception, then it is clear we perceive more than we notice. The mind apparently builds a “filter” that helps it decide which sensory inputs are worthy of more detailed processing, processing we would call cognition. These filters might include stereotypes, biases, and heuristics. These are all shortcuts to help the mind decide which sensory inputs should be focused on in a given situation. Each person has an individually tailored set of filters that arise from the person’s larger experiences. Young children have fewer filters than adults, and often “see” more in a situation than their parents. I often ask my students if they can say what color shoes I am wearing without looking. The majority of students cannot. In their assumptions about what to pay attention to in a college classroom, the color of the professor’s shoes is considered to be unimportant. Therefore, although their retinas surely did register the color of my shoes as I walked around the classroom, their minds deemed the information irrelevant and filtered it out.
These perceptual filters can trip us up, however. In some cases, our filters don’t help us in a particular situation. For example, the serial killer turns out to be the nice, quiet man with an immaculate lawn next door. Our stereotypes about serial killers do not include such innocuous characteristics. In other cases, our filters are so strong that they prevent us from receiving accurate sensory perceptions. As Jervis notes, new information may be assimilated into existing images. For example, in one famous experiment, subjects were shown playing cards over multiple rounds and asked to identify them. At one point, the researchers substituted cards in which the hearts and diamonds were black, and the spades and clubs were red. At first, it was hard for the subjects to notice that something was amiss. Even when alerted to the mismatch between suit and color, they found it very difficult to play with the abnormal cards (Bruner and Postman, 1949). We perceive what we expect to perceive and may even ignore what our senses are telling us. In the famous Gorilla Experiment, for example, if primed to perform a demanding visual task (counting how many times people with a particular shirt color passed a ball in a large group), about half of subjects fail to notice a man in a gorilla suit walking through the group (Chabris and Simons, 2011; you can try this experiment for yourself, or on a friend, at www.theinvisiblegorilla.com). In a related experiment where subjects hefted two balls, one larger than the other but both weighing the same, most subjects reported that the larger ball was heavier. Our expectations clearly shape what we perceive to be real, sometimes overriding our own senses (D. Dunne, 2017).
In a very real way, then, our human capacity to be rational is bounded. Herbert Simon, the Nobel laureate, notes that our bounded rationality stems from our inability to know everything, think everything, and understand everything (including ourselves). We construct a simplified mental model of reality and behave fairly rationally within its confines, but those confines may be quite severe. Mental models are inescapable, and they are very useful in many circumstances, such as when we find ourselves in danger and must react instantaneously to save our own or others’ lives. Even so, they do have their downsides. They are hard to change—even when we are aware of them—and they are based only upon what we know. Mind-sets and categories based on these mental models are quick to form and resistant to change. Thus, we are attempting to reason through the use of mental hardware that is profoundly constrained. For example, let’s look at some common heuristics, or ways of processing information.
Many of the insights we overview in this section were first articulated and demonstrated experimentally by longtime collaborators Daniel Kahneman and Amos Tversky. Their article “Judgment under Uncertainty: Heuristics and Biases,” published in Science in 1974, is one of the most cited academic works in history (Tversky and Kahneman, 1974). Its publication initiated a remarkable wave of research across numerous disciplines, from economics to psychology, business, and even sports, and was foundational to the emergence of the paradigm called cognitivism. Tversky passed away in 1996; Kahneman was awarded the 2002 Nobel Memorial Prize in Economic Sciences for this work. (For a wonderful account of this incredible friendship, see Michael Lewis’s The Undoing Project, 2017.)
Other excellent works on heuristic fallacies include Richards Heuer’s The Psychology of Intelligence Analysis (1999) and Judgment under Uncertainty: Heuristics and Biases by Daniel Kahneman, Paul Slovic, and Amos Tversky (1982). Each of these works tackles the human brain as it is, rather than as we would like to believe it is. Our brains evolved over long millennia to use particular mental “machinery.” We have an almost limitless storage capacity in our long-term memory, but most of our day-to-day mental activity involves short-term memory and associative recall. Short-term memory has a limited capacity, usually defined as approximately five to seven items. Once you exceed the limits of your short-term memory, some of the items will be dropped from active consideration in your mind. These will be dropped according to some mental definition of priority. So, for example, though you may have vivid recall of a striking experience for several days, you may be unable to remember what you had for breakfast yesterday. After a week, even a vivid experience may fade, and you may only be able to remember generalities about the event. That is why it is not uncommon for two people who have lived through the very same event to disagree over the facts of what happened (something to keep in mind as you progress with your own research and decide how to weigh evidence when reconstructing decisionmaking episodes).
If deemed important enough, items in short-term memory can be stored in long-term memory. The advantage of long-term memory is that it is of almost limitless capacity (although unless the experience was traumatic, you are unlikely to be able to recover raw sensory data about a memory—what you will recover instead is an interpretation of the memory). The disadvantage is that usually the only way to retrieve such information is through associative recall. Have you ever tried to remember where you put your keys the day before, or what you named a computer file you created six months ago? What follows is typically an indirect and laborious process of remembering other things you were doing or thinking while you were holding your keys or working on the file. Oftentimes, we have to “sleep on it,” with the mind processing the retrieval request through the night and recalling it upon waking. Some create intricate “mind palaces” to find what they need more surely and quickly (Zielinski, 2014).
One common approach to overcoming this problem is to bunch several items in long-term memory together, into a “schema.” For example, you may have a schema about renewing your driver’s license, in which memories and knowledge about the process are bundled together and recalled together as a template. When the schema for renewing your license is brought to the fore, all the pieces will come, too, such as forms to be filled, the location of the office you need to submit the forms, and so forth. As Schrodt puts it, “Recall usually substitutes for reasoning” (Hudson, Schrodt, and Whitmer, 2004). This is so because the human brain is hardwired to find patterns in complexity. While recall and pattern recognition are almost effortless for a human being (processes Kahneman labels “fast thinking”), logic and deductive reasoning require the application of conscious mental energy (“slow thinking”). In fact, Kahneman describes mental life as comprising two agents, System 1 and System 2, “which respectively produce fast and slow thinking” (Kahneman, 2011b, 13). The interplay between these systems—one automatic, effortless, intuitive, and spontaneous; the other deliberative and effortful—has implications not only for our day-to-day life, but also foreign policy decisionmaking.
Humans even attempt to speed up “slow thinking” by developing “rules” to govern our mental activity, allowing us to become “cognitive misers” concerning our limited cognitive resources or expenditure of mental energy. Often these rules are shortcuts that allow for recall or interpretation with a minimum of inputs, thus minimizing reaction time. These heuristics usually help us; occasionally they can trip us up. Let’s look at a few examples.
Some of the most common heuristic fallacies involve the estimation of probabilities. Humans turn out to be pretty bad at this task, which is no doubt why the gambling business is so lucrative. The “availability fallacy” notes that people judge something to be more probable if they can easily recall instances of it from memory. Thus, if certain types of events have happened more recently, or more frequently, or more vividly, humans will judge these events to be more probable, regardless of the underlying causal factors at work. Another, the “anchoring fallacy,” points out that when trying to make an estimation, humans usually begin at a starting point that may be relatively arbitrary. After setting that initial estimate, people use additional information to adjust the probability up or down from that starting point. However, the starting point, or anchor, is a drag on the estimator’s ability to make adjustments to his or her estimate. In one experiment cited by Heuer, students were asked to estimate what percentage of the membership of the United Nations were African countries. Students who started with low anchors, say, 10 percent, never guessed higher than 25 percent despite additional information designed to help them estimate more accurately. On the other hand, students who started with high anchors, say 65 percent, could not lower their estimate by very much even with the very same additional information, settling on approximately 45 percent as their final estimate. Thus, although both groups received the same corrective information, their anchors limited the accuracy of their final estimates (Heuer, 1999).
Humans are also notoriously bad at the calculation of joint probabilities. Take the scenario where you wish to perform well on a test, and a series of things must occur for this to happen. You have to get up when the alarm clock rings (90 percent probability). Your car has to start (90 percent probability). You have to find a parking space in time (80 percent probability). And you have to perform to your capacity on the test (80 percent probability). Most will predict that the probability of your doing well on the test is about 80 percent. That is, they take the lowest single probability and extend it to the entire scenario. But this would be incorrect. The probability of this scenario is the joint probability defined as the product of the individual probabilities. The true probability of you doing well on the test is .90 × .90 × .80 × .80, or about 52 percent.
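The arithmetic behind the test-day scenario can be sketched in a few lines of code (the step names and probabilities are the illustrative numbers from the text, not data):

```python
# Joint probability of a scenario in which every step must succeed:
# it is the product of the individual probabilities, not the lowest
# single probability, which is what most people intuitively report.
steps = {
    "wake up when the alarm rings": 0.90,
    "car starts": 0.90,
    "find a parking space in time": 0.80,
    "perform to capacity on the test": 0.80,
}

joint = 1.0
for step, p in steps.items():
    joint *= p

print(f"Lowest single-step probability: {min(steps.values()):.0%}")  # 80%
print(f"Joint probability of success:   {joint:.0%}")                # 52%
```

Note how quickly the joint probability erodes: four steps that each seem safely likely combine into a scenario that is barely better than a coin flip.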
But probabilities are not the only thing that humans are not very good at evaluating. Humans are also fairly bad at evaluating evidence, which no doubt accounts for the persistence of even rudimentary scams and frauds in our societies. Humans are eager, even impelled, to seek causal explanations for what is happening in their environment. In other words, we gravitate imperceptibly towards “connecting the dots” even when there are no real connections. When you present a person with a plausible causal stream to explain a certain event, for example, “bad” cholesterol causes heart disease because it promotes inflammation and clogging of arteries, if the person “gets” the explanation—that is, if the person exerts effort to understand the explanation as given—it will be almost impossible to subsequently disabuse that person of that causal inference. Even if the explanation was a lie, the person would still cling to that causal understanding after being told it was a lie. Because it made sense to the person once, it would not stop making sense to him or her after such a revelation (for a dramatic example involving an apocalyptic cult, see Festinger, Riecken, and Schachter, 1956). Many conspiracy theories retain adherents for long periods of time because of this heuristic pitfall. Furthermore, if a person has a prior belief that two things are unrelated, he or she may not be able to perceive or register evidence of a relationship; likewise, if a person has a prior belief that two things are related, he or she may not be able to perceive or register evidence that there is no relationship (Fiske and Taylor, 1984, 264). Apparently, humans tune in to information that supports their beliefs and tend to ignore or fail to register information that is discrepant with their beliefs (Zimbardo and Leippe, 1991, 144), and humans interpret mixed evidence as supporting their prior beliefs (163).
In a recent experiment, those with strong beliefs about climate change who were presented with disconfirming facts adjusted their estimates of future temperatures by less than half as much as when the facts they were given confirmed their beliefs (Sharot and Sunstein, 2016). This speaks volumes about human ability to evaluate the evidence for an explanation.
Even more troubling is that the heuristic device of schema invites the mind to fill in any blanks within the template, even without the benefit of empirical investigation. For example, Hudson once had a student whose schema about the Soviets involved images that they were evil and had the goal of destroying the United States. As the events of the end of the Cold War transpired—the fall of the Berlin Wall, the transition to the Commonwealth of Independent States (CIS) and the breakup of the old USSR, the signing of treaties such as the Intermediate-Range Nuclear Forces (INF) Treaty and the Treaty on Conventional Armed Forces in Europe (CFE) that diminished the hair-trigger situation between the two nations, and so forth—Hudson could tell that her student was very uncomfortable. The student confided to Hudson that he felt the Soviets were deceiving the United States, that they would wait until they had lulled us into complacency, and then let fly all those missiles that they pretended to get rid of but were in fact stockpiling for just such an eventuality. In addition to the dismissal of ill-fitting information, as discussed above, he also “filled in the blank” that an evil power would never actually get rid of its weapons, even if it had signed an agreement to do so. His mind was asserting an empirical reality to fill in that blank in his schema, even though this “reality” was completely falsifiable (after all, the INF Treaty called for U.S. inspectors to be stationed at Votkinsk to oversee these weapons’ destruction).
Schemas can also develop on the basis of shared experience. In his book documenting the foreign policy dynamics of President Obama’s first term, Mann (2012) makes a point of highlighting how two distinct camps emerged within Obama’s foreign policy team early in his presidency. On the one hand, the more senior members of the team initially charged with overseeing key pillars of the foreign policy apparatus—Hillary Clinton as Secretary of State, Robert Gates as Secretary of Defense, Leon Panetta as CIA director, and James Jones as National Security Advisor—had their worldviews shaped by their personal and political experience of the Vietnam War and the Cold War. On the other hand, the formative experiences of the inner circle of foreign policy aides that Obama came to rely upon “were the Iraq War and the financial crisis of 2008” (Mann, 2012, xxi). This group, which Mann labels “the Obamians,” “self-consciously thought of themselves as a new generation in American foreign policy” (Mann, 2012, xxi) and included senior National Security Council staffers Ben Rhodes and Denis McDonough (who would later become Obama’s Chief of Staff), as well as Obama himself. Over the course of Obama’s presidency, the subtly different schemas of these groups were evident during key foreign policy decisions, dividing the team along often predictable lines.
This conclusion that humans are bad at processing empirical evidence because of our use of heuristics even applies to self-interpretation. Psychologists note that humans are terrible at figuring out why they themselves do what they do (Nisbett and Wilson, 1977, 231–59). Humans appear to have little or no access to their own cognitive processes, and attributions about the self are notoriously inaccurate. We cannot even effectively analyze evidence about ourselves. For example, Kruger and Dunning (1999) point out that students in the bottom quartile on grammar tests still felt they had scored above average even when they were allowed to see the test papers of the students in the top quartile. Similarly, Kahneman and Renshon (2007, 34) highlight that “about 80 percent of us believe that our driving skills are better than average.” Apparently, if you are not competent in a particular task, you are not competent to know you are not competent—and hence, no matter the feedback provided, everyone thinks of themselves as above average! This tendency to “naturally assume that everyone else is more susceptible to thinking errors” than we ourselves are is known as “blind spot bias” and functions as what Lehrer terms a “meta-bias” (Lehrer, 2012). And lest you think that your superior intelligence or cautious demeanor protects you from such cognitive oversights, consider Lehrer’s appraisal of recent research on cognitive biases: “smarter people … and those more likely to engage in deliberation were slightly more vulnerable to common mental mistakes” (see also R. Brooks, 2014).
The bottom line is that humans are not very picky about evidence, because their first priority is to “get” the explanation, that is, to understand their world. Stopping the explainer at every other word to demand empirical evidence for their assertions is not standard human practice. For example, researchers now ask whether the conventional distinction between “bad” and “good” cholesterol even makes sense. Other researchers are not sure that the inflammation in heart disease is caused primarily by the cholesterol ratios; they now wonder whether it isn’t low-level infections that are the chief culprit. Generally speaking, only a modicum of evidence is sufficient to “sell” a causal story. The most persuasive evidence, research shows, is evidence that is vivid and anecdotal, and resonates with personal experiences the listener has had. Abstract, aggregate data pales in comparison. When selling weight-loss products, a couple of good testimonials accompanied by striking before-and-after photos will outsell large-N trials every time.
This brings up a second problem with evidence that has to do with its representativeness. When we see those two weight-loss testimonials, our mind assumes that such results (if true) represent what the average person could expect from using the product. This is an erroneous assumption. The two testimonials may be the only two positive testimonials the company received.
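A toy simulation makes the representativeness problem concrete (all numbers here are hypothetical, chosen only for illustration): even for a product whose true average effect is zero, the two best testimonials drawn from a large customer base will look impressive.

```python
import random

# Hypothetical weight-loss product with a true average effect of zero:
# individual results vary randomly around no change at all.
random.seed(42)  # fixed seed so the simulation is reproducible
results = [random.gauss(mu=0.0, sigma=5.0) for _ in range(1000)]  # kg lost per customer

average_effect = sum(results) / len(results)   # close to zero
best_two = sorted(results, reverse=True)[:2]   # the cherry-picked testimonials

print(f"True average effect: {average_effect:+.1f} kg")
print(f"Cherry-picked testimonials: {best_two[0]:+.1f} kg, {best_two[1]:+.1f} kg")
```

The two extreme outcomes are real, but they tell us nothing about the average customer; selecting the maximum of a large sample guarantees striking results even when the product does nothing.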
Similarly, humans are predisposed to work within a given framework of understanding, which also limits their ability to evaluate the evidence for a particular explanation. In the aforementioned example concerning heart disease, if we stick to the framework of “bad” cholesterol and “good” cholesterol and of cholesterol-induced inflammation, the story outcome is predetermined. “Bad” cholesterol is going to be bad for you and is going to cause inflammation, and by golly we’d better do something about it. But if you start asking questions that upset the framework, the story gets fuzzier—what if there’s no valid reason to call one type of cholesterol “bad”? What if inflammation has many causes, and could these other causes be operating in heart disease? Asking such questions is going to cripple your ability to reach closure on a causal explanation and act, however. Because humans are hardwired to explain the world around them in order to feel a sense of control that provides a basis for (in)action, reaching such closure provides mental and emotional satisfaction. Therefore, it is not strange that humans are poor at evidence evaluation; they are more interested in the emotional relief of explanations than in the evidence.
Finally, our use of heuristics, as inevitable and natural as it may be, actually leads to the fallacy of “overconfidence.” When we first try to, say, make a prediction with limited information, we may feel unsure about its accuracy. As we obtain more and more information, our confidence in our predictions rises. Interestingly, psychological experiments have shown that this level of confidence is unrelated to the actual accuracy of our predictions. Confidence was related solely to how much information the predictor obtained. Perhaps this interesting emotional response is necessary in providing humans with enough confidence to act upon what they believe they know. But because confidence is uncorrelated with accuracy, mistakes will invariably be made, and one would expect a steep learning curve as a result. Or not: Philip Tetlock’s famous book, Expert Political Judgment: How Good Is It? How Can We Know?, shows that expert political judgments are usually no better than nonexpert judgments and that experts appear indifferent to that fact (Tetlock, 2006). Unfortunately, research has shown that most people tend to switch off the decisionmaking parts of their brain when interacting with an expert, such as a physician or a military general (Hertz, 2013).
Consider also that humans are prone to interpret overconfidence in another person as competence, and that human groups thus gravitate toward leadership by the most confident. Unfortunately, there is no correlation between confidence and competence (Chamorro-Premuzic, 2013; R. Brooks, 2014). Furthermore, psychologists find that overconfidence is highly gendered: it is expressed far more often by males (Kay and Shipman, 2014). Humans are thus primed to select for overconfident males as leaders even if they have not a shred of competence. To extend the point, humans seem especially drawn to tall, overconfident males with testosterone markers such as a strong jawline, and stubbornly regard them as more competent—regardless of their actual competence (Murray and Schmitz, 2011; Oh, Buck, and Todorov, 2019).
As expected, even when we know these things, we do not alter our behavior. Kahneman tells the story of how he empirically proved that there was absolutely no correlation between the performance of top Wall Street stockbrokers from year to year, but when he presented these findings at a conference with those same brokers and their bosses both present, the findings elicited no comment whatsoever—and certainly no change in the practice of awarding bonuses based on performance (Kahneman, 2011a). In the realm of foreign policy, Steve Yetiv has argued that the 2003 invasion of Iraq was a “war of overconfidence,” in which President George W. Bush and his senior foreign policy staff overestimated their chances of success (Yetiv, 2013, Chapter 4). Bush and his team were mistaken about the degree of support they would receive from global allies, overestimated how American forces would be received in Iraq, were overconfident about how quickly the mission could be accomplished, and misjudged how many troops would be required. For Yetiv, the sources of this overconfidence were numerous but included information problems, an ineffective media, the adoption of misplaced analogies with the 1991 Gulf War, and George W. Bush’s personal proclivity toward overconfidence (Yetiv, 2013: 64–69; see also Houghton, 2008). Notably, Yetiv’s account, while focusing on overconfidence, demonstrates how various cognitive biases often operate simultaneously, interacting in complex ways to jointly lead to a given decision.
Within the intelligence community, there is a concerted effort to train analysts to avoid the common and not-so-common heuristic fallacies. Sherman Kent, who helped create the first Office of National Estimates (ONE), developed what he called “the analytic code,” which embraced three maxims of intelligence analysis: (i) watch for confirmation bias; (ii) encourage dissent; and (iii) assign quantitative probabilities to clarify judgments of likelihood. Even so, National Intelligence Estimates (NIEs) failed to predict the construction of the Berlin Wall, Khrushchev’s ouster, and the timing of the 1968 invasion of Czechoslovakia, according to Robert Gates, who was once a young analyst himself (Scoblic, 2018). Though the ONE was disbanded and reconstituted as the National Intelligence Council (NIC) in the 1970s, the CIA named its analyst training facility the Sherman Kent School to honor Kent’s contributions to the collective understanding of heuristic bias and miscalculation.
In the same way that cognitive constraints affect reasoning, so do emotions. Though emotion is an important topic of research in psychology, its implications for foreign policy decisionmaking are only beginning to be explored. This is because most decisionmaking theories in IR have either ignored emotion or seen it as an impediment to rational choice. However, psychologists are now beginning to assert that decisionmaking depends upon emotional assessment. McDermott notes that “individuals who cannot reference emotional memory because of brain lesions are unable to make rational decisions at all” (2004b, 153). McDermott also points out that “emotions can facilitate motivation and arousal… . Emotion arouses an individual to take action with regard to an imagined or experienced event. Emotion can also direct and sustain behavior in response to various situations” (167). Emotion is one of the most effective means by which humans change goal emphasis. For example, you might be focused on getting to work on time, but if a car accident occurs in front of you, emotional arousal will sweep that goal from your mind so that you can concentrate on the more immediately important goal of avoiding the accident. Our motivations, such as the need for power, the need for affiliation, and the need for achievement, are all laden with deep emotion (Winter, 2003). The effects of emotion on decisionmaking are diverse, and not all of them are yet understood. Intangible inputs to rational choice equations, such as level of trust, are clearly emotionally based. Studies have also shown that emotion-based attitudes are held with greater confidence than those that are not connected to emotion.
Future advances in the study of emotion will be facilitated by new methodologies. For example, developing fields of neuroscientific inquiry help us to understand that emotion is as important to decisionmaking as cognition is. “Seeing” the limbic system “light up” on an fMRI scan as a person makes a difficult decision gives us a whole new way of thinking about decisionmaking. McDermott is optimistic that “neuroscientific advances might bridge rationally and psychologically-oriented models” (2004b, 186). Furthermore, we are also beginning to understand that genetic and epigenetic factors regulate the expression of a variety of neurotransmitter systems that may affect mood and behavior. With a combination of neuroscience and DNA sequencing, we are starting to see studies that, for example, assert that there is a genetic basis to conservatism and that the brain functions of self-identified conservatives and liberals are slightly different (Hatemi and McDermott, 2011). (We discuss genetics more in the forthcoming section “The Body and Decisionmaking.”)
Psychologist Barry Schwartz and colleagues have described the paradox of choice, wherein proliferation of choices leads to lower satisfaction and greater regrets than fewer choices (Schwartz, 2004). This may even lead to a situation where, frustrated by the plethora of choices available, decisionmakers find it impossible to make a choice and so do nothing. For example, Schwartz notes that one of his colleagues discovered that as the number of mutual funds in a set of retirement investment options offered to employees goes up, the likelihood they will choose any mutual fund plan actually goes down (McDermott, 2004b, 27).
Researchers have also distinguished between emotions that stymie learning and belief change, and those which facilitate it. For example, Dolan (2016) demonstrates that in war, the emotion of frustration does not lead to change in military tactics/strategy, but the emotion of anxiety does. Through case studies tied to the Russo-Finnish Winter War, Dolan is able to show that expected setbacks that produced only frustration did not lead to change in military leaders’ mental models of how to win the conflict. On the other hand, unexpected negative events did produce such change (Dolan, 2016).
Other psychologists, such as Daniel Gilbert, suggest that humans really do not understand their own emotions. When asked to estimate how a particular event would affect their lives for better or worse (such as winning $1 million on a game show), respondents overestimated both how much such an event would affect them and for how long. Each person appears to have a happiness “set point” and, over time, will return to that set point no matter their circumstances. Both bad and good events turn out to have less intense and briefer emotional effects than people generally believe. Studies have shown that over time, lottery winners were not happier, and persons who became paraplegics were not unhappier, than control groups (Kahneman, 2000, 673–92). Both midwesterners and Californians describe themselves as similarly happy, but both groups expect that Californians will report themselves happier. Gilbert calls this misunderstanding of happiness “miswanting”: the inability to really understand what one’s own feelings would be in a particular situation. For example, Gilbert says, “If you ask, ‘What would you rather have, a broken leg or a trick knee?’ they’d probably say, ‘Trick knee.’ And yet, if your goal is to accumulate maximum happiness over your lifetime, you just made the wrong choice. A trick knee is a bad thing to have” (Gertner, 2003, 47).
This misunderstanding of our emotions is especially acute when comparing “hot” emotional states (rage, fear, arousal) to more composed emotional states. In experiments concerning unprotected sexual behavior, people in composed emotional states would generally state that they would never engage in such risky behavior. But when subjected to arousal, most would, in fact, so engage. In a sense, our decisionmaking has the potential to produce profoundly different outcomes depending upon our emotional state. And it also turns out that we are poor at predicting such differences in our own behavior.
Our circumstances may affect our emotions in other ways as well. For example, individuals in positions of power begin to lose the ability to empathize with others (Hogeveen, Inzlicht, and Obhi, 2014). This can be problematic for foreign policy leaders, whose inability to empathize with enemies and allies can have outsized consequences for the world. Furthermore, even an average individual will feel less compassion for someone suffering from an affliction they themselves have overcome. For example, in recent studies, those who had experienced joblessness in the past were less compassionate toward those currently unemployed (DeSteno, 2015). Conversely, powerlessness has its own effects on decisionmaking. In a series of groundbreaking experiments, researchers found that feeling powerless, here operationalized as poverty, made long-term payoffs fade almost completely from subjects’ cost-benefit calculations. However, powerlessness can come in many guises, and in the foreign policy realm, a situation of relative international powerlessness may also change leader calculations (Thompson, 2013).
Humans also seem to be hardwired to detect unfairness, and perceived unfairness elicits a strong, persistent negative emotional response. When members of a team are presented with the choice to have one of their members win $50 and the rest win $5 each, or to have none of their team members win anything, most choose the latter. They would rather not gain at all than acquiesce to an obviously unfair situation in which they would still gain something. Even children make sacrifices when they see an unfair situation where others have less than they do (McAuliffe et al., 2017). Monkeys, too, when rewarded with cucumbers rather than the juicy grapes they see other monkeys receiving, are likely to simply throw the cucumber back at the researcher because of their emotional reaction to unfairness. A recent study has shown that “air rage” incidents are far more likely to occur in planes with first-class cabins where economy customers have to walk through the first-class cabin to reach their own seats (Kristof, 2017). These findings may account for why issues of relative gains and losses, and not simply absolute gains and losses, often derail ally relationships and peace treaties.
Furthermore, emotions affect our tolerance of risk. Prospect theory has shown that losses hurt more than gains please. After a substantial loss, people are much more willing to take risks to regain what they perceive to be theirs, much as a gambler who loses may bet more intensively in an effort to recoup losses. Furthermore, people react differently to certain gains as opposed to probable gains. If they have not experienced a prior loss, humans are much more apt to prefer certain gains to probable gains, even if the probable gains would be far larger if attained. Thus, depending on the context of loss and the emotional pain it has inflicted, human beings may act in a more risk-averse or a more risk-seeking way. This certainly has applications for choice in international politics, as Jack Levy and others have shown (see, for example, Levy, 1997; McDermott, 2001). It will be much easier, for instance, to deter an adversary from making gains than to deter them from recovering losses. Alex Mintz and his colleagues have shown that leaders may rule out options that rank low on certain important dimensions because loss on those dimensions is emotionally intolerable (perhaps because it is politically intolerable); that is, the anticipated gains are “noncompensatory” for the losses expected (Mintz, 2004).
Of course, what we are seeing in these examples is an interplay between how cognitive biases and emotions affect decisionmaking. A recent example of research that has sought to explicitly link these into a “psychological learning model” is Michael D. Cohen’s When Proliferation Causes Peace (2017a), which examines escalation patterns between nuclear-armed states. Cohen finds that “the critical variable is not the possession of nuclear weapons nor a specific nuclear doctrine but whether a leader has personally experienced the imminent prospect of total destruction.” The experience of fear, of “reaching the nuclear brink,” is, according to Cohen, what ultimately leads such leaders to authorize more restrained policies. His book demonstrates how the variations we see in the foreign policies of nuclear powers can be explained via this “deeply emotional reaction” (Cohen, 2017a, 9; see also Cohen, 2017b).
Emotions are not the only thing capable of altering our normal cognitive function. Our cognition operates in the context of a physical body and what happens to that body can affect our decisionmaking (an excellent overview is McDermott, 2007).
Mental illness can strike leaders. Indeed, political psychologist Jerrold Post believes that certain mental illnesses, such as narcissism and paranoia, are overrepresented in the population of world leaders (2003a). Narcissists, for example, may be more willing than a normal person to pay any price to become a leader. Post also hypothesizes that the stresses and power of national leadership may cause a predisposition to mental illness to bloom into a pathological state, especially in systems where the leader’s power is unchecked. This was true, for example, in the case of Saddam Hussein, whom Post diagnoses as a malignant narcissist. As Saddam Hussein’s power within his society became ever greater, his mental illness began to overtake his normal powers of judgment. He could not admit ignorance and so could not learn. He could not brook dissent, and so received no dissonant information from his advisors. His power fantasies, lack of impulse control, willingness to use force, and absence of conscience warped his decisionmaking to the point where what was good for Saddam Hussein was defined as the national interest of Iraq. An unhealthy obsession with power and control appears as part of the mental illnesses most often suffered by world leaders, with one estimate that up to 13 percent of world leaders express this trait (D. Weiner, 2002). Narcissism may not be a completely bad thing in a leader, however. Watts et al. (2013) expert-coded narcissism for 42 U.S. presidents and found that what they call “grandiose narcissism” was associated with greater presidential success, defined in terms of crisis management, public persuasiveness, margin of electoral victory, and initiation of legislation. Of course, they also found such presidents to be at greater risk for unethical behavior and impeachment (Lilienfeld and Watts, 2015).
The body’s experience of stress may also alter decisionmaking. Stress’s effect on performance appears to follow an inverted-U curve: our mental acuity seems best when we are under a moderate amount of stress, and we function at less than our peak capacity under higher (and, ironically, lower) levels of stress. Chronic, high-level stress not only impairs judgment but induces fatigue and confusion. The body’s hormonal, metabolic, and immune functions are also compromised by chronically high levels of stress. Under chronic high stress, the mental effort required to think something through may seem unattainable. Studies show that a rat exposed to repeated uncontrollable stressors cannot learn to avoid an electric shock: the stress has caused it to become helpless, incapable of mustering the motivation to expend the mental energy needed to learn to avoid pain (Sapolsky, 1997, 218). The predisposition may be to decide a matter quickly on gut instinct, or not to make a decision at all. And it is interesting to consider common sources of stress: an overabundance of information is a reliable stressor, one that probably plagues most foreign policy decisionmakers every day. One study asserts that the life spans of American presidents are significantly shorter than those of controls, and that most have died from stress-related causes (Gilbert, 1993).
Though it is always a matter of speculation whether our leaders have used illicit drugs, there is no shortage of evidence that leaders commonly use licit drugs, such as alcohol, caffeine, and prescription medications. A fairly famous case in point is that of Richard M. Nixon, who, while abusing alcohol, was also self-medicating with relatively high doses of Dilantin in addition to taking prescribed medication for depression and mood swings. Dilantin causes memory loss, irritability, and confusion. President George H. W. Bush’s use of Haldol as a sleep aid around the time of Desert Storm was also a focus of speculation concerning its effects on his decisionmaking. President John F. Kennedy’s use of steroids and high-dose pain medication for his back problems is not as well known as his suffering from Addison’s disease, but may also have affected his cognition. Equally troubling is the early twenty-first-century practice of providing stimulant and sleep aid prescriptions to American troops stationed in battle zones. According to Friedman (2012), such prescriptions rose 1,000 percent in the five years between 2005 and 2010. Friedman asserts that such abuse of these medications may be making posttraumatic stress disorder (PTSD) more likely, and more severe, than otherwise, with clear ramifications for behavior and decisionmaking.
Physical pain, and suffering from disease and its treatment, must also be mentioned as bodily experiences that may alter decisionmaking. Living with high levels of chronic pain often induces irritability and frequent changes of opinion. Certain types of pathology, such as cerebral strokes, may in fact change cognitive function permanently, as occurred with President Woodrow Wilson in the last part of his presidency. Recent research points to a syndrome of lowered impulse control in patients who have undergone bypass surgery, ostensibly due to the mechanical rerouting of the bloodstream. The devastating side effects of chemotherapy and radiation treatment can cause temporary depression. But we must not forget that even ordinary physical ailments, such as jet lag, the flu, and gastric distress, may be distracting and serve to diminish acuity.
Fatigue deserves a special mention because new research indicates that its effects on behavior are much more striking than had been supposed. For example, Tierney has reported that prisoners appealing for parole are much more likely to be granted it when the judge is feeling fresh—such as early in the morning or immediately after lunch—than when the judge is tired, such as right before lunch or later in the afternoon. Fatigue makes complex decisions feel overwhelming, and the tired decider determines that inaction is better than action. Risky shortcuts for avoiding complexity may also be favored when one is very tired. Fatigue is also associated with lapses in self-discipline, which is why grocery stores place candy by the checkout aisles: our bodies crave the glucose that restores our willpower and acuity. Those jellybeans on President Ronald Reagan’s desk, in light of this research, are looking like a useful decision aid (Tierney, 2011).
Many world leaders are elderly. Aging may bring wisdom, but research tells us that aging may also bring rigidity and overconfidence, difficulty in dealing with complexity, and a preference for extreme choices. Furthermore, research shows that long-term memory storage is impaired in the elderly, probably due to lower quality and quantity of sleep, making long-ago memories that are already in storage seem fresher than memories from even six months earlier (Carey, 2013). Once again, the hardware we have been given in the form of our embodied mind places significant constraints on our reasoning.
Genes and epigenetics may also affect decisionmaking. There are specific alleles that have been linked to authoritarianism, impulse control, and extroversion, for example. Funk et al. (2013), in twin studies, find that about 56 percent of self-identified political ideology is explained by genetic factors. Hatemi et al. (2014) also find a wide variety of ideology-related attitudes related to particular genetic single-nucleotide polymorphisms (SNPs) in their review of the literature. Even behaviors, such as voting behavior, have been linked to genetic factors (Hatemi and McDermott, 2012).
Sex hormones also apparently play an important role in decisionmaking. Testosterone can produce a sense of unwarranted overconfidence after success and then unwarranted pessimism after failure. For example, quite a few observers have noted the overwhelming predominance of males in the decisionmaking that produced the Great Recession of 2008. Behavioral economists such as Robert Shiller argue that emotional factors, such as the fear of being left out, or optimistic gut feelings, or media hype producing a sense of confidence and control, all substitute for reasoned analysis on the part of investors, especially if they are male. “I can present my research and findings to a bunch of academics and they seem to agree,” Shiller said. “But afterward at dinner, they tell me they are 100 percent in stocks. They say: ‘What you argue is interesting, but I bet stocks will go up. I have this feeling’ ” (Uchitelle, 2000, 1). John Coates and Joseph Herbert (2008), publishing in the Proceedings of the National Academy of Sciences, found that testosterone levels correlated significantly with risk taking among stock market traders. Victories on the stock floor led to higher levels of testosterone and higher levels of risk taking. Coates comments, “Male traders simply don’t respond rationally” (Dobrzynski, 2008). Male hormonal fluctuations may similarly be affecting foreign policymaking, since over 90 percent of world leaders are men.
The need to maintain male gender performance, fueled by sex hormones, is also a factor to consider. When U.S. President Donald Trump met French President Emmanuel Macron for the first time at a NATO (North Atlantic Treaty Organization) summit in Brussels in May 2017, the world witnessed a very long and apparently very painful handshake between the two of them. Their hands were white by the time they let go of each other (Smith, 2017). Trump withdrew first, so the next day when they met again, Trump took Macron’s hand and then pulled him suddenly toward him, forcing Macron into an awkward position. In the video, Macron can be seen trying to extricate himself; Trump finally let go, but as Macron turned to leave, Trump patted him on the back as one would pat an inferior. These handshakes and pats were clear male dominance contests and displays, which have been noted not only in humans but also in many higher-order animal species. Macron even clarified his motivation for the press: “You have to show you won’t make little concessions, even symbolic ones. It’s not the be-all-and-end-all … but a moment of truth. I don’t miss a thing, that’s how you get respect” (quoted in Collman and Smith, 2017).
The study of women’s effect on foreign policymaking is in its infancy because there have been so few female heads of state or foreign ministers (McGlen and Sarkees, 1993; Bashevkin, 2018). However, there is a large literature on women and leadership in psychology, sociology, and even business management to serve as a springboard for work in FPA. For example, many studies in those fields point to higher levels of risk aversion among women compared to men, and such findings surely have implications for FPDM (see Eckel and Grossman, 2008, for a good overview). One FPA-relevant study of 22 democracies, by Michael Koch and Sarah Fulton (2011, 1), found that, when adjusting for measures of party control in the legislature, “increases in women’s legislative representation decreases conflict behavior and defense spending, while the presence of women executives increases both.” We anticipate that the question of sex-based influences on FPDM will garner more attention in the future.
The particulars of the situation in which the person finds himself or herself are also very pertinent to the final choice of action. One germane characteristic is the presence or absence of others. For example, when a person has been seriously injured, psychologists have shown that the actions of bystanders depend on how many bystanders there are. Counterintuitively, the greater the number of bystanders, the less likely it is that someone will come forward to help the injured person. Everyone among the bystanders is thinking, “Surely someone in this crowd is more qualified than I to help this person,” and so they fail to act. EMT (emergency medical technician) training emphasizes that the person who does step forward to help (finally) should make specific assignments to bystanders: “You there, call the police”; “You there, get a blanket out of your car”; and so on. In other studies, individuals took their cues from those around them in interpreting a situation. In a room filling with smoke, only 10 percent of subjects seated with others who had been instructed to act indifferently to the smoke treated the situation as an emergency. When seated alone, 75 percent did (Tippett, 2016). Similar incidents can be found in the realm of security policy: in July 2001, Special Agent Kenneth Williams in the Phoenix FBI office was concerned about the number of individuals “of investigative interest” attending civil aviation schools in Arizona. Though his memo did reach FBI headquarters, it was not acted upon.
Pressures to conform are also part of the influence of others’ presence. A high school kid may find that everyone in his circle of friends drinks alcohol; the resulting social pressure may be so great that the student will begin to drink alcohol even if he has no personal desire to do so, or even if he actively does not want to drink. This can work in positive ways, as well. If you want to raise voting participation, convince potential voters that everyone else has already voted and they wouldn’t want to be left out (D. Brooks, 2013).
In a series of famous experiments in the 1950s, Solomon Asch assembled groups of male college students in which all but one person was actually working for Asch. The groups were asked to judge the relative lengths of lines, and the real subject would always answer last. When the others in the group gave obviously and unmistakably erroneous answers, over 70 percent of real subjects conformed at least once to the erroneous answer; only 25 percent of the real subjects never conformed (Zimbardo and Leippe, 1991, 56–57). The need for social acceptance is very deeply rooted in most human beings and may cause abnormal or even irrational behavior in many individuals, given a relevant social situational context.
Furthermore, chronic ostracism has outsized effects on decisionmaking. In a recent experiment, researchers found that those who were socially excluded in a video game context expressed a greater willingness to fight and die for the nonreligious causes they had previously identified as important to them, as compared to their willingness before the video game. The humiliation of being ostracized from a group was painful in the extreme, so painful that violence became more attractive to them (Pretus et al., 2018). In fact, researchers have found that those experiencing such social exclusion actually feel the temperature in the room is colder than it really is and their skin temperature drops (Ijzerman and Saddlemyer, 2012).
There is also the issue of time constraints. The reaction to a situation is going to be somewhat different if it is an emergency-type situation in which action must be taken quickly. There may not be time for an extensive information search; there may not be time for extended deliberation. In such a situation, the role of emotions, or “gut feelings,” may be prominent. In a threatening situation with time constraints, even more basic responses, such as the “fight or flight” (male) or “tend and befriend” (female) reactions, may occur without much conscious reasoning.
The stakes of the situation are also formative. When one is risking nuclear war, a more careful deliberation process may occur than when a situation is routine and of little consequence. Furthermore, gains and losses that arise from a situational context may be processed differently in the human brain. As we have seen, prospect theory tells us that humans do not like situations where one alternative is a certain loss. If I gave you a choice between losing $5 for sure, or betting $5 in a gamble with 1,000-to-1 odds of keeping your $5, you would always choose the gamble over the sure loss, though there is little practical difference in outcome. Humans also prefer sure wins to riskier higher gains. If I offered you a choice of $5 or a 1 in 100 chance of winning $500, you would probably take the $5. Prospect theory also tells us that previous wins and losses affect our subsequent behavior. If I have just experienced a sure loss, I will be more willing to engage in riskier behavior in the next round of play to make up my previous loss (Thaler, 2000). An interesting corollary of prospect theory with relevance for international negotiations is that we process the concessions of others as having far less value than any concessions we ourselves make (McDermott, 2004b). Psychologists believe the discounting of another’s concessions may be as high as 50 percent, meaning that the other person would have to concede twice as much to make the concessions feel as valuable to you as the concession you are making.
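The asymmetries described above can be sketched with the prospect-theory value function of Tversky and Kahneman (1992). The following is a minimal illustration, not a model drawn from the sources cited in this chapter: the parameter values (α = 0.88, λ = 2.25) are Tversky and Kahneman’s published estimates, the equal-expected-value gambles are hypothetical examples chosen for clarity rather than the exact figures used in the text, and probability weighting is omitted for simplicity.

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value of an outcome x (Tversky & Kahneman, 1992).

    Gains are valued concavely (x ** alpha); losses are valued convexly
    and weighted by the loss-aversion coefficient lam, so a loss hurts
    roughly 2.25 times as much as an equal gain pleases.
    """
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def prospect(outcomes):
    """Value of a simple gamble given as (probability, outcome) pairs."""
    return sum(p * value(x) for p, x in outcomes)

# Loss aversion: losing $5 hurts more than gaining $5 pleases.
print(round(value(5), 2), round(value(-5), 2))

# Risk aversion in gains: a sure $5 beats a 50% chance of $10 (equal EV).
print(prospect([(1.0, 5)]) > prospect([(0.5, 10), (0.5, 0)]))    # True

# Risk seeking in losses: a 50% chance of losing $10 is preferred
# to a sure $5 loss (equal EV).
print(prospect([(0.5, -10), (0.5, 0)]) > prospect([(1.0, -5)]))  # True
```

Run as written, the sketch reproduces the qualitative pattern the paragraph describes: the same $5 swing is valued at roughly +4.1 as a gain but about −9.3 as a loss, the sure gain beats the equal-expected-value gamble, and the gamble over losses beats the sure loss.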
Even the physical environment can affect decisionmaking. In chapter 1, we noted how even the color of a room can change the decisions made. But there are many other physical contexts that can alter behavior. For example, being in a crowded environment makes one less prosocial. Blue lights deter crime. People act more honestly when there is a mirror or a picture of a pair of eyes nearby. We litter more when there is already litter on the ground (Alter, 2013). The shape of a conference table can alter the willingness of people to speak; the amount of sunlight reaching a room also alters the conversation. Many architects and urban planners are taking the physical environment much more seriously as a result. Perhaps it matters more than we might think what kind of room a nation’s National Security Council meets in.
Social roles and rules can also affect decisionmaking, especially as they tie in with existing schemas. Hudson helped to organize a conference once, and in the middle of one of the presentations, a member of the audience stood up and began to verbally harass the speaker. Now, this was not a large and public group, but a small, private group of approximately fifty persons, where, according to social rules, such aggressive heckling would typically not take place. Most of the participants simply sat there, wondering what to do. But one member of the audience was a security contractor for the government. He got up, deftly pinned the man’s arm behind his back without hurting him, escorted him from the room, and made sure that he left the building. His social role gave him a precise and effective schema for handling a situation that had so perplexed the other members of the audience.
Finally, the personal stakes for the leader are always part of the situational context for foreign policy decisionmaking. While we will discuss domestic politics more fully in a later chapter, here we note that there may be life-and-death stakes for some leaders depending on their decisions. If foreign policy decisions are likely to lead to the ouster of a leader, as the result of war or economic crisis, that leader may rightly worry about being killed by his or her successors. The decision to keep fighting even when all seems lost, for example, may result less from a leader’s nationalism than from a leader’s fear of a sudden and violent death (Debs, 2016).
Though all of us possess the type of cognitive constraints enumerated above, we are not all the same. Each of us is a unique mix of genetic information, life experience, and deeply held values and beliefs. Political psychologists who study world leaders are interested in these deeper elements of personality as well. We have spoken of how perception is filtered through to cognition, but a person’s reaction to a cognition in a particular situational context—the attitudes (easily accessed mental judgments or evaluations) that will shape his or her immediate response—is largely determined by that person’s mental model of the world. The model contains elements such as beliefs, values, and memories, which are drawn upon to form these attitudes. We have already examined the characteristics of memory: short-term, long-term, and memory “schemas.” However, we still need to say a few words about beliefs and values.
Beliefs are often called attributions in the psychological literature. These are beliefs about causality in the world. For example, person A might believe that when his neighbor B mowed down a flower in A’s yard that was very near their joint property line, B was acting out of malicious intent. “He mowed down the flower because he holds malice toward me and acted on that malicious intent.” A different person in A’s shoes might believe that B’s mind was on other things and the mowing-down of the flower was accidental, not intended, and not even noticed. Still another person might believe that B was impaired by alcohol when mowing his lawn and attribute the flower-mowing to alcohol abuse. Why things happen, or what causes what, are crucial elements in our understanding of the world.
Psychologists often speak of a “fundamental attribution error,” fundamental in this case meaning common to virtually all humans. Almost all of us attribute our behavior to situational necessity, but the behavior of others to free choice or disposition. Thus, in the example above, if we had mowed our neighbor’s flower down, we would tend to think it was because we had no choice—but if he mowed our flower down, we would tend to think that he wanted to mow it down. One could see how this fundamental attribution error could play out in international relations: North Korea feels it has no choice but to build nuclear weapons given U.S. policy; the United States, on the other hand, believes that North Korea is building nuclear weapons not because it has to, but because it wants to. The North Koreans believe that the U.S. policy of denuclearization of North Korea is a choice based on antipathy; Americans believe their stance is forced by the situation of having to protect themselves and allies from a madman intent upon obtaining nuclear weapons and long-range delivery capabilities.
The fundamental attribution error almost led to war with the Soviets during the Reagan administration. A recently declassified top-secret intelligence review concluded that the Soviets interpreted the November 1983 NATO Able Archer exercises as cover for the launching of a nuclear first strike against the Soviet Union. A few months earlier, Reagan had called the USSR an “evil empire,” and U.S. GLCMs (ground-launched cruise missiles) had begun to be placed in Europe. Furthermore, the KAL jetliner had been shot down on September 1, 1983, and on September 26, the Soviets’ early warning radar falsely reported a five-missile ICBM (intercontinental ballistic missile) launch by the United States. During the exercise, NATO planes were loaded with dummy warheads and there were live mobilization exercises, unlike in previous years. Furthermore, the Soviets were convinced that the United States would launch a first strike if their VRYAN computer simulation of Soviet strength fell below 60. At the time of the 1983 Able Archer exercise, the number was 45. After Reagan was briefed on the extraordinary precautions the Soviets were taking to prepare for war during Able Archer, he wrote in his diary on November 18, “I feel the Soviets are so defense minded, so paranoid about being attacked that without being in any way soft on them, we ought to tell them no one here has any intention of doing anything like that. What the h--l have they got that anyone would want” (quoted in Hoffman, 2015). In Reagan’s later memoir, the fundamental attribution error was clear for all to see:
Three years had taught me something surprising about the Russians: Many people at the top of the Soviet hierarchy were genuinely afraid of America and Americans. Perhaps this shouldn’t have surprised me, but it did. In fact, I had difficulty accepting my own conclusion at first… . I think many of us in the administration took it for granted that the Russians, like ourselves, considered it unthinkable that the United States would launch a first strike against them. But the more experience I had with the Soviet leaders and other heads of state who knew them, the more I began to realize that many Soviet officials feared us not only as adversaries but as potential aggressors who might hurl nuclear weapons at them in a first strike. (quoted in Hoffman, 2015)
How lucky we all are that the fundamental attribution error did not produce nuclear war in 1983.
Values, our final component of the mental model, may be created fairly early in life. Values refer to the relative ranking individuals use to justify preferring one thing over another. These values cannot exist without attribution, and attribution cannot exist without memory of experience, but probably it is values that allow us to make judgments—to hold attitudes in a particular situation that will lead to our speech and behavioral actions. Values, in a sense, “energize” our mental model. Values are also very much influenced by our motivations and emotions. “Values” are often used when discussing morality: we “value” honesty and prefer it to dishonesty, and so we are not going to lie in situation X. But values may also be about things that may have little reference to moral issues: a president may value the advice of his or her ANSA (special assistant to the president for National Security Affairs) over the advice of the Secretary of Defense. In situation X, then, the advice of the ANSA may be more influential on the president’s decision than the advice of the secretary of defense. Values have also been linked to behavioral preferences in foreign policy. Rathbun et al. (2016) find that conservation values are linked to what they term “militant internationalism” in foreign policy, while universalist values are associated with multilateralism in foreign policy.
To summarize a bit at this point, perceptions are filtered, and only certain perceptions become cognitions. Cognitions are both new inputs and a function of the existing mental model that makes them possible in the first place. The mental model itself is quite complex, containing previously constructed elements such as attributional beliefs (beliefs about what causes what), values, and norms created or assimilated from the larger cultural context, and memories, along with a categorization and relational scheme probably unique to the individual that allows the model to both persist and change over time.
Important to this conceptualization is the understanding that change in any part of this system of perception/cognition/mental model/attitude can lead to change in other elements. Belief change can cause attitude change; attitude change can cause behavioral change; change in cognition can cause attitude change; attitudes and cognitions can even change beliefs (Zimbardo and Leippe, 1991, 34). Indeed, the subfield of behavioral economics attempts to “hack” this insight for prosocial goals. Under the Obama administration, a newly created Social and Behavioral Sciences Team run out of the White House encouraged government agencies to experiment with cognitive approaches to “nudging” humans in a better direction. In a newspaper article reporting on the activities of this team, Appelbaum (2015) explained how, during one experiment:
Some vendors who provide federal agencies with goods and services as varied as paper clips and translators were given a slightly different version of the form used to report rebates they owe the government. The only difference: The signature box was at the beginning of the form rather than the end. The result: a rash of honesty. Companies using the new form acknowledged they owed an extra $1.59 million in rebates during the three-month experiment, apparently because promising to be truthful at the outset actually caused them to answer more truthfully.
But there is more to human beings than cognition. While we can conceptualize the mental model’s structural components to be beliefs/attributions, values, and memories, the mental model is also shaped by the personality of the leader, with personality being the constellation of traits possessed by the leader. Though personality is undoubtedly shaped by one’s experiences and background, it is also true that some elements of personality seem genetically determined. For example, scholars now assert that a predisposition toward social conservatism may be inherited (Hatemi and McDermott, 2011). Specific traits of personality might include the person’s overall level of distrust of others, the individual’s level of conceptual complexity in understanding the world around him or her, the individual’s level of loyalty to relevant social groups (such as the nation), and the individual’s degree of focus on task completion. Other traits might include energy level, sociability, emotional stability, or the degree to which the individual can control his or her impulses.
Furthermore, we cannot overlook the broad influence of emotions, motivations, and the state of the body on personality, as well as on mental constructs formed and even on cognitions. We have previously discussed emotions and the state of the body, but we must also mention here that there are several psychological models of human motivation. One conceptual framework that has recently been applied to world leaders is that of David Winter, based upon previous work of McClelland (1985). Winter postulates three fundamental human motivations, which can exist to greater or lesser degree in any individual. These motivations include need for power, need for affiliation, and need for achievement. For example, according to Winter’s scoring system (1990), the strongest motivation for John F. Kennedy was need for achievement. But these motivations are not one-dimensional. Nixon’s need for affiliation was almost as great as his need for achievement, and Nixon rates rather average on need for power in Winter’s scoring.
The deeper element of character may contain underlying structural parameters of the individual’s personality. Character is relatively underconceptualized in psychology, but most psychologists use the term to refer to some deep organizing principles of the human psyche. One example could be the individual’s predisposition toward abstractive versus practicalist reasoning. Another example might be integrity, here meaning the degree to which constructs, emotions, beliefs, and attitudes are consistent in the individual. A related concept might be the degree to which the individual is able to tolerate dissonance between beliefs and action. Such dissonance is often termed cognitive dissonance, and this concept can inform our concept of mental models.
To understand the concept of cognitive dissonance, it is useful to consider an example. Suppose a person is absolutely convinced that smoking is harmful. And yet that person smokes. If the person’s deep character is not shaken by this inconsistency because his or her character has a high tolerance for it, the person may simply continue both to smoke and to think it will cause harm. However, if the person’s character has a low tolerance for inconsistency, the person may be forced either to change his or her actions and stop smoking, or to change, add to, or delete certain attributional beliefs about smoking. Interestingly, empirical study seems to demonstrate that the likeliest course of action in a case of cognitive dissonance is a change in belief, as it is less costly than a change in behavior.
Most empirical work in psychology derives from experiments and simulations, some of which are embedded in survey instruments and some of which take place in laboratory settings. Most work examining particular individuals’ psychology is performed using standard psychological profile testing and/or in-depth psychoanalytic examination. All of it is fascinating. However, its applicability to the assessment of the personalities and views of world leaders is obviously limited. Most leaders refuse to take personality tests. Most leaders refuse to participate in psychoanalysis. Some of us are old enough to remember when Thomas Eagleton had to drop out as a vice presidential candidate because years previously he had visited a therapist to help him cope with a family loss (and, worse yet, he had undergone electroshock treatments). He also happened to shed a few tears once during an interview that touched on that loss. There are real costs to a leader letting someone assess his or her personality and views. As a result, there are several FPA scholars who do use experiments and simulations to probe general psychological phenomena in FPDM—for example, the decision board approach of Alex Mintz et al. (1997), or the FPDM simulations of the ICONS Project (International Communication & Negotiation Simulations Project) (ICONS, 2004), or the excellent experimental work undertaken by Rose McDermott and numerous colleagues (McDermott, 2011; see also McDermott and Mintz, 2011).
Nevertheless, the assessment of leader personality, with a concomitant understanding of a leader’s mental model, is clearly a high priority for political psychologists and foreign policy analysts. The problem is that one does not have the luxury of extended person-to-person contact with world leaders. At-a-distance measures are required for this task. The two primary at-a-distance methodologies in use by those who wish to study the personality and views of world leaders are psychobiography and content analysis.
There have been many examples of “psychologizing” leaders by examining their lives. Sigmund Freud (Freud and Bullitt, 1967) himself psychoanalyzed Woodrow Wilson based upon biographical material, and Wilson was reanalyzed in a famous psychobiography by Alexander and Juliette George (1956). Numerous others have attempted to psychoanalyze leaders such as Hitler and Stalin. One of the benefits of psychobiography is the ability to bring to light emotional and experiential factors that play a role in motivation and decisionmaking. In this section, we will concentrate on the work of two scholars who have famously employed psychobiography in the study of world leaders: James David Barber and Jerrold Post.
James David Barber, who died in 2004, is most famous for the successive editions of his book The Presidential Character. Barber was of the opinion that we should not elect leaders with dysfunctional personalities. He developed a fourfold categorization scheme for leaders using two axes: active-passive and positive-negative. The active-passive dimension taps into the leader’s energy level and his or her sense that personal effort can make a difference in human affairs. The positive-negative dimension addresses the leader’s motivation for seeking office and overall outlook on life, probing whether the leader is basically optimistic or pessimistic; trusting or suspicious; motivated by feelings of neediness, shame, or obligation, or motivated by feelings of confidence and joy in the work to be done. Barber believed that these two traits, or elements of personality, are shaped long before a president is elected to office. In Barber’s view, a careful examination of the leader’s background, upbringing, early successes and failures, and career could provide insight into what type of leader an individual would be.
Not surprisingly, Barber felt that active-positive leaders, such as FDR, Harry Truman, and JFK, made the best presidents. They are not driven by twisted and dark motives and are willing to work hard to effect improvements. They are also willing to reverse course when things do not turn out well, for they are not constrained by a rigid ideology, but rather are motivated by the sense that they should search for policies that actually produce the results they desire.
On the other hand, Barber fervently wished that Americans would not elect leaders who were active-negative in orientation. Leaders thus categorized include Woodrow Wilson, Herbert Hoover, Lyndon B. Johnson, and Richard M. Nixon. These leaders are compelled to power by deep-seated feelings of inadequacy and fear of humiliation and ostracism. They may become rigid in thinking and in action, especially when threatened, and cannot relate to others with genuine warmth and empathy. They may be feared, but they are not loved—and they know it. They may be willing to circumvent convention or even rules and laws in order to maintain or increase their power.
Of the remaining two types of leaders, passive-positive and passive-negative, Barber actually preferred the passive-negatives. These are leaders who take the mantle of leadership out of a sense of obligation or duty, not out of a desire for power and control. At the same time, passive-negatives may have a hard time effecting significant change, given their lower level of activity. Barber identifies Calvin Coolidge and Dwight D. Eisenhower as passive-negative presidents. Interestingly, new research seems to indicate that Coolidge only became passive-negative, as opposed to active-positive, after the death of his son in 1924, an event that caused Coolidge to become clinically depressed (Gilbert, 2003).
Passive-positive leaders, while not posing as great a danger as active-negative leaders, present a persistent risk of scandal and corruption. So focused as they are on issues of affiliation and acceptance, while also dependent upon others for reassurance, support, and even direction, these passive-positive leaders may find that others are willing to take advantage of their emotional neediness and their willingness to turn a blind eye to their own excesses and those of their friends. William Howard Taft, Warren G. Harding, and Ronald Reagan were passive-positive presidents, according to Barber. Barber’s framework thus serves the dual purposes of analysis and evaluation, and this is true of all psychobiographical efforts.
We noted earlier that Jerrold Post was one of the founders of the CIA’s Office of Leadership Analysis in the 1970s. Having spent the better part of his career analyzing foreign leaders, Post has developed a fairly systematic approach to the task. He calls his methodology anamnesis, and believes that a good political psychological analysis will contain several components (Post, 2003a). The first is a psychobiography that compares the timeline of the leader’s life to the timeline of events taking place in the nation and the world. The family saga must be understood, as well as birth order and relationship among siblings. Has the family emigrated from another land? Is the family wealthy, or have they lost wealth over the generations? Have family patriarchs been war heroes? Have there been traumatic deaths in the family? Early heroes and dreams are important to examine. For example, Post notes that Indira Gandhi’s favorite childhood game was to be the commanding general over her forces of toy soldiers. And, interestingly, when Anwar Sadat was a boy, he dressed up as Mahatma Gandhi and led goats around. The leader’s education, mentors, and adolescent life experiences should be examined for influences that will shape the leader’s personality. For example, when FDR’s mother or father would forbid him to do something, he would find a way to please them while still doing what he wanted to do. When his grandfather was assassinated, King Hussein of Jordan was saved from death by a medal that had been pinned to his chest earlier that day by the slain king, reinforcing his sense of destiny as a leader. Early successes and failures are often a template for high-stakes decisions later in the leader’s career. In addition, each generation has particular memories from the world of their early adulthood, usually around ages 17–22, that will shape their mental models of the world for the rest of their lives (Schuman and Corning, 2012).
The second part of the anamnesis concerns the leader’s personality. A recounting of the leader’s balance between work and personal life is useful, as is an investigation of the leader’s health and habits, such as drinking and drug use. Bodily experiences, such as chronic pain, or even attributes such as short stature, can influence personality. For example, according to Post, during the Cuban missile crisis, John F. Kennedy was on stimulants, sleeping aids, narcotics for pain, testosterone, and steroids. Hitler’s incoherent rages are often attributed to the more than two dozen medications he was prescribed, including cocaine and methamphetamines. The leader’s intellectual capacity, knowledge, and judgment will be probed. Emotional stability, mood disorders, and impulse control will be assessed. Motivations; conscience and values; and the quality of interpersonal relationships with family, friends, and coworkers will also be noted. The leader’s reaction to criticism, attack, or failure will be important to discover.
The third part of the anamnesis inquires about the actual substantive beliefs held by the leader about issues such as the security of the nation, or about the nature of power. But other beliefs, such as core political philosophy or ideology, will also be examined. The fourth part of the analysis surveys the leader’s style, examining factors such as oratorical skill, ability to communicate to the public, aspects of strategy and tactics preferred in particular situations, and negotiating style. As we have noted previously, Post, as a trained psychiatrist, is also alert to the presence of mental illness in world leaders.
Post is then able to use this four-part analysis to project a leader’s reaction to various possible situations in international relations. Which issues will be most important to the leader? What is the best way to deter such a leader? To persuade such a leader to change his mind? What type of negotiating stance will this leader prefer? How will this leader cope with high-stress, high-stakes crises? The type of analysis Post was able to offer to the CIA no doubt finds parallel in the intelligence establishments of other nations (Post, 2003a).
Content analysis is another at-a-distance measure for analyzing the traits, motivations, and personal characteristics of world leaders. It can be a complement, or an alternative, to psychobiographical techniques. The artifacts of one’s personality include the things one has said and written. There must be some relationship between these and personality. This is the primary assumption upon which content analysis as a methodology is based.
However, there are important reasons to believe that this assumption is not always valid. Politicians lie, and sometimes for good reasons, such as reasons of national security. Much of what politicians say in public has been ghostwritten. A politician may say different things—and differently—to different audiences. And even in spontaneous interviews, the answers given may be shaped, sometimes unnaturally, by the manner in which the question is posed.
Scholars who use content analysis try to get around these perturbing factors in several ways. First, spontaneous live interviews are the most preferred source of text. Second, diaries, letters to confidants, and automatic tape recordings (such as existed in the Kennedy, Johnson, and Nixon administrations) are very useful. Last, it is important to obtain a large amount of text, spanning different time periods, audiences, and subjects, in order to get a fairly accurate result from content analysis.
There are two primary forms of content analysis: thematic content analysis and quantitative (or “word-count”) content analysis. In the first technique, the scholar develops a categorization of themes he or she wishes to investigate. Sometimes the dependent variable is the appearance or frequency of a theme within the text; at other times, the scholar creates a variable from the theme and records the value of the variable. For example, Ole Holsti, in his content analysis of John Foster Dulles, secretary of state under Eisenhower, was interested in four themes: Dulles’s views on Soviet policy, Soviet capabilities, and Soviet success, as well as Dulles’s overall evaluation of the Soviet Union. Each of these themes allowed for variation. For example, the text commenting on Soviet policy could characterize that policy as friendly or hostile or something in between. Soviet capabilities could be seen along a continuum from strong to weak. Soviet policy might be, overall, successful or unsuccessful in Dulles’s eyes. Dulles’s evaluation of the Soviet Union could range from good to bad.
Interestingly, what Holsti found was that regardless of how Dulles viewed Soviet policy, capabilities, or success, Dulles’s overall evaluation of the Soviet Union remained constant—“bad.” Even when directly confronted by an interviewer concerning the 1956 Soviet demobilization of more than a million men, Dulles felt that the move did not lower world tensions because the men might be put to work making, for example, more atomic weapons. Holsti felt his analysis was one methodology whereby the dynamics of a rigid and closed belief system could be identified.
Thematic content analysis is only as meaningful as the analyst’s categorization scheme, of course. Word-count content analysis, on the other hand, rests upon a foundation tied to psychological theory. If words are the artifacts of personality, then particular personality traits can be linked to particular word choices. Theoretical literature in psychology can be plumbed to determine such links. Then, while parsing text, the presence and the absence of particular words may be noted, and the presence or absence of traits inferred. For example, researchers have suggested that use of the words I, me, my, mine, and myself might indicate the trait of self-confidence.
In order to use this proposition, we must go through several steps. First, in addition to noting the presence of these words, we must also be able to notice their absence. Margaret Hermann postulates that these words indicate self-confidence when used in such a way as to demonstrate that the speaker is an instigator of an activity (“This is my plan”), or as an authority figure (“Let me explain”), or as the recipient of something positive (“You flatter me”). In the case where these words are used without any of these three connotations, it would indicate the absence of the trait (“He hit me”).
Second, there must be a means of computing a score for the trait. A straightforward way is to sum the total instances where these words were used and then determine what proportion of those uses corresponds to the three expressions of self-confidence. Third, the score by itself means nothing without comparison. We cannot tell whether a raw score is high, low, or average without a group to which to compare it. A sample population to which the leader can be compared—usually a sample of other regional or world leaders—must be available. Scores are standardized and then compared to see how many standard deviations from the mean they are. Table 2.2 shows an example developed and used by Margaret Hermann (2003a).
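The scoring and standardization steps just described can be sketched in a few lines of code. This is a minimal illustration, not Hermann's actual procedure: the word counts and the comparison sample below are invented for the example.

```python
# Sketch of a Hermann-style trait score: the proportion of self-referential
# words (I, me, my, mine, myself) coded as "self-confident" uses, then
# standardized against a comparison sample of other leaders.
# All numbers here are hypothetical.

def trait_score(confident_uses, total_uses):
    """Proportion of self-references coded as expressing self-confidence."""
    return confident_uses / total_uses if total_uses else 0.0

def standardize(score, sample):
    """How many standard deviations the leader sits from the sample mean."""
    mean = sum(sample) / len(sample)
    variance = sum((s - mean) ** 2 for s in sample) / len(sample)
    return (score - mean) / variance ** 0.5

# Hypothetical leader: 30 of 80 self-references coded as self-confident.
leader = trait_score(30, 80)  # 0.375

# Hypothetical comparison sample of other leaders' proportion scores.
sample = [0.20, 0.25, 0.30, 0.35, 0.40]

z = standardize(leader, sample)
print(f"raw score = {leader:.3f}, z-score = {z:+.2f}")
```

A leader roughly one standard deviation or more above the comparison mean would be described as high on the trait; one standard deviation or more below, as low.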
Next, the analyst must think again about the usage of the words in question for contextual validity. For example, while Hudson was teaching a class on political psychology many years ago, one of her students, performing just such a word-count content analysis, announced that François Mitterrand was extremely lacking in self-confidence! Knowing just a little about Mitterrand, Hudson pronounced that impossible. Upon looking at the coded text, it became apparent that Mitterrand always used the “royal we.” That is, he referred to himself in the plural to denote that he was representing the nation, as did the French kings of old. Thus, Mitterrand would say, “This is our plan; this is what we believe would work best,” even though he was referring to himself. Once this cultural tradition was accounted for, the recoding showed Mitterrand to be possessed of abundant self-confidence. Word-count content analysis has also been used by non-English-speaking scholars on non-English text, with some modifications made for meaning and use of words in the original language (see, for example, Özdamar and Canbolat, 2018).
Last, the analyst would be well advised to see if trait scores vary significantly by time period, by audience, or by topic. In her analysis of Saddam Hussein during the time of Desert Storm, Margaret G. Hermann found that self-confidence swung widely according to time period—that is, depending on whether Hussein was speaking preinvasion or postinvasion (M. Hermann, 2003b). A more nuanced view of such differences can avoid the masking effects of using an overall mean score for any particular trait.
Though word-count content analysis has been used by many scholars, one of the best ways of exploring its potential for FPA is to examine the work of Margaret G. Hermann. Trained as a psychologist, Hermann began to work on the comparative foreign policy analysis CREON (Comparative Research on the Events of Nations) Project at its inception. One of her earliest research endeavors was the attempt to determine if personalities mattered in classroom simulations of the outbreak of World War I. She became convinced that they did and desired to create a means by which the personal characteristics of world leaders could be both assessed and used as the basis for projections of how they would behave and react in particular circumstances. As she developed her framework, which is based on long-standing trait research in psychology (Costa and McCrae, 1992), she was called upon by the leadership analysis office in the CIA to explain her approach. Thus, her work has spanned both the academic and policymaking communities.
As with many researchers who perform content analysis, Hermann prefers spontaneous live interviews across topics, time periods, and audiences. She also states that results should be based on at least fifty interview responses of over one hundred words apiece.
Hermann codes for seven personality traits: (1) belief in one’s own ability to control events, (2) need for power and influence, (3) conceptual complexity, (4) self-confidence, (5) task/affect orientation (problem focus or relationship focus), (6) distrust of others, and (7) in-group bias (formerly called “nationalism”). These seven traits speak to three more general characteristics of personality: whether an individual leader challenges or respects constraints, is open or closed to new information, and is primarily motivated by internal or external forces.
Hermann goes further. These three general characteristics may then be combined into eight possible personality “orientations.” For example, an expansionistic leader challenges constraints, is closed to new information, and holds a problem focus. A consultative leader respects constraints, is closed to new information, and exhibits a relationship focus. Each of the eight combinations defines a distinct orientation in her framework.
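The combinatorics behind the eight orientations can be sketched directly: three binary characteristics yield 2³ = 8 profiles. Only the two orientations named above are labeled here; Hermann’s remaining labels are omitted rather than guessed at.

```python
from itertools import product

# The three general characteristics, each treated as binary:
dimensions = {
    "constraints": ("challenges", "respects"),
    "information": ("closed", "open"),
    "motivation": ("problem focus", "relationship focus"),
}

# Every combination yields one of 2**3 = 8 orientations.
profiles = list(product(*dimensions.values()))

# Only the two orientations named in the text are labeled; the rest of
# Hermann's typology is left unlabeled here rather than invented.
named = {
    ("challenges", "closed", "problem focus"): "expansionistic",
    ("respects", "closed", "relationship focus"): "consultative",
}

for profile in profiles:
    print(profile, "->", named.get(profile, "(see Hermann's full typology)"))
```

The point of the sketch is simply that the orientation is a lookup on three coded judgments, which is what lets trait scores derived from text be translated into a behavioral profile.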
One of the most valuable elements of Hermann’s framework is that from the psychology of each orientation she is able to derive hypotheses concerning such varied behavior as the style of the leader, likely foreign policy, nature of preferred advisory group, nature of information search, ability to tolerate disagreement, and method of dealing with opposition. For example, we have mentioned the expansionist leader, who is concerned with increasing his or her control over territory, resources, or people, and who perceives the world as divided into “us” and “them.” According to Hermann, an expansionist leader will prefer a very loyal advisory group where the leader’s preferences will always prevail. An expansionist’s ability to tolerate disagreement will be quite limited, for disagreement will be interpreted as a challenge to authority. An expansionist’s usual approach to opposition is to eliminate it. And the nature of an expansionist’s information search will be characterized by the desire to find information that supports and confirms what the leader already believes and desires to have happen.
The expansionist’s style is prudent and wary, for this type of leader wants to stay one step ahead of other leaders and potential opponents. When he or she enjoys a power advantage in a situation, however, the leader will attempt to exercise his or her will, by force if necessary. As a result, the foreign policy of an expansionist is not likely to involve high levels of commitment unless the situation is one in which the leader’s nation holds an undisputed advantage or in which the nation has no alternative but to fight. However, the foreign policy rhetoric of such a leader is likely to be fairly hostile in tone and focused on threats and enemies. The leader may also advocate immediate change in the international system. Hermann’s framework for analyzing leader orientation, then, allows for several layers of derivative analysis that may be of use in forecasting likely behavior over time.
Another major effort using word-count content analysis to probe the foreign policy orientations of leaders is that of Stephen Walker, Mark Schafer, and Michael Young to operationalize the concept of “operational code” using that technique (Walker, Schafer, and Young, 2002; Schafer and Walker, 2006). The term “operational code” was coined by Nathan Leites (1951) in his effort to uncover the philosophical and instrumental approaches of Bolshevik leaders. Updating the concept for modern times, Walker, Schafer, Young, and other colleagues have posited five philosophical beliefs about the world, such as the nature of the political world (P-1), and five instrumental beliefs about acting in that world, such as the best approach for selecting goals for political action (I-1). They have created the Verbs in Context System (VICS), using an automated content analysis software program called ProfilerPlus to isolate verbs within texts produced by leaders and to classify them. They are then able to compare and contrast the foreign policy orientations of leaders and to track their evolution over time. Walker and his students have also used these orientations as inputs to a game-theoretic approach to strategic interaction in the international system (Walker, Malici, and Schafer, 2011). Other students of Walker’s have used the VICS system to determine whether President Xi will change China’s foreign policy orientation (He and Feng, 2013). These authors collected public speeches and statements to compare the operational codes of Xi Jinping and his predecessor, Hu Jintao. Based on their analysis they concluded that while these leaders share similar belief systems, Xi was more likely to be assertive in achieving his goals.
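The core move in VICS, attaching a valence to the verbs a leader uses, can be illustrated with a drastically simplified sketch. The toy lexicon and the crude analogue of a P-1 index below are illustrative assumptions; the actual system relies on ProfilerPlus and far richer dictionaries that distinguish words from deeds and grade intensity.

```python
# A toy lexicon; the real VICS dictionaries in ProfilerPlus are far richer
# and distinguish cooperative/conflictual words and deeds by intensity.
VERB_VALENCE = {
    "support": +1, "cooperate": +1, "reward": +1,
    "oppose": -1, "threaten": -1, "attack": -1,
}

def p1_index(verbs):
    """Crude analogue of P-1 (nature of the political world):
    net valence of coded verbs, scaled to the range [-1, +1]."""
    coded = [VERB_VALENCE[v] for v in verbs if v in VERB_VALENCE]
    return sum(coded) / len(coded) if coded else 0.0

# Verbs extracted (by hand, for this sketch) from a leader's statements:
observed = ["support", "threaten", "oppose", "cooperate", "attack", "attack"]
print(p1_index(observed))  # -0.33... : a mildly conflictual view of politics
```

Tracking such an index across successive speeches is what allows the evolution of a leader’s operational code to be plotted over time, as in the longitudinal analyses of George W. Bush before and after 9/11.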
There are a few other techniques deserving of mention with regard to leader analysis. The first is that of “think aloud” protocols (Purkitt, 1998). Though difficult to use with national leaders, this technique can be used with lower-level officials who may be more accessible. In short, the interviewer presents the official with a specific foreign policy problem and then asks him or her to think out loud while deciding how to react to that problem. Though such responses could be strategically manipulated by the respondent, of course, the intent is to understand what concepts, in what order, and in what relation arise in the official’s mind while thinking the issue through. These transcripts can then be analyzed.
One such method of analysis is cognitive mapping. In cognitive mapping, a visual diagram of a text is constructed. Concepts and variables are coded thematically from the text and then linkages and relationships are mapped using lines connecting concepts. For example, if a Middle East expert believes that Palestinian suicide bombings are one motivation for the building of security walls by the Israelis, then a line from the first to the second, with a symbol denoting that the relationship is positive, will be drawn. A cognitive map, once drawn, may then be further analyzed in several ways. The consistency of the linkages and valences may be noted. The “tightness” of the conceptual clusterings can be investigated. Change over time in cognitive mapping can be discerned (Shapiro and Bonham, 1973).
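A cognitive map is, at bottom, a signed directed graph: nodes are concepts, and edges carry a valence for the believed relationship. A minimal sketch, using the example from the text (the class and method names are illustrative, not any published implementation):

```python
from collections import defaultdict

class CognitiveMap:
    """A cognitive map as a signed directed graph: nodes are concepts,
    edges carry a valence (+1 or -1) for the believed relationship."""

    def __init__(self):
        self.edges = defaultdict(dict)

    def link(self, cause, effect, valence):
        """Record a believed relationship from one concept to another."""
        self.edges[cause][effect] = valence

    def valence(self, cause, effect):
        """Return the coded valence, or None if no link was coded."""
        return self.edges[cause].get(effect)

cmap = CognitiveMap()
# The example from the text: bombings believed to motivate wall-building.
cmap.link("Palestinian suicide bombings", "Israeli security walls", +1)
print(cmap.valence("Palestinian suicide bombings", "Israeli security walls"))  # 1
```

Representing the map this way makes the analyses mentioned above concrete: consistency checks inspect the valences, “tightness” examines how densely clustered the edges are, and change over time is a comparison of two such graphs built from successive texts.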
Another technique is personality assessment of leaders by scholarly experts. For example, Etheredge (1978) combed scholarly works, insiders’ accounts, biographies, and autobiographies, and coded presidents and secretaries of state for personality variables. He then masked the identities of the leaders and asked several other scholars to also rank these anonymous individuals along the same personality variables. Intercoder reliability was quite high. M. Hermann performed a variant of this technique in her doctoral dissertation. Wanting to investigate the effect of personality of leaders on the outbreak of World War I, Hermann wished to run simulations of that event with students whose personalities were similar to the leaders involved in World War I, and students whose personalities were different from those same leaders. In order to perform such an analysis, Hermann used standard psychological inventories to assess the students’ personalities. But to compare them to the leaders’ personalities, she had to come up with a creative way to determine the leaders’ scores on those same tests. She immersed herself in the biographical material of each leader and then took the personality test as if she were the leader in question.
For example, one such personality test is based on the prominent taxonomy of personality traits used in psychology known as the “Big 5.” As the name implies, this taxonomy assesses personality along five dimensions: extroversion, neuroticism, conscientiousness, agreeableness, and openness. Dan McAdams, a professor of psychology and the author of a psychological profile of George W. Bush (McAdams, 2010), employed this taxonomy to explore what kind of decisionmaker then-candidate Trump might look like in the White House. McAdams (2016, 79) finds that “Across his lifetime, Donald Trump has exhibited a trait profile that you would not expect of a U.S. president; sky-high extroversion combined with off-the-chart low agreeableness.” Based on these scores, McAdams predicted that Trump might be a “daring and ruthlessly aggressive decision maker who desperately desires to create the strongest, tallest, shiniest, and most awesome result,” an assessment that appears to have held up reasonably well in light of initiatives such as the 2018 North Korea–United States summit in Singapore between President Trump and Kim Jong-un.
Yet another technique is that of the Q-sort, where subjects are asked to report how strongly they agree or disagree with certain statements that relate to psychological characteristics the researcher wishes to study. These self-reports are then subjected to factor analysis. The resulting factors represent the subject’s “narration of self,” which can then be analyzed (McKeown, 1984). One can also use this technique at a distance by asking leadership experts or even public citizens about their perceptions of a leader’s beliefs, much like the aforementioned personality assessments.
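The statistical step of Q-methodology, correlating persons rather than items before extracting factors, can be sketched with plain correlations (the data below are hypothetical, and real Q studies go on to extract and rotate factors):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two Q-sorts (lists of ratings)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical Q-sorts: four respondents rate six statements on a -2..+2 scale.
sorts = [
    [ 2,  1,  0, -1, -2,  1],
    [ 2,  2, -1, -1, -2,  0],
    [-2, -1,  0,  1,  2, -1],
    [-2, -2,  1,  1,  2,  0],
]

# Q-methodology correlates *persons*, not items; highly intercorrelated
# respondents share a "narration of self" and load on the same factor.
print(round(pearson(sorts[0], sorts[1]), 2))  # high positive: similar viewpoints
print(round(pearson(sorts[0], sorts[2]), 2))  # -1.0: a mirror-image viewpoint
```

Factor analysis of this person-by-person correlation matrix would group respondents 0 and 1 on one factor and respondents 2 and 3 on another, each factor representing a shared subjective viewpoint.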
Finally, this chapter would be remiss without an introduction to ProfilerPlus, a series of computer interfaces and software developed by Michael Young to effect word-count content analysis as well as cognitive mapping. Young has prepared a demonstration for FPA students to examine and that demo is available at http://socialscience.net/hudson/hudson.html.
The demo is narrated and revolves around the idea that automated text coding allows for superior analysis of textual data. The student is first introduced to four types of automated coding: tag and retrieve, frequency analysis, concept coding, and information extraction. Each type is demonstrated by conceptual discussion followed by actual coding results for presidents Bill Clinton and George W. Bush for their respective State of the Union addresses to Congress. In one case, an Iranian leader’s remarks are coded.
Tag and retrieve is simply the built-in ability to “tag” certain words in texts, retrieving the context in which the words were used.
Frequency analysis “counts” how often particular words are used, sometimes in contrast to divergent sets of words. The demo illustrates frequency analysis in two ways: the Leadership Style Analysis of Margaret G. Hermann and the Verbal Behavior Analysis (VBA) system of Walter Weintraub. For Hermann’s scheme, the conceptual complexity and task orientation scores of Clinton and Bush are presented; for VBA, the use of “feeling” words, which might indicate either aloofness or insincerity depending on use, is examined for Clinton and Bush.
Concept coding refers to the automated search for patterns in the use of word phrases. Such pattern recognition typically involves more advanced algorithms than frequency analysis. For example, to code the level of distrust, the algorithms must distinguish between positive or neutral context phrases surrounding the mention of other entities and negative context phrases. Two examples are given: the variables “belief in own ability to control events,” “distrust of others,” and “need for power” from the Hermann framework, and the variables “nature of the political universe” and “preferred strategy for achieving goals” from the operational code analysis scheme (VICS) developed by Stephen Walker, Michael Young, and Mark Schafer. For President Bush, the operational code variables are also displayed in a longitudinal graph, showing the effect of 9/11 on Bush’s perceptions.
Information extraction, the final type of automated coding, is illustrated by two approaches: image theory (M. Cottam, 1986; M. Cottam and Shih, 1992; M. Cottam and McCoy, 1998) and cognitive mapping. Image theory examines larger themes constructed from particular words used to describe other nations. These themes correspond to broad images the speaker has of other entities, with the example given in the demo of “degenerate.” This “degenerate” image is demonstrated to be present in the speeches of Iranian leader Ali Khamenei in reference to the United States. Cognitive mapping, on the other hand, restructures the text physically in order to display a visual picture of the relationships between concepts in text. Both sentence-level and speech-level mapping are demonstrated. Valences and/or levels of certainty may also be attached to the relationships outlined in the maps, and change in the map over time is often analyzed by comparing successive speeches.
Self-images, that is, how a leader perceives his or her nation, clearly are pertinent to understanding how that leader decides on appropriate foreign policy behavior. In this way, the concept of national role conception is integrally bound up with the psychological level of analysis in FPA, because there can be no perception of role without the perceiver. And if that is the case, surely the characteristics of the perceiver will influence the choice of national role. However, in this volume, we have chosen to examine national role conceptions and role theory in the chapter on cultural influences on foreign policy, since this approach straddles both levels of analysis.
We note with gratitude that Michael Young has offered graduate students free use of ProfilerPlus for academic purposes, and a number of Hudson’s students have employed it in their own FPA projects.
In conclusion, then, FPA asserts that leaders do matter and that analysis of perception, cognition, and personality of world leaders is well worth undertaking. In addition, FPA draws upon a wide variety of techniques to make such an analysis possible, despite the unavailability of world leaders for direct observation.
One of the more positive legacies of Operation Iraqi Freedom is the broad scholarly interest that came to focus on one man: Saddam Hussein. In this section, we review scholarly works that have focused on Saddam Hussein’s cognitions and perceptions, those that approach him from a more psychobiographical angle, and those that have content analyzed his words. Of interest are the unique insights offered by each approach, which together suggest the desirability of utilizing all three.
Charles Duelfer and Stephen Benedict Dyson (2011) have examined the misperceptions under which Saddam Hussein appeared to be laboring prior to the invasion launched by George W. Bush in 2003. Duelfer provides a unique perspective because he is the former deputy executive chairman of the United Nations Special Commission on Iraq and former special advisor to the director of the Central Intelligence Agency on Iraq Weapons of Mass Destruction. Furthermore, he participated in the debriefing of Saddam Hussein and some of his top leadership circle after the invasion. Duelfer and Dyson provide a fascinating catalog of the misperceptions held by Saddam Hussein. For example, Hussein felt that after 9/11, the United States would realize that Iraq shared its interests in curbing Islamic radicalism and that the United States would turn to Iraq for assistance, particularly with intelligence. Hussein could not conceive that the Americans would ever think he had any ties to al-Qaeda.
Even more astonishing was the fact that Hussein was convinced that the Americans knew he did not have weapons of mass destruction—because he believed in the omniscience of the Central Intelligence Agency. Indeed, he felt safe lying about his possession of weapons of mass destruction in order to deter the Iranians because he was certain the Americans knew it was a lie. Saddam Hussein believed that the United States kept bringing up the subject as a pretext for continuing the economic sanctions against Iraq. According to Duelfer and Dyson, he also believed that eventually the Americans would abandon that belligerent stance, as they had abandoned it toward Libya, a nation that had also divested itself of weapons of mass destruction under Qaddafi.
Probably the most stupefying anecdote to come from Saddam Hussein’s debriefings was his reaction to George W. Bush’s 2002 speech at West Point. We will let Duelfer and Dyson tell the tale:
This speech was both intended and universally interpreted in the United States as a direct warning, stopping only slightly short of a declaration of war, to Saddam’s regime. It contained fulsome talk of unbalanced dictators who could not be allowed to possess the world’s most destructive weapons. Incredibly, however, Saddam did not grasp that Bush’s words were primarily targeted at him. He did not consider himself an unbalanced dictator and assumed that the warnings were intended for North Korea. The West Point speech stressed the unique danger posed by the combination of radicalism and technology: Saddam agreed that this was a dangerous mix, and he believed that his war on Iran had been motivated by the same concerns. When Bush spoke of “tyrants who solemnly sign nonproliferation treaties and then systematically break them,” Saddam heard a denunciation of the leadership of Iran and North Korea, both of which had signed the Nonproliferation Treaty yet continued to produce WMD. Finally, when Bush lauded “leaders like John F. Kennedy and Ronald Reagan” for their staunch policies against the “brutality of tyrants,” Saddam became really confused. For him, U.S.-Iraqi relations had been excellent while Reagan was president, and he later commented in captivity that the situation only started deteriorating under the Bushes. Lauding Reagan’s policies would make Saddam believe that a return to a happier relationship was imminent.
Writing years after the fact, President George W. Bush could not comprehend how Saddam missed these warnings: “How much clearer could I have been?” Given Saddam’s style of leadership, it was also the case that none of those (few) around him who did understand Washington felt able to inform him that the Bush administration considered him unbalanced. (Duelfer and Dyson, 2011: 91–92)
This last sentence points out that misperception doesn’t just “happen”; it occurs for reasons that have quite a bit to do with the leader’s personality. Saddam Hussein never had a very good grip on reality because he killed anyone in his leadership circle who crossed him. Thus he was surrounded by sycophants who would tell him only what he wanted to hear. That in turn raises the question of how he became this kind of ruthless tyrant, a man said to have owned the world’s largest collection of books on Josef Stalin, his personal hero.
To answer that question, we need to explore Saddam Hussein’s roots, and here psychobiography is useful. Jerrold Post has written a psychobiography of Saddam Hussein, and it is chilling indeed (Post, 2003b). Saddam Hussein’s father died before he was born, and his mother tried to commit suicide while eight months pregnant; having failed, she then attempted an abortion but was talked out of it. She did not want to see Saddam after he was born, and he was sent to live with a maternal uncle. However, after his mother remarried when he was three, Saddam was called back to be reunited with her; unfortunately, her new husband was abusive to them both. This was a traumatic childhood, to say the least, but it is instructive to note that Saddam’s heroes in childhood were Nebuchadnezzar and Saladin, and he began to dream of glory himself. According to Post, the wounded soul that seeks healing in power and glory is likely to be a capricious leader.
When his mother and stepfather refused to let him continue his education when he was ten years old, he ran away, back to the maternal uncle, whose name was Khayrallah Talfah Msallat. Khayrallah was a fierce nationalist who later became governor of Baghdad, and according to Post, he wrote a pamphlet that Saddam later republished: “Three Whom God Should Not Have Created: Persians, Jews, and Flies.” Saddam Hussein joined the Ba’ath Party at his uncle’s encouragement and apparently made himself useful by being a thug for the party. At twenty-two, he was given the mission to assassinate Iraq’s leader, General Qassem. The plot failed, and Saddam fled to Egypt, where he was nourished on Nasserite visions of pan-Arabism until his eventual return to Iraq several years later.
After the Ba’ath Party took control in Iraq, Saddam Hussein did not take long to stage his own coup, deposing the man who had made him his second-in-command and ordering the execution not only of those he suspected of opposing him, but also those who had helped him during the coup. His self-concept apparently did not allow him to become indebted to any other human being.
Post concludes that Saddam Hussein was a malignant narcissist and a paranoid. Along with a grandiose self-concept, Post finds no constraint of conscience and no compunction about using unrestrained aggression to achieve his goals. Furthermore, his sense of reality was compromised by his deep feelings of insecurity and inferiority. Due to these feelings, it was humiliating for him to learn things that others already knew. He surrounded himself only with people who would not challenge his interpretation of events. The astonishing misperceptions that Duelfer and Dyson document seem less so when we consider Saddam Hussein’s psychobiography.
Margaret G. Hermann contributes a Leadership Style Analysis of Saddam Hussein (M. Hermann, 2003b). She analyzes text amounting to twenty-one thousand words and performs a word-count content analysis, looking for traits such as nationalism, conceptual complexity, and the others we have previously mentioned. Compared to other world leaders, Saddam Hussein scored high on nationalism, need for power, distrust of others, and self-confidence. Hermann opines that “leaders who combine a strong sense of nationalism with a high distrust of others are likely to view politics as the art of dealing with threats” (378). Noteworthy also is Hussein’s relative emphasis on task completion as opposed to affiliation with others; while charismatic, Hussein actually displays a profound lack of empathy. To such a leader, guile and deceit seem a natural way to achieve objectives. Hermann concludes that Saddam Hussein exhibited an “expansionist” orientation to foreign affairs, which would lead him to seize opportunities to make relative gains at the expense of other nations, to fixate on threats that could be countered only by control, and to be unsparing in the assertion of that control.
This case study of Saddam Hussein demonstrates that all three traditions of approaching leaders and their decisionmaking offer useful information to the foreign policy analyst, and their integration can be viewed as a more robust “mixed method” approach to this most microlevel of analysis in FPA. Personality does not determine perception, but in this case it helps us to understand the origins of misperception. Likewise, the degree and the direction of misperception can help inform our understanding of personality. Both, in turn, can point to behavioral predispositions in foreign policy. As Alexander George (2003, 296) argued:
The general notion of a rational opponent must be replaced by an “actor-specific” model of the opponent’s way of calculating costs and risks and deciding what level of costs and risks are acceptable in striving for desired gains. This also requires policymakers to estimate the value an adversary places on obtaining those benefits which influence the level of costs and risks he is willing to accept. The greater the value the adversary attaches to an objective, the stronger his motivation to pursue it and, therefore, the stronger the credible threat must be to persuade him to desist. What is needed and often very difficult to develop is a more differentiated understanding of the opponent’s values, ideology, culture, and mind-set. This is what is meant by an “actor-specific behavioral model of an opponent.”
But analysis of leaders alone is not enough, for as George further comments, “The adversary may, in fact, be a small group of individuals who differ from one another in values, beliefs, perceptions, and judgment” (2003, 295). Foreign policy decisions always involve more than one individual, even in the most autocratic societies. It is to that second level of analysis that we now turn.