Jack Welch, the legendary CEO of General Electric, was not used to losing, but in June 2001 he lost big when regulators from the European Union rejected GE’s $42 billion acquisition of Honeywell International. The deal at first looked like classic Welch: big, bold, and brilliant. GE, which is one of the world’s leading manufacturers of airplane engines, had long been interested in Honeywell, which makes advanced aviation electronics. In the fall of 2000, Welch heard that a rival American airplane engine manufacturer, United Technologies, was set to acquire Honeywell. Welch sprang into action, outbidding United Technologies within forty-eight hours and nabbing the deal. The GE-Honeywell acquisition was poised to be the largest merger between two American industrial companies in history. Welch was so confident the transaction would succeed that he called it “the cleanest deal you’ll ever see,” and delayed his own retirement to see it through.1 The proposed merger sailed through the U.S. Justice Department.
AFP PHOTO/Doug Kanter
But the deal also had to be approved by the European Commission, the executive authority of the European Union.2 American and European competitors, including Rolls-Royce and United Technologies (still stinging from losing Honeywell to GE), took their concerns to Brussels. They had a ready audience: The recently appointed head of the EU’s competition authority, Mario Monti, was reportedly looking for a chance to show the EU’s independence from the United States. The United States and the EU also approached merger decisions with different philosophies and processes. While American antitrust policy aimed to protect consumers by promoting market efficiencies that lower prices, EU antitrust policy focused on protecting competitors and on whether a proposed merger would increase the market dominance of the combined firm. As for process, the European system gave competitors greater opportunities to voice objections in private testimony, and voice objections they did.3 Welch said he felt “profound regret” that eight months of effort came to nothing.4
In the end, Jack Welch moved so fast and was so focused on the economics of the deal that he did not fully consider the political factors at play. “We haven’t touched every base,” he said when the proposed merger was announced and he was asked whether GE and Honeywell had contacted regulators in the United States and Europe.5 GE apparently did not have a good system in place to ensure that he did; Welch and Honeywell’s CEO, Michael Bonsignore, were so eager to close the deal, they reportedly never consulted with their Brussels lawyers specializing in European competition concerns.6 When the European Commission announced conditions that spelled the end of the road, Welch declared that “you are never too old to get surprised.”7
Jack Welch’s EU experience raises an important question: Why is managing political risk so hard? That’s the puzzle we tackle in this chapter.
Studies repeatedly find that companies know they are not as good as they should be at political risk management. Although World Bank surveys find that companies believe political risks rank among the most important constraints on investing in emerging markets, many still do not integrate political risk analysis into their overall risk management.8 In 2015, Aon’s Global Risk Management Survey of fourteen hundred executives from public and private companies found that cyber risks were top of mind in C-suites around the world, but 58 percent of companies reported that they had never completed a cyber risk assessment. (It turns out that countries aren’t well prepared for cyber threats, either. A 2017 United Nations report found that only half of all nations in the world have a cyber security strategy or are in the process of developing one.)9 In another survey, only 19 percent of executives gave their own company an “A” grade for reputation risk management.10 We have been teaching for a combined total of more than four decades, and we have never come across a group of students who would give themselves such low grades.
Peter Thiel, a cofounder of PayPal and one of Silicon Valley’s most successful investors, told us that assessing political risk is both essential and frequently elusive. “Luck and risk are ambiguous words and they can mean something about the metaphysical nature of the universe—that things are just random or lucky,” he said. “But it can also be a statement about our laziness where we don’t want to think about it. I’m open to the idea that there is such a thing as risk and randomness. But morally, when you encounter risk, you want to respond and ask what’s really going on, what’s going to happen, and avoid an excuse for laziness. When you’re investing, say, a million dollars in a company, it might be a lottery ticket, but it’s likely that if you thought about it more you’d get to a much clearer answer.”
Thinking hard to get a clearer answer sounded very familiar. It’s a lot like the work that analysts in the CIA and other agencies of the American intelligence community do every day. Amy has spent a long time studying barriers to effective intelligence analysis. Condi has spent a long time living with them. And it turns out that since 9/11, there has been a convergence between the intelligence and the business worlds: Many businesses have been creating their own mini-CIAs, political risk units whose mission is to identify political risks and opportunities and to work hand in hand with business unit leaders to mitigate losses and seize new opportunities. And not just within a certain industry. Political risk units have been springing up and expanding in hotel chains, cruise lines, chemical companies, law firms, consumer products companies, oil and gas companies, banks, tech companies, and venture capital firms, among others. So on the one hand, most companies surveyed admitted they were not doing enough to manage political risks. On the other hand, we knew that some leading companies were innovating, and doing, a great deal. Managing political risk was elusive for many but considered essential by everyone.
In this chapter, we take a closer look at this capability gap and where it comes from. Drawing on psychology, our research and experience in intelligence, and real-world examples from business and international security, we highlight the most significant barriers to developing effective political risk management, even for Fortune 500 companies with legendary CEOs. We call them the “Five Hards.”
Managing political risk often means raising questions about a business decision that otherwise looks attractive. Saying, “No, you might not want to do that,” or “Hold on a minute, have you thought about this potential downside?” can be unwelcome news to the C-suite or board, particularly if the immediate economic case looks promising and the political risk may be longer-term or harder to see.
Nobody likes to be the bearer of unwelcome news. This challenge is one major reason why the role of chief information officer is less attractive today than it used to be, despite a fast-rising need for talented cyber security leaders. CIOs “end up being the wet blankets of the technology field,” notes Thomas H. Davenport, who has been teaching information technology and management in universities for over twenty years. “They have to tell the Chief Marketing Officer that he can’t buy his own server and cool software for automated ad creation. For years they had to tell executives that they must use BlackBerrys rather than iPhones. It’s their job to ensure that company employees will have less powerful and desirable technologies than the employees’ teenage children.”11
CIOs are not alone. Managing political risk takes leadership and time to ensure that alternative points of view are heard and rewarded. As one risk manager from a major international oil and gas company told us, “We have to take risks. That’s the business we’re in. Our measure of success as a political risk unit is whether we have a seat at the table, whether business units are making informed decisions based on our professional expertise. The chairman is very enlightened about political risk. There was buy-in from him almost from the beginning and we see him quarterly. Analysts are as close to the business units as possible. That’s a major key to our success.”
Organizations that do not reward good political risk analysis are unlikely to get it. Our dear Stanford colleague Bill Perry, who was one of Silicon Valley’s early successful business entrepreneurs and who served as the nineteenth secretary of defense, has studied what makes some defense secretaries more successful than others. He found that some failed for just one reason, no matter how smart they were: They would not accept opposing points of view, and when they heard one, they would come down very hard on the person providing it. “Once that happens a few times, the message gets out and they don’t get opposing points of view anymore,” Perry recounted, “and they make big mistakes because of it.”12
FedEx’s legendary founder Fred Smith is a big believer in the Perry philosophy. “You have to surround yourself with people who will tell you the truth,” he says. “If you don’t, as your organization gets bigger, you’ll fall out of touch with what’s going on.”13 Smith, who built FedEx from a small business in an abandoned Memphis hangar to a $44.6 billion global giant, says the man who most influenced his views on leadership was one of Condi’s heroes: George Marshall. Marshall is known for securing the Allies’ victory in World War II and for his tenure as secretary of state, where he conceived of the economic plan bearing his name that rebuilt Western Europe and saved it from communism. For Smith, one of Marshall’s most important and inspiring qualities was that he “wasn’t afraid to call it the way it was.” As Smith recounts:
In World War I, when Marshall was the number two man in some regiment, General John Pershing paid a visit and chewed out Marshall’s superior. Everybody just stood around, but Marshall said, “General, with all due respect, you’re wrong, and this is why you’re wrong.” Of course, everybody was astounded that he did that. But later Pershing called Marshall and said, “Look, I want you to be my chief of staff.” And that’s an important lesson that I’ve tried to follow.14
Risk management is also a cost center, which compounds the problem. Cyber protection measures, legal teams, risk officers—these functions produce no revenues and incur costs that go straight to the bottom line. Marriott International is one of the world’s leading companies when it comes to managing global political risk. Marriott believes that superior security can be a competitive advantage in a post-9/11 terrorist threat environment. But it does not own any of its hotels worldwide; it just operates them. As Marriott’s vice president for global safety and security, Alan Orlob, told us, sometimes it takes work to convince a hotel owner that the security investments Marriott wants are worth it.
In 2009, Orlob was meeting with the owner of a new hotel that would soon be opening in Southeast Asia. “I’m out of money,” the owner told Orlob. “I’ve gone to the bank but they won’t lend any more to me, so I cannot put in the security measures you’re requesting.” Orlob didn’t mince words: “Let me tell you, if you don’t put in these security procedures at this hotel, you’re going to be just another hotel in the city. I can guarantee you that if you spend the money, if you put these physical security measures in that we’re asking, then we can use that to drive business to your hotel. People will stay there because it’s safer than other hotels in the city. You’ll see a return on your investment.” The owner found the money and followed Orlob’s guidance. A year later, a significant terror threat was issued for the city. A large group staying at a competitor hotel next door moved to the Marriott-operated hotel because it had better security. “To me, that validated what I’d been trying to tell that owner,” Orlob reflected.
Those validating moments are rare. More often, business leaders have a hard time knowing whether all these costs are “worth it”; political risk management often entails anticipating bad things and taking action so those bad things never actually occur.
In early 2011, for example, a number of major cruise lines, including Disney and Holland America, pulled out of the Mexican port city of Mazatlán, rerouting ships to other destinations. Their principal concern: Reports of rising drug-related violence suggested increasing risks that passengers on shore excursions could become victims of wrong-place/wrong-time crime. Did these companies’ decisions actually prevent dangerous incidents involving their customers? Nobody will ever know for sure.
Similarly, in 2015, Universal Studios announced a multibillion-dollar joint venture deal with a Chinese state-owned consortium to open a Hollywood theme park in Beijing.15 Even though Universal owned a minority stake in the joint venture, the company ensured that it would have complete control over the compliance system required by the U.S. Foreign Corrupt Practices Act. Universal’s position was about prevention—ensuring that American lawyers and American executives had clear responsibility and authority for managing compliance with a far-reaching set of legal requirements from the get-go.16 Was this decision a worthwhile measure that prevented the occurrence of violations, or was it an unnecessarily cautious and overly costly move? Time probably will not tell.
The trouble with nonevents is that it is often impossible to know what caused them not to occur. Intelligence agencies are all too familiar with this challenge. Suppose one country’s intelligence agency warns the president that another country appears to be readying for a surprise attack. The president hears this message and takes some sort of action—maybe he begins to mobilize troops, or he sends a back-channel diplomatic message to the adversary. No attack occurs. Does this mean the intelligence warning was successful, prompting action that prevented an attack? Maybe. Or perhaps the adversary had no intention of attacking in the first place. Maybe they were bluffing to gain leverage in a negotiation. Or they were simply conducting a military exercise, in which case the intelligence warning of a surprise attack was just a false alarm all along.
This actually happened back in 1983, when the United States conducted a large NATO nuclear exercise called Able Archer that the Soviets mistakenly interpreted to be preparations for a surprise nuclear strike, triggering a series of responses that could have spiraled into war. Recently declassified documents reveal that this was a hair-trigger moment, the closest the two superpowers had come to nuclear war since the Cuban Missile Crisis of 1962.17
The point here is that in the anticipation business, whether it’s about cruise ships in Mexico or cruise missiles in Europe, success can be difficult to discern. Warnings can be harmless false alarms. They can be harmful false alarms that inadvertently lead both sides tumbling into bad outcomes. Or they can be true alarms that prod action forestalling disaster. In retrospect, we may never know which is the case.
For all of these reasons—our natural aversion to hearing bad news and the need for senior-level leadership to overcome it within organizations, the fact that political risk entails costs without measurable profits, and the difficulty of knowing whether any political risk analysis was “worth it”—rewarding good political risk management is hard. As one executive told us, “Nobody gets credit for fixing problems that never happened.”
Humans are terrible when it comes to probabilities. Americans are far more afraid of dying in a shark attack than in a car accident, even though fatal car crashes are about sixty thousand times more likely.18 In fact, many things are more likely causes of death than shark attacks, including being trampled in a Black Friday sale or falling off a ladder.19
A large part of this tendency to miscalculate probabilities is caused by common mental shortcuts, called heuristics, that often make decision-making easier and more efficient but can lead to serious errors. Psychologists Amos Tversky and Daniel Kahneman (who later won the Nobel Prize in Economics) were pioneers in this field. One of their most important findings was called the “availability heuristic.” The idea is that people tend to judge the frequency of an event based on how many similar instances they can readily recall. Horrifying events that stick in one’s mind are easier to remember than mundane ones. That’s why people fear airplane crashes more than automobile accidents and Ebola more than influenza—even though airplanes are estimated to be seventy times safer than cars; and the worst Ebola outbreak killed about eleven thousand people worldwide from 2014 to 2016, while influenza, the common flu, killed between half a million and a million people during the same period.20 The availability heuristic explains why we tend to attribute higher probabilities to events we hear about in the news—like shark attacks—than to more likely events like cardiac arrest or car crashes.21
The most controversial and best-known experiment that Kahneman and Tversky did together to show how human processing shortcuts can short-circuit accurate probability calculations was called the “Linda experiment.” Participants were told about an imaginary woman named Linda. She was described this way:
Linda is 31 years old, single, outspoken and very bright. She majored in philosophy. As a student, she was deeply concerned with the issue of discrimination and social justice, and also participated in antinuclear demonstrations.
Participants were then asked which was more probable:
(1) Linda is a bank teller; or
(2) Linda is a bank teller and is active in the feminist movement.22
Between 85 and 90 percent of participants in the Linda experiment chose (2). In Kahneman’s words, this outcome was totally “contrary to logic.”23 Because every feminist bank teller is a bank teller, (1) always has a higher probability of being true.
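The set logic behind Kahneman’s point can be checked mechanically: whatever the underlying proportions, the people satisfying both descriptions are a subset of the people satisfying the first. A minimal sketch (the 5 percent and 60 percent figures below are arbitrary, made-up proportions for illustration):

```python
import random

random.seed(0)

# A hypothetical population of 1,000 "Lindas" with randomly assigned traits.
population = [
    {"bank_teller": random.random() < 0.05,   # assumed base rate
     "feminist": random.random() < 0.60}      # assumed base rate
    for _ in range(1000)
]

tellers = sum(p["bank_teller"] for p in population)
feminist_tellers = sum(p["bank_teller"] and p["feminist"] for p in population)

# The conjunction can never outnumber its parts, so option (2) can never
# be more probable than option (1) -- regardless of how Linda is described.
assert feminist_tellers <= tellers
print(tellers, feminist_tellers)
```

The assertion holds no matter what base rates are plugged in, which is exactly why the majority answer in the Linda experiment is a logical error rather than a judgment call.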
Or consider the “birthday trick,” which is a fan favorite in math circles and even stumped Johnny Carson when he was hosting The Tonight Show.24 The birthday trick asks: How many people would it take to make the odds that any two people share a birthday 50/50? The answer is that it takes just twenty-three people. Really. The human mind is simply bad at matching numbers to intuitions about likelihood.
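The twenty-three-person answer is easy to verify directly: compute the probability that all birthdays are distinct, then take the complement. A quick sketch, assuming 365 equally likely birthdays and ignoring leap years:

```python
from math import prod

def p_shared_birthday(n: int) -> float:
    """Probability that at least two of n people share a birthday
    (365 equally likely birthdays, leap years ignored)."""
    # Chance that all n birthdays are distinct: 365/365 * 364/365 * ...
    p_all_distinct = prod((365 - k) / 365 for k in range(n))
    return 1 - p_all_distinct

print(f"{p_shared_birthday(22):.3f}")  # just under 50 percent
print(f"{p_shared_birthday(23):.3f}")  # just over 50 percent
```

With twenty-two people the odds fall just short of even; adding the twenty-third tips them past 50 percent, which is why the answer feels so implausibly small.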
For decades, psychologists have found “desirability” or “optimism” bias in everything from business investments to sports contests to political events and calculations of personal risk.25 People tend to expect that their investments will perform better than average;26 that good future events will happen more to themselves than to others;27 that their favorite sports team has a higher chance of winning than it actually does;28 and that their preferred presidential candidate will win an election even when the polls suggest otherwise.29 In the 1932 presidential election, for example, 93 percent of Roosevelt supporters predicted Roosevelt would win, while 73 percent of Hoover supporters thought Hoover would win.30 A few years ago, Wharton professors Joseph Simmons and Cade Massey conducted an experiment with National Football League fans to see if this bias would persist even if individuals were given financial incentives to predict more accurately. They asked participants to predict the winner of a single NFL game. Half of the participants predicted a game involving their favorite team. The other half predicted a game involving two neutral teams. Even when participants were offered up to $50 to correctly predict the winner, fans still overpredicted victories and underpredicted losses involving their favorite teams. Simmons and Massey found that optimism bias persisted even “in the face of large incentives to be accurate.”31
Condi fully understands this because she experiences optimism bias at the start of every NFL football season. The Cleveland Browns haven’t won a championship since she was nine. But every year, she believes that they’ve turned things around. She’ll even utter the words, “I’ll only go to that game if the Browns are in the play-offs.”
Optimism bias helps explain why financial markets, political leaders, and so many experts were all stunned by the United Kingdom’s June 23, 2016, vote to leave the European Union. For weeks before the “Brexit” referendum, polls consistently showed a very tight contest. Of the thirty-five polls conducted in the weeks before the referendum, seventeen showed the “Leave” campaign ahead, and fifteen showed the “Remain” side ahead.32 Based on the Huffington Post’s polling average, the Remain side had just a 0.5-point lead. Take a look at this Bloomberg screenshot of aggregated polls that Amy captured a week after the vote. Bloomberg notes that right up to the end, it was “still too close to call.” The average polls on June 21 in fact showed the Leave campaign with a slight edge, with 10 percent of voters still undecided.
Bloomberg / Number Cruncher Politics
Brexit was never actually a long shot. But many, it seems, were hoping that the U.K. would never really leave Europe, and looked only at the bright side of the numbers they saw. The betting markets put “Remain” at 88 percent just hours before the vote. Optimism bias made Brexit seem like a low-probability event even though it wasn’t.
Finally, political risks are susceptible to being considered in isolation, making them appear to have lower probabilities than they actually do over the longer term or in the bigger picture. In the last chapter, we talked about cumulative risk to supply chains, noting that the risk of disruption in any one node of a supply chain may be low, but the cumulative risk of disruption across the entire supply chain for a company over time is much, much higher. This is also true of political risks more generally.
Let’s take another look at our political risk list.
Ten Types of Political Risk
Geopolitics: Interstate wars, great power shifts, multilateral economic sanctions and interventions
Internal conflict: Social unrest, ethnic violence, migration, nationalism, separatism, federalism, civil wars, coups, revolutions
Laws, regulations, policies: Changes in foreign ownership rules, taxation, environmental regulations, national laws
Breaches of contract: Government reneging on contracts, including expropriations and politically motivated credit defaults
Corruption: Discriminatory taxation, systemic bribery
Extraterritorial reach: Unilateral sanctions, criminal investigations and prosecutions
Natural resource manipulation: Politically motivated changes in supply of energy, rare earth minerals
Social activism: Events or opinions that “go viral,” facilitating collective action
Terrorism: Politically motivated threats or use of violence against persons, property
Cyber threats: Theft or destruction of intellectual property, espionage, extortion, massive disruption of companies, industries, governments, societies
We know that many of these political risks seem like low-probability events. Considered in isolation, many of them are. The chance that an American will be killed by a foreign-born terrorist is about 1 in 45,808—more remote than the odds that they will die from a heat wave or by choking on food.33 No EU member state has experienced a revolution or coup in thirty-five years. But here’s the thing: While the probability that a single political risk will affect Company A’s business in a particular city tomorrow may be low, the overall probability that some political risk will significantly affect Company A’s business in any one of its key locations over some period of time is surprisingly high. Add up a string of rare events and you will find that the overall incidence is not so rare after all.
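The arithmetic behind this compounding is simple. A sketch, assuming independent exposures across locations and years (a simplification, since real political risks are often correlated; the 1 percent figure below is illustrative, not drawn from data):

```python
def cumulative_risk(p_single: float, exposures: int) -> float:
    """Chance that a 'rare' event strikes at least once across many
    independent exposures (locations, years, suppliers, ...)."""
    return 1 - (1 - p_single) ** exposures

# A 1% risk per location per year looks negligible in isolation...
print(f"{cumulative_risk(0.01, 1):.1%}")    # 1.0%
# ...but across 30 locations over a decade (300 exposures),
# some disruption somewhere becomes near-certain.
print(f"{cumulative_risk(0.01, 300):.1%}")  # roughly 95%
```

The same formula explains the GM finding discussed below: each individual disruption on the list was rare, but across hundreds of facilities and suppliers, “something, somewhere” was close to a sure thing.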
Yossi Sheffi, a professor at the Massachusetts Institute of Technology, zeroed in on the importance of cumulative risk in his book The Resilient Enterprise. Sheffi describes how General Motors executives came to a startling realization when they took a closer look at disruptions to the company’s supply chain from both natural disasters and man-made ones. In 2003, GM’s enterprise risk management team compiled a list of rare events that could disrupt the supply chain and then systematically asked managers how many of these events had actually occurred during the previous twelve months. The answer: quite a lot. “We went through the list and checked off, ‘Yeah, we’ve had that one’ and ‘Yeah, we’ve had that one, too,’” said GM’s Debra Elkins, senior research engineer in manufacturing systems research. One GM plant was even hit by a tornado. As Sheffi notes, “While the likelihood for any one event that would have an impact on any one facility or supplier is small, the collective chance that some part of the supply chain will face some type of disruption is high.”34
“While the likelihood for any one event that would have an impact on any one facility or supplier is small, the collective chance that some part of the supply chain will face some type of disruption is high.”
—Yossi Sheffi, The Resilient Enterprise
When it comes to political risks, this is particularly true because of the availability heuristic. News headlines tell us what’s happening now, not patterns over time, so patterns over time can go unrecognized until it’s too late. In chapter 2, we mentioned credit defaults and how Greece suddenly defaulted when Prime Minister Alexis Tsipras came to office in 2015. The default seemed to be a shocking bolt from the blue. It shouldn’t have been. Mike Tomz and Mark L. J. Wright found that in the 1980s, about fifty countries, making up 40 percent of all nations owing money to foreign creditors at the time, failed to pay them in full and on schedule. Looking at defaults from 1820 to 2004, Tomz and Wright found that new defaults arose every decade. Since the end of the Napoleonic Wars, 106 countries have defaulted a total of 250 times. And a handful of countries have been serial defaulters. Ecuador and Honduras have defaulted a total of 120 times since the 1820s.35 Defaults surprise investors more often than they should.
Terrorism is also not nearly as geographically isolated as you might think from following the news. According to the global terrorism database, in 2014 terrorists waged more than sixteen thousand attacks worldwide. The vast majority occurred in Iraq, Syria, Afghanistan, and Israel. That’s not a surprise. But this probably is: Ukraine, Somalia, and India each reported more than eight hundred terrorist attacks that year; the United Kingdom was home to more than one hundred terrorist attacks by various groups; and forty-seven countries (including China, the United States, South Africa, and Germany) experienced ten or more terrorist attacks in 2014. That’s nearly a quarter of all the countries in the world.36
The first step toward good political risk management is being brutally honest about the political risks your business confronts. Understanding risks requires overcoming blind spots. Mistaking easily recalled events for likely ones, believing desirable outcomes are more probable than they are, confusing low probability and zero probability, and overlooking cumulative risks are big ones.
We have just discussed how political risks are hard to understand even when they are measured and depicted clearly, in quantitative terms, like Brexit polls. This is the best-case scenario. Many political risks are hard to measure quantitatively at all. Where financial risk can be more easily modeled and assessed using metrics like GDP per capita, labor supply, demographics, interest rates, and exchange rates, political risk is qualitative. It’s squishy. It requires a sense of corruption levels, regime stability, policy stability, social cleavages, the national mood, cultural norms, geopolitics, domestic politics, and the motives and capabilities of everyone from national leaders to neighborhood associations to nongovernmental organizations and transnational groups. (Sure, there are fragile state indices and other tools that attempt to provide quantitative baselines and trends for some of these key factors. But as we note later, these tools should be used with care, since they tend to record national measures while a great deal of political risk arises at the local level, and they provide snapshots in time that can mask important trends.)
Perhaps the squishiest of these squishy qualitative factors involves political intentions. Intelligence officials have long known that assessing the intentions of others is the toughest kind of information to get right. Sherman Kent, a Yale professor and one of the founding fathers of the Central Intelligence Agency’s analytic branch, famously wrote in 1964 that there are three types of information for intelligence analysis. The first is indisputable facts, information that is knowable and known by the organization. A modern-day example is the number of aircraft carriers China currently operates (the answer is two). The second category consists of information that is knowable but happens to be unknown to the organization. So, for example, the CIA may know that China operates an aircraft carrier called the Liaoning, but no American has ever captained that ship, so the Liaoning’s performance characteristics under various conditions can be estimated but not known with certainty. The third category is information that is not knowable to anyone. This is the realm of intentions and decisions that have not yet been taken. An example here would be how long the Chinese Communist Party will remain in power.37
This third category, the unknowable realm of intentions and future decisions, is where the rubber meets the road for businesses managing political risk. The cruise lines we mentioned earlier in the chapter had to consider whether drug violence in Mexico would rise or decline, and whether it would affect passengers onshore. For Universal Studios, the big question was whether Chinese partners had the will and capability to comply fully with American antibribery laws on their own. More generally, political risk considerations for companies often hinge on assessing intentions: Will Burma’s political liberalization continue? Will Iran cheat on the nuclear deal, triggering snapback multilateral sanctions? Will Colombia’s historic peace deal with the Revolutionary Armed Forces of Colombia (FARC) hold, sustaining an end to half a century of violence there? These are questions that the principal political actors themselves are probably not able to answer. And even if they could, they may very well be wrong. People often assess their own intentions incorrectly. They call off weddings, cancel vacations, switch jobs, vote for different presidential candidates than they had originally planned to—because their views and interests change, their options shift, and events intervene. Assessing others’ intentions is even more difficult than assessing your own. And remember that in international politics, leaders have an interest in deceiving others about what their true intentions are.38
Political risk is also hard to measure because it often entails anticipating events that may have a low probability of occurring but that would involve major consequences for the business if they ever did occur.
Risk always has two components: the likelihood that an event will transpire and the expected impact if it does.
Risk has two components: likelihood and impact.
Risk assessments that focus on one without the other aren’t worth much. Cyber threats, for example, are everywhere. Companies know that if they have not been breached already, they will be. It is only a matter of time. Yet few companies have a good idea of what the impact of a breach could be. And as we noted earlier, even fewer have stress tested their systems and policies to illuminate vulnerabilities and develop robust defense and response capabilities. The probability of cyber threats is known. The impact is not. Conversely, Brexit was clearly a high-impact event. The probability of its occurrence proved harder for analysts to gauge.
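One common way to combine the two components is expected loss: likelihood multiplied by impact. A sketch with illustrative, made-up numbers, showing why a low-probability/high-impact risk can still dominate a high-probability/low-impact one:

```python
def expected_loss(likelihood: float, impact: float) -> float:
    """A simple way to combine the two components of risk:
    probability of the event times the loss if it occurs."""
    return likelihood * impact

# Hypothetical figures for illustration only.
frequent_small = expected_loss(0.30, 1_000_000)    # e.g., a minor disruption
rare_large = expected_loss(0.01, 500_000_000)      # e.g., a catastrophic breach

# The rare event carries more than ten times the expected loss,
# even though it is thirty times less likely to occur.
print(frequent_small, rare_large)
```

Expected loss is only a starting point, of course: as the next paragraph argues, the low-probability/high-impact tail is precisely where estimates of both components are least reliable.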
Low-probability/high-impact events are especially tricky. It is always harder to anticipate unusual events than typical ones. Weather forecasting, diagnosing medical diseases, analyzing intelligence, assessing political risks to businesses—these endeavors are particularly prone to outlier mistakes because they are geared toward tracking the most likely outcomes, not the most consequential ones. Judgment hinges, either explicitly or implicitly, on historical data. Professionals assess a specific case by examining accumulated evidence of what happened in similar instances in the past. The average temperature of a given city comes from tracking daily temperatures over many years. Medical diagnoses require linking an individual patient’s symptoms with the most common illnesses that tend to produce them. In intelligence, analysts typically judge an adversary’s future behavior based on its past behavior. Businesses assess whether the regulatory environment tomorrow will resemble the regulatory environment today. The process ensures that evidence, not wild guessing, informs judgment. But it also leads the analyst away from outliers. A weather forecaster will predict typical temperatures with ease but is likely to miss an unseasonably cold spell. Even the best doctors often miss rare diseases. “Investors should be skeptical of history-based models,” Warren Buffett wrote to his Berkshire Hathaway shareholders after the 2008 financial crisis. After the company posted its worst performance in four decades, Buffett offered unsparing criticism, reflection, and some golden advice: “Beware of geeks bearing formulas.”39
“Beware of geeks bearing formulas.”
—Warren Buffett, CEO of Berkshire Hathaway
Predicting sunshine in Los Angeles is no great achievement. Predicting L.A.’s once-a-decade snowfall, however, would be impressive. The more frequently something occurs, the more likely you are to predict it accurately in the future. It’s the outliers that get you into trouble. The same is true for medicine. Because most patients with a given set of symptoms suffer from the same illness most of the time, good doctors are trained to look for statistically likely diagnoses. Rare cases are the most difficult and often the most dangerous. Statistical realities naturally steer the estimator toward the most frequent occurrences. Outliers, by definition, lie at the tail end of a distribution curve.
In politics, outliers are especially problematic because often there just isn’t the information to judge what a “typical” or likely outcome might be. Doctors can diagnose the flu easily because there is a rich store of historical data that show if a patient has certain symptoms, she probably has the flu. But imagine that instead of diagnosing the flu, you’re running a company that is investing in Iran and you want to know what the likelihood is that Iran will cheat on the 2016 nuclear deal, subjecting your investment to renewed sanctions. What can historical data tell us about the likely behavior of nuclear aspirants under these circumstances? Almost nothing. Only nine countries in the world have nuclear weapons.40 Five got the bomb so long ago that nobody had yet landed on the moon. North Korea is the most recent nuclear power, but the Hermit Kingdom is not a generalizable model for anything. The only country that ever developed a nuclear arsenal and then voluntarily dismantled it is South Africa41—in large part because apartheid was crumbling and the outgoing white regime feared putting the bomb in the hands of a black government. A few other countries explored elements of a nuclear program but never developed a weapon, for a host of reasons that included American security guarantees, strong U.S. pressure, and domestic regime change.42 Weather forecasting or medical diagnosis this isn’t. Major political events occur so infrequently that the evidence base for predicting the future based on the past is thin.
These kinds of events are known as “black swans.” Nassim Taleb popularized the term in his 2007 book of the same name. Black swans are consequential events for which the underlying probability distribution is simply not known, or at least not known with any degree of certainty or reliability. In Taleb’s words, “Nothing in the past can convincingly point to [a black swan’s] possibility.”43 Most of us just think of black swans as major events we never saw coming, like earthquakes. Calling something a black swan has become shorthand for saying, “It’s totally unpredictable. We can’t do anything about it.”
This conventional wisdom about black swans is important to bear in mind, but it is equally important to put into perspective. Political risks often do not have the historical data required for good probability assessments. But many do have three things going for them that make them easier to anticipate than earthquakes or other “throw up your hands, it’s a black swan, nothing can be done” pure bolts from the blue: (1) Political events are man-made, not acts of God; (2) political events require people acting in some sort of concert, and this can leave telltale signs for those who are paying attention before major events arise; and (3) with political events, anticipating directionality is often enough.44 Companies, for example, do not have to pinpoint Vladimir Putin’s departure date to know that Russian authoritarianism is likely to continue for the next several years. Similarly, CEOs do not need to be able to predict the exact time, place, manner, and perpetrator behind the next cyber attack to realize that cyber threats are growing more prevalent and serious, and that they require serious C-suite attention.
So while some black swans are political risk events, let’s not get too carried away. Squishiness, intentions, and black swans all make political risk hard to measure but not impossible to handle.
It is one thing to assess political risk at the time a company is making its initial decision to move into a foreign market. It’s quite another to update that assessment so that management stays ahead of the curve. Ian Bremmer, president and founder of the political risk consultancy Eurasia Group, found that while 69 percent of firms analyzed political risks for a new investment, only 27 percent monitored political risk once the investment had been made.45 A business analyst from a major private equity fund told us the same thing. Examining investments over a period of several years, he was stunned when he could not find a single instance where the firm had updated its political risk analysis after making an initial investment. Companies often fail to ask, “What’s changed?” until it’s too late. “People assume things will continue this way forever, but frequently the consensus is wrong,” notes J. Tomilson Hill, president and CEO of Blackstone Alternative Asset Management, one of the most successful hedge funds in the world. Blackstone ensures that its political risk analysis is updated by including views that challenge the status quo. “We always include a contrarian view in our scenarios, looking at what can go wrong,” Hill told us.
While most companies do not update enough, there is also the risk of updating too much, which can desensitize leaders. Dubbed the “cry wolf” syndrome by intelligence scholars and the “normalization of deviance” by sociologists, the basic idea is that humans frequently take false comfort in false alarms.46 The more often prior warnings turn out to be nothing, the more current warnings are dismissed.
For several months preceding Japan’s December 7, 1941, surprise attack on Pearl Harbor, American military officials, including those at the Hawaii base, were warned that Japan might launch a surprise attack. But the more warnings they received, the less they paid attention. The Army commander in Hawaii received word on November 27, 1941, that Japanese officials in Honolulu were burning their secret codes. He had received many similar reports over the year, and this one did not seem especially serious. Admiral Kimmel and his staff were so tired of checking out false reports of Japanese submarines near Pearl Harbor that Admiral Stark stopped sending new reports to them.47
The cry wolf syndrome explains why NASA engineers disregarded warning signs that the space shuttle’s O-rings could fail in cold weather, a design weakness that ultimately caused the Challenger disaster in 1986.48 And it explains why, seventeen years later, NASA again assumed away indicators of looming disaster. NASA officials concluded that foam debris shedding from the external fuel tank during launch probably would not be a problem for STS-107, since shedding during liftoff happened so frequently. It wasn’t supposed to happen at all. This time, a piece of debris damaged the heat shielding on the leading edge of the shuttle’s wing, causing Columbia to break apart on reentry, killing all seven crew members aboard.49
The cry wolf syndrome is common. Ever hear a funny noise in your car? The first time, it seems alarming. After living with it for a few days, however, you think it must not be so serious after all. You tell yourself the car seems to be running just fine. You grow accustomed to the noise. After a while you don’t notice it anymore. And maybe the car really is fine. Or maybe the funny noise is a sign that the car is about to experience a major malfunction. Which is exactly what happened to Amy when she ignored a strange sound in her car for several weeks until it broke down on the 405 freeway in Los Angeles, at night, “without warning.”
Like military leaders, NASA engineers, and everyday drivers, CEOs have to work hard to address the cry wolf syndrome. Risk updates have to strike the balance between too little warning and too much.
Even if political risk management is rewarded, even if it is well understood, and even if it is measured and updated well, conveying risk to others is still fraught with challenges. Political risk is hard to communicate.
We use the following mini-exercise in class every year to drive this point home: Imagine we offered you a pill that would enable you to look your very best for the rest of your life. Picture your ideal weight, your favorite age, your best haircut. If you took our pill once, you could keep that look for the rest of your life. The pill is guaranteed to be 99.9 percent safe, with no side effects. How many of you would take it?
In class, every hand usually shoots up except for one or two perennial skeptics.
Now imagine that we told you the pill has a 1-in-1,000 chance of causing instant death. If you take it, 1 in 1,000 of you will drop dead, right here, right now. The other 999 will look your best for as long as you live. How many of you would agree to take the pill now?
Only a few hands go up.
Statistically speaking, 99.9 percent safe and 1-in-1,000 risk of death are exactly the same. But 99.9 percent safe sure sounds a lot better than having a 1-in-1,000 chance of instant death.
The beauty pill exercise underscores just how important risk communication is, even among Stanford MBAs with exceptional math skills. The same person will make a very different call depending on how a risk is presented.
Now imagine communicating risk between two people who come at political risk from different jobs, vantage points, risk appetites, cultures, or expectations about the future.50 In the 1990s, an American admiral and a Chinese admiral were discussing China’s aspiration to acquire an aircraft carrier. The Chinese admiral said he thought China would get a carrier “in the near future.” The American admiral then asked, “When exactly?” The Chinese admiral replied, “Sometime before 2050.” Near future for the American admiral was not anything close to near future for the Chinese admiral.
Condi remembers a moment when the same information was viewed quite differently by two American intelligence agencies, with potentially grave consequences for international security. It was December 2001, just a few months after the September 11 attacks. Condi, who was national security adviser at the time, and the rest of President Bush’s foreign policy team were facing another international crisis, this time unfolding on the Indian subcontinent. On December 13, five men carrying AK-47s and grenades attacked the Indian Parliament House in New Delhi, killing nine people. The Indian government suspected that the attack came from Lashkar-e-Taiba, one of the largest terrorist organizations operating in South Asia and believed to receive support from the ISI, Pakistan’s main intelligence agency. Under enormous pressure from the United States and Britain, Pakistani president Pervez Musharraf condemned the attacks and sent a letter of condolence to the Indian government. But Musharraf also warned India not to take any escalatory actions or else it would face “very serious repercussions.” The warning did not sit well with New Delhi, and within a few days military preparations were under way in the region. The mobilization would eventually put nearly a million troops face-to-face across the border between two nations with deep-seated animosity, a war-torn history, and nuclear arsenals held in a near-constant state of alert.
Condi recalls that the National Security Council (NSC) meeting held in the Situation Room in the wake of the attack felt extremely tense, perhaps more so than on any other day since the 9/11 terrorist attacks in New York, Washington, and Pennsylvania. Pakistan and India, both nuclear powers, appeared on the brink of war.51 The NSC called on the Pentagon and the Central Intelligence Agency to assess the likelihood of war. Looking at the exact same events unfolding on the ground, the two agencies offered different answers. The Defense Department—which was relying largely on reporting and analysis from the Defense Intelligence Agency—saw the military mobilization at the border as what any country, including the United States, would do under the same circumstances: Pentagon intelligence analysts judged the buildup routine and not necessarily an indication of anything more serious.52
The CIA, on the other hand, believed that armed conflict was unavoidable. It assessed that India had already decided to “punish” Pakistan, and that Islamabad probably felt the same way. The CIA had become reliant on Pakistani sources in its efforts to fight the Taliban and al-Qaeda in neighboring Afghanistan, and this deeper understanding of the Pakistani mind-set may have informed the CIA’s assessment of Pakistan’s unfolding conflict with India.53
Looking back on the situation now, Condi recalls that the president and other NSC principals were frustrated by the wide gap between the two agencies’ assessments. It was clear that where you stood depended on where you sat. Although both agencies were looking at the same events, the Defense Department approached the conflict through a military lens, while the CIA was informed by the relationships it had fostered with Pakistani intelligence sources in the few months since 9/11. The gap in the assessments showed that even when two groups look at identical events, the meaning of those events is filtered through organizational lenses. While the CIA and the Pentagon were using some of their own sources of intelligence, senior leaders were attending the same meetings and seeing all available intelligence—yet coming to different conclusions. The same information at the same moment can mean different things to different people, even when the stakes are high and everyone shares a fervent desire to “get it right.” Communicating risk is hard.54