[M]y way is to divide half a Sheet of Paper by Line into two Columns; writing over the one Pro, and the other Con. Then, during three or four Days Consideration, I put down under the different Heads short Hints of the different Motives, that at different Times occur to me, for or against the Measure.
Benjamin Franklin
In this chapter, we can’t put off the inevitable any longer. We’re going to get to the point about how to make decisions in our uncertain world. Now, let’s talk straight. We’re not about to change our spots and offer you a fail-safe method with guaranteed success. That would be dishonest as well as impossible. After all, we have been telling you throughout this book that luck nearly always plays a role in determining what happens. What we can promise, however, are recommendations that will increase the “batting average” of your decisions. Some will be good, some will be bad, but – if you heed our advice – more of them will be better than before. Reading this chapter will be like reading a book on how to improve your skills in, say, golf or tennis. By helping you face your dilemmas consistently and methodically, as when putting in golf or serving in tennis, we’ll get you in good shape. But, be warned, the actual execution and follow-through is up to you.
No one could possibly make prescriptions for all decisions. What we will do is provide a roadmap where we distinguish between two types of decision and four ways of making them. The two types are repetitive decisions, those that form a series of very similar judgments, and unique or one-off decisions. In fact, you’ve already come across two of the four ways of making decisions in the preceding chapter: blinking – or gut-level responses; and thinking – or making decisions through a deliberate process, much of which can be articulated. The third, which we call sminking, is based on using Simple Models or decision rules. (Our apologies for creating a new word, but as you will soon see, it helps to keep the concept clear.) Finally, the fourth method involves using the opinions of others – preferably “true” experts – to make the decision for you.
It’s first important to distinguish between decisions that are unique, on the one hand, and repetitive, on the other hand. Let’s consider an example. Top Global Management Consulting runs a highly successful recruitment campaign for fresh graduates each year. Many more young high-flyers apply to work for Top Global each year than their program can accept. How, then, should Top Global make decisions about who to employ?
Across all their candidates, Top Global faces a repetitive decision making situation. Indeed, the structure of this problem is the same as in many other contexts – doctors admitting patients to hospitals, firms granting credit to customers, supermarket managers ordering stocks, universities accepting students, and so on. The same decision is taken over and over again in a relatively unchanging environment.
Now let’s look at Top Global’s recruitment decision from a different viewpoint. Alex, a twenty-one-year-old student, soon to graduate from Prestigious University, has just been offered a job with Top Global. But there’s also an offer on the table from Pain & Co., while McFlimsey made very positive noises at the final interview last week. To complicate matters further, there’s a good chance of a place on a highly respected graduate program in journalism at an Ivy League university, while Alex’s gorgeous girlfriend is pressing to go traveling round Europe and Asia for six months. To many people Alex seems annoyingly perfect, but for once this brilliant student is faced with an impossible task – to make a one-off, life-changing decision. It really doesn’t help that all of the options seem too good to be true.
Before we try to help Alex out by considering the issues involved in unique decisions, let’s deal with the straightforward matter facing the recruitment partner at Top Global. Yes, it really is quite uncomplicated. Repetitive decisions, as we’ll see, lend themselves to what we call “sminking”: the use of simple models or decision rules. We’ll explain how it works through three different examples.
George and Jill are a nice young couple. He’s thirty and works at the local hospital; she’s twenty-eight and is just finishing her Ph.D. in molecular biology. Are they happily married? Their families think so but then they don’t really see them very often. You could interview them at great length and try to form an impression through a combination of “blinking” and “thinking.” That’s what marriage counselors do – but here’s an alternative method.
Psychologists John Howard and Robyn Dawes trained one partner from each of twenty-seven couples just like George and Jill to monitor their own behavior for thirty-five consecutive days. The monitors counted two types of behavior and also rated the couples on a seven-point scale of marital happiness. Howard and Dawes used a simple decision rule to predict marital happiness: the difference between the frequencies of the two types of behaviors across the thirty-five days.1 The result: the simple decision rule they discovered was a valid albeit imperfect predictor of marital happiness.
The point here is that, in a domain as complex as marital happiness, predictions based on elaborate theories typically miss the mark (remember chapter 9 and the problem of noise) – although, in hindsight, they can make great stories! On the other hand, simple decision rules can have better – even though limited – predictive validity. As Robyn Dawes and another of his research partners, Bernard Corrigan, put it: “The whole trick is to know what variables to look at and then to know how to add.”2 You do need judgment for the first task, but you can delegate the second to a calculator.
Can you guess what those two variables in the case of marital happiness were? The simple decision rule was the number of times the couple made love less the number of times they argued. Now you know what to do (?).
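For anyone who likes to see the arithmetic written down, here is a minimal sketch of the rule in code. The thirty-five-day tallies are invented for illustration; only the subtraction itself comes from the Howard and Dawes study.

```python
# A minimal sketch of the Howard-Dawes simple model: times the couple
# made love minus times they argued over the monitoring period.
# The tallies below are made up purely for illustration.

def marital_happiness_score(times_made_love: int, times_argued: int) -> int:
    """Return the simple-model score: lovemaking minus arguments."""
    return times_made_love - times_argued

# Hypothetical 35-day tallies for a couple like George and Jill.
score = marital_happiness_score(times_made_love=12, times_argued=7)
print(score, "-> predicted", "happy" if score > 0 else "unhappy")
```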
Given that knowledge, let’s move on to our next example, which is a multi-million dollar business decision.
The Bell System, the giant US telephone company and former monopoly, had a problem with bad debts back in the 1980s. The obvious solution was to demand a deposit from each new customer, but unfortunately, state laws prohibited this. To ask any individual for a deposit, Bell executives needed to come up with a good reason.
So the company conducted an experiment. They randomly selected over 80,000 new customers and gave them access to phones without demanding a deposit. After a while, they followed up to see who – from this large sample – wasn’t paying their bills. They had only a limited amount of information about each customer, of course, but they were able to identify a few yes-or-no factors that did a good job of distinguishing between those who did and did not pay. Company executives then developed a simple decision rule to predict who in future should have to pay a deposit.
At the time of the study, the company had twelve million new residential customers each year. They estimated that the new rule would result in an annual reduction of $137 million in bad debts.3
Bell’s innovation was a forerunner of today’s credit-scoring practices. These days the variables tend to be weighted, but it’s rare even now to need more than a few of them. They can be specified judgmentally through observation, as in our example of marital happiness, or identified through statistical analysis, as carried out by Bell. In credit decisions, it’s typically factors like how long people have lived in the same home, whether or not they own it, how long they’ve held their current job, and how much the household earns in total. Applicants are awarded points for each variable – for example, the longer you’ve owned your house, the more points you get. The points are then weighted and summed to create a total score, which determines your credit-worthiness.

The lesson is that repetitive decision making is straightforward. Identify the right variables (usually only a few of them) and use a simple decision rule. Or – in one word – smink! It may not be a real word, but it’s a concept that’s of very real value. Sminking involves accepting error to make less error.4 It’s the ultimate example of the paradox of control. You know it’s impossible to predict accurately, so don’t even try. Instead, be content to identify and use only the most important predictors and live with the wrong decisions, comforted by the knowledge that there’ll be a lot more right ones. It’s like playing baseball or cricket. You can’t expect to score every time you swing the bat, but your strategy should be to achieve the highest possible batting average.
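To make the scorecard mechanics concrete, here is a toy version in code. The variables echo the ones just listed, but the point values, weights, and approval cut-off are all invented; a real scorecard would be calibrated on data, as Bell’s was.

```python
# A toy points-based credit scorecard of the kind described above.
# Point values, weights, and the cut-off are invented for illustration;
# real scorecards are estimated from historical data.

def points(applicant: dict) -> dict:
    """Award points for each variable (hypothetical scheme)."""
    return {
        "residence": min(applicant["years_at_address"], 10),        # longer tenure, more points
        "ownership": 10 if applicant["owns_home"] else 0,
        "employment": min(applicant["years_in_job"], 10),
        "income": min(applicant["household_income"] // 10_000, 10),  # one point per $10k, capped
    }

def credit_score(applicant: dict) -> float:
    """Weight each variable's points and sum them into a total score."""
    weights = {"residence": 2.0, "ownership": 1.5, "employment": 1.0, "income": 1.0}
    pts = points(applicant)
    return sum(weights[k] * pts[k] for k in pts)

applicant = {"years_at_address": 6, "owns_home": True, "years_in_job": 3,
             "household_income": 52_000}
score = credit_score(applicant)
print(score, "->", "approve" if score >= 25 else "ask for a deposit")
```

Crude as it looks, that is the whole machinery: a handful of variables, fixed weights, and a single threshold.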
Many people resist these suggestions. There are three main reasons for this. The first is that they don’t fully understand the rationale. The sminking-skeptics argue that humans are capable of considering much more information than a few basic variables and can also cope with unusual cases. And they’re right. The problem is they’re also downplaying the fundamental inconsistency of the way we think and the fact that people’s reasoning can be wrong. Confronted with identical cases, but at different times, the same human being is liable to make different decisions, perhaps because of tiredness or mood swings. There are also cognitive biases to take into account, such as overconfidence or misplaced beliefs. It’s true that simple models can’t handle some important information in a few rare cases, but they’re perfectly consistent and immune to emotional or cognitive biases. In a nutshell, there’s a trade-off: the ability of the human mind to process additional information versus the consistency of decision rules. Time and time again, empirical evidence, as we saw in chapter 9, favors the simple models.5 Sminking outperforms human judgment in the case of repetitive decision making, such as mass recruiting, reducing bad debts, or lending money to lots of people.
The second objection is even more tempting. Despite all the empirical evidence, some people worry that the use of decision rules for recruiting, reducing bad debt, or granting mortgages might be discriminatory in some way. Could it be unfair toward certain minorities? Well, simple models can be wrong, but they tend to be less discriminatory than human beings. Even if there is a tenuous correlation between bad debts and a given group of people, the simple modeling process can ensure that it’s not a factor in granting a loan or giving a job. What’s more, the decision rule can be adjusted to take into account special treatment for selected minorities. Affirmative action may be illegal for Top Global’s London or Paris recruitment team, for instance, but the firm is allowed to build quotas for women and ethnic groups into its US decision-making process.
The third major objection to sminking comes from blinking fans. They say that a simple model takes away power from the people whose intuitions it replaces: bank managers and job interviewers, for example. Again, this may be true. But why should people do routine jobs that a calculator or computer can do just as well? After all, we trust the judgments of computers for all kinds of other important calculations, such as working out our tax bills or the forces on a motorway bridge.
More and more organizations are beginning to follow the sminking principle for their repetitive decisions, but it’s hard to see why it’s not taken up more widely – in medical diagnosis, for example, or in admitting students to universities. It’s that illusion of control again. People want to predict every case for themselves and are unwilling to give up control to a rule, even though it would lead to better decisions on the whole. This is illustrated nicely by our third example, a classic experiment in psychology.
Imagine you have just agreed to participate in an experiment at your local university. You are seated at a desk with two buttons in front of you: A and B. The researcher tells you that on each of many trials your task is simply to press A or B. After you press a button, a light goes on. If the light is green you win a small cash reward. If it’s red, you win nothing. Got it? OK, start.
You begin by experimenting and soon find that buttons A and B can both result in red and green lights. Before long, though, you notice that the green light comes up with A rather more often than with B. So what do you do? As this experiment has been conducted many, many times, we can tell you exactly what most people do. They decide to keep on pushing both buttons, but rely a little more on A than B.
What you – and they – don’t know is that the scheming scientist behind the scenes has programmed the system so that the green light flashes on average 60% of the time for button A, but only 40% of the time with button B. In other words, what most people observe is correct. But their decisions are not. Having noticed that the chances are better with A, they should have pushed it every single time – which would have been sminking at its very simplest. But why don’t they do this?
As further research shows, one of the main reasons is greed. Participants can’t resist trying to out-predict the system. If the financial stakes are increased, participants are more likely to stick to A. Fear of losing the greater sums on offer overcomes greed. It’s the classic illusion of control, exaggerated by emotions.
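If you doubt that always pressing A really beats hedging your bets, a few lines of simulation settle the matter. The 60/40 payoff probabilities come from the experiment described above; everything else is illustrative.

```python
# Simulating the two-button experiment: "sminking" (always press A)
# versus "probability matching" (press A on about 60% of trials, as
# most participants do). The 60/40 green-light probabilities are taken
# from the experiment described above.

import random

def play(choose_button, trials=100_000, p_a=0.6, p_b=0.4):
    """Return the fraction of trials that produce a green light."""
    wins = 0
    for _ in range(trials):
        p_green = p_a if choose_button() == "A" else p_b
        wins += random.random() < p_green
    return wins / trials

always_a = lambda: "A"
matching = lambda: "A" if random.random() < 0.6 else "B"

print("Always press A:      ", round(play(always_a), 2))   # about 0.60
print("Probability matching:", round(play(matching), 2))   # about 0.52 (0.6*0.6 + 0.4*0.4)
```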
We see the same behavior outside the laboratory. When ignorant of the implications, people are reluctant to gain control by ceding control to a simple decision rule. Many recruiters still prefer to rely on gut instinct rather than sminking. While smart recruiters use a combination of data about qualifications, university attended, experience, test scores, and psychological questionnaires – turning to more subjective criteria only after all that – others stubbornly insist that they know best and keep blinking. Meanwhile, when large financial gains are more directly at stake, as in the laboratory, behavior changes. Large banks don’t bother to interview potential credit-card customers. They just ask a few indicative questions and check all available databases for past bad debts. Sminking has saved so many billions of dollars that it just isn’t questioned.
We’re not claiming that simple models are always easy to build. The main challenge lies in selecting the right variables – and here the difficulties are usually emotional rather than cognitive. What’s more, you can only hope to achieve what the level of inherent uncertainty will allow. In the laboratory experiment, the green light was set to flash only 60% of the time you pushed button A – so on average you’re not going to do any better than that. But, when all that’s said and done, you’re going to make better decisions by using a simple model. It’s exactly what Top Global did when they offered Alex a job – although they did give him a final interview just to check he’d fit in.
Alex is still pondering his own big decision. And he’s absolutely right to do so. A unique decision with little time pressure requires a great deal of thinking. But any decision method can be applied effectively or ineffectively. This is particularly the case for thinking, because it’s open to so many different possible influences. How can Alex think effectively, then? Should he follow his father’s sensible advice and accept the offer from Top Global? Should he follow his heart to Europe and Asia – and hang the career consequences? Or should he take the financially risky option of journalism, his secret ambition ever since he can remember? Alex is finding out that unique decisions are hard – both cognitively and emotionally. There are no simple models to smink with. What we can do, however, is help Alex to think through his problem. To do so, we suggest that he systematically consider three basic questions. First, what’s at stake? Second, what are the uncertainties (time to deploy the triple A approach)? Third, what is his personal attitude toward risk?
Taking each question one by one, what exactly is at stake? Here the focus is on the different alternatives and their consequences. Alex may think he’s already got too many options, but it’s time to think out of that clichéd box and see if there are any others he’s missed. Could he, for example, get some experience in business and save up for journalism school later? Does one of the three consultancies he’s considering publish a journal that he could write for? Finally, if his relationship is really so great, surely it can survive a few months of separation?
Smart Alex knows that he can’t expect a flash of inspiration, a eureka moment, or a genie. Innovative thinking requires great concentration. In addition, he recognizes the need to discuss the situation with trusted friends who – unlike his father and girlfriend – can be entirely objective. What can they suggest? Do they know other people who have direct experience of the various alternatives? Can they put him in touch with them? Alex realizes that the more alternatives he generates, the more likely he is to find good ones. He also knows that he shouldn’t discard some options too quickly because, with a little elaboration, they might become attractive. Finally, he recognizes that he has to figure out how each alternative will help him reach his overall life goals. It’s all too easy to accept the offer from Top Global just because everyone else graduating from his program is boasting about their starting salaries. It may be a job to die for, but he’s got his whole life ahead of him.
Having generated further alternatives, Alex now has to consider the uncertainties. More precisely, he has to accept, assess, and augment them, as we described in chapter 10. The key here is to inject a healthy dose of realism into proceedings. At once, this allows him to accept that there’s uncertainty involved in each of the alternatives. To will it away – or hope for the best – is to fall prey to the illusion of control.
Now for the assessment bit. For example, one of the advantages he sees in doing the journalism program is to get a good job with a top magazine or newspaper when he graduates from it. But recruitment in the media is much more ad hoc than in big firms like Top Global. What if they aren’t hiring when the time comes? Or just as bad, what if he can only get an unpaid internship – not an uncommon first step in the competitive world of journalism? By this time next year, Alex will be heavily in debt. He won’t be able to afford an unpaid job and his chances of a well-paid consultancy post will have evaporated. Alex continues in this vein, through each of the options, finding more and more uncertainty wherever he goes. But, luckily for him, he also finds that the father of one of his fellow students at Prestigious University is a journalist with a major international business magazine, and another’s mother is a partner with Top Global. He makes appointments to talk to them on the phone. Clearly, this involves a lot of work, but it is necessary if Alex wants to cover all possible angles and avoid future surprises.
In the meantime, as the uncertainty increases, so does Alex’s anxiety. That’s when he needs to be at his most careful. When uncertainty becomes too threatening, there’s often an unwillingness to consider its consequences. Instead, people adopt optimistic attitudes that make light of potential downsides. For this reason, we recommend that Alex adopt what is called an “outside view”. That is, instead of working on the decision for himself, he should imagine that he is a consultant with Top Global, hired to analyze the problem and make recommendations. How would the consultant go about assessing uncertainty? He figures that they’d start with some benchmarks. As one of the best students in his undergraduate class at Prestigious, he stands a good chance of doing well as a management consultant. As a highly numerate economics major and active member of many student societies with good interpersonal skills, he has the classic profile. But journalism is a harder case to call. Success is much more about flair, networking, and pushiness, not to mention plain luck. As for love, is it even appropriate to seek benchmarks at the age of twenty-one?
Next, as specified in chapter 10, Alex must augment the uncertainty he’s just assessed to make sure that it is as reasonable and pragmatic as possible. Moreover, the longer into the future he looks, the larger the upward adjustment for uncertainty should be. Will the demand for consultants and journalists, for example, be higher or lower in the longer-term future? Furthermore, Alex should think about the possibility of personal “coconuts.” What if he gets sick or has an accident, and can’t finish the journalism program, but still has to pay the tuition while not having any income from a job? An alternative would be to buy insurance to cover such eventualities. But how much would that cost on an already tight budget?
Augmenting is critical to assessing uncertainty in a more realistic manner. Many businesses suffer the negative consequences of what is sometimes called the “optimism bias” in planning activities. IT projects are the classic example. They typically take much longer to develop than engineers’ first estimates. Underestimates also plague the construction industry, not to mention intellectual projects such as writing books or, in the political domain, attempts to bring about social change. Interestingly, the UK government requires planners to augment their estimates for budgets associated with large transport projects by a factor known as the “optimism bias uplift.”6 And they still regularly go over budget, over deadline, or both.
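To see how such an uplift works in practice, here is a one-line adjustment in code. Both the base estimate and the 40 per cent uplift are invented for illustration; the actual factors vary by project type and planning stage.

```python
# Augmenting a first estimate with a hypothetical "optimism bias uplift."
# The base figure and the 40% uplift are made up for illustration.

def augmented_estimate(base_estimate: float, uplift: float) -> float:
    """Scale an initial estimate upward to allow for optimism bias."""
    return base_estimate * (1 + uplift)

print(augmented_estimate(100_000_000, 0.40))  # 140000000.0
```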
Finally, having adjusted for his own optimism bias, Alex has to set his personal risk levels. Whereas different people could agree as to whether, given his goals, he’s structured the alternatives well and assessed the uncertainties realistically, only Alex can decide how much risk he should face. The best advice we can give is to be active in setting his risk level. There’s a plethora of scientific literature showing that people’s decisions in the face of risk can be influenced by either greed or fear, depending on just how the situation is presented to them. Patients, for example, are more likely to opt for a medical procedure if their doctors present the outcomes in terms of the survival rate rather than the proportion of people who don’t make it!7 So it’s important for Alex to see the glass as both half empty and half full and to consider the consequences of both.
We also suggest that Alex consider his possible decisions from different emotional perspectives. This can help diminish the negative effects of hope (mentioned above). One view should emphasize what he can gain from taking the actions and will appeal to greed. The other should emphasize what he could lose, and will appeal to fear. By considering both perspectives, Alex will better understand how he feels about his decisions, the risk involved, and where the balance lies between fear and greed.
In the end, there’s clearly no right answer and Alex will never know whether his decision was the best one. But – by thinking through the procedure we’ve outlined in the past few pages – Alex stands a better chance of determining what is right for him than by “blinking,” “sminking,” or simply tossing a coin.
However, Alex is tempted by one further course of action that we haven’t really explored yet. He’s been using the university’s career services office ever since he arrived at Prestigious, and is extremely impressed by the expertise of its staff. He sets off for one last visit, in the hope that someone there can tell him exactly what to do. Alex reasons to himself that Top Global has built an international business on making companies’ decisions for them, so it’s not unreasonable for him to delegate his decision to the equivalent expert for his own situation – a careers advisor.
Alex isn’t unusual in taking this step. Because decision making can involve a lot of work and is so fraught with the uncertainties that this book is all about, many people prefer to outsource their decisions to “experts.” It’s a strategy that’s very popular and, as we saw in the first part of this book, common practice in health, investment, and business decisions as well as many other domains. After all, if the expert turns out to be wrong, it’s not your fault – you’ve controlled the situation as much as you could.
We take issue with this viewpoint. It’s that dastardly illusion of control all over again. The ultimate decision is still your responsibility. And the expert can’t get rid of all uncertainty. Nobody can. On the other hand, if you ask the expert the right questions, you can make a much more informed decision. In the next few pages, therefore, we consider what we should be asking experts if we want to improve our decisions.
The most important point about using experts is something we call the Harry Potter Rule in honor of J. K. Rowling’s fictional boy magician. And fiction is the point. The rule is: there is no magic in our world – we are all “muggles.” You should therefore be highly suspicious of advice that seems magical. For example, there are several “One Minute” books and some have sold millions of copies. They provide advice on a range of issues: how to become an effective manager, a millionaire, a great mother or father, and so on. The titles of a few are: The One Minute Manager, The One Minute Millionaire, The One Minute Father and The One Minute Mother, even The One Minute Entrepreneur.8 But no. You cannot achieve any of these goals by reading a book – let alone in a minute!
The same applies to many other forms of advice, as we saw in chapter 6, including books on how to achieve success in life or business. An excerpt from one of these books, The Science of Success: How to Attract Prosperity and Create Harmonic Wealth® Through Proven Principles, is typical of the kinds of promises made:
The Science of Success makes universal principles of success available and practical. Anyone on Earth can apply this science, and it will make them successful every time. That’s because the Science of Success works with universal laws, laws as fundamental and unbending as the law of gravity. If you follow these laws, I guarantee that you will succeed – every time, and in whatever endeavor you undertake – just as surely as a pencil will fall down instead of up when you drop it.9
This statement clearly appeals to and feeds on the unsuspecting person’s (or should we say “victim’s”?) illusion of control. But remember, we don’t live in one of J. K. Rowling’s novels.
Strangely, while there are many books about fantasy and success, there are few books about failure and how to avoid it. Those that do make it into publication rarely become bestsellers. Yet it’s obvious that dealing with failure and avoiding common mistakes is both very important and far more commonly needed.10 A recent book by R. J. Herbold goes so far as to say that “success is a huge business vulnerability,” preventing people from seeing the need for change and diminishing their motivation.11 Easy success – particularly if we don’t understand what lies behind it – may not be so advantageous if we really want to do well in the longer term.
In short, we need to question the role of experts and their advice, whether provided in books, or in person. At the risk of sounding repetitive, we emphasize that there are limits to what experts can predict, and no one, including the best and most expensive expert, can reduce future uncertainty. Yet this is exactly what people expect from experts. They want experts to predict the future and absorb their uncertainty – an impossible dream and a classic case of the illusion of control.
On the other hand, it is important to realize that true experts, such as doctors, lawyers, psychologists, and accountants, possess state-of-the-art knowledge in their fields. We can and should use them to access this knowledge and inform our judgments. But that’s not the same as outsourcing our decisions. Instead, we recommend using expert advice as just one of many inputs in our own decision making. The process that we described for consulting doctors in chapter 3 can be generalized to other fields. This means:
• Finding as much information as possible in the area of expertise, including the extent to which opinions are divided – and, ideally, some hard, preferably empirical data about the past in order to estimate future uncertainty.
• Asking the expert what type of advice they would give if, instead of one of us, it was their father, mother, child, or spouse who was involved.
• Seeking help to identify any available options that you might not have thought of, then getting ideas about how to evaluate different options and the costs and benefits of each.
• Requesting objective advice about the urgency of the situation – or the option of further consideration if there is no time pressure.
• Getting suggestions about other experts and sources (including websites) for obtaining additional, independent advice or information.
• Posing direct questions about possible conflicts of interest that the expert may have in providing information or advice.
We don’t often have the time or the resources to consult experts at this level of detail. However, the internet is changing our ability to canvass expert opinion. It allows us to answer questions we would like to have asked experts, quite often free of charge. But in no case should we expect experts or the internet to make infallible forecasts, eliminate future uncertainty, or decide for us. Trusting experts blindly is another way of becoming a victim of the illusion of control – enough said!
That’s exactly what the Prestigious University careers advisor told Alex – if in slightly different words. He gave him lots of useful information about the three strategy consulting firms, about careers in journalism, and about the potential drawbacks of taking six months off to travel. But, as an expert with a reputation for being very good indeed at his job, he sent Alex on his way to make the final decision for himself.
In this and the previous chapter we’ve shown that a decision can be taken in one of four ways – or a combination of them. The important thing to remember is that each has its pros and cons.
Starting with blinking, it’s the only way of making decisions in practically all tasks that involve muscular reactions, such as returning a serve in tennis or stopping a car in an emergency. When there’s only a split second to act, there’s no time for conscious effort.
But apart from muscular reactions, blinking doesn’t always work. Worse, we typically have little insight into when it’s been effective or not. There are good reasons for this. Yes, some of the experts described by Malcolm Gladwell in Blink claimed that the Getty Kouros was a fake the moment they saw it.12 But there is still no agreement that their intuition was correct. In talking about the still-unresolved question of the Kouros’ authenticity, John Walsh, director of the J. Paul Getty Museum, concluded: “After years of intensive research, we recognize that the puzzle may not be solved in our time.” In fact, the same opinion was shared by the majority of the nineteen eminent art historians summoned to Athens to decide on the authenticity of the Getty Kouros. After considerable debate, five of the nineteen concluded that the Kouros was fake, three that it was genuine, while the remaining eleven agreed with Walsh and stated that they could not express a definite opinion.13 Thus, even in Gladwell’s celebrated illustration of blinking, there is no agreement that it works.
But even if everyone agreed that the Getty Kouros was a fake, this would not prove that blinking always works. Experts, like most people, typically don’t advertise their mistakes, so we have limited chances of uncovering cases where experts blinked wrongly. Conversely, how many fakes exist, including those exhibited in the world’s great museums, that experts have so far failed to recognize? An especially infamous case is that of the Dutch painter Han van Meegeren who, before and during the Second World War, produced several paintings that were widely acclaimed as Vermeer masterpieces and exhibited in great European galleries for many years. They were eventually revealed as forgeries by the painter himself.
Outside the realm of muscular reactions, one of the few areas where blinking produces consistently outstanding results is grand-master-level chess. But, as we saw in the last chapter, there are two important factors at work. The first is that grandmasters are only able to blink after ten years or more of extensive practice with consistent, accurate feedback. The second is that the grandmasters employ a combination of blinking and thinking – in fact they think to verify each and every blink. In 75% of cases, systematic analysis confirms that the initial intuition (blink) was right. But in a significant 25% of moves, the hunch is corrected on further reflection. In other words, thinking is critical to a grandmaster’s success and a prerequisite to successful blinking.
If time allows, our advice is to follow the practice of grandmasters every time you’re tempted to make a blinking decision – and particularly when the stakes are high. This is the only way of avoiding possible mistakes. It may even be wise to involve a third party in the post-blink thinking in order to gain a little objectivity and to overcome the influence of emotions. This is how Alex was very quickly able to overcome his initial instinct to rush out and buy a round-the-world ticket to go traveling with the love of his life.
Now for sminking, or simple modeling in the form of a decision rule. When it comes to repetitive decisions, we have little hesitation in advocating this course of action. All the available evidence points to the same conclusion, namely, sminking offers considerable improvements over intricate thinking – and with substantially lower costs. Top Global, the management consultancy that offered Alex his job, only recruits actively at the world’s best universities, for example. They used to spend time and money doing presentations at second-tier institutions, but soon realized that more than 80% of their most successful employees (and an even higher percentage of their executives) came from further up the educational hierarchy. So they simplified matters and saved money at the same time (whilst marketing their special internship programs for women, people from ethnic minorities, and disabled students at all universities).

The only real danger with sminking is that the decision rules involved can become obsolete when the environment changes. So it’s always advisable to put your thinking cap on from time to time in order to recognize such changes and modify the rules appropriately. And it’s worth remembering as well that sminking relies on a considerable amount of thinking to choose the right variables and develop the simple model, or decision rule, in the first place.
Next, consider using experts. There’s an ongoing debate about the role and value of experts, not least because there’s big money in expertise. In a recent book, Philip Tetlock, a professor at Berkeley’s Haas School of Business, explored these issues using information from a mammoth study analyzing more than 82,000 predictions from experts in the field of political science.14 His findings – which echo our own and those of previous research – are quite stark. Simple models turn out to be more accurate than human forecasters. And if you really must rely on human beings, experts are rarely more accurate in predicting than informed individuals. Indeed, Tetlock’s political experts weren’t as good as non-experts at modifying their forecasts in the light of new information, as they felt they knew all the relevant facts. They were also overconfident about the accuracy of their predictions. Having said all that, as we saw earlier, good expert advice can be very valuable in helping people arrive at their own decisions – provided they handle both the experts and the advice carefully.
Last, but certainly not least, we come to thinking. This is the default option, so long as the decision in question is neither repetitive nor muscular. Thinking is also vital in confirming blinking decisions, as in the case of chess grandmasters, and in formulating sminking rules. In this chapter, we illustrated some principles of effective thinking in discussing Alex’s big career decision. We warned that thinking can be derailed by a combination of cognitive and emotional factors. However, we hope we also showed that, by setting up the problem carefully and following clear principles and procedures, it is possible to think your way to good decisions.
And Alex? Well, his conversations with his fellow students’ parents revealed that there was a shortage of journalists with a solid grounding in business. In fact, the magazine editor even went so far as to say that he was fed up with journalism grads with neither experience nor interest in their subject matter. And the Top Global partner revealed that the firm was soon to announce a new division specializing in advice for media companies. This was all just as well, as – in the course of all that reflection – Alex had realized he was extremely risk averse and couldn’t justify a further year of expensive studying. He accepted the job with Top Global and managed to get assigned to the new division, where he made lots of good contacts and met a tall, slim, attractive Associate called Ellen who quickly made him realize that his own girlfriend had not been so “gorgeous” after all.
Over the years, social scientists have typically followed the tradition of the ancient Greek philosophers who placed reason on a pedestal and denigrated the role of emotions in decision making. Yet it’s significant that later philosophers specifically recognized the importance of emotions. In the seventeenth century, Blaise Pascal famously observed that “the heart has reasons that reason does not know” and in the eighteenth century David Hume pointed out that reason was subservient to emotions. More recently, neuroscientists have demonstrated that emotions can often play an important, positive role in decision making.15 How then should we treat our emotions when making decisions?
We believe it is important to recognize that our Bermuda triangle of emotions – greed, fear, and hope – can also serve important, useful functions. In taking risks, for example, fear can protect us from excesses and hope is an important motivating force for many activities. Indeed, our emotions constitute what might be called a “primitive” decision making system. However, except for certain activities, we maintain that these need to be complemented by other considerations. In particular, we need to know when and how much attention to pay to our emotions.
We argue that the wise use of emotions depends heavily on the kind of decision being taken and the method you are using.
First, in the context of repetitive decisions, emotional influences are likely to add noise to the process – that is, sometimes an emotion may suggest one action (a job candidate is selected because you take a liking to what she said in the interview) or the opposite (if she’d caught you in a bad mood, her humorous aside would have meant instant rejection). One of the many advantages of sminking is that it explicitly avoids any distortions due to your mood or emotions. As we’ve said before, repetitive decision making becomes simpler provided you have the courage to smink.
Second, if you are blinking, you may often depend entirely on emotions. For example, if you are suddenly aware that an object is about to land on your head (perhaps a coconut), fear will automatically lead you to take protective action – get out of the way in a hurry. Here, of course, you don’t even think about emotions – the process is automatic. And a good job too.
Third, and more problematic, is how to handle emotions when taking a decision by thinking. There are two main points to consider. The first is that you cannot avoid having emotions about the decision. That’s human nature. However, as discussed earlier in this chapter, it isn’t always appropriate to rely on your first blinks. The second point is that different emotions can be triggered by different stimuli. At one moment, a proposed course of action might seem quite risky, while at other times, it might seem too safe. Our proposal is to recognize both aspects of emotions by, first, thinking of emotional reactions as “data” or information that you can include with other information in analyzing your decision.16 In other words, list and think carefully about both the emotional and other considerations of your decision. Second, make a point of deferring your decision – like Alex in our example above – until you have had time to consider it on more occasions. In this way, you’ll profit from seeing how sensitive your decision is to variations in the strength of your feelings. The strategy of deliberately “sleeping on a decision” before taking action isn’t a cliché for nothing.
Finally, when using experts, you would be less than human if your feelings about them did not affect how you react to their advice. Our recommendation: recognize that this happens and try to put yourself in somebody else’s shoes. How would you feel if the expert had given the same advice to a rival?
The idea we’d most like you to remember is that of simple models or decision rules. Our advice about thinking is sound, but it’s a tad too sensible to be exciting. We confess that we get more of an intellectual kick out of sminking than thinking, because it doesn’t simply acknowledge the illusion of control. It involves going one step further and embracing the paradox of control: by relinquishing control to a simple model, rather than your own thought processes, you actually stand to gain more control over the outcome.
That’s not to say that there are any easy answers to good decision making. No recipes. Nothing’s changed. And sminking only works for genuinely repetitive decisions. But there’s no doubt that it’s under-used. Simple models are common in credit scoring or recruitment, but could be much more widely deployed in medicine, business in general, and not-for-profit organizations. Individuals can also benefit from sminking, though most of our non-work-related decisions tend to be either unique or too unimportant to merit developing simple models.
By separating decisions into repetitive and unique, we’ve done a little simple modeling of our own. We’ve presented two endpoints of what is really a continuum. Many decisions have elements of both the repetitive and the unique. So, even if they’re not the sole basis of a final decision, simple models or decision rules can – like experts – be used as benchmarks to calibrate our judgments or forecasts. But to do so means recording our decisions and their outcomes, which is always going to be problematic. The reason? Fear of being held accountable. The cost of this fear, however, is that it prevents you from obtaining the feedback necessary for learning.
Clearly, not all decisions have repetitive elements, so thinking and blinking will also be required. You’ve seen our recommendations about these two forms of making decisions in the last two chapters, so we won’t dwell on them further. But once again, we emphasize the need to supplement blinking with thinking. And we also insist on repeating that outsourcing decisions to experts should generally be avoided, even if experts’ opinions can help us to make our own decisions.
In evolutionary terms, the development of our higher-level thinking processes is quite recent, and until relatively recently their use was restricted to a tiny minority of people. For most of its existence, the human race has had to spend practically all its time worrying about food and safety. There’s been little or no time left to pursue arts and sciences, to play chess, or to get involved in intellectual endeavors of any kind. There was, of course, the morally dubious exception of Ancient Greece, where slaves did most of the manual work, freeing up a large proportion of the population to pursue intellectual activities. But in the Western world, it was generally only after the Industrial Revolution and the rise of automation that ordinary people had the opportunity to study and to work with their brains instead of their muscles.
So it shouldn’t come as any surprise that we humans are ill-equipped to grasp probabilities, comprehend future uncertainty, or face its implications rationally. This is one reason why our decisions are often inferior to those of simple statistical models. At the same time, the products of our ever-developing brains have, in a short time, created a complex world in which we must make decisions and for which physical evolution hasn’t prepared us.
The good news is that we do have the intellectual ability to appreciate the full extent of future uncertainty, understand the illusion of control, and figure out their implications for decision making. The bad news is that we often fail to do so for psychological reasons and pay a high price accordingly. In our world, governed by both chance and skill, we must first penetrate the illusion of control and, where appropriate, exploit the paradox of control. In other words, by giving up control, we sometimes gain more of it. This long story, cut short to a single sentence, is the story of the last twelve chapters.