In 2007, psychologist Mahzarin Banaji, a friend and colleague of mine at Harvard, published a fascinating foreword to the book Beyond Common Sense: Psychological Science in the Courtroom about “the moral obligation to be intelligent.”1 Banaji’s argument, which started as a talk she gave to entering Yale undergraduates, was that the failure to develop one’s cognitive potential would be bad not only for the students, but for society as well. Banaji was trying to encourage her audience to feel an obligation to make wiser choices.
When we make poor decisions, we increase our likelihood of getting sick, dying younger, accepting the wrong job, losing a job, getting married to the wrong person, and losing money. Bad decisions also limit our charitable effectiveness, harm the planet, hurt other people (including family members, friends, and colleagues, not to mention others who share the planet with us), and limit the effectiveness of the organizations we care about the most.
When we see the word “intelligence,” we tend to think of it as a fairly fixed personal attribute. However, while it’s true that people have different levels of intelligence, we have the power to actively engage the best of our intellectual capabilities to make wise decisions and increase the value we can create in our lives. So, what’s standing in our way? There are a number of barriers we need to identify to access our “active intelligence”—the wisdom we bring to our decisions, rather than a fixed trait that describes who we are. Our goal should be to develop the tendency to engage our more active deliberative thought processes (most commonly, our System 2 processes) for important decisions that have ethical import. We all tend to use cognitive (System 1) shortcuts that prevent us from making better and more ethical decisions. As we’ll see, willpower and knowledge are needed to access the better decision-making processes that exist within us.
OVERCOMING THE BARRIERS TO ACTIVE INTELLIGENCE
The fields of psychology and behavioral economics provide insights into how we can engage our intelligence more fully and improve our ethical behavior. One opportunity involves reducing our biases. After Herbert Simon's Nobel Prize–winning work on bounded rationality, Daniel Kahneman and Amos Tversky pioneered the modern field of behavioral decision research by delineating the systematic and predictable ways in which individuals deviate from rational decision making. Biases that prevent humans from acting as rationally as they would like to act include:2
Don Moore and I provide a comprehensive list of decision biases in our book, Judgment in Managerial Decision Making.5
ETHICAL BIASES
Among the dozens of cognitive biases researchers have identified, some are particularly relevant to ethical decision making. These biases keep us from adhering to our own internal, more reflective moral standards—and most of us are unaware of the degree to which these factors bias our decisions and create harm. These ethical biases emanate from our innumeracy, our desire to receive a warm glow from helping others, a need for connectivity, and a focus on our own perspective.
In his bestselling book Innumeracy, mathematics professor John Allen Paulos described the mathematical equivalent of illiteracy: being ineffective with numbers rather than words. This ineffectiveness could be due to a lack of skill or a lack of motivation to think through quantitative information. Paralleling research on cognitive biases, Paulos argued that innumeracy affects both people with lesser educational opportunities and people who are educated and knowledgeable.
Systematic biases limit our ability to think clearly about quantitative information. For example, researchers asked individuals in three different groups how much they would hypothetically pay to save 2,000 (the number given to those in the first group), 20,000 (the number given to those in the second group), or 200,000 (the number given to those in the third group) migrating birds from drowning in uncovered oil ponds. A rational analysis would lead us to expect that, assuming we are concerned about the pain and suffering of birds, the value of saving these three different quantities of birds would be reflected in vastly different levels of willingness to contribute. That is, we should be willing to pay much more to save 200,000 birds than 2,000. Instead, the average amount promised by each group was $80, $78, and $88, respectively—virtually the same amount.6 This type of innumeracy has been described as scope insensitivity or scope neglect; that is, the scope of the altruistic action had little effect on the magnitude of the contribution made to solve a problem.7 Kahneman and his colleagues argue that participants in this experiment visualized “a single exhausted bird, its feathers soaked in black oil, unable to escape.”8 The emotionality of the image was the dominant motivator to contribute, regardless of whether 2,000, 20,000, or 200,000 birds were at risk. We glaze over the zeroes in the quantity and make decisions in reaction to emotional images.
In a 2007 study, decision scientists Deborah Small, George Loewenstein, and Paul Slovic gave participants five dollars each to complete questionnaires.9 Half were asked to read the following text:
Food shortages in Malawi are affecting more than 3 million children. In Zambia, severe rainfall deficits have resulted in a 42% drop in the maize production from 2000. As a result, an estimated 3 million Zambians face hunger. Four million Angolans—one third of the population—have been forced to flee their homes. More than 11 million people in Ethiopia need immediate food assistance.
The other half saw a picture of a small girl, Rokia, with the message:
Her life would be changed for the better as a result of your financial gift. With your support, and the support of other caring sponsors, Save the Children will work with Rokia’s family and other members of the community to help feed her, provide her with an education, as well as basic medical care and hygiene education.
Participants in both conditions were asked if they would like to donate some or all of their five dollars. In the first group, 23 percent contributed; twice as many—46 percent—contributed in the second group. According to the “identifiable victim effect,” we tend to offer greater aid when presented with a specific, identifiable victim of a problem than when told about a large, vaguely defined group with the same level of need.10
Scope neglect and the identifiable victim effect encourage our intuitive innumeracy and lead to poor decision making. In contrast, most of us would endorse the goal of choosing behaviors—such as contributing money or investing our time—where we can do as much good as possible, rather than simply feeling like we made a difference.
Why do we do nice things for other people, like that identifiable victim, in the first place? Is it to create value for others or to get credit in some strange informal competition? Most people would like to believe the former is true, yet Daniel Kahneman and his colleagues convincingly explain the phenomenon of scope neglect by arguing that we contribute enough money to receive a warm glow from participating in solving a problem rather than thinking about the maximum amount of good we could do.11
To take one example, I personally believe that more good comes from making donations to reduce hunger in emerging economies than from making similar donations to major opera houses. (I realize that the Boston Opera might view me as annoying or lacking cultural sophistication for even expressing this view.) But major cultural venues have a significant advantage over famine relief organizations when it comes to raising funds: They print event programs, and sometimes hang plaques on the wall that list donors by donation level. Similarly, universities benefit from the fact that donors enjoy seeing their names on buildings. People care about the recognition they get from their donations, to the extent that they will give less, or not at all, if they are not recognized for giving.
I would hope that many of us would reconsider our need for recognition, but there is little reason to expect this need to fully disappear. As a result, organizations that do the most good should think about how they might provide recognition to donors.
Philosopher Peter Singer opens some of his lectures by asking those in the audience to imagine that on the way to work, they pass a child drowning in a pond.12 To save the child, they would have to jump in and get their clothes wet and muddy. Do you have an obligation to save the child? he asks. The audience quickly confirms that they do have that obligation. He then points out that there are millions of children living far away from us whose lives could be saved by contributions that we would consider about as costly as the wet and muddy clothes. However, we pass on these opportunities. Why? Because the children are far away, not directly visible to us, and not personally identifiable. Most of us do not feel connected to those suffering in distant places.
Singer’s anecdote helps to clarify why we prefer to give to people in our community rather than to people in distant lands, even when more good could be done with the same contribution: we want to feel a direct connection to the good we will cause. This also helps to explain why we heed the appeals of those who speak to us directly without thinking about whether a more worthy organization farther away could do more with our dollars. Yet, when people are asked to think about how much they value feeling connected to donation recipients, they have trouble justifying this preference and are more prone to contribute where they can do the most good. We intuitively seek connections, while our more active intelligence cares more about the actual impact we can have.
Relatedly, in their research, psychologists Nicholas Epley and Eugene Caruso have documented that we have an amazing capacity to think about the thoughts and the emotions of others, but simply fail to activate this capacity unless those people are right in front of us.13 We can understand the emotional experience of another person directly as we study their face. We are able to think about our partners’ preferences with keen accuracy. We are also capable of imagining what life is like for the poorest people in the world, but we typically fail to activate this imagination.
Cognitive psychologist Boaz Keysar has highlighted this failure to think about others, even when we can, with a concept he calls the “illusory transparency of intent.”14 In the days before we used GPS to get where we needed to go, Keysar described the then-common situation of giving directions to a friend on how to find your home. As you may recall, it wasn’t unusual for that friend to get lost and have to find a pay phone (remember, this was pre–cell phones) to call for clarification. Why did our smart friend get lost when our directions were so clear? The answer is that we forget to share familiar details that we rely on without thinking, such as the fact that the road forks to the left a few blocks from our house. We are similarly unhelpful when we give coworkers instructions on how to carry out a task that we perform by rote. More broadly, when instructing others, we fail to think about the task from their perspective—that is, we falsely assume that our intent and knowledge are transparent. The illusory transparency of intent overlaps with the curse of knowledge, discussed above.
For another example of the tendency to be self-focused, consider the common social task of gift giving. How can you choose the ideal gift for someone from a value-maximizing perspective? The goal would be for the recipient to get more value from the gift than the cost (in time and/or money) you incurred in giving it. To meet this goal, you might think about how your knowledge allows you to identify products and services that the recipient would value, but might not even know exist. Now think about the times you have moved and all the things that you can't believe you own, and certainly have no interest in packing and moving. What do these items have in common? My own experience is that they tend to be "whimsical," such as silly books, goofy artwork, or other gag gifts—fun to give and fun to receive, but of little value beyond the day they were given. The giver thought about the experience of giving you the gift, but not about the actual long-term value you would receive from it. The broader point is that givers can create more value by looking beyond the enjoyment they would receive from the recipient's initial reaction to consider the recipient's long-term experience with the gift.
Providing further evidence of our self-focus, University of California, Berkeley professor Don Moore, author of Perfectly Confident, found in his research that people are biased toward assuming they perform worse than average on objectively difficult tasks (for most people, this would include juggling) and better than average on objectively easy tasks (for most people, this would include driving a car).15 Of course, most people are bad at difficult tasks, and most people are good at easy tasks. But when assessing how we measure up, most of us simply focus on our own skill at a given task rather than on how our performance compares to that of others, even when we have access to that information.
Similarly, abundant research has shown that self-focus leads people to claim more credit for work and other tasks than they deserve. This is true of rich and poor, women and men, and across ethnic groups. In my work with Nick Epley and Eugene Caruso, we asked coauthors to estimate the percentage of the total work they did on a given academic paper. On average, for papers with four authors, they collectively claimed 140 percent of the credit.16 These people weren’t being intentionally selfish. Rather, they focused on the work they did and not on the work of others. In fact, when Nick, Eugene, and I asked authors how much of the work each author did on a paper with four authors, they thought more about the others’ work, and their own self-serving biases were reduced by half.
ENGAGING OUR ACTIVE INTELLIGENCE
Looking back at the four sources of bias described in the previous section (innumeracy, warm glow and recognition, connectivity, and self-focus), you’ll see they all put you at the center: your intuition over the correct numbers, your identification with the victim, your sense of recognition, your connection, and your tendency to focus on yourself. Creating more value in the world requires that we think beyond ourselves. A good starting point is to consider our two primary modes of decision making—System 1 and System 2.
Prescriptive models of decision making encourage us to think rationally, often by prescribing structures to help us. For example, in our book, Judgment in Managerial Decision Making, Don Moore and I outlined the following steps for choosing the right option among multiple choices:17
This list makes sense to most people, yet when you ask them if they regularly follow steps like these, they say, “Of course not.” In fact, if you methodically went through each of these steps for every decision you made at the grocery store, you’d be there for hours. Following these steps makes more sense when we’re facing important decisions, but even here, most of us are far from systematic.
As we discussed in Chapter 1, one way to make better decisions is to move from System 1 thinking to System 2 thinking. But even when facing important decisions, we are likely to rely on System 1 thinking when we're very busy. The frantic pace of professional life suggests that even very important leaders lean on their System 1 processes.18 Moreover, bestselling books, including Blink by Malcolm Gladwell, give people false hope that they can trust their intuitive System 1 thinking.19 In fact, there are plenty of reasons to question our intuition, as even the brightest people make judgmental errors on a regular basis.
Moving from System 1 to System 2 thought can take a variety of forms. It can entail explicitly going through a structured decision-making process like the one detailed above. It can mean critically examining the way your intuition is leaning. It could mean waiting until you are not under time pressure or stress, which is when your intuition is most likely to lead you astray. It might mean asking a smart friend, partner, or colleague to help you analyze the problem or turn the decision over to a group. Or it could involve using a calculator, computer, or algorithm, which will bring more logical analysis to the problem.
Turning to the realm of ethics, Josh Greene uses dual-process research to argue that people have two separate modes of moral reasoning, just as they have two different modes of decision making. We rely on System 1 reasoning, our intuitive or instinctual responses, in most moral contexts. Greene provides ample evidence that System 2, our more deliberative system, will lead us to decisions that create more value. Greene's work provides guidance on how to move toward making more utilitarian, value-creating judgments. For the sake of efficiency, we can continue to use our faster, intuitive systems for most of our everyday decisions. But when we can carve out the time, we can create more value by using our more deliberative systems on our more important decisions.
ACTIVE INTELLIGENCE ASSISTS
Ask yourself how you currently create value for yourself and for others. Are there ways you might create more? Interestingly, most people haven't sufficiently examined, or audited, their current ethical behaviors. Once you have the motivation to engage your System 2 thinking more often in order to be better, you will need some tools. Here are three practical strategies you can use to make more ethical decisions.
Joint, Rather than Separate, Evaluation
We often respond emotionally to moral problems. Unfortunately, our emotion-based decisions tend to be different from those we would make in a more rational state of mind. One reason we give our emotions so much weight in our decisions is that we tend to consider options one at a time. Substantial evidence documents that when we evaluate one option (such as a product, a potential employee, a job offer, or a possible vacation), System 1 has a powerful influence on our decisions. By contrast, comparing multiple options simultaneously invokes System 2 processing. Consequently, our decisions are more cognitive, less biased, and more utilitarian.
Take the task of weighing job offers. My colleagues and I asked graduating MBA students whether they would accept various job offers from a consulting firm when facing a deadline.20 Those in Condition A were told they would receive a moderate salary, the same offered to all graduating MBA students. Those in Condition B were offered a higher salary but learned that some other graduating students were being offered even more. Job A paid less than Job B, but Job B evoked an emotional reaction in students because it raised the moral issue of the firm paying others more than they would be paid. Such social comparisons have a strong impact on our judgments and decisions.
Social comparisons and the emotions they trigger have a far greater effect when we're evaluating a single option than when we're comparing two or more options at the same time. When MBA students were offered either Job A or Job B (but not both), Job A was rated as more attractive, because those considering Job B reacted emotionally to being offered less than others. However, when MBA students were asked to imagine that they received both offers and had to choose between them, they selected Job B over Job A. The cognition required to engage in joint comparison overrode the MBA students' emotional reactions and allowed them to focus on the fact that Job B would pay them more than Job A.
Would you be interested in a tool that would allow you to hire better people and to discriminate less in the process? In a different study, economists Iris Bohnet, Alexandra van Geen, and I identified joint decision making as such a tool.21 We determined that when people are evaluating job candidates one at a time, their System 1 processes tend to dominate. As a result, they tend to rely on gender stereotypes: they lean toward hiring men for mathematical tasks and women for verbal tasks. By comparison, when people are able to compare two or more applicants at a time, they focus more on job-relevant criteria. Their decisions are more ethical toward job candidates, and organizational performance improves.
The Veil of Ignorance
Philosopher John Rawls offered the image of a “veil of ignorance” as a means of thinking through what would be best for society.22 Rawls’s challenge is to imagine that you know nothing about your position in society. In this uninformed state, behind a veil of ignorance, you will be in a better position to decide how society should be structured for the greater good. Rawls intuitively understood that your status, wealth, position, and so on form cognitive barriers to objectively assessing what is just. Under a veil of ignorance, you could do better.
A veil of ignorance that keeps us from knowing our role in many ethical real-life decisions should enable us to make wiser, more moral decisions. Let’s return to the last problem we considered in Chapter 1, in which five people are dying in a hospital and a surgeon has the opportunity to kill a healthy person to save them. Imagine that you knew you were one of the six people described in this problem, but had Rawls’s veil of ignorance and didn’t know which of the six people you were. I predict that you would now be more in favor of saving five people at the expense of one. After all, the death of the healthy person would give you an 83 percent chance of survival instead of a 17 percent chance. This thought process might move your decision in a utilitarian direction, even after you remove yourself from being one of the six key actors in the story. Karen Huang, Josh Greene, and I confirmed this prediction in a series of experimental studies.23
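To make the arithmetic behind those percentages explicit, here is a minimal sketch, assuming only that behind the veil you are equally likely to be any one of the six people in the scenario:

\[
P(\text{you survive} \mid \text{the one is sacrificed to save the five}) = \frac{5}{6} \approx 83\%,
\qquad
P(\text{you survive} \mid \text{no one is sacrificed}) = \frac{1}{6} \approx 17\%.
\]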
Rawls thought about the problem of how to help us ignore who we are. Another strategy that can improve our ethicality and objectivity is to intentionally remain unaware of who other people are, so that we aren't biased by their demographic information. Consider that in the 1960s, fewer than 10 percent of the musicians in major U.S. orchestras were female. This has changed dramatically, thanks in part to a simple change orchestras have made to their audition process: the addition of a screen between the musician and the judges. In the past, judges watched musicians as they auditioned. Now, it's the norm for musicians to perform behind a screen, which forces judges to evaluate what they hear rather than be distracted by what they see—and by their stereotypes of what constitutes a professional musician.24 Similarly, tech firms are increasingly eliminating names and pictures from the first round of job screenings to gain the ethical and objective benefits of blinding evaluators to the identities of the people they are considering.
More practically, I encourage you to try taking your identity out of the decision-making process. For instance, when considering candidates for a new position with your organization, try to ignore your power, your religion, where you went to school, and other traits. Or, in thinking about what a fair tax system would look like, imagine that you were born into your country with a random level of wealth. Without knowing what your wealth would be, what taxation structure would be fair? By adopting a veil of ignorance, we reduce our self-serving biases, and we enhance the morality of our decisions.
Pre-commitment
It is not always possible to adopt a veil of ignorance or to compare multiple people at the same time when making decisions with an ethical component. Another useful strategy may be to pre-commit to your goals before you are in the midst of making a specific decision. Let’s suppose you want to hire someone for a job requiring quantitative skills. Due to the constraints of the situation, you need to search until you find a good candidate and then try to hire them; that is, you need to consider one candidate at a time. How can you make a decision that will not be sexist and that will lead you to hire the best candidate for your organization?
In collaboration with Linda Chang, Mina Cikara, and Iris Bohnet, I have found that decision makers who first think through the criteria they are seeking in a new employee before considering a specific candidate make less sexist decisions and tend to hire a better-quality employee.25 When we think about our hiring criteria in advance, we engage in System 2 thinking about what would constitute a good choice. In contrast, when we consider a specific candidate without such pre-commitment, our System 1 processes are likely to prevail, including many of the biases that reduce the quality and morality of our decisions.
Joint decision making, imposing a veil of ignorance, and pre-commitment all move us from System 1 thinking toward System 2 thinking—and toward better, more moral decisions. Speaking more broadly, we can all more actively engage our intellect and make better, more moral decisions as a result. In the next chapter, we will confront a critical cognitive barrier that arises when we’re making decisions that involve other people—our tendency to see the size of the pie as fixed. When we move beyond the myth of assuming that what’s best for us is incompatible with doing the right thing, a path opens up to the ethically efficient frontier where both are possible.