5

Homo Socialis

A few decades ago I was teaching with Semester at Sea, a study-abroad program that travels the world by ocean liner. One moonless night on the Indian Ocean, the captain switched off the exterior lights so we could more easily see the stars. I was curious to see the Milky Way without any light pollution, so I meandered up to the top deck before going to bed. The night sky was crammed with stars, and I had spent about five minutes admiring it when an extraordinarily bright meteor blazed into view and then burned out right before my eyes.

It was one of the most amazing sights of my entire life, and I found myself surprised by my own response. Rather than savoring the moment or reflecting on my good fortune, I immediately looked around to see if anyone else was on deck with me. All I wanted was to turn to someone, anyone at all, and say, “Wow, did you see that?” and hear even a complete stranger respond, “Yes. That was amazing!” As a child, I had always envied Tigger in Winnie the Pooh for being the only one, but this experience taught me that I misunderstood my own preferences. Rather than feeling more special because I was quite possibly the only human on earth to witness this spectacle, I found that the meteor felt less real and the experience of seeing it less meaningful because I couldn’t share it.

Of all the preferences that evolution gave us, I suspect the desire to share the contents of our minds played the single most important role in elevating us to the top of the food chain.* We are the fiercest predator on the planet by virtue of the power of our minds, but even human minds aren’t that special on their own. If you drop one of us naked and alone into the wilderness, you’ve just fed the creatures of the local forest. But if you drop one hundred of us naked into the wilderness, you’ve introduced a new top predator to this unfortunate stretch of woods.

In chapters 1 and 2, I discuss the central role of social functioning in our evolutionary success story. Nothing is more important to us than our social connections because nothing was more critical for our ancestors’ survival and reproduction. As a result, we’ve evolved many ways to stay connected to our groups; chief among them is to know what others are thinking. Knowing others’ thoughts helps us fit in and predict what our group members will do next. We also want our group to know our thoughts and feelings, as planting our beliefs in the minds of others provides the best opportunity for nudging the group in our preferred direction. Acceptance of our thoughts and feelings by others also validates our place in the group and gives us a sense of security about our future. As luck would have it, these two inherently self-serving goals are also a recipe for successful cooperation; when we know the contents of one another’s minds, we are much more capable of social coordination and division of labor.

For these reasons, evolution has given us a perpetual desire to share the contents of our minds, even when there is nothing to gain at the moment by doing so. This desire to share our experiences, which I felt so acutely on the deck of the ship that night, emerges early in life. Toddlers endlessly narrate the world, pointing out people and objects simply to establish joint attention. No other animal does this at any stage of development.

The desire to share our understanding and experience goes beyond mere knowledge, as we also want to share our emotional reactions with others. For our group to deal effectively with a threat or opportunity, we must all perceive it the same way, and thus we have evolved to seek emotional consensus. There are few experiences in life more frustrating than sharing an emotional story with someone who reacts with indifference or an emotion opposite to our own. If I’m outraged by my colleague’s rude behavior, I’m even more outraged when my wife thinks it’s no big deal or, worse yet, funny or justified.

This need to share emotional experience sits at the root of nearly all exaggeration. If I’m worried that you won’t be sufficiently impressed by the fish I caught, then the fish grows in the telling. If I’m worried that you won’t be upset by my colleague’s rude behavior, then my colleague gets ruder when I relay the story. This need also sits at the heart of urban legends, which are the viral equivalent of gross exaggerations.

Chip Heath of Stanford University and his colleagues provide a nice example of this effect in their research on urban legends. They found that the more disgusting and outrageous the legends become, the more likely people are to say they would pass them on to others. For example, they presented people with different versions of an urban legend in which a family returns from vacation to find photos in their camera that the bellboy took of himself fouling their toothbrushes. In one version, the photos show him cleaning his nails with their toothbrushes, in another he has their toothbrushes stuck in his armpits, and in the coup de grâce, they see the bellboy with the family toothbrushes “up his bootie.” It doesn’t take much thought to decide which story you’d tell your friends; the family toothbrushes in the bellboy’s bootie are a clear winner if your goal is to ensure your audience shares your emotional reaction.

When we exaggerate or pass on urban legends, we introduce distortion into our listeners’ understanding of reality. This can be seen as a costly side effect of the need to share emotions, but it shouldn’t detract from the fact that sharing emotions lies at the root of almost all successful social interaction. Think back to the last conversation you had with a friend and ask yourself how much new information was transmitted versus how much emotion was transmitted. Meaningful conversations are often very light on information, but they are almost never light on emotion. And conversations that are heavy on content, such as discussions of current events, are typically laden with emotion as well. The rare conversations that involve no emotion at all are typically with strangers and are perceived as trivial or boring.

Social Intelligence

The most important challenge people face in life is understanding and managing others. If I understand other people’s goals, I can position myself to benefit from their likely actions. Better yet, if I can manage other people’s agendas such that I plant my goals in their minds, I will almost assuredly be a success in life. On the other hand, if I can’t understand others, I’ll be buffeted around by their seemingly random plans. And if I can understand others but not manage them, I’ll see the bad news coming but will have limited capacity to improve my situation. Management skills are less important if I’m Genghis Khan and can push my agenda on others by brute force, but for most of us, persuasion is the key to success.

What does the science tell us about how to be a social success? Unfortunately, not much. The answer has proven to be incredibly elusive due to the challenges of measuring social intelligence, which have remained largely unchanged since the first comprehensive test was created in the mid-1920s. Consider an item on that original test that asked people what they would say to an acquaintance whose relative had just died. Participants were presented with the choice of speaking well of the departed relative or talking about current events of general interest. A moment’s reflection makes it clear that either or neither of these choices might prove to be correct.*

Indeed, the “correct” response to this question depends on how well you know the acquaintance and the deceased, their relationship, and countless other factors. As a consequence, one person’s gaffe could easily be another person’s comforting or uplifting words. Although it’s generally inadvisable to make jokes about the deceased at the funeral of a beloved relative, I suspect that many people have in fact comforted their friends and family with such jokes. And I’m equally confident that many other people have offended their friends and family with the exact same jokes. The same statement can be either comforting or upsetting depending on who makes it, who is listening, exactly when it is made, the tone of voice with which it is made, and so on.

The more you think about this problem, the more you realize that there is rarely a single correct social behavior in any given situation. The context-dependent nature of emotional responses, and hence social appropriateness, continues to stymie our efforts to measure social abilities. For example, an item from one of the most widely used current measures of social/emotional intelligence describes a person who works harder than his colleagues, gets better results, but is not good at office politics and loses out on a merit award to an undeserving colleague. Test takers are asked how much it would help the person who lost out on the award to feel better if he made a list of the positive and negative features of his undeserving colleague versus told others what a poor job his undeserving colleague had done and gathered evidence to prove his point. In truth, the answer to such a question is unknowable. Some people would benefit from one strategy, some from another, and the benefits they’d gain would assuredly depend on a wide variety of other factors and constraints. The latter strategy could clearly be a disaster, but it may conceivably be a winner as well.

The developers of the test provide “correct” answers by assessing the consensus of what most people would do, and by surveying experts in emotion research. Both these approaches are deeply inadequate, however, as they rely on the flawed assumption that there is a single correct answer waiting to be found. Even if such an answer existed, this approach also assumes that what most people would do is the best strategy, thereby limiting the upper end of social intelligence to the average response (or, perhaps even worse, the intuitions and inclinations of academic psychologists). I suspect that highly socially skilled individuals achieve their goals in part by doing things differently. When most people respond in a certain way, that response loses some of its power through repetition and predictability. Socially skilled people recognize this problem, and by finding a way to be just a little different from everyone else, they communicate more effectively.

These challenges struck me as insurmountable, which led me to take a different approach to the problem. Rather than trying to find a way around the context-dependent nature of social intelligence, my colleagues and I decided to harness this defining feature of social functioning. There are clearly many qualities that enable people to be socially successful, but the fact that what works in one situation often does not work in another suggests that behavioral flexibility may be the single most important attribute for skilled social functioning. A number of factors enable behavioral flexibility (more on this later), but one of the most important is our capacity for self-control.

The Evolution of Self-Control

A running joke among academic psychologists is that we entered this field to try to understand our own failings—that is, we conduct “me-search” rather than research. I’m as guilty of this charge as the professor in the office next to me. My interest in social intelligence began not with a burning intellectual question, but with an embarrassing faux pas when I was in high school and blurted out the first thought that passed through my head.

Speaking before thinking gets me into trouble in a variety of circumstances, but food markets are my Achilles’ heel, particularly when meat looks like it did on the hoof. In combination with a fair bit of squeamishness, my carnivore’s guilt over the fact that my dinner once roamed this planet leads me to prefer foods that don’t resemble the animal from which they came. I get the willies when tapas bars hang pig legs from the rafters, when Chinese restaurants hang ducks in rows on hooks, or when a butcher’s shop displays an entire carcass. I don’t even like fish looking back at me. More than once I’ve made a disgusted expression in front of a shopkeeper before my (admittedly weak) internal filter had a chance to stop me. My friends and family are unimpressed with my behavior, so I set out to prove that it’s not my fault, that I’m the victim of faulty brain structures and should be pitied rather than censured.

One way to envision self-control is to imagine that your brain is like a chariot. The horses are your impulses, and they reside primarily in a small set of regions that sit under your cortex, near the base of your brain, such as the nucleus accumbens and the amygdala. The horses pull you toward gratification of your desires: food, sex, aggression, whatever it may be. Some people have wild stallions pulling their chariot, and they struggle to resist the temptation to eat too much, drink too much, have affairs, or punch the annoying guy in the nose. Other people are pulled along by petting farm ponies, and for them, managing their impulses is comparatively easy.

The chariot driver, who sits in a piece of the frontal lobes called the lateral prefrontal cortex (LPFC), is the one who resists temptation by reining in or redirecting the horses when the time, location, or goal itself is inappropriate. The driver has a copilot, who sits above the horses in or about the anterior cingulate cortex (ACC), and whose job is to alert the driver whenever the horses appear to be heading in the wrong direction. If you have an inattentive copilot or a weak driver, the horses will pull you pretty much wherever they like, and people will tell you that you’re a wild one or an impulsive jerk (depending on whether they sympathize with your actions).

In my own case, I think my ACC copilot is on permanent vacation. Perhaps my ACC is too small or is chronically starved of oxygen, or maybe it’s just too quiet to be easily heard by my driver. Once my LPFC is pulling the reins, I’ve got perfectly good self-control, but I often don’t notice that I need to control myself until it’s too late. I blame my ACC, but of course others blame me. So my first experiment on social intelligence was an exculpatory effort designed to test my hypothesis that blurting out what you’re really thinking is a sign of faulty frontal lobes rather than a moral failing.

In this experiment, we decided to reenact my frequent faux pas with unusual foods, which meant that we had to come up with an excuse to present people in the lab with a food item that looked as it did when it originally walked this planet.* After putting our heads together, my doctoral student Karen Gonsalkorale and I decided to use her Chinese ethnicity and cooking skills to our advantage. We brought Caucasian participants into the lab, and Karen explained—that is, she lied—that she was testing the effects of different food chemicals on memory.

After looking up the participant’s name and pretending to consult her clipboard, Karen then told each person, “You’re in luck! You get to eat my favorite food, which is widely regarded as the national dish of China!” (Our deception regarding food chemicals and memory allowed us to serve the participants an unusual food in the lab that would otherwise have seemed odd.) Karen’s claims about the personal and cultural significance of the dish were intended to communicate one simple message: whatever you’re served next, you should at least pretend to like it.

On hidden camera and in close proximity to the participants’ faces, she then opened a Tupperware bowl containing intact chicken feet, claws and all, cooked in a light brown sauce. Not everyone responded politely when given the feet to eat. My favorite participant blurted out, “That is bloody revolting!” This statement was followed by an awkward silence, which he broke with a few apologetic mumbles as he sheepishly picked up a foot and tried to muster the courage to nibble on one of the toes. In contrast, some people never lost their composure. They didn’t necessarily eat the feet—many suddenly remembered they were vegetarian or suggested that the feet might not be kosher—but they were polite even if they declined to eat them.

In the next phase of the experiment, we used the Stroop test to see if we could differentiate those who had handled this delicate situation politely from those who had not. The Stroop test takes advantage of the fact that reading is automatic, and thus we cannot help but read a word once we see it. For example, try to look at the word below without reading it:

Hello

I’d hazard a guess that you failed—if you saw it, you read it. In the Stroop test, people are asked to name, as quickly as they can, the colors of the letters that make up various words. The tricky bit is that the words themselves are names of colors. For example, people might see the word red written in green letters, which requires them to inhibit their automatic tendency to report the word they read and instead report the color of the letters.
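For readers who like to see the logic spelled out, the structure of a Stroop trial can be sketched in a few lines of Python. This is a toy illustration of the task’s rule, not the software used in the actual studies, and the color set and function names are my own invention:

```python
import random

COLORS = ["red", "green", "blue", "yellow"]

def make_trial(congruent):
    """Build one Stroop trial: a color word displayed in some ink color."""
    word = random.choice(COLORS)
    if congruent:
        ink = word
    else:
        # Incongruent trials: the ink differs from the word, producing the
        # response conflict that the ACC is thought to detect.
        ink = random.choice([c for c in COLORS if c != word])
    return word, ink

def score_response(ink, response):
    """The correct answer is always the ink color, never the word itself."""
    return response == ink
```

On a congruent trial the word and ink agree and reading causes no trouble; on an incongruent trial the automatic reading response must be suppressed in favor of naming the ink, and it is that suppression, not the rule itself, that taxes the chariot driver.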

When people take the Stroop test in an fMRI magnet,* we see that the ACC copilot becomes active whenever the color word doesn’t match the color of the letters and there is competition in people’s minds between the correct and incorrect response. This ACC activation is even stronger when people make a mistake. At that point, the ACC alerts the chariot driver, who reins in the tendency to read the word rather than name the ink color.

Because the Stroop test taps processing in the ACC, we expected that it would predict how people reacted to the chicken feet. People who have a responsive ACC should be good at inhibiting their initial reaction (yuck!) and replacing it with something more socially appropriate (interesting!). Consistent with our expectations, those who did well on the Stroop test were less likely to lose their composure than those who did poorly on the Stroop test. These findings showed us that people who had better self-control were able to respond in a more flexible and socially skilled manner. They were able to inhibit responses that they would typically have made but were inappropriate under the current set of social rules.

Subsequent work has shown that the blurting we found in the lab is only the tip of the iceberg. Two years later, Christopher Patrick of the University of Minnesota and his colleagues showed that a quiet ACC is problematic not only when it comes to saying the wrong thing; it also stands idly by when people do the wrong thing. In their study, they recruited people who reported either having or never having engaged in antisocial behavior, and gave them a task very similar to the Stroop test while they wore a swim cap covered with electrodes. Because neurons emit tiny amounts of electricity, these electrodes enabled the researchers to see how their participants’ ACC responded when they made a mistake and pushed the wrong computer key.

Despite the rather bland nature of this laboratory task, Patrick and his colleagues could tell whether someone had engaged in antisocial behavior by how their ACC responded when they made a trivial mistake. When they made an error on this task, the more law-abiding folks showed an ACC response that was about 30 percent larger than that of the antisocial folks. Remember, the ACC copilot’s job is to notice the potential for conflict and alert the driver when an error is coming. A poor copilot responds weakly to the possibility of error, much as we see in this experiment. Presumably when the antisocial people were about to make a more important decision (such as whether to punch the guy who’s annoying them or throw a rock through someone’s window), their ACC was similarly quiet and failed to alert them to the conflict between what they were about to do and what they should be doing. These data reveal how an unresponsive ACC copilot can lead to poor social functioning.

I was delighted that these experiments vindicated me, showing that my failings revealed faulty brain structures rather than a faulty character. In retrospect, though, it’s clear that my self-serving agenda blinded me to the more important message this research was telling us. These studies show that self-control plays an important role in social functioning, but it didn’t occur to me for quite some time that social demands are probably what led to the evolution of self-control in the first place.

You don’t need an ACC copilot to tell you not to grab a salmon from a bear’s mouth; that would be obvious even to the most distracted of chariot drivers. Rather, you need your ACC to ring the alarm when you’re reaching for the last piece of cake, flirting with the big guy’s girlfriend, or starting to tell your boss what you really think of him. Social interactions are loaded with conflicting motives, which is precisely when we need the services of an attentive copilot. My goals are sometimes consistent with yours (e.g., when we want to see the same movie) and sometimes at cross purposes (e.g., when we want to see the same girl), and that’s when an attentive copilot plays a particularly valuable role.

It was not just my egocentrism that blinded me to what these studies showed. Most psychologists have taken it for granted that we evolved the capacity for self-control in order to pursue long-term goals. To be a successful farmer we must plant the seed rather than eat it; to have a happy retirement we must save our money rather than spend it; to maintain a healthy body weight we must decline the second piece of chocolate cake rather than eat it. But our world looks nothing like the world of our hunter-gatherer ancestors, who didn’t plant seeds or save money and who never worried about eating or drinking too much. Our ancestors were focused on today, with occasional thoughts of what they’d like to do tomorrow, so their lives were not the perpetual exercise in delayed gratification that ours have become. They almost assuredly did not rely on self-control to secure a better tomorrow. But they did have to control themselves to get along with their neighbors, manage their rivals, and achieve their social goals.

Our ancestors also had to control themselves whenever they engaged in cooperative efforts, particularly in the face of threat. Imagine what it was like for our distant ancestors who had migrated to the savannah and were a toothsome snack for hyenas or lions. When they noticed one of those beasts stalking them, it would have taken a monumental act of self-control to band together and throw stones at it rather than run away. A cooperative strategy was clearly the most effective way to achieve the survival goal that everyone shared, but fear would have made it awfully tempting to leave the task of protecting the group to others. Those who were unwilling to share the burden and who ran away at the first sign of danger would soon have found themselves unwelcome in the group, facing dire circumstances and poor reproductive prospects. Evolution thus favored the development of self-control in our ancestors to achieve a variety of social goals.

The joint actions of the ACC and LPFC enable self-control, but there is more to self-control than restraining ourselves in the face of temptation. Our big brains also allow us to reframe the physical world in abstract terms, helping us to see it in terms of problems to be solved rather than potentially overwhelming temptations. To explain what I mean, let’s consider a wonderful experiment with chimpanzees conducted by Sarah Boysen and Gary Berntson at Ohio State University.

First Boysen taught the chimps the numerals for one through nine. She then taught them to play a game that involved choosing the number of treats they would like to receive. In the game, two chimps are seated across from each other as in Figure 5.1a, and one chimp—we’ll call this one the chooser—has the opportunity to decide how many treats each will receive. The chooser is shown numbers on two separate cards, and its task is to point to one of the two numbers.

The trick to this game is that the other chimp receives the number of treats shown on the card the chooser chimp points to, and the chooser receives the number of treats indicated on the other card. Chimps are not good at sharing, so their goal is always to get the larger pile for themselves and have the smaller pile go to the other chimp. As a consequence, they soon learned to point to the smaller number in order to score the larger pile of treats (see Figure 5.1a).
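The payoff rule of Boysen’s game can be written out in a few lines of Python. This is a toy sketch of the reverse-reward contingency with made-up pile sizes, not part of the original study:

```python
def play_round(piles, pointed_at):
    """piles: a pair of treat counts; pointed_at: the index (0 or 1)
    the chooser points to. The partner receives the pile pointed to,
    and the chooser gets the other one."""
    partner = piles[pointed_at]
    chooser = piles[1 - pointed_at]
    return chooser, partner

# Pointing at the smaller pile wins the larger one for the chooser...
assert play_round((2, 6), pointed_at=0) == (6, 2)
# ...while pointing at the larger pile (the chimps' recurring error when
# real treats were in view) hands it to the partner.
assert play_round((2, 6), pointed_at=1) == (2, 6)
```

Stated this way, the winning strategy is trivially simple, which is what makes the chimps’ failure with real treats so striking: the rule never changed, only the format in which the piles were presented.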

Figure 5.1a. The chimp plays the game well. By pointing at the smaller number, the chimp gets the larger one. (Sarah T. Boysen)

The key to the study was what happened when Boysen presented the chimps with actual piles of treats rather than cards with numbers. Actual treats should have made the task easier, and the chimps more successful at getting the larger pile, because the chimps no longer needed to remember what the numbers represented in order to win the game. Yet the exact opposite occurred; despite knowing the rules of the game, the chimps repeatedly pointed to the larger pile of tasty treats in front of them. The chimps appeared to realize their mistake immediately, often seeming very frustrated with themselves. Remarkably, they went on to make the same mistake on the very next round (and the next and the next, for literally hundreds of rounds).

Why did these clever animals repeatedly make this simple error? Most likely they could not escape the lure of the actual treats. When Boysen helped them translate treats into an abstract problem by showing them numerals, the chimps were able to take a mental step back and consider the problem objectively. But their limited symbolic abilities weren’t strong enough to transform the physical treats in front of them into an abstraction, and as a result, the temptation was too great. The treats were just too enticing for the chimps’ frontal lobes to put the brakes on and stop them from pointing to the larger pile.

Figure 5.1b. The chimp plays the game less well. By pointing at the larger pile of treats, the chimp gets the smaller one. (Sarah T. Boysen)

Because chimp frontal lobes are smaller than our own, their control functions and ability to think in abstract terms are not as strong as ours. Humans can easily convert piles of candies into an abstract problem, allowing us to think of the treats as numbers rather than objects. Once we translate our problems into abstractions, our temptations don’t swamp our control functions. In contrast, the chimps could achieve this conversion only when Boysen did it for them, by presenting them with numbers rather than actual candies. Lest we feel too superior to these beasts, it’s worth remembering that we are a lot like chimps when we’re children, with weak frontal lobes and limited abilities of abstraction. As a consequence, we can see this same process in action among (young) humans, most notably in Walter Mischel’s famous marshmallow studies.

Mischel brought young children into the laboratory and sat them down in a room with a single marshmallow on a tray in front of them. He told the children that they could eat the marshmallow now, if they’d like, but if they waited until he returned, he would give them two marshmallows. He also told them that if they decided they couldn’t wait, they were to ring a bell, and then they could eat the marshmallow. It won’t surprise you to learn that the children almost universally wanted two marshmallows and vowed to wait until Mischel returned. He then left the room and, unless summoned by the bell, did not return for fifteen minutes.

Mischel wanted to know how long the kids could wait and what would predict their self-control. Several interesting results emerged from this study. First and foremost, there were huge individual differences in how long the kids could wait—many held out for the entire fifteen minutes, but some barely made it to the ten-second mark. If you watch the tapes from the original experiments, one factor stands out in predicting who waits and who doesn’t. The kids who are able to wait the longest are the ones who turn their attention away from the marshmallow. They sing songs to themselves, turn their backs on the tray, play games, and even fall asleep. But the kids who stare at the marshmallow or, worse yet, hold it in their hot little hands have no hope. Down the hatch it goes.

Resisting marshmallows is particularly difficult for preschoolers, as their frontal lobes are underdeveloped, so their self-control skills are limited. But some kids figured out how to work around their own weaknesses, in much the same manner that Boysen was able to help her chimps by translating the physical candies into numerical abstractions. When Mischel tracked down these same children a dozen years later, he found that the kids who waited longer to eat the marshmallows had better SAT scores than the kids who folded quickly. Their skill in translating the temptation into a problem they could solve helped them resist temptation when they were small children and continued to help them engage in self-control throughout their life, presumably enabling them to study more and party less.

Beyond Self-Control: Social Benefits of a Big Brain

About fifteen years ago, I learned of the social brain hypothesis, an idea that had been knocking around in biology and anthropology since the 1960s but was largely unexplored in psychology. As I discuss in chapters 1 and 2, this hypothesis posits that primates evolved large brains to manage the social challenges inherent in dealing with other members of their interdependent groups. It eventually occurred to me that if our brains grew so large to solve social rather than physical problems, then many of the abilities that we regard as purely cognitive might play an important social role.

For example, perhaps we evolved the capability to conjure up alternative approaches to problems (known as divergent thinking) not to find a way across a raging river or to escape hungry hyenas, but to facilitate flexibility in social situations. Divergent thinking would have allowed us to deal more effectively with friends and enemies when our initial approach failed. This idea resonated with me because it fit my personal experience growing up. As a particularly small kid with a particularly big mouth, I relied heavily on my divergent thinking skills to extricate myself from dicey situations on the playground.*

To test the idea that divergent thinking enhances social success, my PhD student Isaac Baker brought groups of friends into the lab and ran them through a series of tasks. He tested their IQ, measured their personality, and asked them how many different uses they could think of for a brick, a plate, and other common objects. These latter questions tap divergent thinking, as some people will give answers that are pretty similar to one another (e.g., use a brick to hold open a door, hold open a window, prop up a shelf), and others will show a wider variety in their approach (use a brick to weight the corner of a picnic blanket, hammer a nail, throw at an annoying person).

He then asked everyone to privately report on the social skills of the other members of their friendship group. Isaac found that people who came up with more divergent uses for a brick were also more persuasive, humorous, and charismatic. This relationship held whether they had a high IQ or not, so it wasn’t just the case that divergent thinking was a reflection of being a smart person. Rather, divergent thinking is an important skill in its own right, enabling people to be more persuasive, amusing, and charismatic.

Mental speed, or the ability to retrieve information and solve problems rapidly, is another cognitive capacity that enables a flexible response to the world. Because social interactions often move quickly, they provide very little time to think. If you make a joke at my expense and I immediately come back with a witty response, I’ve held my own in our little banter. But if it takes me too long to come up with a response, then the conversation is likely to have moved on. By then, even if I’ve thought of a brilliant retort, I’ll look like a fool if I try to respond to your earlier point. The faster I can think, the broader the array of options I can consider before I have to respond.

To examine whether mental speed predicts social functioning, we conducted a study in which we asked people in friendship groups to answer simple common-knowledge questions (e.g., “Name a precious gem”) as quickly as they could. We then asked their friends how charismatic they were, and found that people who answered the common-knowledge questions more quickly were rated by their friends as being more charismatic. And as with divergent thinking, these effects of mental speed were independent of intelligence.

Research over the last century has taught us to think of IQ as our intellectual horsepower, with social intelligence just a subset or offshoot of that larger class of mental abilities. The results of these initial studies suggest that perhaps we have it backward: if we take the social brain hypothesis seriously, our social intelligence is our true intellectual horsepower, and our ability to solve complex problems (i.e., abstract intelligence, or IQ) is a fortuitous by-product of our evolved social capacities. And if social intelligence really represents our broader mental abilities, then it makes perfect sense that IQ is often a poor predictor of career success. When we measure IQ, we’re looking at only a small slice of the cognitive pie, while our social intelligence might tell us much more about our capacity to navigate the world.

At this point I can easily imagine a counterargument along the lines of “Hold on. Many of the smartest people I know are social misfits, while some of my socially skilled friends can’t add up their grocery bills. If IQ is an offshoot of social intelligence, shouldn’t the two be more highly related?” Such discrepancies are all too common, and they clarify that there is no one-to-one correspondence between overall cognitive abilities and social abilities. This is precisely why we have been exploring particular cognitive capacities (such as self-control, divergent thinking, and mental speed) that enable flexibility and thus should make people more socially skilled. In the same manner that some people can memorize the Constitution but can’t find their way home from the supermarket, some people are great at math but don’t have the particular cognitive skills that help them understand and manage others.

Finally, it is important to note that social skills depend on more than just cognitive abilities; they also depend on our attitudes. Perhaps surprisingly, one of the most important social attitudes is the one we hold toward ourselves.

The Social Benefits of Overconfidence

The von part of my surname reveals my German heritage—if you went back far enough, you’d find that my father’s family were landowners under the kings of Prussia. That means we have a family crest (which happens to resemble an advertisement for St. Pauli Girl beer) and a family motto, Mehr sein als scheinen. Translated literally, it means “More to be than to seem,” which I understand as “Be more than you seem to be.” A Google search suggests that our family motto is shared by many of our erstwhile neighbors, as modesty is a Prussian virtue. “Be more than you seem to be” is an excellent approach to life if you’re a ninja or a card shark, but for almost everyone else our motto has it exactly backward. We get more in life if we seem to be more than we are. Indeed, I strongly suspect that I regard myself as “Bill plus 20 percent.” Let me explain.

In one of my favorite studies on overconfidence, Nicholas Epley of the University of Chicago and Erin Whitchurch brought people into the lab to have their pictures taken. Epley and Whitchurch then morphed these photographs to varying degrees with attractive or unattractive photos of individuals of the same sex as the participant. A few weeks later, the participants returned to the lab, where they were shown morphed or unaltered photos of themselves under different circumstances. In one experiment, participants were asked to find their true photo in a jumbled array containing their actual photo and the various morphed versions of it. In that experiment, participants were most likely to guess that their true photo was the one that had been morphed by 10 to 20 percent with the more attractive image.

In a second experiment, participants were presented with an array of photos of other individuals, among which was a single photo of them that was either unaltered or morphed 20 percent with an attractive or unattractive image. In this experiment, Epley and Whitchurch found that people were able to locate photographs of themselves most rapidly if those photos were morphed with an attractive photo, at an intermediate speed if they were unaltered, and most slowly if they were morphed with an unattractive photo. These findings suggest that the enhanced photo most closely matches how people see themselves in their mind’s eye, suggesting that we deceive ourselves about our own attractiveness.

If you think you would be immune to this self-enhancement effect, consider for a moment the last time you saw a candid photo of yourself that you liked. If you’re similar to most people, you probably think that candid shots of you are typically poorly taken. In my own case, I’m confident that my friends aren’t just bad photographers but downright sadistic. The sad truth is that our friends are not bad photographers; we’re just not as good looking as we think we are. And that’s why you don’t like candid pictures of yourself: because they capture what you actually look like, not what you think you look like. You prefer the picture of yourself that caught you at just the right angle, on just the right day, and those are the ones you put up on Facebook, Tinder, or in the company directory. Because you then see this photo more often than you see the (deleted) ones you disliked, you also come to believe that this unrealistically good picture is an accurate representation. No wonder you end up thinking of yourself as “you plus 20 percent.”

The Epley and Whitchurch study is a great example of self-deception in action, but it doesn’t tell us why people deceive themselves, a question that has been debated at least since Socrates and Plato. Socrates was a big fan of the Delphic admonition to “know thyself,” and he delighted in demonstrating to the Athenian elite that they didn’t know as much as they thought they did. His fellow Athenians didn’t appreciate his propensity to highlight their inadequacies, and they famously sentenced him to death on trumped-up charges. As a psychologist, I, too, think that self-knowledge is often highly overrated. All it takes is a glance at a photograph of my adolescent self to realize the enormous damage that self-knowledge can cause. If I had any idea what a twerp I looked like in ninth grade, I would have struggled even to go to school, let alone flirt with the girl in the seat next to me.

Freud also saw the value of self-deception, believing that it is intended to protect us from a world that is often too unpleasant to bear. Perhaps there is some truth to this, as it might be hard to face another day if we knew what our friends and neighbors really thought of us. But as I discuss in chapter 9, evolution didn’t design us to be happy; it designed us to be successful. And it’s easy to imagine how a misguided sense of self would make us a lot less successful. Bill plus 20 percent is going to get in fights he’ll lose, and is going to ask out women who are uninterested (and, if I remember right, laugh in his face). So how do the gains offset the costs?

Although psychologists ignored his insight for nearly forty years, Robert Trivers proposed a simple and brilliant answer to this question when he was a young professor at Harvard in the mid-1970s. He suggested that we deceive ourselves in order to deceive others more effectively. If my ninth-grade self (incorrectly) believes that I’m not a twerp, when I ask out the pretty girl in my biology class she’ll be faced with a conundrum. On the one hand, I certainly don’t look like much. On the other hand, I seem to think otherwise, so perhaps there is more to me than meets the eye.

Overconfidence can thus be beneficial if I can plant my inflated self-views in other people’s minds. And if I’m particularly good at planting my inflated self-views in the minds of others, I might not need to pay the costs I’ve just highlighted (getting my nose smashed or looking like a fool). The guy who can beat me up might wonder if I’m tougher than I look and think that perhaps he should let the matter drop, and the woman who can do a lot better might be tricked into thinking that “a lot better” is me. After all, I’ve known myself much longer than they have, so it would be foolish to ignore my opinion of myself.

There are only a few tests of this idea, but so far, the results are consistent with Trivers’ hypothesis. For example, Cameron Anderson at Berkeley and his colleagues put students in small working groups and found that they couldn’t differentiate between their peers who were knowledgeable and those who were overconfident. As a consequence, they tended to defer to overconfident people when they shouldn’t. Richard Ronay at the University of Amsterdam and his colleagues found similar effects among human resource consultants who were tasked with determining which applicants should be promoted to managerial positions. The HR consultants tended to recommend the overconfident applicants instead of the applicants who were better calibrated in their self-knowledge. Just like Cameron Anderson’s students, even trained HR consultants couldn’t differentiate between people who really knew what they were talking about and people who were just blowing hot air.

My PhD student Sean Murphy went on to show that these effects are not just evidence that people can be tricked in the short term by those they don’t know well. Sean found that high school boys who were overconfident about their sporting ability actually became more popular from one school year to the next. These data suggest that overconfidence not only is effective when people don’t know you well, but also has a positive impact in long-term social networks. Finally, and perhaps relatedly, in another set of experiments, Sean found that one of the reasons that overconfident people are successful is that they are more intimidating in competition, so people don’t like to go head-to-head with them.

These studies suggest that there are notable interpersonal benefits to overconfidence, and that Freud was wrong when he characterized self-deception as a defense mechanism. Rather, Trivers hit the nail on the head when he suggested that self-deception is more aptly described as a social weapon. Bill plus 20 percent isn’t trying to protect his psyche from an admittedly inhospitable world; he’s trying to get people to like him and avoid conflict with him. To put it more generally, my tendency toward self-inflation evolved to help me achieve social outcomes that I couldn’t get if I were honest about who I truly am.

Self-Deception Isn’t Just for the Overconfident

Trivers’ insight that self-deception is a weapon of social influence and not a strategy for feeling better helps us understand overconfidence, but self-deception goes well beyond that. Our self-confidence is not the only factor that is readily perceived by others, as nearly all our emotions have social consequences. Let’s consider one of our most important emotions: happiness. I spend plenty of time in chapters 9 and 10 talking about what makes us happy, so for now, the important point concerns the social consequences of happiness. A moment’s reflection reveals that happiness has substantial social impact, largely because we enjoy spending time with happy people. As the aphorism goes, “Laugh, and the world laughs with you; weep, and you weep alone.”

The social consequences of happiness are reason enough to put on a happy face, and people exaggerate their happiness in numerous social interactions. If I run into you on the street and ask, “How are you doing?” in truth, I don’t want to hear about your bunions or hemorrhoids, but rather I’m just hoping you’ll say, “Good! How are you?” Even when our bunions or hemorrhoids are bothering us, in brief social interactions we almost always tell other people that things are fine. But the effect goes deeper than just telling people one thing while believing another. If Trivers is right, we actually try to convince ourselves of the truth of these claims in order to persuade others. We can see this possibility in Epley and Whitchurch’s study with the morphed faces, Anderson’s study with the overconfident students, and numerous other studies in which people overestimate themselves and potentially lead others to do the same. So how might such an effect emerge in the case of happiness?

My favorite study on overhappiness is by Sean Wojcik of UC Irvine and his colleagues. The background to their study is an effect that is well known among social scientists: people on the conservative side of the political spectrum in the United States tend to be happier than people on the liberal side. A number of hypotheses for this effect have been proposed, but Wojcik and his colleagues had a different idea. When they dug into the literature, they realized that the data showed that conservatives claim to be happier than liberals, but no one knew whether they acted any happier. So, Wojcik took a deep dive into the kinds of big data that are now publicly available via Twitter, LinkedIn, and the Congressional Record. He pulled out millions of words, thousands of tweets, and hundreds of photographs from members of Congress and other people known to be on the political left or right, to see if there really are differences in the positivity of their language and the size of their smile.

The first question you might ask is why would conservatives claim to be happier than liberals if they really aren’t? If you reflect on the ideology of modern American conservatism versus liberalism, one of the clear differences between the two groups lies in their beliefs regarding the fairness of the playing field. Conservatives more strongly endorse the idea that the world is a meritocracy than liberals do, as liberals tend to see a variety of structural barriers to achievement that conservatives regard as relatively unimportant. For example, liberals believe that one’s race, sex, or sexual orientation can lead to unfair treatment and loss of opportunities, whereas conservatives tend to believe that the effects of race, sex, or sexual orientation are largely overblown.

If you follow these beliefs to their logical conclusion, you’ll see that conservatives should believe (consciously or otherwise), more strongly than liberals do, that people are responsible for their own happiness. If I’m an unhappy conservative, and if the world is a meritocracy, then I must have failed to achieve my goals, or otherwise I would be happy. In contrast, if I’m an unhappy liberal, then it’s very possible that my race, sex, social class, or something else about me that I cannot control has held me back, so my unhappiness is not necessarily a sign of my own failings. For this reason, it’s more important for conservatives to claim happiness than it is for liberals, as lack of happiness suggests failure on the part of conservatives but not necessarily on the part of liberals.*

Consistent with this logic, when Wojcik examined the things that liberals and conservatives say or tweet, and the way they look in their photos, no evidence emerged that conservatives are happier than liberals. In fact, Wojcik found just the opposite: liberals used more positive words and showed larger smiles in their photographs. Smiles not only differ in size, but genuine smiles can also be differentiated from posed smiles by the crinkles that appear around the eyes. Genuine smiles almost always lead to eye crinkling, but posed smiles often don’t. When Wojcik and colleagues coded the photographs for the presence versus absence of eye crinkling, they found that liberals were more likely than conservatives to show this evidence of genuine happiness.*

Self-Deception Works

These studies suggest that people are highly attuned to the impressions others form of them, but these studies don’t indicate whether self-deception is effective. For example, conservatives apparently go through life claiming to be happier than liberals, which means it’s safer to ask them how they’re doing when you see them on the street, but we don’t know whether conservatives benefit from their claims.

The easiest way to answer such a question is to conduct experiments, so that’s what we did. Our goal was to test Trivers’ hypothesis that self-deception makes people more persuasive, so we broke down the hypothesis into its three logical components. Whenever people are not entirely sure they’re telling the truth: first, they should try to convince themselves of the veracity of their claim; second, they should come to believe in their claim; third, by convincing themselves, they should be more effective in convincing others. To test these three possibilities, we borrowed an experimental paradigm from my old friend Peter Ditto.

Ditto’s experiments were conducted more than twenty years ago, but they remain my preferred method for examining the effects of self-deceptive motivation on how we gather information. In Ditto’s experiments, college students were brought into the laboratory and (incorrectly) told that the investigators were examining the relationship between psychological characteristics and physical health. In service of this ostensible goal, Ditto told the students that they had the opportunity to take a medical test that would indicate whether they were likely to develop a debilitating disease later in life.

In other words, although they were healthy college students now, this test would reveal whether they were likely to become sick in the future. In this “medical test,” participants exposed their saliva to a test strip and then observed whether the test strip changed color. Some of the participants were told that color change indicated good news (i.e., they would remain healthy), and some of them were told that color change indicated bad news (i.e., they were likely to become sick in the future).* In reality, the test strip was inert and never changed color.

Imagine yourself in this experiment. You’ve exposed the test strip to your saliva and are waiting to see if it changes color. If color change means you’ll stay healthy, you probably start to worry after a minute has gone by and the strip hasn’t changed color yet. What’s going on here? Is there something wrong with the test strip? At that point, like many participants in the study, you might throw the strip away and try again.

In contrast, if color change means you’re going to get sick, you’re probably watching the test strip with a fair bit of trepidation, hoping it sits there doing nothing. Once a half minute or so has passed, you’d probably start to feel relieved, at which point you’d shove the strip into the container for the experimenters, with no plans to check it again later. If that’s your intuition about what you’d do, then you’ve accurately described what most of the participants did in this experiment. Ditto found that people waited longer and rechecked the results more frequently when lack of a color change suggested they were susceptible to future illness than when lack of a color change suggested they would remain healthy.

Although there is nothing wrong with gathering a little more information when you’re unsure, selectively gathering more or less information depending on whether you like the initial results is a form of self-deception. It’s a bit like not bothering to check your exam grade when you’re worried you’ll get a bad score. Avoiding important information can allow us to deceive ourselves just as it can help us deceive other people (for example, if you change the subject when your spouse asks you why you’re late for dinner and you don’t want to admit you were chatting with your cute colleague in the next cubicle). Sometimes reality will rear its ugly head; if you failed the exam you’ll suffer the consequences whether you know your score or not. But sometimes we can reshape reality by avoiding information. If I make a fool of myself on my blind date but never get in touch with her again, I can approach my next date with more confidence and, consequently, a better shot at success.

In the case of Ditto’s experiment, avoiding information is a great way to deceive yourself, because you don’t know with certainty what’s coming around the bend. If you’d waited longer, maybe the test strip would have changed color, but maybe not, and you were just wasting your time. On average, this sort of selective information-gathering strategy will bias the answers the world gives you in favor of your preferred conclusions, but on any particular day it’s possible that the answer you found is the correct one. This sort of selective search can allow people to create a desirable yet potentially distorted view of reality.

Ditto’s experiment provides a great procedure for assessing self-deception, but there are a few important pieces missing if we want to test the questions that motivated our research. First and foremost, it’s unclear why Ditto’s participants tried to deceive themselves about their future. Did they want to protect their happiness and self-esteem from the potentially bad news of their future ill health, as a Freudian account would suggest? Or did they want to preserve an image of themselves as healthy so they could be more confident and hence more effective in attracting romantic partners and allies, as Trivers’ account would suggest? In Ditto’s case, the answer may well be a bit of both, but of course we wanted to design an experiment that would provide a clear test of Trivers’ interpersonal account. If people engage in self-deception to increase their chances of persuading others, then the type of biased processing shown by Ditto’s participants should emerge even when people have no reason to protect their fragile ego.

To test this possibility, my PhD student Megan Smith joined forces with Trivers and me. We conducted an experiment in which participants were told they would see a series of videos in which a guy named Mark engaged in a variety of different behaviors. Megan told some of the participants that they’d be paid a bonus if they could write a particularly persuasive argument that Mark is a dislikeable person, and others that they’d be paid a bonus if they could persuasively argue that Mark is a likeable person (and made sure that all of them knew they had to base their arguments on the information in the videos). The key was that sometimes the initial videos showed Mark engaging in positive behaviors and the later videos showed him engaging in negative behaviors, and sometimes the order of the videos was reversed. Finally, participants were told that they could watch as many or as few videos as they liked, and it was up to them to decide when they were ready to write their persuasive essay.

A few findings emerged from this study. First, rather than watching all the videos and choosing to talk only about the ones that were consistent with their persuasive goals, people gathered information just like Ditto’s participants had. If the early videos were consistent with their persuasive goals, they stopped watching pretty quickly. If the early videos were inconsistent with their persuasive goals, they continued to watch for longer, presumably in the hope of finding information that would help them be more persuasive. There is nothing wrong with that, and it’s possible that participants knew full well what they were doing and were just trying to be efficient with their time.

To test this possibility, after they wrote their essay, participants were asked for their private opinions of Mark. Their responses indicated that through their biased information gathering they had convinced themselves that Mark was what they wanted him to be: people who were paid to argue in his favor found him more likeable than people who were paid to argue against him. When we offered them another bonus if they could accurately guess how others would feel, they thought their own impressions would be shared by others. Again, this result suggests they were unaware of the impact of their own biased information gathering.

Finally, we found that the most persuasive people were the ones who were biased in their information gathering; participants who were unbiased wrote arguments that other people found less convincing. This effect of biased information gathering emerged primarily to the degree that the participants convinced themselves. When their biased information gathering enabled them to see Mark as they were paid to portray him, their arguments were most effective.

This is just a single experiment, but it provides support for Trivers’ proposal that we deceive ourselves in order to deceive others more effectively. The study also helps throw some light on a variety of behaviors that we see in our everyday lives. For example, there was a fair bit of concern after the 2016 presidential election that “fake news” might have played a role in Trump’s victory. Although this is certainly possible, the results of our study suggest that people are biased toward gathering information that fits with what they want to believe. Consistent with this interpretation, research conducted after the election showed that the people who were consuming most of the fake news were partisans who already had strong beliefs in the direction of the fake news stories.

This finding suggests that fake news probably had very little effect in causing undecided voters to choose Trump, but it was probably very persuasive to people who already tended to support Trump. In other words, people sought out and believed fake news only if it was consistent with their prior beliefs, and thus the effect of fake news was likely just greater polarization of the electorate. I suspect that only in rare cases did fake news actually generate political preferences that didn’t previously exist. Such a possibility would also be consistent with Chip Heath’s research on urban legends discussed earlier: people pass along exaggerated stories in part because exaggeration ensures that others share the teller’s emotional reaction. In this sense, fake news probably serves a bonding function, bringing group members closer together in their outrage at the supposed misdeeds of the opposing party.

It might strike you as odd that people are susceptible to these sorts of biases, but it’s worth remembering the evolutionary pressures that made us so smart in the first place. As the social brain hypothesis proposes, a substantial reason we evolved such large brains is to navigate our social world. In contrast to value in the physical world, value in the social world reflects objective reality only partially. If we decide that bell-bottoms are cool, then cool is what they are, and you’d better get yourself a pair or risk being a wallflower at the disco. A great deal of the value that exists in the social world is created by consensus rather than discovered in an objective sense.

If I can influence the consensus to move in a direction that favors me (whatever I’m doing is cool), then I’ll probably benefit even if my objective understanding of the world is biased. For this reason, it makes sense that our cognitive machinery evolved to be only partially constrained by objective reality, as the social consequences of our beliefs are often just as important as the objective consequences. Indeed, some researchers have argued that our minds evolved the ability to process logical arguments not so we could discover the true state of the world, but so we could convince others of the accuracy of our own self-serving beliefs.

In this sense, the social brain hypothesis suggests that the great discoveries of humankind are really just an evolutionary by-product of our ancestors’ efforts to persuade others of their dubious claims. In the words of my astronomer brother, “So NASA can thank all the self-serving liars of our evolutionary past for our ability to send robotic spacecraft throughout the solar system?” The answer to this question is a resounding yes, and it speaks to just how important sociality is in the evolution of our incredible cognitive abilities.