CHAPTER 5

Should I Stay or Should I Go?

At the dawn of the twentieth century, around the time Sigmund Freud was publishing his landmark work The Interpretation of Dreams, the Swiss neurologist Édouard Claparède decided to play a trick on one of his patients—all in the name of science, of course.

The patient was a forty-seven-year-old woman with a brain impairment brought on by the onset of Korsakoff’s syndrome, a form of amnesia. She wasn’t able to retain any new memory for more than fifteen minutes, though her intellectual abilities remained unchanged. Her awareness of the recent past wiped itself clean again and again in an unending cycle of forgetting. Each morning she arrived at Dr. Claparède’s office at the University of Geneva with no recollection of having been there before, believing she was meeting the bearded, bespectacled doctor for the first time. Claparède always greeted her with a hearty handshake, and she always politely replied that it was nice to meet him. The young doctor happened to be a critic of Freud’s demonized version of a separate unconscious mind, and he wondered if his patient’s amnesia might not be as complete as it seemed. What if short-term memories persisted in some hidden recess of her mind, even after they had erased themselves from her consciousness?

One day, when she arrived at his office as usual, Claparède held out his hand to shake his patient’s—but with a thumbtack he had taped to his palm. When she shook his hand, she felt a sharp pain as the tack pricked her skin. Fifteen minutes later the memory of the disagreeable incident had vanished from her conscious mind, so he held out his hand to shake again. Here was the moment in which Claparède might glean new insights into how—or if—unconscious memory functioned when its conscious counterpart failed. And sure enough, the patient reached out toward him, but right before they clasped hands, she abruptly drew hers back.

Intrigued, Claparède asked her why she wouldn’t shake his hand. “Doesn’t one have the right to withdraw her hand?” she responded evasively, becoming agitated. She resorted to vague explanations, unable to account for her intuition. The knowledge of what might happen if she shook hands with the good doctor had appropriately guided her behavior to avoid a possible repetition of that painful pinprick, and this response operated without the involvement of any conscious intention on her part. In other words, her memory was having an implicit effect on her behavior, in the absence of any explicit memory or conscious awareness of the previous painful handshake. Her memory was unconsciously working to keep her safe in the present, just as it had evolved to do.

The story of Dr. Claparède’s slightly sadistic yet illuminating experiment was a crucial first baby step in psychology’s modern understanding of unconscious effects, and contemporary research on amnesiacs has confirmed what Claparède was the first to notice. In a 1985 study of patients with Korsakoff’s syndrome, Marcia Johnson and her colleagues found that the patients showed the same patterns of liking and disliking for people and objects as the normal participants did, even though they had little to no memory of those people and objects otherwise. For example, all participants were shown photographs of a “good guy” (as described in fictional biographical information) and a “bad guy.” Twenty days later the Korsakoff’s patients had virtually no memory of the biographical information; nonetheless, 78 percent of them liked the “good guy” in the photos better than the “bad guy.” In the absence of any conscious memories of the reasons why, the amnesiacs still had appropriate unconsciously generated positive or negative feelings about people and objects that they had previously encountered.

Claparède’s little prank revealed a vital and primitive unconscious function of our minds. In the ongoing present of life, in which we are continually buffeted by obstacles and tasks and things we need to confront and deal with, all of which fully occupy our conscious mind, this evaluative, “good or bad” mechanism is constantly operating in the background. While our conscious attention is often elsewhere, this unconscious monitoring process helps us decide what to embrace and reject, when to stay and when to go.

Good. Bad.

Yes. No.

Stay. Go.

This is the ultimate, fundamental binary code of life. It embodies the primary predicament of existence—for all animals, not just human beings. All forms of animate life share this basic “stay or go” conundrum, even the most primitive. Good or bad, stay or go is the original animal reaction to the world. Eons of evolutionary time have made “stay or go” the fastest and most basic psychological reaction of the human brain to what is going on outside of it. This initial reaction colors everything that comes after it: good or bad, stay or go, like or dislike, approach or avoid. We go down one path and not down the other. Revealing how it works exactly, what causes us to immediately veer in one direction instead of the other, sheds new light on why we are doing what we are doing. Sometimes there is simplicity at the heart of complexity.

Back in the 1940s, University of Illinois psychologist Charles E. Osgood performed landmark research on, literally, the meaning of life. What are the basic ingredients we use to give our words and concepts meaning—ingredients such as how good or bad something is, how big or small, strong or weak? To get the data for his investigation, he had thousands of people make many ratings of different “attitude objects,” pretty much anything you can have an attitude toward, such as war, cities, or flowers. You would rate each of them, let’s say war, for example, as to how sweet to bitter, fair to unfair, or bright to dark it is. Not to worry if these seem strange scales to rate that object on; you just go with what feels right to you. I’d say war, for example, was on the bitter, unfair, and dark side of those scales. Then he used a sophisticated data technique called factor analysis to distill all those ratings down into a very small set of basic factors, the “ingredients” underlying how we feel about most things, the bases of most of our attitudes. And by doing so, Osgood found that things were actually quite simple: we used only three main factors to organize and sort out all these things in our mind, and with just these three dimensions he could account for nearly all the variability in those ratings. It all came down to E-P-A: evaluation, potency, and activity. Or in other words: good or bad, strong or weak, and active or passive. Trees, most people would say, are good, strong, and passive (they just stand there). Trains, on the other hand, are (for most people, anyway) good, strong, and active.
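For readers who want to see the distillation step in miniature, here is a toy sketch in Python. The objects, scales, and numbers are entirely made up, and principal component analysis (computed via singular value decomposition) stands in for the factor analysis Osgood actually used; the point is only to show how ratings on many bipolar scales can collapse onto one dominant good-or-bad dimension.

```python
import numpy as np

# Hypothetical ratings: 6 attitude objects x 4 bipolar scales (1 = left pole,
# 7 = right pole). Scales: sweet-bitter, fair-unfair, bright-dark, strong-weak.
# These numbers are invented, loosely patterned on the semantic-differential format.
objects = ["war", "flower", "city", "tree", "storm", "friend"]
ratings = np.array([
    [7, 7, 7, 2],   # war: bitter, unfair, dark, fairly strong
    [1, 2, 1, 6],   # flower: sweet, fair, bright, weak
    [4, 4, 3, 2],   # city: middling on most scales
    [2, 2, 2, 2],   # tree: pleasant and strong
    [6, 5, 6, 1],   # storm: unpleasant and strong
    [1, 1, 1, 3],   # friend: sweet, fair, bright
], dtype=float)

# Center each scale, then extract principal components via SVD,
# a stand-in for the factor analysis Osgood applied to thousands of ratings.
centered = ratings - ratings.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
variance = singular_values**2 / np.sum(singular_values**2)

# Most of the rating variance loads on a single component: in Osgood's terms,
# the scales largely collapse onto one evaluative (good-bad) dimension.
print([round(v, 2) for v in variance])
```

In this toy data the first component accounts for the large majority of the variance, mirroring Osgood's finding that evaluation dominates the meaning of our concepts, with potency and activity picking up smaller shares.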

Of these three main components of meaning, Osgood found that the single most important factor was the first, evaluation. Most of the meaning that words and concepts have for us boils down to variations on good or bad, just with different flavors. The second most important was potency, or strong versus weak, and the third was activity, or active versus passive. Think about it from our (very) old friend Ötzi’s perspective: when encountering a new person, it was most important to know if they were bad (an enemy); next important was how powerful they were (uh-oh); and finally how active—fast, healthy, and mobile—they were (phew, his horse is stuck in the mud).

But first and foremost, we need to know if the “something” out there is good or bad, for us or against us—and we need to know it right away. Osgood published his major book on this research, The Measurement of Meaning, in 1957. Two years later, T. C. Schneirla, a curator at the American Museum of Natural History in New York City, published an influential paper comparing all animals, from the simplest single-cell paramecium all the way through to human beings. His message was that all animals, from the simplest to the most complex, possessed basic approach and withdrawal reactions to good versus bad things. Put a source of food (some sugar) near a paramecium and it moved toward it. Put a small electric wire delivering a tiny shock near it and it moved away. And from there all the way through the animal kingdom to human infants, Schneirla showed that all animals possessed these two basic response options.

If good-bad, approach-withdraw is the most basic animal reaction to the world, then it is easy to see why Osgood’s research revealed evaluation, good or bad, to be the primary meaning for all of our concepts about the world. Each of us today has within us remnants of the entire evolutionary history of our species. What back then were the original, first single-cell reactions to the world’s creatures are now, in every present moment, our own first reactions to our experiences. What came first in the very, very long-term past is now first in the short-term present. In spite of all of the astonishing mechanisms and systems that we eventually developed from that original, single cell, the primordial question is still there at our core.

Should I stay or should I go?

While we constantly engage in our complex modern activities such as going out with friends, keeping up with the news, and performing our jobs, we are nevertheless still reliant on this primitive, elemental division of behavior. We must decide whether to “say yes” and stay close to each stimulus (person, object, situation) we encounter, evaluating whether it is advantageous, or at least not unsafe, or “say no” and increase our distance from it.

We make these calculations both consciously and unconsciously, again and again, but often the unconscious part, like the alligator’s symbolic belly in my dream, comes first. This was the case for Claparède’s patient because she didn’t have conscious memory to aid her decision-making, yet it is true for non-amnesiacs as well. In many cases it’s the conscious mind that plays explainer afterward, trying to make sense of a judgment that we seemed to already “know” so solidly that our assessment felt like an incontrovertible fact. Earlier, I told the story about when I was in grad school and my advisor Robert Zajonc called me into his office to show me museum postcards of abstract art, to ask me which paintings I did and didn’t like. I could quickly and confidently point to the one I liked (I preferred Kandinsky—he’s a good cave painter!) but then, when Bob asked me why, I hesitated and sputtered something about the colors and forms and Bob just grinned at my discomfort—and my evident inability to give many truly good reasons.

As the old cliché has it, “I don’t know much about art, but I know what I like.”

At that time, in the late 1970s, Bob was doing important work on the mere exposure effect, which is, basically, our tendency to like new things more the more often we encounter them. In his studies, he repeatedly showed that we like them more just because they are shown to us more often, even if we don’t consciously remember seeing them. For example, the Korsakoff’s amnesiacs in Marcia Johnson’s study later preferred the new things they had been shown more often over those they had been shown less often, despite having no recall of ever seeing any of them before.

Zajonc’s research on the mere exposure effect was important for many reasons. First, it showed how we can develop likings and preferences unconsciously, without intending to, based solely on how often and how common the experience is. This makes complete adaptive sense, because the more we encounter things that do not harm us, the more we like and the more we approach (stay). The mere exposure effect is all about creating the default tendency to stay when things are okay. (And if things are not okay, and, say, a snake jumps out at us in that nice little grassy area by the stream, all bets are off, and that experience completely overrides the mere exposure effect. Note that it took just one little pinprick to stop Claparède’s patient from shaking hands with him again.)

Second, the mere exposure research showed how our likes and dislikes can be immediately provoked in the moment, independent of any conscious calculations or recollections, as shown not only by my spontaneous reactions to the art postcards in Bob’s office, but also by the findings of his mere exposure studies and Johnson’s demonstrations with her amnesiac patients. Much of our affective (or evaluative) system operates outside of consciousness. Like the dream alligator was telling me, that yes-no system came first in our evolution, before our development of a more thoughtful way to make those evaluations.

Before Zajonc’s influential paper “Preferences Need No Inferences” appeared in 1980, researchers believed that all our attitudes were the result of this slower, more thoughtful process of conscious calculation. He argued instead that often we have immediate affective reactions to things like paintings and sunsets and meals and other people, without first thinking about them so carefully. His idea led to a transformation of the field of attitude research a few years later, thanks mainly to the original research on “automatic attitudes” by Russell Fazio, a young professor at Indiana University.

For a long time in the mid-twentieth century, the study of attitudes was somewhat in disarray. This was primarily because attitude research had a rather poor track record of predicting actual behavior. After all, the main reason attitudes started to be measured in the first place, back in the 1930s, was to be able to predict behavior. Yet many early studies showed that people would say one thing on an attitude questionnaire, but do something else entirely. It is easy to say on a piece of paper that you are going to donate money to a charity, for example; it is harder to get out the old pocketbook and write out the check. The important question soon became when would attitudes predict behavior, and when would they not?

Along came Fazio in 1986 with the idea that maybe only some attitudes predicted behavior, not all of them; some of our attitudes might be stronger and more important than others. I don’t like peanut butter and I will not eat it under any circumstance; I don’t like cooked carrots either but I’ll eat them if they’re on my plate, no big deal. And Fazio reasoned that the strong and important ones would exert a more consistent and reliable influence on our actual behaviors. So the question became, how do you tell the strong and important ones from the weaker and less important ones? Fazio reasoned that the strong attitudes would be those that came to mind immediately and automatically whenever we encountered the object of that attitude in our environment. In other words, the fact of our liking or disliking something would have more of an effect on our behavior if it reliably came to mind without our needing to stop and think. He conjectured that, just like my fast positive reaction to the Kandinsky postcard, our strong attitudes will be the ones that come to mind quickly, and our weak attitudes will be the ones that take us longer to express.

To measure how strong or weak a person’s attitudes were, he just had participants press either a Good or a Bad button on the computer (computers being a new exciting research toy back in the 1980s) as fast as they could after each name of nearly a hundred mundane objects appeared on the monitor screen in front of them. For example, they would tend to say Good very fast to birthday, kitten, and basketball (the study was done in Indiana, after all, so Hoosier hardcourt fever clearly was a factor), and Bad very fast to Hitler, poison, and tuna. (I actually like tuna, so I never really understood this.) But on the whole it took them longer to say Good or Bad to more neutral, less passion-inducing words like calendar, brick, or yellow.

Fazio and his colleagues then selected the words (the “attitude objects,” the scientific term for these stimuli) to which the person responded the fastest—their strong attitudes—and those to which they responded the slowest—their weak attitudes—and used them in the next part of the experiment. This next part tested whether the person’s attitudes toward each of these words became active immediately and automatically as soon as the participant read that word on the screen. The attitude object word, such as butterfly, would be presented on the screen first, for just a quarter of a second, too fast for a person to be able to stop and consciously decide whether they liked it or not. Then a second word would be presented, an adjective such as wonderful or terrible, and all the participant had to do was to press the Good or Bad button to say whether that second word had a positive or negative meaning.

The logic of this new method Fazio introduced, called an affective priming paradigm, was quite elegant and simple. If the first word, such as butterfly, automatically triggered Good or Bad, then that response would be primed and more ready when it came time to say whether the second word, such as wonderful, was good or bad. If the attitude prime automatically suggested the right response to the adjective that came next (as in butterfly-wonderful), then those responses should be faster. And if it suggested the wrong response—say, if cockroach came before wonderful—then those responses would be slowed down, because the participant would be all primed and ready to say Bad and would have to stifle that tendency and say Good (the right answer) instead.
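Fazio's congruency logic can be sketched in a few lines of code. Everything below is hypothetical; the word lists and millisecond values are invented for illustration, but the structure captures the prediction: congruent prime-target pairs are answered faster, incongruent pairs slower.

```python
# Toy model of the affective priming prediction (all values hypothetical).
PRIME_VALENCE = {"butterfly": "good", "cockroach": "bad"}
TARGET_VALENCE = {"wonderful": "good", "terrible": "bad"}

BASELINE_MS = 600   # assumed time to judge a target word with no useful prime
PRIMING_MS = 50     # assumed speed-up (or slow-down) from an automatic attitude

def predicted_rt(prime: str, target: str) -> int:
    """Predicted reaction time for judging the target as good or bad.

    Congruent pairs are primed, so responding is faster; incongruent pairs
    require stifling the primed response first, so responding is slower.
    """
    if PRIME_VALENCE[prime] == TARGET_VALENCE[target]:
        return BASELINE_MS - PRIMING_MS   # e.g. butterfly -> wonderful
    return BASELINE_MS + PRIMING_MS       # e.g. cockroach -> wonderful

print(predicted_rt("butterfly", "wonderful"))   # congruent pair: faster
print(predicted_rt("cockroach", "wonderful"))   # incongruent pair: slower
```

The crucial caveat from the text applies here too: this speed difference should appear only if the prime's good-or-bad evaluation becomes active automatically, which is exactly what Fazio's data showed for strong attitudes.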

But this would happen only if the first attitude became active immediately and automatically. What Fazio showed was that the person’s strong attitudes did just that. For example, beer unconsciously primed beautiful, and accident unconsciously primed disgusting—but the weak ones, words like brick and corner, did not become immediately active.

By sheer coincidence, the same year that Fazio’s research on automatic attitudes was published, another young up-and-coming attitude researcher, Shelly Chaiken, joined the Psychology Department at NYU, where I worked. In her office one day soon after she arrived, we decided to start some research together. What should we do? we wondered. Hmmm, well, she was an attitude researcher, and I was an automaticity researcher, so (ding!) what about studying automatic attitudes? It was, you might say, a no-brainer.

Shelly and I had several interests in common besides psychology research, so when we weren’t terrorizing the graduate students by playing golf in the hallways of the Psychology Building, or making Peet’s coffee fresh from the beans we had delivered each month from Berkeley (where she used to live), we designed several studies to better understand the automatic attitude effect. One thing we were interested in was how general the affective priming effect was. It did happen for the strongest attitudes (that people were fastest to say good or bad to) and it didn’t happen for the weakest ones (that people were the slowest to respond to), but what about all those (which were most of them) in the middle? Did the effect happen for only the few strongest ones, or did it happen for all but the very weakest? And did it happen only after people had just thought about those attitudes, as they had in the first part of Fazio’s procedure? The answers to these questions would determine how often we’d expect the effect to occur in real life.

Other lines of research had given us good reasons to believe in Fazio’s basic idea—Schneirla’s description of the fundamental approach-avoidance response across the entire animal kingdom, Osgood’s research showing the importance of the good-bad dimension to the meaning of pretty much everything, and my advisor Bob Zajonc’s demonstrations of “feeling without thinking.” Still, Shelly and I were concerned that the intentional, conscious evaluation aspects of Fazio’s experimental procedure might have played a role in the results he got, so we expected, and predicted, that getting rid of those aspects would reduce or even eliminate the apparent unconscious effects.

Boy, were we wrong. Exactly the opposite happened. To our great surprise, over several years of trying to “get rid” of the effect, by removing things that might inadvertently influence the outcome of the procedure, what we kept finding was that the effect instead got stronger and more general than it was before. When we waited several days between the first attitude expression task (in which the subject said whether those things were good or bad as fast as she could) and the second task, which tested whether the attitudes were automatic, the effect happened for all of the objects, even those that inspired the weakest attitudes, as well as the strongest and everything in the middle. Then we changed the task that tested whether the attitudes were automatic, taking away the Good versus Bad buttons and asking participants to just say their second words out loud. Once again, we continued to get the automatic attitude effect, but now we got it for all of the objects, those generating both strong and weak attitudes. Amazingly, everything, it seemed, all the objects we used, was evaluated as good or bad under these stricter conditions, which, after all, were designed to mimic life outside the psychology laboratory more closely than did the original studies by Fazio and his colleagues. The new conditions captured more closely the mere effect of encountering these objects in the real world, without any conscious, intentional thought about how you felt about them at all.

The unconscious workings of our mind send us signals about when to stay and when to go not only about our passionate likes and dislikes, but also about our most lukewarm, indifferent opinions, and all those in between. If anything, in fact, the more we eliminated the conscious and intentional aspects of the tasks in our studies, the stronger and more general was the effect. Not the other way around. Now decades have passed since the original studies appeared, and since Shelly and I started our own research on the effect. Happily, twenty-five years of further research by many labs around the world have confirmed our findings, which were startling (especially to us) at the time.

My alligator friend, were he to actually exist, would be grinning and nodding his toothy green head at this conclusion. The unconscious evaluation of everything does appear to be a very old and primitive effect that existed long before we developed conscious and deliberate modes of thought. And so when we remove those conscious components of the task, as Shelly and I did in our series of studies, and leave the unconscious to its own devices, the attitude effect shows up more clearly than ever before. After all, the unconscious approach-or-withdraw response evolved to help protect us millions of years ago, before there was any such thing as conscious, deliberate thought (or any other thought, for that matter).

Push and Pull

Many years ago, a graduate student of Osgood’s at Illinois, Andrew Solarz, tested the connection between evaluating things as good or bad and making approach or withdrawal arm movements in response to them. This was back in the day before computers, and most psychology labs had a machine shop where the technical staff would create amazing pieces of apparatus to enable the psychology professors to test their theories. These often had enough wires and tubes and dials and levers to put Dr. Frankenstein to shame. They sometimes took months, even a year, to make. Solarz had the shop techs create for him a masterpiece of ingenuity in order to test his hypothesis. In his experiment, he presented words one at a time to his participants by means of a display box mounted on a response lever. A mechanical device would drop a three-by-five index card, with the word printed on it in large block letters, into a slot on the box, which sat atop the lever just above where the participant was gripping it, so that the word became visible to the participant; at that exact moment, an electronic timer started. The participants would then either push or pull the lever, depending on their instructions, as quickly as they could. It was kind of like a scientific slot machine.

Some of the participants were instructed to pull the lever toward them if they liked the object named on the card (for example, apple, summer), and to push the lever away from them if they did not like it (worm, frozen). The other participants were given the opposite instructions: to pull if they disliked, and push if they liked the object. At the end of the study, he computed the average times it took for the participants to push to indicate “good,” push to indicate “bad,” pull to indicate “good,” and pull to indicate “bad.”

What he found was that indeed, participants were faster to say “bad” by pushing the lever away than by pulling it toward them. And they were faster to say “good” when pulling the lever toward them than when pushing it away. Pushing the word away recalls the little paramecium moving away from the electric wire; pulling the word closer recalls the single-cell creature moving closer to the food. Solarz’s participants were acting the same way, without realizing it, of course: immediately ready to increase rather than decrease the distance between themselves and something they did not like (even though it was just a word on an index card), and just as immediately ready to decrease the distance between themselves and something they liked. Their immediate feeling of liking or disliking when they saw the word caused their arm muscles, just as immediately, to be more ready to make the appropriate movement. The good-versus-bad switch in their minds was literally making their muscles more ready to stay or to go, as the situation called for.
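Solarz's analysis amounts to comparing four condition averages: push-bad, push-good, pull-good, and pull-bad. The following sketch uses invented reaction times (not his data) just to make that bookkeeping concrete.

```python
from statistics import mean

# Hypothetical trial log: (movement, judged valence, reaction time in ms).
# These numbers are made up to illustrate the pattern Solarz reported.
trials = [
    ("push", "bad", 480), ("push", "bad", 500),    # pushing away a disliked word
    ("push", "good", 560), ("push", "good", 580),  # pushing away a liked word
    ("pull", "good", 470), ("pull", "good", 490),  # pulling a liked word closer
    ("pull", "bad", 570), ("pull", "bad", 590),    # pulling a disliked word closer
]

def condition_mean(movement: str, valence: str) -> float:
    """Average reaction time for one movement-by-valence cell."""
    return mean(rt for mv, val, rt in trials if mv == movement and val == valence)

# The Solarz pattern: withdrawal is faster for "bad", approach faster for "good".
print(condition_mean("push", "bad"), "vs", condition_mean("pull", "bad"))
print(condition_mean("pull", "good"), "vs", condition_mean("push", "good"))
```

In this toy log, the push-bad and pull-good cells come out faster than their mismatched counterparts, which is the compatibility effect the experiment was built to detect.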

At NYU more than thirty years later, Mark Chen and I set out to repeat the Solarz study but with the technological help of computerized displays and timing. We still had to get our machine shop people to make the response lever, though, to be like the one Solarz used—a three-foot-long Plexiglas rod connected to an electronic switch at the base, which was then wired into a computer input port. Our first experiment was a replication of the Solarz study, and we found exactly what he had found. But as in his original experiment, our participants were consciously and intentionally classifying each of the objects, because that is what they were instructed to do. Would the push-pull effect happen even when the participants were not consciously thinking about likes and dislikes at all?

So, in the second experiment Mark and I ran, we just had the participants move the lever as fast as they could each time a word appeared in the middle of the screen, like in one of those early rinky-dink computer games (think Pong). Every time a word came on the screen, the participant just knocked it off the screen as fast as he or she could by moving the lever. Sometimes they pushed the lever to do this, and other times they pulled the lever. And once again, they were faster to push for bad things and pull for good things, not the other way around, even though they were not trying to evaluate anything at all.

The logical next step is to assume that we are likely to have those basic, primitive approach and withdrawal reactions to people, the most important “attitude objects” there are. Michael Slepian, Nalini Ambady, and their colleagues showed just that. They used the push-pull lever design and had their participants push or pull the lever to respond as quickly as possible to photographs shown on the computer screen in front of them. The participants were told their job was to move the lever one way if they saw a picture of a house, and to move it the other direction if they saw a picture of a face. So they believed their task was to classify the photographs in terms of faces versus houses—they were not thinking in terms of whether they liked the face or not. The trick of the study was that the faces varied in terms of how trustworthy they were—they had been separately rated by other people, so that the faces shown to participants ranged from appearing untrustworthy to appearing trustworthy. (We will describe the remarkable power of faces in more detail below.) And indeed, participants made faster approach (pull) movements to the trustworthy faces, and faster avoid (push) movements to the untrustworthy faces, all accomplished unconsciously because the participants’ conscious task was not about judging the faces at all.

Today this basic approach or avoidance effect is being used to help make positive changes in people’s lives—to change negative behavioral tendencies, such as racist attitudes and cravings for alcohol and addictive drugs. Canadian psychologist Kerry Kawakami and her colleagues had white participants make pull (approach) joystick movements when they saw a black face on the computer screen, and push (avoid) movements when they saw a white face, and they did this for several hundred faces. Afterward, the participants’ automatic or implicit attitudes toward blacks, as measured by the Implicit Association Test (IAT), were more positive. Moving their arms in one direction instead of the other had actually changed their unconscious racial attitudes. And in another study, Kawakami and colleagues showed how making approach arm movements could change not only racial attitudes but actual behavior toward blacks as well. After making approach movements in response to a series of subliminal black faces they never even consciously saw, the participants then sat closer to a black person in a waiting room than did participants who had not just made those approach movements. This may not seem a very practical way to reduce racism in everyday life, but it does show the potential power of our ancient unconscious evaluation system over our modern-day social attitudes and behavior—and intriguingly, how our innate, evolved unconscious tendencies can be used to override our acquired cultural unconscious tendencies.

Another positive way this approach-avoid system has been deployed is to help alcoholics stop drinking. Reinout Wiers of the University of Amsterdam has developed such a therapy to combat alcoholism and other addictions. He had patients who wanted to stop drinking come in to his lab each day over a two-week period. There they would perform a simple computer task, taking about an hour, in which they classified photographs on the screen as either in landscape (wider) or portrait (taller) format. The critical part of the training was whether they pushed or pulled a lever to make their responses. The set of photographs was prearranged so that the patients always happened to push the lever when photographs of alcohol-related objects such as bottles, corkscrews, mugs, and wineglasses appeared. (There was also a control condition in which a different group of patients did the same task but without any alcohol-related photos being shown.)

This “pushing away” of alcohol-related objects was intended to increase the avoidance motivation toward alcohol in these patients. It was remarkably successful. Two weeks of pushing away the photos of the alcohol-related objects changed the patients’ unconscious attitudes toward drinking from positive to negative, as measured by the IAT test. And even more remarkably, follow-ups on these patients one year later showed a significantly lower relapse rate (46 percent) than for those in the control condition who had not pushed away the alcohol-related photographs (59 percent). Not perfect, not zero, but remember that the difference between those two percentages represents real people with real families and real jobs who did not relapse and start drinking again, when they otherwise would have. Wiers and his team used our scientific knowledge about unconscious mechanisms to give practical help to people wanting to make important changes in their lives that they were having difficulty accomplishing through good intentions alone.

What’s in a Name?

I’ve always loved to drive, and have driven across the United States a total of six times. The only state of the lower forty-eight I haven’t driven through in my own car is North Dakota, and doing so someday is high on my bucket list. I am also a lifelong fan of car racing. Like many people of my generation, I grew up listening on the radio to the Indianapolis 500 every Memorial Day. My dad and I used to listen to it on a transistor radio while we worked on the house or in the yard all day. It’s no surprise then that I later became a stock car racing fan. Ever since he was a rookie in 2002, my favorite driver has been the great Jimmie Johnson, a seven-time champion. My wife, Monica, on the other hand, roots for Danica Patrick, a world-class driver and bold shatterer of the glass ceilings of stock car racing (and Indy car racing before that) who is more successful than any other female racer in history.

While we can both present convincing, completely rational reasons for why these drivers are our favorites, please take note of our names and their names. John likes Jimmie Johnson (and he liked Junior Johnson before that). Monica likes Danica. Our names share sounds and initial letters, and the magnetism begins there. (My wife has a much better excuse because Danica is the only female driver—still, that’s a similarity, too.) This is called the name-letter effect, a phenomenon discovered in the 1980s that reveals another important unconscious source of preferences. We tend to embrace people who are “like” us, even if the source of that likeness is something as arbitrary as our names, which we ourselves don’t even choose, or a shared birthday, the date of which we had nothing to do with.

While Bob Zajonc showed that one way we unconsciously come to like something is by becoming familiar with it, another route to liking is similarity: we warm to things that are like us, even if those similarities are objectively meaningless. Remember the story of Ötzi back in Chapter 1, and the fact that human beings frequently killed each other in the ancient world. Our ancestors banded together as families in self-defense, and then as groups of families into tribes. Recognizing kin could well be a life-or-death example of evaluating whether to say yes or no. That someone was similar to you was a fundamentally good thing back then. Now flash forward to modern times. If someone, or something else, shares features of our self, our identity, we usually feel positively toward that person or thing. But this is a tendency that evolved long ago. We usually do not realize, at least at first, the actual reason why we have that positive feeling, and we certainly do not realize how strongly it might affect us regarding important choices, goals, and motivations. The researchers who discovered and documented the effect of this positive feeling call it implicit egotism: our liking, without knowing the actual reasons why, of people and things that are similar to us, even if only in superficial ways.

Through statistical examination of large public record databases such as the 2000 U.S. Census, the 1880 U.S. Census, and the 1911 English census (all now available online), as well as such other sources as Ancestry.com, psychologists Brett Pelham, John Jones, Maurice Carvallo, and their colleagues have discovered some quite startling patterns in human behavior.

First, there are disproportionately more Kens who live in Kentucky, Louises who live in Louisiana, Florences who live in Florida, and Georges who live in Georgia (these are just a few examples) than would be expected if determined by chance alone (by how prevalent the name is in general, compared to how many people live in these states). This is not because they were born there and thus more likely to be named after the state in some way. They moved there. They chose that state over all the other ones they could have chosen. Other studies have shown that men named Cal and Tex are disproportionately likely to move to states resembling their names. And people don’t just choose states with names similar to their own; people also disproportionately live on streets that match their last names—such as Hill or Park, Washington or Jefferson.

Sharing name letters (especially initials) also affects choices of occupations: there are proportionally more Dennys who are dentists, and Larrys who are lawyers, than should happen by chance alone. Meanwhile, people whose names start with H tend to be more likely to own hardware stores, whereas people whose names begin with F are more likely to own furniture stores. Across eleven different lines of work, men are disproportionately more likely than chance to work in occupations whose titles match their surnames: for example, Barber, Baker, Foreman, Carpenter, Farmer, Mason, Porter. This effect was as true in 1911 England as it is in the modern-day United States. The name-letter effect held for all eleven occupations. For example, there were 187 Bakers who were actually bakers, compared to the 134 expected by chance (taking both frequency of name and frequency of occupation into account). For Painters, the actual count was 66 versus 39 expected by chance; for Farmers, 1,423 versus 1,337. You can see by these numbers that these are not big effects, and there were certainly many Painters and Farmers who did something else entirely. That the names are significant influences at all is what is remarkable. And the effects are statistically reliable, and hold even when controlling for and ruling out some important alternative explanations generated by skeptics, such as gender, ethnicity, and level of education.
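The “expected by chance” baseline in these studies is simple arithmetic: if surnames and occupations were handed out independently, the expected number of matches is the share of people bearing the surname times the share of people in the occupation, times the population. A minimal sketch of that calculation follows; the population and count figures below are invented for illustration (they are not the actual 1911 census numbers, though they are chosen to land near the chapter’s baseline of 134):

```python
# Expected-by-chance count of surname/occupation matches,
# assuming surname and occupation are assigned independently.
# All specific numbers below are hypothetical, for illustration only.

def expected_matches(n_surname, n_occupation, population):
    """Expected number of people who both bear the surname and work the job."""
    p_surname = n_surname / population
    p_occupation = n_occupation / population
    # independence: P(both) = P(surname) * P(occupation)
    return p_surname * p_occupation * population

# Hypothetical census-style figures (NOT the real 1911 data):
population = 10_000_000
baker_surname = 11_200       # people surnamed Baker
bakers_by_trade = 120_000    # people working as bakers

expected = expected_matches(baker_surname, bakers_by_trade, population)
print(round(expected))  # expected Bakers-who-bake under pure chance
```

The observed count is then compared against this independence baseline; a reliable excess (like 187 observed versus 134 expected) is what the researchers mean by a name-letter effect.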

Now birthdays. Here, just as remarkably, one’s birthday has a significant influence on a person’s choice of spouse. People disproportionately marry someone who shares birth date numbers with them. Take Summit County, Ohio, where there were half a million marriages from 1840 to 1980. Looking at the birth-day number, regardless of month, a couple getting married was 6.5 percent more likely than chance to have the same birth day of the month. Looking at birth month, regardless of day, couples were 3.4 percent more likely than chance to be born in the same month. This effect was then found again when researchers consulted statewide Minnesota marriage records from 1958 to 2001. In Minnesota, couples were 6.2 percent more likely to share the same birth-day number, and 4.4 percent more likely to be born in the same month.
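For scale, the “chance” baseline for sharing a day-of-month can be worked out directly: with 365 dates in a (non-leap) year, days 1 through 28 each occur twelve times, days 29 and 30 eleven times, and day 31 only seven times, which puts the probability that two random people share a day number at about 3.2 percent. The sketch below makes that concrete; the uniform-birthdate assumption and the reading of “6.5 percent more likely than chance” as a relative increase are mine, not the researchers’:

```python
# Baseline chance that two random people share the same day-of-month,
# ignoring month. Assumes birth dates are uniform over a non-leap year.
from calendar import monthrange

YEAR = 2023  # any non-leap year works
counts = [0] * 32  # counts[d] = how many dates in the year fall on day-number d
for month in range(1, 13):
    n_days = monthrange(YEAR, month)[1]  # (weekday_of_first, days_in_month)
    for day in range(1, n_days + 1):
        counts[day] += 1

total = sum(counts)  # 365 dates in the year
p_match = sum((c / total) ** 2 for c in counts)
print(f"baseline P(same day-of-month) ~ {p_match:.4f}")  # prints 0.0324

# "6.5 percent more likely than chance," read as a relative increase:
observed = p_match * 1.065  # roughly 0.035
```

So the Summit County couples’ matching rate of roughly 3.5 percent, against a baseline near 3.2 percent, is a small but real tilt toward birthday-mates.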

I have succumbed to this effect myself. As I’ve already made abundantly clear, I’m a die-hard Led Zeppelin fan, going back to the time I first heard “Heartbreaker” on the Chicago station WLS, in the fall of 1969, when I was fourteen years old. Since then I’ve always felt a kinship with their music, but especially with Jimmy Page, the lead guitarist. Why is this? What do we have in common? Not much. I could never play the guitar, while he was a child prodigy and then a genius at it, let’s not even talk about looks, and he’s British. The answer? We share the same birthday. I feel a strange and obviously undeserved pride about this. At least it’s clear that I’m not alone in feeling such kinship!

A heartening, real-world demonstration of using unconscious affiliations for self-betterment occurred about ten years ago at a high school in my area. At the beginning of the school year, Yale researchers gave at-risk students who were struggling in math a fictitious New York Times article about a student from another school who had won a major math award. There was a little “bio box” at the top of the article. In that box, for half of the students in the class, the birthday given for the award winner matched the student’s own, although no mention was made of this fact. For the other students, the award winner’s birthday was a different month and date from theirs. That was all the experimenters did, just a small, invisible tweak to create a link to the student’s own identity.

In May of the following year, at the end of that school term, the researchers looked at the final math grades for all the students in the study. And lo and behold, the students who had shared a birthday with the award winner had significantly higher final grades in math than the students who did not have the same birthday as the winner. Those who had the same birthday felt more similar to the award winner, and this carried over to their belief about their own math ability, with positive effects on their level of effort for the rest of the school year.

A few years ago, when my daughter was in third grade, the kids in her class played Secret Santa. They all wrote down the three things they liked the most, as guidance for choosing gifts, and each child picked a different child’s list out of a box. For the student for whom my daughter played Secret Santa, his first love was Real Madrid soccer, and his second was “math.” He was the only student in the class who put down “math” as one of the things he liked the most. This particular student even requested, on his form, that his present be math related.

His name? Why Matthew, of course.

Grumpy Cats and Competent Politicians

Remember the movie Home Alone? Remember Old Man Marley, the scary-looking next-door neighbor who turns out at the end to be kind and friendly after all?

Looks can be deceiving. My daughter had a librarian at her grade school who looked very crabby, and my daughter and all the other first graders were afraid of her. This continued until one day the librarian came up to her and said she liked her boots. Suddenly my daughter’s opinion of the librarian completely changed for the better. It’s a person’s behavior that matters, not their face. We all know this at an intellectual level, of course, but it is very hard to shake the impression we get from a person’s face, especially our first impressions. It is not so much that we think we know what a person is like based just on their face. It is that we feel absolutely certain we are right about what we think about them.

There is a social media star who weighs about fifteen pounds, never says or writes anything, and just has her picture taken all the time. And she has four legs. Grumpy Cat is funny to us because she looks so damn grumpy all the time. And it is funny because we know she is just a cat and doesn’t realize how she looks to us, and most likely isn’t really grumpy at all. She just looks that way. Grumpy Cat is relevant here because what we are doing when we judge a person’s personality from just his face is treating it as if it were a window into his emotional state. A person we meet can have a chronic angry look to his face, but that does not mean he is always angry. (The same goes for cats.) Recently on social media I read a rant by a friend who was going off about a woman he had never met and knew nothing about, based just on this woman’s photograph, saying what a bitch she must be. Another wise friend said, “Just because she has resting bitch face doesn’t mean she isn’t a nice person.”

Darwin, you will recall, recognized the adaptive value, over evolutionary time, of communicating our emotions to others, mainly through our facial expressions. It was one of the first ways—perhaps the very first—in which we humans communicated with each other. Evolutionary psychologists John Tooby and Leda Cosmides call our attention to the intriguing fact that the muscles of the face are the only ones in the entire human body that directly connect bone to skin. Why would this be? Elsewhere in the body, muscles attach to bone in order to move our limbs; in the face, this direct bone-to-skin connection exists so that we can move the skin itself. Why only our face and not other parts of our body? Because the face is the part of us that other people look at the most, to see where we are looking, to watch our mouth to help them understand what we are saying, and so on. In other words, we have been specifically designed by evolution to display our emotions on our faces so that others can see them.

Are we born with the ability to read a person’s emotional state from their facial expression, and are we born to trust unquestioningly what the other person’s face is telling us? According to Darwin, we came to trust those facial expressions so much because we learned that emotions are difficult to fake; indeed, the facial muscles involved are hard to move voluntarily. Our ancestors had to trust what other faces were telling them because often their lives depended on quickly reading and assessing the people they encountered. Again, we are reminded of poor Ötzi, murdered thousands of years ago in that high mountain pass. As Tooby and Cosmides argued, “Given the homicidal nature of the ancient world, knowing someone was approachable and friendly would be a true life or death judgment.” As you might expect, then, the facial expressions of the people around us are still today one of the most powerful signals our environment gives us about whether we should stay or go. Modern research has confirmed that we make very fast assessments of whether a person is a friend or foe (stay or go) within a split second of meeting them. Furthermore, these impressions are so powerful—we trust this flash assessment so much—they can even affect the outcome of important things such as political elections.

Alexander Todorov is a Princeton psychologist and neuroscientist specializing in people’s immediate reaction to faces. In his early experiments, he asked participants to make personality judgments of people based only on their faces. They were shown a series of faces, taken from a database of seventy amateur actors, males and females between twenty and thirty years of age, and in different studies rated each person’s attractiveness, likability, competence, trustworthiness, or aggressiveness. These studies confirmed what Darwin and Ekman had concluded: there was high agreement among the raters in these personality judgments across the five traits rated and across all the faces rated. Everyone was “reading” each face in pretty much the same way. Also, these personality assessments were computed by the participants’ brains with lightning speed. How long the face was presented on the screen didn’t affect the personality judgments—the raters had the same sense of competence or trustworthiness, for example, after seeing a face for just one-tenth of a second or for a full second, or with unlimited time to see it. And it was the trait of trustworthiness that showed the highest consensus agreement of all among the raters, even when the faces were shown for just a split second.

Todorov and his colleagues moved on to see if a political candidate’s face influenced how competent voters thought he or she was. Their earlier research had shown that people believe competence to be the most important attribute for a politician to have. He and his team took photographs from actual governor and congressional candidates’ websites, and then showed them to people from other voting districts, so that the study participants didn’t know who the candidates were, or their policies, or political party—and they also only saw the pictures briefly, again for as little as a tenth of a second.

Remarkably—and somewhat disturbingly, when you think about it—those rapid judgments of competence based on the face alone correctly predicted the outcome of gubernatorial elections from 1995 to 2002. Princeton undergraduates who participated in the study saw the faces of the winner and the runner-up for eighty-nine gubernatorial races and were asked to decide who was more competent, “relying on their gut reactions.” These predictions were just as accurate when the faces were shown for only 100 milliseconds as they were when the faces were on the screen for many seconds. Interestingly, when another group of participants was asked to think carefully and make a good judgment (instead of doing it fast based on their gut), this actually reduced the ability of the (now slow and deliberate) face ratings to predict the election outcome. That reminded me of the automatic-attitude research that Shelly Chaiken and I had conducted years earlier, in which we found stronger unconscious evaluation effects after removing the conscious, deliberate aspects of evaluation from the task as much as possible. It also suggests that the actual voters in these elections were more often going with their gut appraisals of the candidates’ faces than making careful judgments about their characters.

In their second experiment, the researchers removed other important influences on competence judgments, such as cultural stereotypes, in order to gauge the pure effect of the face itself. They looked at only the fifty-five gubernatorial races for which the gender and ethnicity of the candidates were the same. Doing so increased the percentage of correctly predicted races from 57 percent to 69 percent, and the face judgments of competence now accounted for 10 percent of the candidates’ vote shares in these elections. And it is how competent the face appears that is especially important to voters—in this experiment, no other personality trait judgments of the faces predicted the election outcomes. This effect has been found again and again in other elections, in the United States and other countries.

Obviously we as voters are putting way too much faith in these quick-and-dirty assessments based on faces alone. Our track record on electing trustworthy politicians is pretty bad. There have been far too many elected officials (including a string of governors in my home state of Illinois) who may have had a trustworthy face but were then indicted and convicted for corruption. So the real question is, why do we feel so sure about people when we quickly size them up based on their faces alone? I think Grumpy Cat has the answer to this one. We did not evolve to read a person (or a cat, for that matter) based on static photographs of their face; photography is only a very recent invention. We evolved to size up a person quickly based on seeing them (and not just their face) in action, if only for a brief period of time. Static photographs, frozen in time, fool us. When we look at a photograph, such as the stock photo of a candidate or politician in a newspaper article, we are mistaking the signs of a temporary emotional state (which is what we are wired to do) for a long-term, chronic personality trait. And that turns out to be a big mistake.

Seeing candidates or politicians on TV doesn’t help much, either, if we mainly see them in stock, stage-managed situations (as in their campaign ads, speeches, or “photo opportunities”). Todorov’s studies consistently show that the candidates’ faces by themselves are influencing a lot of voters. What that suggests is that even seeing the candidates on TV, to the extent we do, doesn’t add much to what we already conclude from their face alone.

While competence may be the important face trait determining who we vote for, other face traits have surprising amounts of influence on other important real-life outcomes. Take court cases, for example. Leslie Zebrowitz of Brandeis University has devoted much of her research career to the study of how our faces determine our treatment by society. She and her colleagues have shown that qualities of a defendant’s face influence conviction rates and sentencing in actual court cases. Going into courtrooms during trials, they have found that, when all other facts of the case are the same, “baby-faced” adults are more likely to be found innocent and given lighter sentences than are other defendants. Racially prototypic faces cause the defendant to be treated differently as well. Shockingly but not surprisingly, black defendants who had darker skin received sentences that were on average three years longer than did black defendants with lighter skin who committed the same crime. Sam Sommers of Tufts University has similarly shown that among blacks on trial, those who had more of an African appearance received harsher sentences overall, and were more likely to receive the death penalty if convicted of murdering a white victim, than were those who had less prototypic African facial appearance. Prison is nothing if not society’s method of avoidance.

In a classic social psychology study from the 1970s, Minnesota researchers showed that in a get-to-know-each-other phone conversation, participants were rated as being friendlier and having a more attractive personality if the person talking to them believed that they were an attractive versus a less attractive person. The callers acquired this belief at the beginning of the experiment, when they were given a photo of the other person (a photo that was not actually of the person they were talking to). Nonetheless, believing that the person was attractive brought out the friendlier and more attractive side of the participants’ personalities. We all are guilty of treating attractive people more favorably and with greater friendliness than we treat less attractive people.

Even babies are biased toward attractive people in this way, which shows that the tendency is a hardwired aspect of human nature. Newborn infants, not even a day old, prefer to look at attractive compared to unattractive faces, and they look at attractive faces longer when given the choice. It takes us as adults just a quick glance to know whether a face is attractive or not. Neuroscience studies have revealed that for adults, eye contact with photographs of attractive people activates the reward centers of the brain. In one study, viewing attractive faces alone, without judging them in terms of attractiveness, caused the activation of the participants’ medial orbitofrontal cortex (reward center). We naturally and unconsciously like to see attractive faces; they are rewarding and pleasurable to us. So we hire attractive people instead of less attractive but equally qualified people, pay them more money, go to see the movies they are in, and want to have relationships with them. Badly. We really want them to stay, and not to go.

As we stay or go in the ever-unfolding present, we have mental and muscular reactions that operate on a different, faster, and more instinctual plane than does conscious thought. Evolutionary forces field-tested and kept these unconscious mechanisms because they allowed us to survive, to be an exception to the fact that 99 percent of all species that ever existed became extinct. We could easily have been one of them. But for millions of years our instincts for survival caused us to approach and support and love our tribe, and to avoid and fight and hate the other tribes. Darwin argued that banding together like this to protect ourselves from other humans conferred on us a significant evolutionary advantage, and so became an innate tendency quite early on.

So it went, down through the millions of years of our species. We attacked and killed “them” and they attacked and killed “us,” at horrific rates by modern standards. Distinguishing us from them, distrusting “them,” and helping the others in our own group became things we were born to do. Today, underneath the nuances of faces, and the sharing of birthdays and name letters, the primordial code still is, Us versus Them, friend or foe, with us or against us. There are domains in modern life where these powerful motors of action, which governed the lives of our hominid ancestors, still move us. North versus South. Germany versus France. White versus Black.

And even: Yankees versus Red Sox.

Cheering for the Clothes

On the night of October 2, 2010, Monte Freire was at the U.S.S. Chowder Pot III in Branford, Connecticut, watching the Yankees play the Red Sox on one of the restaurant’s big-screen televisions. A family man and an employee of the Parks and Recreation Department in Nashua, New Hampshire, Freire was in town to compete in a weekend softball tournament with friends. After having played earlier in the day, he and his teammates were now kicking back at the quaint, nautically decorated restaurant, which boasted a giant red lobster on its roof. There was no reason to think anything bad would happen. Or was there?

As any baseball fan knows, the rivalry between the New York Yankees and the Boston Red Sox is legendary. The teams’ home cities themselves were fierce competitors for cultural and economic dominance during the eighteenth and nineteenth centuries, but their baseball stadiums became their symbolic battlefields starting in 1919, when the Sox sold the great Babe Ruth to the Yankees: Boston then suffered an eighty-six-year dry spell, failing to win a single World Series championship. (Superstitious fans called this the Curse of the Bambino, after Ruth’s nickname.) Over the years, the Yankees were clearly the stronger competitor, though there were many exciting showdowns between the two teams, and Boston fans never wavered in their support of the underdog Sox. In 2004 the “curse” was finally broken. The Red Sox first vanquished the hated Yankees in an epic comeback in the league championship and then went on to win that year’s World Series. (And then to win a couple more since.) The long-standing rivalry was still passionately intact that fall night at the Chowder Pot.

The game up on the big screen that Freire and his friends were watching was a decisive one for the Yankees. If they won, they would take their division. The Red Sox, naturally, hoped to prevent this. The restaurant was crowded with fans. At some point during the game, Freire and his friends began trading words with a local man named John Mayor, a Yankees fan. As the game progressed on-screen, Mayor became increasingly agitated and aggressive, loudly letting the visitors know that they were in “Yankee territory.” Freire and his friends alerted a nearby bartender, but no employee intervened. The tension continued to escalate, and before anyone knew what was happening, Mayor pulled out a knife, came over and stabbed Freire in the neck, twice, and ran out of the restaurant.

Freire collapsed, bleeding, while his friends chased Mayor outside behind the Chowder Pot. They apprehended him, assailing him in an onslaught of punches and kicks, until police officers arrived. Freire was taken to the hospital, where he technically died twice that night, though doctors revived him both times, and he somehow pulled through. Mayor was also taken to the hospital to recover from the blows he’d received; then he was arrested and charged with attempted manslaughter.

I live about ten miles from the Chowder Pot, and drive by it on U.S. 1 all the time. When she was very little my daughter was quite frightened of the giant lobster on top of the building and would hide her face in her hands when we approached. So, like many in the area, I followed the news about the incident as it developed, and two days later, an article in the Branford Eagle noted, “Police were at a loss Sunday to understand how a baseball rivalry could go so wrong.” Now, sports fans know very well that rivalries are intense, and can sometimes turn violent, and as a psychologist, I knew that sports are just a ritualized modern-day replication of the tribal conditions in which our mind evolved. And in the sports world, Yankees versus Red Sox is about as “us” versus “them” as it gets.

But as the quote from the local police showed, this can all seem rather odd to sports outsiders; after all, these are grown men playing a boy’s game, hardly something worth killing another person over. In a recent stand-up routine, Jerry Seinfeld played this role of the outsider with perfect pitch. He goes to a baseball game with a friend. He makes the mistake of cheering for a player they all cheered for the last time he was at a game. “What are you doing?” says his friend, glaring at him. “He’s on the Phillies!” Jerry looks puzzled: “But you loved the guy last year.” “That’s when he was a Met!” says his friend, in near exasperation. “Ahhh, I get it,” says Jerry, as everything becomes clear. “We’re cheering for the clothes.”

Until the 1970s, and the advent of free agency, players didn’t change teams all that much, and baseball fans could grow up cheering for pretty much the same players their entire childhood. Today things are very different, and a “hated” player on a rival team could suddenly be forgiven by fans, who now cheer for him instead. Seinfeld was right. When it comes right down to it, these days we are really cheering for the clothes.

There are two psychology experiments, an old one and a new one, that show how transient and plastic these “us” versus “them” feelings can be, and they speak to the senseless violence that happened at the Chowder Pot that night. Yet these studies also show that there is hope that we can control hatred and hostility toward out-groups. If “they” become included in a new “us,” then we can all be happy together. If former “theys” become part of “our” team, just as with traded baseball players, dislike suddenly changes to like.

The classic study was done more than sixty years ago at Robbers Cave State Park in eastern Oklahoma, right off of Highway 2. Nestled up in the foothills of the San Bois Mountains, Robbers Cave is a green, protected wilderness containing lakes, hiking and equestrian trails, and camping grounds with cabins. It was here in this tranquil place, in the summer of 1954, that Muzafer and Carolyn Sherif carried out one of the most famous experiments in the history of psychology.

The Sherifs invited a group of twelve-year-old boys—none of whom previously knew each other—to the Boy Scout area of the park for a multi-day camplike experience. The boys were all white and came from Protestant, lower-middle-class families. They didn’t know that they were part of an experiment. What the Sherifs hoped to learn about intergroup conflict and cooperation hinged on creating two groups of boys, not unlike fans of opposing sports teams. They divided the boys into these groups on arrival, keeping them separated so neither group even knew there was another. For several days, each group hiked and swam and bonded in their own part of the camp, becoming a team of sorts. They found who the natural leaders were, established a kind of hierarchy, and cohered into a unified collective. And as boys are wont to do, each group came up with a cool name for themselves—one was the Eagles; the other was the Rattlers.

Then came the twist. The Sherifs brought the two groups together. But that wasn’t all. As the boys soon discovered, not only was there another “tribe” in their midst, but they would be competing with this new opponent (out-group) in games such as tug-of-war and—of course!—baseball.

The boys’ lives at the camp abruptly changed. Their collective and individual behavior now passed through a dramatically simplified mental filter of us against them. The Rattlers rallied their team spirit against the Eagles, withdrawing into a tighter unit and antagonizing their perceived foe. They stuck their team flag into the ground of the playing field and menacingly warned the Eagles not to mess with it. The Eagles, naturally, found a way to burn the Rattlers’ flag, then trashed their cabin. Soon enough, the tensions became so heated that the “counselors” finally had to intervene physically to ensure the boys didn’t hurt each other.

The Sherifs’ Lord of the Flies–like experiment at Robbers Cave was unsettling. How easily the boys’ liking and disliking of each other could be manipulated, simply by dividing them into two groups, and how quickly these attitudes turned into hostile acts, was discouraging. It becomes easier to understand how horrible incidents like the one that nearly killed Monte Freire can happen.

At the end of that strange, balkanized summer for the twelve-year-old boys, the experimenters tried to end the hostility and animosity between the two groups. They did this by giving all the boys some important common goals, which they could accomplish only if they all worked together. For example, on the way back from a distant part of the state park, the vehicles carrying the boys got stuck in some deep mud on the dirt road. Only by all of them pulling on ropes could they get the trucks unstuck and get back to their camp—which they did, to much cheering and pride. After a few more shared accomplishments, they were now all one team, laughing and having a great time with each other, the former bitter rivals now great friends. Their “us” identity had been changed by the common, shared goals—instead of Rattlers and Eagles, now they were all boys at the same summer camp together.

In a modern experiment on the same theme, psychologists Jay Van Bavel and Wil Cunningham showed how unconscious racism can be “e-raced” when members of the racial out-group become members of the main group. When white participants were shown black faces and told that these people would be their teammates on the next task, their initially negative implicit attitudes toward those same black faces (measured by the IAT) suddenly changed to positive. This was even before they did anything on the team together. Just like those of the boys in the Robbers Cave study, our unconscious stay-or-go responses to social groups are not hardwired and unchangeable, not by any means. The participants in the Van Bavel experiment were not cheering for the skin color in that second IAT task. They were cheering for the clothes.