CHAPTER 1

The Past Is Always Present

Around 3200 BC, a man with brown eyes and wavy hair lay dying in a boulder-choked gully in what is now the Italian Alps, at more than ten thousand feet above sea level. The man had fallen facedown on the ground, his left arm crossed under his neck. He was five foot two, around forty-five years old, and had tattoo-like markings on his skin and a gap between his two front teeth. He had recently eaten some grains and ibex meat, and had a fractured rib. It was either spring or early summer, yet at this harsh altitude, with snowcapped peaks rising all around, the weather was unpredictable. He wore a goat-hide coat and leggings, carried a copper-bladed ax and other implements, and had a small medicinal kit with him, though it wouldn’t save him.

He died, and not long after, a storm descended, sealing his body in ice.

Five thousand years later, on September 19, 1991, two German hikers were making their way down a mountain in the Ötztal Alps and decided to take a shortcut. As they left the customary path, they passed by a gully and noticed an odd shape down on its rocky floor, which was half-flooded with meltwater. They approached it for a closer look, only to discover a human corpse. Shocked, they alerted the authorities, who were eventually able to remove it from the ice in which it was still partially stuck. Soon they realized it wasn’t a tragically unlucky mountaineer, as first believed, but one of the world’s oldest mummies. Thanks to the ice that had covered the brown-eyed man, and the tucked-away positioning of the gully, which put it out of the path of the crushing movements of the glacier, the body was a monumental scientific find: an exceptionally well-preserved specimen of human life in the Copper Age, offering insights as well into human death.

In the years following the discovery of Ötzi—one of the several nicknames that the media gave to the man who met his end in that lonely ravine—scientists carefully analyzed his remains and the objects found with him. One thing they wanted to know was what had killed him. This turned out to be a less than cut-and-dried forensic task. While Ötzi had suffered a head wound on that long-ago day before the storm rolled in to freeze him, it wasn’t so clear that this was the main cause of his death. For example, he had a parasitic worm (scientists found its eggs in his stomach), and a test on one of his fingernails revealed that he suffered from a chronic malady of some sort (possibly Lyme disease). The same test also revealed that his immune system had undergone periods of acute distress three times during the last four months of his life. Maybe he had just become weak from a combination of altitude and poor health, and fell off the mountain into the gully. Also, dangerous levels of arsenic were present in his blood, leading researchers to believe that he worked as a metallurgist. As if this weren’t enough, he also had past bone fractures and a cyst that probably was an aftereffect of frostbite.

And you thought you had problems.

While there were many different leads about the nature of his demise, one thing was clear: Ötzi’s life was an ongoing assault from his environment. He must have been quite hardy to have made it to the age that he did. And all of this happened to a man who likely enjoyed high status in his community, as his possession of a copper ax suggests. But in the end, scientists discovered it wasn’t his health that killed Ötzi, but a more intimate peril—other humans.

In 2001, X-rays revealed an object hidden beneath the skin of his left shoulder. After a detailed inspection, researchers concluded that it was a flint arrowhead; its sharp point had punctured a blood vessel, which would have caused him to bleed out in a very short time. In other words, Ötzi had been murdered, leaving behind one of the coldest cold cases in human history.

The revelation cast his demise in a new light. His head wound, it now appeared, was related to the assault that took his life. Either he was bludgeoned by the same attackers who had shot him with the arrow, or he bashed his head in a fall brought on by the heavy blood loss. Perhaps he was even shoved into the gully by his assailants. Whatever the specific sequence of events that led to his death, it was surely a ghastly scene—a fight for survival that Ötzi lost. Yet this one fateful day arguably resulted in less bodily trauma than the forty-plus years of his daily existence, which was beset with disease, painful physical damage, and a variety of hostile factors in his surroundings. Ötzi’s life, just like his death, speaks to the tremendous dangers and difficulties the average human encountered throughout life during our species’ long evolution. This is crucial to understand, since it was amid these same dangers and difficulties—which go back much further than the Copper Age, a relative yesterday on the timescale of human evolution—that our adaptive unconscious brain systems were shaped and honed.

The obvious yet profound thing is that, unlike the personal experiences that shape who we are in the present, we have no memory of this past. We have no recollections of our evolution. It is hidden from us, which is slightly unsettling considering how dramatically it influences what we think, say, and do. We are born “factory-equipped” with some very basic motivations that came into being during a very different period in human history. (We also come preassembled, of course, though we grow in size.) As Charles Darwin wrote in 1877, “May we not suspect that the vague but very real fears of children, which are quite independent of experience, are the inherited effects of real dangers and abject superstitions during ancient savage times?” Yep, we may. Humans are not a tabula rasa, or blank slate. We have two fundamental, primitive drives that subtly and unconsciously affect what we think and do: the need to survive and the need to mate. (And in the next chapter we’ll focus on a third innate drive, to cooperate with each other, which is useful for both survival and reproduction.) Yet in modern life, these ancient, unremembered drives, or “effects” of the mind, often operate without our knowledge; they can cause us to be blind to the real reasons we feel or do things. By peeling back the layers on this hidden past that still affects us, and exposing the ways in which survival and reproduction are always at work in our minds, we can better understand the present.

Where’s My Button?

Now, I’ve never had to flee murderous assailants armed with flint-tipped arrows on a mountain in the Alps, as Ötzi did. But I have—like most people—felt the same will to survive surge through my body the way it must have for him.

It was August 1981, and I had just moved to New York City to begin teaching at NYU. I was twenty-six years old, fresh out of grad school, and the only other time I’d been to the city was for my job interview a few months earlier. Right away, I was on edge. Every morning at around six o’clock, an angry man would start yelling on the street below my studio apartment. I had no air-conditioning and it was the peak of summer, so my windows were wide open. For a week or so his shouts would wake me up, and occasionally a bottle would smash close to my window. I eventually learned that then mayor Ed Koch, who was up for reelection, lived in my building, up in the penthouse, and the angry guy’s projectile bottles were meant for him. Now, Angry Guy couldn’t throw high enough to reach the penthouse apartment, but he sure could throw high enough to reach my studio. While knowing I was not his intended target made me feel slightly safer (only slightly), the city outside my apartment didn’t.

Washington Square was a rougher neighborhood in the 1980s than it is today. (The same is true of many other parts of Manhattan.) During my first week there, two men ran right past me near the Washington Arch, the second one chasing the first one with a switchblade. Those first few months, I was too apprehensive to go anywhere but work during the day, and I never went outside after dark. My only furniture at that point was a wooden chair and a folding table, and every night I would double-check the four different locks on my door and wedge the top of the chair under the doorknob. Although I managed to go to sleep each night having lived another day, my fight-or-flight system was on constant high alert. I didn’t yet have a sense of belonging in New York, which would only come years later. I had had a wonderful childhood in small-town America, climbing trees and playing baseball and riding my bike around with the gang of kids on my block, and then going to college in my hometown, followed by graduate school in another midwestern college town, Ann Arbor. None of this was any preparation for the multicultural, densely packed, and constantly noisy streets of New York City. It was culture shock, big-time, and I had to have my eyes wide open and attention constantly vigilant if I was going to survive in it—much less thrive in it.

Working on my degree at Michigan a year earlier, I had read an important paper by the psychologist Ellen Langer pointing out the artificiality of many of the social psychology laboratory studies of the time. This paper turned out to presage my own experiences after moving to the city, maybe because Langer based her paper on studies she ran in New York. In real life, she reminded us, the world is a fast-moving, busy place, quite unlike the quiet and calm psychology laboratory rooms where an experimenter works with her participants. Reading Langer’s paper while still in Ann Arbor, I understood her argument at an intellectual level, but boy, did I really understand it on a personal one after moving to the city itself.

In many of the studies in the emerging psychology research area of “social cognition”—just starting up when I arrived at NYU—the study participants would be given a button to press when they were ready to move on to the next piece of information. They could read and think about a sentence—say, describing a particular behavior by a person in a story—as long as they wanted to, then press the button to get the next piece of information. Langer said in effect, Gee, this would be great, but in real life we do not have a magic button to press whenever we want the world to stop for a moment so we can figure out what is happening and why. We have to deal with things on the fly, in real time, and we have a whole lot of other things to do in any given instant than just form impressions of the personalities of the people we are with. Our attention has to be focused on several different tasks simultaneously, including what we need to get done at the moment, and there’s not all that much attention left to ponder the world at leisure.

New York was overwhelming to me: so many people, so much traffic, so much going on to pay attention to. I wondered if I could bring impressions of the city together with Langer’s point in order to create a study. One morning, I stepped out of my office building, wended my way through the crowds on the street, looking in every direction at street crossings, then suddenly came to a complete stop in the middle of the sidewalk on Washington Place. “Where’s my button?” I said to myself. I wanted a button to stop the real world so I could figure it out and also navigate it safely. But of course, there is no such button. The question I soon asked myself then was, How do we do it without one?

In the history of humankind, we never had the luxury to pause what is happening around us until we figured out the right/best/safest thing to do. We needed to make sense of the world—especially the dangerous social world—quickly and efficiently, faster than our slow conscious thinking was capable of. We often needed to react to dangerous situations immediately. Not long after expressing my wish for a stop button, I benefited from these unconscious skills firsthand when I stepped off a curb on the way back to my apartment, and was nearly hit by a bicycle whizzing the wrong way down that one-way street. With no time to think, I jumped back onto the curb just in time. In fact, I found myself back on the curb before I was aware of the bicycle that had just sped past. (And I made a mental note for the next time that not everyone obeys one-way-street signs, so always look both ways.) Reflexive, automatic mechanisms (or instincts) for physical safety had protected me, bypassing slower thought processes. I thought that this faster, unconscious form of thought and behavior must be one important reason we were able to deal with the busy world on a real-time basis.

Back in the lab, we set to work to test this idea, designing a research program with the premise that there was, in addition to relatively slow conscious thought processes, a faster, automatic, and not-conscious way in which people dealt with their social worlds. This was a radical premise, because at this time much of psychology continued to assume that everything we decided and did was the result of intentional, conscious thought. Like Langer, we wanted to make our laboratory studies true to the constant onrushing of the world. After all, the point of our research was to understand what was happening out there in real life, not just what happened in quiet, simple lab environments. In one of our first experiments, we redid one of the “button” studies, in which participants could look at a piece of information we gave them for as long as they wanted before making a judgment about a person, and only then press a button to continue. But we added a twist.

Seated in front of a computer screen, our participants read about Gregory, a fictitious person, and twenty-four different things that Gregory had done during the past week, one behavior at a time. In the “honest Gregory” condition, he did twelve honest things, such as “returned the lost wallet”; six dishonest things, such as “did not admit his blunder”; and six neutral things, such as “took out the day’s garbage.” In the “dishonest Gregory” condition, the honest and dishonest proportions were reversed: twelve dishonest things, six honest, and six neutral. The twenty-four behaviors of honest and dishonest Gregory were presented in a random order. We asked the participant to form an impression of Gregory while reading the behaviors. Half of the participants had a button so that they could consider each behavior as long as they wanted, before advancing to the next one. Now, so far this was just a standard social cognition experiment, the kind that Langer had criticized. The wrinkle we added was a second condition where everything was the same except the participants did not have a button. Instead, the behaviors were presented very quickly, with participants allowed just enough time to read each of them once before the next one came on the screen, and they had to do the best they could in “real time” in figuring this guy Gregory out.
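For readers who like to see a design laid out concretely, the stimulus structure just described can be sketched in a few lines. This is an illustrative reconstruction, not the original experiment code; the function name and category labels are my own, and only the counts (twelve/six/six, reversed across conditions) and the random presentation order come from the study as described.

```python
import random

def make_behavior_list(condition):
    """Build the 24-item stimulus list for one condition of the
    'Gregory' impression-formation study (illustrative sketch).
    Honest condition: 12 honest, 6 dishonest, 6 neutral behaviors;
    the dishonest condition reverses the honest/dishonest counts."""
    if condition == "honest":
        counts = {"honest": 12, "dishonest": 6, "neutral": 6}
    else:
        counts = {"honest": 6, "dishonest": 12, "neutral": 6}
    # Expand the counts into a flat list of behavior categories...
    items = [trait for trait, n in counts.items() for _ in range(n)]
    # ...and shuffle, since behaviors were shown in random order.
    random.shuffle(items)
    return items
```

The only experimental manipulation beyond this list was the pacing: one group advanced through the items at will with a button press, while the other saw each item for just long enough to read it once.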

As you might expect, having the button made a tremendous difference. With it there, with the magic ability to stop the world until they’d figured things out, participants had no problem judging Honest Gregory as more honest than Dishonest Gregory. After all, Honest Greg did twice as many honest as dishonest things, and Dishonest Greg did twice as many dishonest as honest things. But without the luxury of the stop button, the participants could not tell any difference between the two! Their impression ratings were based only on those behaviors they could later remember; they were not able to form an impression while Gregory’s behaviors were coming at them rapid-fire. Without a button to stop the world for a critical moment, they could not detect even such an obvious difference between people as between Honest and Dishonest Gregory in our study. They couldn’t, but another group of our participants could. This other group was able to tell the difference between Honest and Dishonest Gregory even under the rapid-fire conditions, without the stop button to help them. We had selected them for the study in advance, because we predicted they would be able to deal with the overload just fine.

Who were these special people? They are you and me. What I mean is that there was nothing particularly special about this group, except that they were especially attuned to honesty and dishonesty. How honest a person was really mattered to them, in terms of whether they liked that person or not. Honesty is of course important to all of us, but for this group it was the number one important thing about a person. It was the first personality trait that came to mind for them when asked to write down the features of a person they liked (on a questionnaire we had given to all of our potential participants several months earlier), and dishonesty came first when writing down on a blank piece of paper the characteristics of a person they disliked. They chronically thought first about a person’s honesty when deciding whether they liked or disliked him. But each of us has our own particular sensitivities—for you it could be how generous a person is; for the person near you right now it could be how intelligent that person is. Or shy, or hostile, or conceited, or whatever. There is a wide range of personality traits for which we can develop these automatic antennae; we just picked one to study as an example standing in for all the rest.

That this group with the honesty antenna was able to deal with the no-button conditions just as if they were in the button condition tells us that we are all able to develop radar to pick up the important blips of meaning in our social world, without having or needing to stop and consciously figure them out. We are able to detect aspects of another’s personality and behavior that are most important to us even when our mind is very busy. We can certainly do this by the time we are adolescents and young adults—but this is not something that young children can do before they’ve had enough experience with the social world. It develops over time like any skill does, such as typing on my keyboard now, or driving a car—activities that are often terribly difficult and overwhelming to start with but with experience become easy and effortless.

The bigger picture our button study paints is that—just as Charles Darwin argued in his seminal book on emotions—often the same psychological process can operate in an unconscious mode as well as a conscious mode. Our participants who had the ability to automatically and unconsciously deal with honesty information formed impressions of Gregory very similar to those formed by participants who didn’t have that ability but did have the button. That is, through using the button to slow the world down to a speed their conscious processes could handle, they were able to deal with the information as well as, and in the same way as, those who could do it using much faster and more efficient unconscious processes. But those participants who could not do either—who did not possess the unconscious antenna for honest behavior, and were not given a button to be able to deal with it consciously—were unable to notice the difference between the very different honest and dishonest versions of Gregory.

So now we had the beginnings of an answer to the question I first asked myself out on Washington Place, the busy New York street, that morning. Thanks to our ability to develop perceptual skills that can operate quickly, efficiently, and unconsciously under real-world conditions, quite often we don’t need a button.

The Alligator of the Unconscious

Our study with Gregory and the magic button was one of the first to show that automatic, unconscious ways of dealing with our social world did exist, and that their existence within us made sense given the busy and dangerous conditions—especially regarding other humans—under which we evolved. Back then (as well as today) we didn’t always have time to think, so we needed to size people up quickly based on how they acted, and we also needed to be able to act and react quickly. To paraphrase the old saying, “She who hesitates has lost”—her life, a limb, her health, her child. But there is an important difference between the evolved unconscious motivations for survival and physical safety that came up in our story of poor Ötzi (to which we’ll return in a moment), and our unconscious ability to detect honesty or shyness or intelligence under rapid-fire, real-world conditions.

We come factory-equipped with those basic motivations for survival and safety, but the “people radar” was a skill we had to develop out of experience and practical use. Think of it as the difference between breathing and driving. The one you were born with and never had to learn, the other you had to learn, yet both now can operate (under normal conditions) without much conscious guidance. Look a little more closely and you can see that even driving requires some evolutionary, “born this way” machinery. After all, let your dog practice driving all you want (far away from me, please) and he won’t ever be any good at it. (He might approach the level of some of the drivers in my neighborhood, however.) What I’m getting at is that our ability to drive a car, which only gets up to speed (sorry) after considerable experience and practice, is like our ability to develop a “people radar” through experience and practice, as in the button study. Both depend on the ability of the human mind to create new useful unconscious “add-ons,” out of our own personal experience with the world, to those we were originally born with.

When we started researching adaptive unconscious mechanisms for dealing with the busy world, back in the early 1980s, this “driving,” or experience-based, unconscious process was all we social psychologists knew about. Evolutionary psychology was just getting started, thanks to Paul Ekman and other pioneers such as David Buss and Douglas Kenrick. The field of cognitive psychology had just overthrown the dominant theory of behaviorism, made famous by its most ardent advocate, B. F. Skinner. As you’ll recall from the Introduction, behaviorism held that the human mind barely mattered, and conscious thinking didn’t matter at all; even the complexities of human behavior—including language and speech—were said to be caused by reflexive, trained reactions to the stimuli in our immediate environment. Cognitive psychology, on the other hand, championed the role of conscious thought and assumed it was necessary for nearly all human choices and behavior. Nothing happened, according to this view, without you consciously and intentionally causing it to happen. But this wasn’t right, either. (Extreme, all-or-none positions tend not to be.)

Within this “conscious-first” framework of cognitive psychology, from which my then newbie field of social-cognitive psychology took its lead, the only way an unconscious process could exist was by being conscious (and deliberate) first; then, only after considerable experience could it become streamlined and efficient enough—automated was the word we used—to not need much conscious guidance anymore, just like driving a car. (William James had said the same thing in 1890, that “consciousness drops out of any process where it is no longer needed.”) For the next twenty-five years, then, up to about the turn of the millennium, I and the rest of my field assumed that this was the only way in which unconscious mental processes came into being: starting out conscious and effortful, and only with experience and frequent use, becoming able to operate unconsciously. But I and the rest of my field were wrong, or at least holding to an incomplete picture. This was because we were not paying enough attention to the growing body of theory and research evidence from the equally new field of evolutionary psychology growing up right next to us. We were playing in our own sandbox too much, perhaps, and not looking around at the rest of the busy playground.

What caused me to finally yank my head out of the sand and look around more widely was that this “conscious-first” assumption was starting to break down. We were starting to find effects in my own lab that this assumption could not explain, and there was also a wave of exciting new findings in developmental psychology—the study of infants and toddlers who have not yet had much experience or practice in the world—showing automatic and unconscious effects in children too young to have had much conscious practice or experience doing what they were so naturally able to do. This was marvelous new evidence of just how factory-equipped we come into the world, in terms of our ability to deal with our fellow humans, and it directly contradicted the bedrock assumption that these unconscious processes only came about—in older children and adults—after a lot of conscious use and experience.

This new evidence presented me with a puzzle during my own first twenty-five years of research, a conundrum I could not stop thinking about. Finally, after I had spent many years considering this problem, my daughter was born, and I took a semester of paternity leave to be able to spend time watching and playing with her at home. And while she was crawling around and contentedly playing with her toys and stuffed animals in her playpen, I sat nearby and read, more widely than I usually did, in areas such as evolutionary biology and philosophy, trying to find the answer to my longtime puzzle. How could there be, I pondered, psychological processes—what are called higher-order mental processes dealing with evaluation, motivations, and actual behavior—that operated unconsciously but apparently without the prior extensive conscious experience and use of them that we’d long assumed was necessary for their unconscious operation?

And so I found myself, on a beautiful fall day in 2006, many years after my epiphany on the streets of New York City, up in my tree house of an attic in New Haven, Connecticut, all the windows open and watching my infant daughter crawl around on the floor in front of me. She was trying her best to make sense of the world around her, just as I was. I had a stack of books next to me, classic works on human evolution by giants such as Richard Dawkins, Ernst Mayr, and Donald Campbell. The warm afternoon sun was pouring through the windows of the nursery, and I was feeling a bit drowsy. At the time, I was getting about as much sleep as most parents of infants do—little to none. As I finally got my daughter down for a nap—as usual, quite reluctantly on her part—I spread out all my research papers and notebooks on my own bed. I knew I was still missing something, but I didn’t feel I’d come any closer to what that something was. As I picked up a book and began reading, I could feel my eyes getting heavier and heavier. I fought it, until eventually I slumped over onto my notebooks and papers and fell into a deep sleep.

I was in Everglades National Park in Florida. I stood on one of those raised wooden walkways that look out over the swamp. Everything was in full color, and I could feel the humidity and denseness of the heavy air. Cypress and mangrove trees hemmed in the murky, almost black swamp water. As I stood on the walkway staring down into the swamp I saw ripples emerge, and a large and scaly alligator appeared in the murky water below. I walked ahead and the alligator swam alongside me. The alligator looked ominous, but in my dream I wasn’t afraid of it. After what seemed to be maybe five or ten seconds of walking, the alligator had gotten a little ahead of me. Then it stopped and, almost in slow motion, began to roll. It flipped completely over, exposing a long white belly that looked surprisingly tender and soft.

I awoke with a start and sat bolt upright. That was it. My eyes were wide open but I could still see that flipped-over alligator in front of me. I can vividly remember, even now, a decade later, the huge wave of relief that flowed over me, a tremendous release of tension. It was as if a weight I had been carrying for more than a decade just lifted away. Of course! I said to myself. I grabbed the pen and paper in front of me on the bed and wrote down everything I had seen in my dream, but more important, what that dream had just told me. In that moment of clarity, I finally understood how all the new unconscious effects being reported could occur without needing extensive prior conscious experience, or even any relevant experience at all, for that matter.

It’s the unconscious first, the alligator—that literally flipping alligator—was telling me. You dummy.

I’d had it completely backward, all those years. The alligator was telling me to flip my assumptions. Sure, all the new evidence did not make sense under the seemingly unshakable assumption that extensive conscious use of a psychological process came first, before it became capable of operating unconsciously. But the problem wasn’t the evidence; it was my “conscious-first” assumption. The white belly of the alligator was the unconscious, and it was telling me that it would all make sense if I only realized that the unconscious came first, both in the course of human evolution and in the course of our individual development from infancy to childhood to adulthood. I had to flip my ingrained assumption in two ways: for a given person, it is not true that the conscious use of a process must come first, with unconscious operation possible only after repeated use; and over the course of human evolution, our basic psychological and behavioral systems were originally unconscious, existing long before the rather late appearance of language and the conscious, intentional use of those systems. By “systems” I mean the natural mechanisms that guided our behavior: approaching things (and people) we liked and avoiding those we did not; naturally paying attention to and noticing things out in the world (like sources of food and water) that would satisfy our current needs; and important survival instincts, such as the fight-or-flight response and other inborn mechanisms for avoiding danger (like our fear of the dark, and becoming instantly alert after a nearby loud sound). And for each of us as infants, these basic evolved motivations and tendencies operate exclusively automatically up to about age four, when we begin developing conscious, intentional control over our minds and bodies.
The alligator was telling me that not everything starts out as conscious and intentional and only after that becomes (with practice and experience) capable of unconscious operation. Mr. White Belly was saying that unconscious processes come first, not the other way around.

In retrospect, this dream was rather remarkable in another sense as well, for the dream itself was unconscious—I watched and experienced it passively, as if it were a movie on a screen. Many other scientists in the past have reported having dreams in which the solution to a problem they’d been working on for some time was revealed to them in some symbolic way. But my own scientific problem had to do with the unconscious per se, and so for perhaps the first time in human history, the unconscious was telling someone about itself. The resolution of my decade-long quest to understand this fundamental question about unconscious processes had come, at last, from my own unconscious processes.

What we now know, thanks to Darwin, cultural (and cognitive) anthropology, and modern evolutionary biology and psychology, is that the human brain evolved slowly over time, first as a very basic unconscious mind, without the conscious faculties of reason and control that we possess today. It was the mind of millions of organisms that don’t have or need anything like our human consciousness to act adaptively in order to survive. But the original unconscious mechanisms of our long-ago brain did not suddenly disappear when consciousness and language—again, our own very real superpowers among earthly creatures—finally emerged rather late in the evolutionary story. Consciousness wasn’t a different, new kind of mind that miraculously appeared out of the blue one day. It was a wonderful add-on to the old unconscious machinery that was still there. That original machinery still exists inside each of us, but the advent of consciousness gave us new ways to meet our needs and desires, the ability to intentionally and deliberately use that old machinery from within.

So what does it mean that an unconscious mind was the foundation for the conscious version, and not the other way around? For starters, it resolves the either/or debate between the behaviorists and the cognitivists. We aren’t mindless automatons at the total mercy of incoming stimuli that send us marching through life like windup dolls, but neither are we all-seeing masters of ourselves who control our every thought and action. Rather, there is a constant interplay between the conscious and unconscious operations of our brain, and between what is going on in the world outside of us and what is going on in our heads (our current concerns and purposes, and residual effects of our most recent experiences). The cognitive scientists and the behaviorists are both right (and both wrong, if they deny any validity to the other side of the story). On the cognitive scientists’ side of the ledger, our current goals and motivations determine what we seek out and pay attention to in the world, and whether we like or dislike it (depending on whether it helps or hurts us in getting what we currently want). And in the behaviorists’ favor, the world itself can indeed trigger emotions, behaviors, and motivations in us—and sometimes very powerful ones—without our knowledge or control, as Darwin himself argued. As the philosopher Susan Wolf has written, anyone who thinks they are completely free from such outside influences should try to walk away from a child drowning in the ocean. Hopefully, you couldn’t (and God help you if you could). There are, Wolf argues, some freedoms we just don’t want to have. And many of these, naturally, relate to the number one motivation of the ancient past that formed our mind—keeping our genes alive.

Genie in a Bottle

The survival of our species was never a foregone conclusion. In fact, the odds were very much against it. After all, more than 99 percent of all species that ever existed are now extinct. As Ötzti’s story vividly illustrates, human life evolved in very hazardous conditions. It is easy to forget that our “modern” brain was honed by evolution long before the comforts of modern life were even a twinkle in our visual-processing cortex. The Ötztis and Ötztettes of our past didn’t have laws, antibiotics, or refrigeration; they didn’t have ambulances, supermarkets, or governments; they didn’t have plumbing, guardrails, or clothing stores. Fortunately for us, we don’t live in Ötzti’s time. But in a very real sense, our minds still do. This is a very important point to grasp.

During our species’ long development, the biggest danger of all was our fellow humans. Ötzti’s murder on the mountain wasn’t at all remarkable, except in its fortuitous preservation of his body. Violent death at the hands of others was shockingly common among our ancestors. Analyses of human skeletons excavated from ancient cities show that about 1 out of every 3 men was murdered. And as recently as the 1970s, the murder rate for males of the Yanomami rain forest people, long isolated from modern civilization, was about 1 out of 4. Today, by comparison, the homicide rate in Europe and North America is about 1 in 100,000.

Now we seek to reduce the dangers to life and safety as much as possible. We have law enforcement, traffic lights and signals, efficient systems of exchange (money, that is) to translate our work into needed food and shelter. We also have medical science and health inspectors. So it is easy to overlook the fact that our unconscious tendencies were shaped by and adapted to this far more dangerous ancestral world, with its life-threatening natural elements such as cold and heat, drought, and starvation, and human and nonhuman organisms, such as wild animals, harmful bacteria, and poisonous plants. The fundamental drive for physical safety is a powerful legacy of our evolutionary past, and it exerts a pervasive influence on the mind as it navigates and responds to modern life, often in surprising ways—such as whom you vote for.

In his first inaugural address, in 1933, President Franklin Roosevelt famously said: “Let me assert my firm belief that the only thing we have to fear is fear itself—nameless, unreasoning, unjustified terror which paralyzes needed efforts to convert retreat into advance.” More than eighty years later, in his final State of the Union address, in January 2016, President Barack Obama echoed Roosevelt’s words: “America has been through big changes before. . . . Each time, there have been those who told us to fear the future, who claimed we could slam the brakes on change; who promised to restore past glory if we just got some group or idea that was threatening America under control. And each time, we overcame those fears.”

Both FDR and Obama were referring to the effect of fear on social change. Roosevelt worried that the fear brought on by the Depression would interfere with making the changes to the laws and to the economy that he strongly felt were needed to begin the process of economic recovery. Obama was referring to national health care and to immigration policies. Both presidents were Democrats and on the liberal side of the political spectrum. Both were arguing against the conservative political tendency to resist social change (that’s why it’s called conservative). Very interestingly, both recognized that fear could cause a person to want to avoid social change—that is, to become more conservative and less liberal in his political attitudes.

Why would conservative politicians try to make voters more afraid, and liberal politicians try to make voters less afraid? It has long been known that people become more conservative and resistant to change when under threat of some kind. Research in political psychology has shown that it is much easier to get a liberal to behave like a conservative than it is to get a conservative to behave like a liberal. For example, in one set of studies, liberal college students who were asked to imagine in detail their own death then expressed attitudes regarding social issues such as capital punishment, abortion, and gay marriage that were (temporarily) the same as those of conservative college students, who had not been threatened. In contrast to the results of this fascinating experiment, however, at that time no one had yet been able to change a conservative into a liberal. Under threat or fear, people are less risk-taking and they resist change, the very definition of being conservative. The study’s findings led me and other scientists to think that perhaps conservative political attitudes were in the service of an unconscious motivation for physical safety and survival. But how could we test this with experiments? We looked first at the research that had already been done.

In one remarkable study, University of California researchers followed a group of four-year-old preschool children for twenty years to see what their political attitudes became when they were young adults. The researchers measured the children’s level of fearfulness and inhibition at age four, then two decades later they evaluated their political attitudes. And those who had shown greater fear and inhibition at age four were indeed more likely to hold conservative attitudes at age twenty-three.

Socially conservative adults (who tend to be against social changes such as same-sex marriage or the legalization of marijuana) participating in psychology experiments show stronger fear or startle responses in reaction to unexpected loud noises, and they also show greater physiological arousal in response to “scary,” but not to pleasant, images presented to them. Other studies show that adult conservatives are more sensitive to dangerous or disgusting objects, compared to liberal adults, and that they are also more alert to potential danger and threatening events in the lab. More recently these differences have even been found in the sizes of brain regions that are involved in emotions, especially fear. The right amygdala region—the neural headquarters of fear—is actually larger in people who self-identify as politically conservative than in those who don’t. During laboratory tasks involving taking risks, this fear center of the brain becomes more highly activated in self-reported Republicans than it does in self-reported Democrats.

So there does indeed seem to be a connection between the strength of the unconscious physical safety motivation and a person’s political attitudes. And research had shown that you can make liberals more conservative by threatening them and making them somewhat afraid. But what if you made people feel safer instead? If the boiling water of political attitudes can be turned up (conservative) or down (liberal) by the underlying flame of physical safety needs, then making people feel (temporarily) physically safe should cause conservatives’ social attitudes to become more liberal.

We conducted two experiments in which we used a powerful imagination exercise to induce feelings of complete physical safety in our participants. We had them imagine being granted a superpower by a genie in a bottle. In one condition, the superpower was to be completely safe and immune from any physical harm, no matter what you did or what happened to you; imagine Superman with bullets bouncing off him. In the control condition, the participant imagined being able to fly. We predicted that imagining being completely physically safe would temporarily satisfy and thus decrease the individual’s concerns about physical safety, entirely unconsciously, and thus—if our theory was right—turn conservatives into liberals. At least temporarily.

Use a little imagination yourself and pretend to be a participant in this study. You are asked to visualize and imagine the following things happening to you:

On a shopping trip, you wander into a strange store with no sign out front. Everything is dimly lit and the shopkeeper calls you by name even though you have never seen him before. He tells you to come close and he says to you in a weird voice, “I have decided to give you a gift. Tomorrow, you will wake to find that you have a superpower. It will be an amazing ability but you must keep it absolutely secret. If you purposely tell anyone or show off your power, you will lose it forever.” That night, you have a hard time sleeping, but when you wake, you find that you indeed have a superpower.

Now the story changes depending on which experimental condition you are randomly assigned to. If you are in the safety condition, the passage continues:

A glass falls on the floor and without meaning to you accidentally step on the broken glass. It doesn’t hurt you at all, though, and you realize that you are completely invulnerable to physical harm. Knives and bullets would bounce off you, fire wouldn’t burn your skin, and a fall from a cliff wouldn’t hurt at all.

But if you were in the fly condition, you would read instead:

You miss a step going down the stairs but instead of tumbling down, you float gently to the bottom of the banister. You try jumping from the top of the stairs again and realize that you are able to fly. You can propel yourself through the air as if you were a bird. You can travel entire distances without ever touching the ground.

After imagining having one or the other superpower, we measured the social attitudes of all the participants using a standard measure that in past studies had shown clear differences between conservatives and liberals. Then at the end, we simply asked them who they did or would have voted for in the most recent presidential election (2012), as a way to measure whether they were overall more conservative (Republican) or liberal (Democrat).

Among those who had imagined being given the superpower to fly, which was our control condition, there was the usual and expected large difference on the social attitudes measure: liberals were much less conservative on this measure than were conservatives, and imagining being able to fly didn’t change that at all. However, in the “safe from physical harm” superpower condition, things were different. Not for the liberals, who were unaffected by imagining being totally safe; their attitudes were the same as in the “able to fly” condition. But the expressed social attitudes of the conservative participants had become much more liberal. Feeling physically safe had indeed significantly changed the conservative participants’ social attitudes to now be much more similar to those of liberals. The unconscious needs of their forgotten evolutionary past, for physical safety, had been somewhat sated by the genie imagination exercise, and this had in turn reshaped their seemingly conscious, intellectual beliefs on current social issues.

In our second experiment, everything was the same as before, except that we asked the participants questions about their openness or resistance to social change (which is the defining quality of a conservative versus a liberal political ideology). In the fly-superpower condition, there was the usual difference, conservatives being more conservative on this questionnaire than liberals. But in the safety-superpower condition, imagining being completely physically safe reduced the conservatives’ resistance to social change to the level of the liberal participants. Our genie really was magical. He had done something no one had been able to do before: turn conservatives into liberals!

Again, we had predicted this effect based on the idea that our modern-day social motives and attitudes are built upon, and are ultimately in the service of, our unconscious evolutionary goals: in this case, our supremely powerful motivation to be physically safe. Satisfying that basic need for physical security through the genie imagination exercise therefore had the effect of turning off, or at least reducing in strength, the need to hold conservative social and political attitudes, much the same as turning off the gas flame under a pot of water causes the water to stop boiling.

Of Germs and Presidents

Since we did our genie study on liberals and conservatives, there has been another U.S. presidential election, in 2016. And what an election year that was! On February 9, Donald Trump won the Republican primary election in New Hampshire. From that day, with his strawberry helmet of hair and reality-TV billionaire bluster, he plowed onward to clinch his party’s nomination in a series of resounding primary victories with little resistance at the polls, though with plenty—pah-lenty—of resistance everywhere else, even inside his own party. And then he topped it off with a stunning upset victory over Hillary Clinton, to become the forty-fifth president of the United States. With an incendiary, off-the-cuff speaking style, Trump created controversy after controversy, which the twenty-four-hour news cycle hungrily gobbled up again and again. He insulted and degraded women, made fun of a handicapped person, and bragged about his penis size and wealth. Tellingly, he also seemed obsessed with germs, and a reporter who followed his campaign and was often backstage with him described Trump as “a germaphobe who doesn’t like shaking hands and will only drink soda from a sealed can or bottle. He keeps a distance from the supporters who come to his rallies.”

During the campaign, Trump very often called his political rivals “disgusting”—most famously, when Hillary Clinton was seconds late getting back to the podium during a televised Democratic candidate debate with Bernie Sanders because she had to go to the restroom. As Trump told his supporters at a rally in Grand Rapids, Michigan, the next day, “I know where she went—it’s disgusting, I don’t want to talk about it,” wrinkling his nose and giving a sour look, to the delight of his crowd. “No, it’s too disgusting. Don’t say it, it’s disgusting.” A few months later, following his own first debate with Clinton, he referred to former Miss Universe Alicia Machado as “disgusting” as well. Without rehashing the whole bizarre campaign, suffice it to say that it was one of the most memorable presidential election seasons in a long time, and according to most observers, a new low in the U.S. public dialogue.

Our physical safety is not only about avoiding physical damage. It is also very much about avoiding germs and disease. We are careful not to eat food that smells spoiled or rotten—we have evolved senses to detect spoilage—and we are squeamish about touching things that look dirty or contaminated. As Darwin argued, we are also quite sensitive to the expression of disgust by others around us, and we react strongly and automatically to those expressions by avoiding any contact with what they just ate or drank or touched, and with good reason: germs and viruses have wiped out huge swaths of the human population from time to time during recorded human history.

Infection was a real killer in our ancestors’ world. Getting a cut or open wound through which germs and viruses could enter the body was a very serious and potentially life-threatening situation. This was the case even as recently as the U.S. Civil War, in the 1860s, when 62 out of every 1,000 soldiers died not from being shot or stabbed, but from infections. It was only with the microscope’s revelation of microorganisms and Louis Pasteur’s germ theory of disease that we came to understand how diseases were transmitted. Modern-day improvements in sanitation especially have reduced the threat of plagues, widespread contamination, and spread of disease. Thanks to these advances and to our own personal knowledge regarding the importance of hygiene and protecting cuts and wounds, we are much safer from germs and diseases than we used to be. Still, viruses and bacteria are evolving just like people are. For instance, there seems to be a new strain of flu virus nearly every season.

Over the vast majority of human history, during which our mind became what it is today, there was a very real survival advantage to avoiding anything that smelled or looked as though it was full of germs or bacteria. After all, the ancient world did not feature refrigeration, or health department ratings of food found on the ground. Things that smelled “bad” to us did so for a reason. (Things that smell really bad to us probably smell really good to, say, a dung beetle.) Those of us who were put off by the odors of filthy, germ-laden material avoided them and so were less likely to be contaminated and made ill by them. Disgust and germ avoidance were thus highly adaptive components of our general motivation to remain physically safe, to protect ourselves and our families from disease.

With this in mind, now consider the modern political divide on the issue of immigration: conservatives are strongly opposed to and liberals more in favor of immigration. It was one of the central, hot-button issues in 2016 election-year politics in the United States and elsewhere, made even more prominent by the Syrian refugee crisis. One reason for conservatives’ antipathy to immigration is the change it brings to one’s country and culture. Social change can occur when immigrants bring in their own cultural values, practices, religions, beliefs, and politics. But given the greater concern that conservatives have with physical safety and survival, another reason to oppose immigration could be found in the frequent analogy made by conservative politicians of the past (and present) between immigrants coming to a country (the political body) and germs or viruses entering one’s own physical body. Archconservative leaders of the past such as Adolf Hitler explicitly and repeatedly referred to scapegoated social out-groups as “germs” or “bacteria” that sought to invade and destroy the country from within (and who therefore must be eradicated). If immigration is unconsciously linked to germs and diseases, then anti-immigration political beliefs would effectively be in the service of that powerful evolutionary motivation—disease avoidance.

To test this possibility, we devised two studies around the time of the 2009 H1N1 flu virus outbreak, in the fall, when people are encouraged to get preventative flu shots. That year, the virus was particularly virulent, and for the first time, Yale put antibacterial disinfectant stations all over campus. We conducted our first experiment at lunchtime just outside the Commons dining hall, a large, Hogwarts-esque room with dark wooden paneling, stained-glass windows, long wooden tables, and cast-iron chandeliers hanging from a vaulted ceiling. To turn on participants’ disease avoidance motive, we first reminded them about the current flu rampage, with a handout and a personal message from the experimenter about the importance of getting vaccinated. Then participants answered a survey about their attitudes toward immigration. After they had finished the survey, we asked our participants if they had already been inoculated against the flu or not.

As we’d predicted, those who had been reminded about the threat of flu at the start of the experiment but had not yet been inoculated—and so should be somewhat threatened by the flu virus—had attitudes toward immigration that were significantly more negative. But those who had already been inoculated expressed more positive attitudes about immigration. The reminder about the flu virus also reminded them that they were safe from it because of their flu shot.

We then did a follow-up study at the same campus locale. We reminded all of the participants about the ongoing flu season in the same way as before. But this time we also emphasized how washing one’s hands frequently or using Purell or other antibacterial sanitizers was an effective way to avoid catching the flu. After this message, participants were randomly assigned to (a) be offered a chance to use some hand sanitizer or (b) not be given this chance. We then gave them the same political attitudes survey including items about immigration. And once again, those who had cleansed their hands after the disease-threat manipulation had more positive attitudes toward immigration, and those who had not been given this chance to wash their hands reported more negative attitudes about immigration.

As odd, or even disturbing, as it may seem, our political attitudes are profoundly influenced by our evolutionary past. Deep, primitive needs underlie our beliefs, although we are rarely, if ever, consciously aware of our reasons for holding those beliefs. Instead—myself included—we convince ourselves that our thinking emerges only from rational principles and ideologies, perhaps about rugged individualism and honor, or about fairness and generosity toward others. We are not consciously aware of the winds of our evolutionary past blowing through our attitudes and behavior, but this does not mean those influences aren’t there.

But feelings of disgust affect more than our abstract political attitudes. Simone Schnall and her colleagues at the University of Virginia have shown how feelings of physical disgust, caused, for example, by being in a very dirty room, influence our feelings of moral disgust, in terms of how morally wrong we believe various behaviors are. The study participants completed morality ratings of various behaviors, such as stealing a drug you can’t afford in order to save your spouse’s life. Participants who happened to make those ratings in a dirty room judged those behaviors to be more morally wrong than did other participants who rated the same behaviors in a clean room.

Our primary, ultimate, deepest evolved motivation for survival and physical safety is at the root of many of our attitudes and beliefs. This need influences us largely unconsciously and usually without our understanding what is really going on. This is not a bad thing, of course. It’s a matter of context. Our deep concern with physical safety and disease avoidance is, without a doubt, highly adaptive. It has become part of our genetic makeup because it helped us, as individuals as well as a species, to survive. It is such a basic and powerful influence in our lives that its reach extends well beyond the concrete, relatively simple tasks of staying alive and avoiding bodily damage. Even our moral judgments, as well as our abstract, conscious reasoning about political and social issues, can be in the service of this paramount motivation, without our realizing it.

Sharing Is Caring

Another evolved trait that helped us survive and stay physically safe is inherently social in nature—the spontaneous and involuntary emotions we experience and outwardly express to others. They were the focus of Darwin’s third major book on evolution, The Expression of the Emotions in Man and Animals, his powerful follow-up to On the Origin of Species and The Descent of Man. This third book was all about human social life, because Darwin believed that our emotions evolved to help us communicate important information about safety and disease to each other, and that cooperation and sharing are part of our larger human nature.

Sometime at the end of the 1860s or beginning of the 1870s, Darwin invited twenty friends and acquaintances over to his house in Kent, England, to look at a series of photographic slides. Darwin had been exchanging letters with a French doctor named Guillaume-Benjamin-Amand Duchenne, who was convinced that humans exhibited sixty different emotional states through facial expressions linked to specific muscles. In slightly grotesque support of his theory, Duchenne took photographs of people to whose faces he administered mild electric shocks to engage the muscles. The sepia-tone images were strange and carnivalesque, but the radically different expressions did all look familiar as everyday emotions.

Ever an elegantly clean thinker, Darwin disagreed with Duchenne’s theory. Examining the slides, he concluded that in fact human facial muscles and emotions combined to represent just six fundamental states produced by the full mosaic of facial muscles, not sixty distinct ones linked to distinct muscle groups. “Prompted by his doubts regarding the veracity of Duchenne’s model,” writes Peter J. Snyder, whose team of researchers discovered and in 2010 published archival evidence of this forgotten experiment, “Darwin conducted what may have been the first-ever single-blind study of the recognition of human facial expression of emotion. This single experiment was a little-known forerunner for an entire modern field of study with contemporary clinical relevance.”

Darwin gave eleven of Duchenne’s slides to the people he had invited to his house and asked them what emotions each slide represented. With no preconceptions or suggestions to bias them, they in effect agreed with Darwin, sorting the slides into just a handful of universal emotional states, such as fear and happiness. This seemed to confirm his theory that certain emotions come factory-equipped inside the human mind and body.

Inexplicably (and unfortunately), for almost a full century after Darwin published his book on emotions, the psychological sciences did next to nothing with his insights. Then, in 1969, Paul Ekman and his colleagues published a groundbreaking paper that both ratified and expanded on Darwin’s ideas. After collecting a staggering amount of data from every corner of the world, Ekman and Wallace V. Friesen showed that not only were basic types of emotions human universals, but so were their manifestations. In cultures across the globe—even in primitive ones that had existed in isolation from the rest of us for the last several thousand years—we expressed the same emotions with the same facial muscles and expressions. Wherever these researchers went, subjects showed anger with the same bared teeth and close-knit eyebrows, and the subjects knew that others making this face were angry. The same went for happiness and other keystone emotions. Darwin was right.

As Darwin went on to theorize in his book, our species evolved to both feel and express emotions automatically and involuntarily because these two behaviors helped us to survive. Darwin crucially understood that we don’t choose to have particular emotions but, rather, that they happen in us unconsciously. (We would never choose to feel anxiety and worry, yet they serve useful functions; they get us up and doing something about a problem, before it’s too late.) Darwin did recognize that people can also voluntarily and consciously express emotions in several ways, and even fake them. We can try to appear pleased and happy with a gift we find disgusting (say, a joke coffee mug in the shape of a toilet), and we can mostly suppress our glee during our office rival’s epic fail of a boardroom presentation. But even so, Darwin believed that our emotions were better expressed unconsciously, and that they leaked out despite our attempts to manage them. As the Eagles sang, “You can’t hide your lyin’ eyes.”

Above all, Darwin observed, our involuntary emotional expressions serve an important communicative function to the others around us—that there is something to be afraid of, such as drinking this water or biting into this berry—and for that information to be valid the emotional expression has to be largely automatic and involuntary. This explanation of facial expressions brings us to another fundamental and innate component of the human drives to survive and reproduce that we unconsciously possess, even in early childhood, as we are building our social bonds: cooperation with one another.

Our emotional expressions were the original way humans shared information with each other about the state of the world. Primatologist Michael Tomasello has devoted his career to the study and comparison of humans to our closest genetic neighbors—other primates, such as chimpanzees and the other apes. Tomasello argues that there is an “intrinsic human desire to share emotions, experiences and activities with others.” He has concluded from his decades of research that our evolved motivation to cooperate and coordinate our activities with others is no less than the crowning trait that distinguishes us from other primates. A brief glance around at human civilization (and a moment’s comparison with the collective feats of any other species) will tell you just how important that single difference between us and other animals has been.

If cooperation is an evolved motivational tendency—in the ultimate service of our survival, just as eating and breathing are—then it should be present in young children even before they have sufficient life experience to develop it on their own. To test whether our cooperative tendencies are innate, Harriet Over and Malinda Carpenter, researchers at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany, assembled sixty eighteen-month-old toddlers and had an assistant show each of them a series of eight colorful photographs of everyday household objects, such as a bright red plastic toy teakettle, a shoe, and a book. In the upper right corner of each picture were other, smaller objects, not the main event of the photograph, but off to the side. It was this smaller feature of the photograph that was designed to trigger the unconscious goal of cooperation in the young children. For one group of children, two dolls were shown in the upper right corner of each picture. These two dolls were always close to and facing each other, signaling a bond of friendship between them. Other groups of children were shown other things in the upper right corner of each picture—for some it was the same two dolls but facing away from each other, for another group it was colorful blocks. The researchers predicted that the children who were shown the two friendly dolls would cooperate with the experimenter more than the children in the other photo conditions, because the friendship between the dolls is a cue to the innate, evolved human motivation to help and cooperate. The other conditions of the experiment all lacked this key ingredient of the friendly dolls.

After the assistant showed the child the eight color photographs, the experimenter came in to play with the child, bringing some wooden sticks, which she then pretended to accidentally drop. She then waited ten seconds to see whether the child would spontaneously help on his or her own, without any request for help from the experimenter. The results were quite clear: 60 percent of the children in the friendly-doll priming condition spontaneously got up to help the experimenter pick up the sticks, compared to only 20 percent in the other conditions.

This study makes several important points. First, even children as young as eighteen months will spontaneously help, without being asked or told to do so, consistent with Darwin’s and Tomasello’s notion that we were born to cooperate. Second, those children didn’t help just anybody, but only when the idea of a personal bond of trust was active in their minds (triggered by seeing the two friendly dolls). In normal life outside the laboratory, this idea of trust and friendship would instead be active when they were with people they love and trust, such as family members. Third, both the friendship cue and the cooperation goal operated unconsciously. The cue was just subtly there in the background, not even the main, large feature of the photographs. Yet the presence of those two friendly dolls in the corner of the pictures was sufficient to unconsciously activate the idea of social bonds in the toddlers, and that cue of trust and friendship was the gateway to their spontaneous cooperative behavior.

An innate or evolved tendency, then, does not always express itself unconditionally. We cooperate, yes, but only with people we feel we can trust. This makes a lot of adaptive sense, because we can certainly be taken advantage of (and many people are) if we blindly trust and cooperate with just anyone. Learning whom we can and can’t trust is one of our major life tasks, and as Over and Carpenter’s study of eighteen-month-olds shows, we begin making those choices very early in life. This leads us to the basic idea of the next chapter—that innate tendencies inherited from our hidden evolutionary past depend as well on what happens in our own very early (and equally hidden) experience as infants with our parents, siblings, and social group. We will pick up the story of how nurture interacts with nature to unconsciously influence whom we trust and help, and whom we don’t, in Chapter 2. For now, though, let’s turn to another facet of the forgotten evolutionary heritage lurking in our minds. Our genes certainly care a lot about our safety and survival, but with one main underlying goal: surviving long enough to have children. Random genetic improvements in our ability to survive increase our chances to mate and pass those improvements down to our offspring. This brings us to our other fundamental drive: to reproduce.

The Selfish Gene

In 2013, scientists discovered something new about Ötzti—he had children.

The famous mummy of the Alps, it turned out, had lived on—through his genes. Researchers collected and analyzed blood samples donated by nearly four thousand people in the Austrian region near Ötzti’s final resting place, and they found matches. Nineteen, to be exact. These people shared a genetic mutation that linked them to their posthumously famous ancestor. The existence of these very distant relatives cast a new light on Ötzti’s story. Yes, he undoubtedly failed at his number one drive, both conscious and unconscious: to stay alive. But he succeeded at the other overarching goal our brains evolved to achieve: to pass our genes down to the next generation. Or, to put it more sweetly, to have kids.

Much of the early, original work in the field of evolutionary psychology focused on just this: “mating.” As Richard Dawkins argued in his landmark book The Selfish Gene, our genes are all about getting themselves into the next generation. Think about it: without exception, each and every one of your direct ancestors had children. It was something all of them were successful at. If this hadn’t been the case, you would not be here today reading this book.

As we saw with our unconscious need for physical safety, our biological mandate to reproduce can have surprising manifestations in today’s world. One of the best examples comes from an Italian study conducted from August 2011 to September 2012. The researchers carried out an intriguing field experiment on the effects of physical attractiveness on hiring practices, without ever putting any participants into a lab room together. They sent 11,000 résumés in response to 1,500 posted job openings; the lopsided ratio arose because they sent multiple résumés to each posting. Every résumé listed the exact same work history, and thus equal qualifications for the job. Some of the applications had photographs attached, while others didn’t. (The names, of course, were also different.) Each applicant was described as either Italian or foreign, and as male or female. The résumés with photographs were evenly divided among an attractive man, an unattractive man, an attractive woman, and an unattractive woman. (The people in the photographs had been rated for attractiveness by a separate group when the experimenters were developing the study materials.) Since the résumés were otherwise identical, any difference in responses would have to be attributed to this single variable: the photo. So the researchers were essentially asking: would having an attractive photograph attached to your résumé increase your chances of being called in for an interview?

The answer was a resounding “yes.” Overall, Italian applicants were favored over foreign applicants. This isn’t surprising. Among the Italian applicants, however, being attractive was a definite advantage, especially for females: attractive females were far more likely to be called back than unattractive females with the same qualifications, by a whopping 54 percent to 7 percent. There was also a considerable though less dramatic advantage for attractive men over unattractive men, 47 percent to 26 percent. Based on the study’s findings, you’d be better off sending in no photo at all than an unattractive one; the callback rates in the no-photo conditions were higher than in the unattractive-photo condition. The results of this study are dispiriting from an egalitarian perspective, if not shocking. This phenomenon has a name: “the beauty premium.”

Like it or not, physical attractiveness is a significant predictor of career advancement and promotions. Workers of above-average looks earn 10 to 15 percent more than workers of below-average looks, a gap comparable to the race and gender gaps in wages. The question is why this is so. After all, there are laws against discrimination, and many companies have strict guidelines for hiring. Moreover, there are countless good-hearted bosses and personnel directors who passionately believe in equal opportunity and try to hire the most qualified person for the job, no matter what he or she looks like. The point is that even these well-meaning people are prone to pay the beauty premium unwittingly. According to the authors of the study, their unconscious mating drive is part of the reason.

You don’t have to be a teenager to know that our adult conscious minds are often preoccupied by sexual thoughts and feelings, and that we all would rather look at people who are physically attractive than at those who are less attractive. (Brain imaging studies have shown that when heterosexuals are shown faces of attractive opposite-sex individuals, the reward centers of their brains become activated.) Less obvious is how these feelings invisibly influence our behavior when they really “shouldn’t,” since they run counter to the egalitarian, merit-based ideals most of us genuinely subscribe to. Most likely, many of the Italian hirers (who didn’t know they were part of an experiment) would claim that the photo didn’t affect their decision, yet would likely be willing to reconsider their choice if it could be shown that they had been swayed by the beauty premium unconsciously.

We have this bias toward attractiveness because of our selfish-gene history: the unconscious mandate to reproduce, reproduce, reproduce, so that we as a species don’t go extinct. This deep-seated urge is so strong that studies have shown men’s mating motives are triggered by the mere presence of attractive women, even when the men are trying to focus on something else. One study showed, for example, that male participants working on a difficult, attention-demanding laboratory task were more distracted and performed worse when interacting with a woman during the task (but not when interacting with a man); what’s more, the more attractive the woman, the worse the male participants’ performance. While this may sound like science backing up familiar caricatures of unreformed male horniness, these hidden influences operate in all of us. In a certain sense, our bodies are in constant, stealthy, unconscious communication.

Physical attractiveness is not the only trigger for the mating motive. Our unconscious also detects hormonal signals of fertility that operate through the nose. In one of a series of fascinating studies on hormonal influences, Florida State University researchers showed that heterosexual male college students were more attracted to a female participant in the same study when she happened to be at the peak ovulation point of her cycle than when she was at her least fertile, without the young men being at all aware of this influence. They were also more likely to unconsciously imitate and mimic the woman during her fertile days than during her nonfertile days; as we will see in Chapter 7, this subtle mimicry is a natural and unconscious tactic we use to bond with new acquaintances. Again, the men in these studies were completely unaware that these subtle fertility cues were influencing their attraction and behavior toward the women. Of course, all of this leads our species to that most universal of experiences: family.

I live in the countryside, across from a lake, and down the road is a small working farm. If you travel down my road in the springtime, you can see unconsciously operating evolutionary goals everywhere you look. Every spring the little goslings stay very close to their mother goose and father goose, and often we have to wait patiently in our cars as they cross our road in single file, one parent in the lead, the other bringing up the rear. The baby cows wander around the large hayfields with the rest of the herd; the baby deer stay close behind their mother. They instinctively keep close to their parents and to other animals of their kind. You don’t see the various baby animals—the little cows, deer, and geese—all playing together in the farmyard like in some baby-animal playschool. They stay close to their parents and siblings instead. Newborns, whether they are goslings or humans, must depend on their parents and caretakers to keep them warm, fed, and safe from predators. It’s part of their, and our, hardwired nature, and it’s a matter of survival.

The little farmyard animals and their parents are bonding. They do not blindly trust other animals, or even members of their own kind outside their small social circle; trust can be exploited, to the exploiter’s benefit and your harm. This early experience is important to survival. In humans, our early experiences set the tone not only for whom we trust as infants and small children, but for whether we feel we can trust people in general, for the rest of our lives. Our long evolutionary past of survival and reproduction compels us to crave physical safety, to deal with a fast-moving world without having to stop and think about it, to avoid contamination and disease, to share information through our emotions, and to help our friends and family. In the same way, our own personal past of early experience stamps its own indelible unconscious influences on us. Yet we have few if any memories of these early years, leaving us largely unaware of how powerfully they have shaped our feelings and behaviors. These experiences constitute a second form of hidden-past influence upon us, and they are the focus of the next chapter.