2

HEROES AND ANTIHEROES

HEROISM AND ANTIHEROISM both ultimately boil down to suffering. What is heroism except relieving or preventing someone else’s suffering? What is villainy except causing it? What this means, unfortunately, is that gaining a better understanding of the roots of goodness and evil, compassion and callousness, requires that somebody suffer. I found this out the hard way—hard as a fist, hard as a slab of concrete—midway through my first year of graduate school, when I was violently assaulted by a stranger. The incident served as a bizarre counterpoint to having been rescued by a stranger. I’m not exactly glad it happened, but it undoubtedly gave me a more complete understanding of the human capacity for callousness and cruelty.

It happened soon after the clock struck midnight on December 31, 1999—that giddy moment when the world realized it would not be ending in a massive global computer meltdown courtesy of the bug known as Y2K. I, along with several of my closest childhood friends from Tacoma, had convened to celebrate the event on the Las Vegas Strip. This was probably unwise. I knew that at the time. The Strip is a bit of a mess even on a quiet off-season night. On the New Year’s Eve that marked the dawn of a new millennium, “mess” doesn’t begin to describe what it was like. It was chaos, it was Mardi Gras on steroids, it was an endless sea of giddy, drunk, raucous humanity stretching for miles in every direction.

My friends and I were six twenty-three-year-old women who collectively made a second unwise decision, which was that the theme of our night would be “sparkles.” Sparkly dresses, sparkly halter tops, sparkly makeup. Also, silly New Year’s–themed, glitter-caked cardboard hats and flashing-light sunglasses. We were shooting for glamorous and fell more than a little short. Luckily, Las Vegas standards are not high. When, at the beginning of the night, the six of us and all our sparkles poured out of the elevator and onto the floor of the casino hotel where we were staying, the whole floor burst into spontaneous applause. We heard people shouting, “Whooooo!” and thought we were the most spectacular things in town. It seemed an auspicious start to the evening.

For most of the hours leading up to midnight, we had a ball. Everyone was in a great mood. Televisions in the casinos showed that the clocks had rolled over into 2000 in Australia, and the world had remained on its axis. No computer meltdowns, no shutdowns of city grids. All the people we met, most of whom were roaming around like us in large flocks of twenty-somethings, were ebullient. Buying each other drinks, stopping to pose for group pictures—not something people normally did back then either, when taking pictures required using an actual camera and waiting hours or days to see the results.

But as the evening wore on and our sparkles faded, people’s manners started to fade as well. People—men, specifically—started getting grabby. At first it was just the occasional, seemingly errant brush of the hand. But as the hours and drinks piled up, it escalated to grabs and squeezes of breasts and backsides. By midnight, my friends in dresses could feel hands creeping up inside their skirts and down their tops when they stopped to take pictures. I was wearing leather pants and managed to escape some of that indignity, but I lost count of how many times strange men squeezed my ass.

At first, honestly, it was all sort of funny. We were drinking and giddy just like everyone else. It seemed mostly harmless—there were lots of other people around, men and women both, and the Strip was brightly lit and lined with police officers. It never occurred to me that anything worse than a little silly grabbing would happen. Then I saw someone die.

He was young, midtwenties at most. Maybe he was trying to get a better view of the Strip, or impress his friends, or maybe the night’s wild frissons just drove him to try something wild. Whatever the reason, he climbed up a metal traffic signal pole on the Strip and ventured out onto the arm that extended over the street. It was impossible to tell from below, but the wires that run through these arms are exposed. His hand made contact with one, and he tumbled, lifeless, to the pavement below. Even if the electric shock hadn’t killed him, the fall might have. I read later that he’d landed on his head. That night all I saw was a man up on the pole, and then, a fraction of a second later, he had fallen and the crowd around me was shouting incoherently. The news spread from group to group that the man on the pole was dead. We didn’t even know yet if it was true, but the night took on a newly sinister feel.

Getting constantly grabbed rapidly went from funny to tiresome to infuriating. The evening’s alcohol was wearing off, and I was tired and my boots were giving me blisters. I remember muttering to myself as I hobbled along, “The next guy that grabs my ass…” I didn’t even have time to finish the thought before one did. I spun around and glared at him. He grinned proudly back. He was muscular and broad-faced with slicked-back and gelled blond hair. He was also very short. His face, his idiotically leering face, was almost level with my own. I don’t know if it was the leer or the gel or just that his grab was the last one I could tolerate, but I slapped him. Pretty hard too.

I saw his grin falter, to be replaced with a flicker of annoyance, and before I had time to think or duck or even turn my head, his fist was hauling back and then smashing into my face with brutal force. The world went wavy and dim as my head snapped back and I crashed down onto the concrete, blood streaming from my broken nose. A murmuring crowd gathered around me. I felt dazed, and I couldn’t get the legs and feet around me to come into focus. It took a moment to figure out that the force of the blow had knocked out one of my contact lenses. My friend Heather rushed over. She cradled me as I struggled to gather myself, the blood from my nose trickling down my sparkly top and over her hand.

As she was helping me to my feet, two police officers approached us. They were dragging a man between them—a panicked-looking man I’d never seen before. They shook him by the shoulders.

“Is this him?” one shouted. “Is this the guy who hit you?”

His T-shirt was the wrong color. He was too tall. It definitely wasn’t him.

“No,” I said, shaking my head. “That’s not him.”

They let him go, and he disappeared into the crowd. I figured my assailant must have done the same. It would be impossible to find him in the swarming sea of people. We turned to leave, and I felt a tap on my shoulder. A woman with blazing eyes stood beside me. Her breath smelled of beer as she leaned in close and murmured in a low and satisfied voice, “I don’t know if you saw what happened. A bunch of guys saw that fucker hit you. They chased him down. He’s pretty much a smudge on the pavement now.”

The whole incident left me newly tormented. It all seemed so bizarre that I would have been tempted to assume I had dreamed it, were it not for the black eyes blooming across my face the next morning and the fact that my nose was crooked and puffed up to three times its normal size.

I had led a fortunate life in many ways. Intellectually, I knew that violence occurred. My hometown of Tacoma was a hotbed of gang activity throughout the 1980s and 1990s, and the local news was full of shootings and stabbings and muggings. More than one serial killer was picking off Tacoma residents during those years as well. But I had never personally been seriously harmed by anyone. The poet John Keats was correct in observing, “Nothing ever becomes real ’til it is experienced.” There really is no substitute for getting your own face smashed in to make you appreciate at a gut level that the world contains people who will actually hurt strangers to serve their own brutal purposes.

My roadside rescuer had made me believe in the possibility of genuine altruism. But more than that, his actions had cast a wider glow over the rest of humanity, whose capacities for altruism were still untested. Perhaps, I’d thought, my rescuer was just one of a vast swath of people who were also capable of great compassion. But what happened to me in Las Vegas didn’t stay there. It followed me wherever I went, gnawing at me, whispering in my ear that perhaps I should reconsider my beliefs about human nature. Maybe my rescuer was an anomaly, and my attacker one of many. Who knew how many of the strangers I passed on the streets every day had the capacity to do what he had done? Every man I knew reassured me that under no circumstances would he ever punch a woman in the face, regardless of whether she had slapped him, regardless of how much he’d had to drink. But the fact remained that a whole mass of other strange men had rushed my attacker that night and brutally assaulted him in turn. Did the capacity for such violence lie latent in many or most people? I signed up for a self-defense class, just in case.

My psychology studies offered me no comfort. Here I was at the university my Dartmouth professor Robert Kleck winkingly termed the “center of the intellectual universe,” immersed in the best that empirical research had to offer about the nature of human cognition and behavior, and most of it seemed to point to the same terrible conclusions as my Las Vegas encounter. I learned about the infamous case of Kitty Genovese, a Queens, New York, resident who had been brutally murdered on the street outside her apartment building as (so the story then went) thirty-eight witnesses watched silently, none calling for help. The results of follow-up psychology studies by Bibb Latané and John Darley seemed to confirm the reality of the apathetic bystander. I learned about Philip Zimbardo’s infamous Stanford Prison Experiment, during which a more or less random sample of Stanford University undergraduates were turned, practically overnight, into cruel and sadistic prison guards simply by donning the requisite role and uniform. So many studies seemed to convey the same message about humans’ terrible capacity for cruelty and callousness.

Perhaps the most infamous of these studies—and also possibly the most important—were those conducted by one of Harvard’s most eminent PhD students and, later, a member of its psychology faculty. Stanley Milgram’s research was so controversial it ultimately cost him his Harvard job and tenure. A psychologist of uncanny brilliance and prescience, Milgram is still ranked among the most influential psychologists of the last century (number 46, to be exact). Among his many claims to fame is that he conducted the research that proved “six degrees of separation” is a real thing. In 1963, Harvard hired Milgram away from Yale shortly after he’d concluded another series of studies that may represent the most notorious use ever of electric shocks in psychology research. Like every other psychology major in the world, I had learned as an undergraduate about these studies and the savage cruelty that they showcased.

But also as nearly everyone does, including most psychologists, I initially drew entirely the wrong conclusions from them.

In 1961, Milgram posted newspaper advertisements in New Haven and Bridgeport, Connecticut, inviting local men to volunteer for a study researching how punishment affects learning. When each volunteer arrived in Milgram’s Yale laboratory, he was led into a testing room by an angular, stern-looking experimenter in a lab coat. The experimenter introduced the volunteer to a stranger named Mr. Wallace, who, the experimenter explained, had been randomly selected to be the “learner” in the experiment. The volunteer had been selected to be the “teacher.” All the volunteer had to do was “teach” Mr. Wallace a long list of word pairs, like “slow-dance” and “rich-boy.” Simple enough.

The experimenter showed the volunteer and Mr. Wallace to their seats, which were in adjoining rooms connected by an intercom. Mr. Wallace wouldn’t just be sitting, though—he would be tied down. Before the experiment began, and while the volunteer looked on, the experimenter bound both of Mr. Wallace’s forearms to the arms of his chair with long leather straps, ostensibly to “reduce movement.”

One can only imagine what went through each volunteer’s mind at that point. Video footage shows them to be such a wholesome-looking bunch, in their dapper 1960s haircuts and collared shirts. Here they had volunteered for a Yale research study to help science and make a little money, and before they knew what was happening some mad scientist was tying a middle-aged stranger to a chair right in front of them.

The volunteer and the experimenter then left Mr. Wallace’s room, and the experiment began. First the volunteer would read a long list of word pairs through the intercom to Mr. Wallace. Then he would go back to the beginning of the list and read out one word from each pair. Mr. Wallace would try to remember the other word. If he guessed right, they’d move on. If he guessed wrong, he was punished. The volunteer had been instructed to pull one of a long row of levers on a switchboard after each wrong answer. Each lever was marked with a different voltage level, ranging from 15 volts at the low end to a high of 450 volts. Pulling a lever completed a circuit within the switchboard and delivered an electrical shock of that voltage to Mr. Wallace’s tied-down arm.

Nearly all the “teachers” went along with the experiment for a while. The experimenter reassured them early on that the shocks were “painful, but not harmful.” But as the study progressed and wrong answers mounted up, the teacher pulled lever after lever and the shocks grew stronger. Mr. Wallace started to grunt each time he got a shock, then to cry out in pain. He began complaining that his heart was bothering him. Eventually, the shocks drew long, ragged screams from him, and he bellowed through the wall, “Let me out of here! Let me out! LET ME OUT!”

Then he fell silent.

After that point, any teacher who elected to carry on could only grimly continue delivering shocks to Mr. Wallace’s unresponsive arm.

Only nobody was expected to carry on that far. Before the study began, Milgram had polled a number of expert psychiatrists about what they predicted would happen. They overwhelmingly agreed that only a tiny fraction of the population—perhaps one-tenth of a percent—would continue administering shocks to a stranger who was complaining about his heart and screaming for mercy.

The experts were overwhelmingly wrong. Fully half of Milgram’s volunteers continued administering shocks right through Mr. Wallace’s chest pain and screams and well past the point when he fell silent. No external reward motivated their behavior. They would keep their four dollars and fifty cents payment no matter what they did. The only thing urging them along—very mildly—was the experimenter. When a volunteer started to protest or asked that the experiment be stopped while someone checked on Mr. Wallace, the experimenter would reply, “The experiment requires that you continue.” Calm prods like this were all it took to induce ordinary American men to subject an innocent stranger to terrible pain, grievous harm, and, as far as they knew, death. One volunteer later said he was so sure that he’d killed Mr. Wallace that he anxiously monitored the local obituaries for some time after the experiment.

Of course Mr. Wallace didn’t die—nor did he actually receive any shocks. He wasn’t even named Mr. Wallace. He was part of the act, an amiable forty-seven-year-old New Haven accountant named Jim McDonough who had been hired and trained for the role of the study’s purported victim.

Nor were the studies aimed at understanding learning. Milgram was really studying obedience to authority—specifically, whether ordinary people would commit acts of cruelty or brutality if told to do so by someone in authority. The research was inspired by the trial of Adolf Eichmann, the Nazi officer who carried out some of the worst atrocities of the Holocaust. Captured in Argentina in 1960 by Israel’s Mossad intelligence agency and made to stand trial for his crimes, Eichmann mounted a shocking defense. He claimed to feel no remorse for his actions, not because he was a heartless monster, but because he had simply been following the orders of those in authority. Later, pleading for his life in a handwritten letter to Israeli president Yitzhak Ben-Zvi, Eichmann protested, “There is a need to draw a line between the leaders responsible and the people like me forced to serve as mere instruments in the hands of the leaders… I was not a responsible leader, and as such do not feel myself guilty.” In essence, Eichmann was claiming that his superiors instructed him that the Final Solution required that he continue.

So he continued.

What Milgram discovered at Yale did not necessarily demonstrate that Eichmann had been telling the truth. Indeed, more recent evidence has suggested that functionaries like Eichmann were not simply cogs in a machine but were proactively and creatively working to advance Nazi causes.

But Milgram did show that Eichmann could have been telling the truth. His studies showed that, under the right circumstances, ordinary people will engage in horrific, sadistic crimes if an authority figure who is willing to take responsibility for the outcome instructs them to do so. In another era, under a different regime, Eichmann might indeed have led an ordinary, blameless life. There may have been nothing fundamentally evil about him as a person that would have inexorably led him to perpetrate atrocities. The opposite is also true, of course. Under the right circumstances, ordinary, otherwise blameless people—say, a shopkeeper from Bridgeport, Connecticut—could wind up plotting the torture and deaths of millions of innocent people. After all, the otherwise ordinary, blameless Connecticut shopkeepers and millworkers and teachers in Milgram’s studies had, to their knowledge, willingly taken part in torture, false imprisonment, and perhaps even murder in exchange for four dollars and fifty cents.

Milgram would muse in an interview on CBS’s Sixty Minutes some years later, “I would say, on the basis of having observed a thousand people in the experiment and having my own intuition shaped and informed by these experiments, that if a system of death camps were set up in the United States of the sort we had seen in Nazi Germany, one would be able to find sufficient personnel for those camps in any medium-sized American town.”

Nobody disputes Milgram’s basic findings, at least not explicitly. But on some level, most people also don’t really buy them. Deep down, nobody really believes that Adolf Eichmann was an ordinary guy who happened to work for bad managers. Neither do most people believe that a mild-mannered authority figure could induce them personally to override their own moral values and torture someone. Psychology students who watch the videos of Milgram’s experiments in classrooms across America every year all reassure themselves, That would never be me. So do the Internet surfers who come across the studies on Wikipedia. That would never be me, they think. Maybe some middle-aged, cigarette-smoking, work shirt–wearing, Connecticut-accented, Mad Men–era dupe would be gullible enough to follow those orders, but not me.

But midcentury conformity has nothing to do with it. Neither does gender or age or social class. Male and female college students in California who were run through a nearly identical experiment only a few years ago acted no better than Milgram’s subjects. Versions of the study have been run with varying compositions of study participants across generations and countries—from England to South Africa to Jordan—and they have all replicated Milgram’s findings. What do these numbingly familiar results mean? That none of us—not you, not me, not Pope Francis or Bono or Oprah Winfrey or anyone else—can claim with confidence that, if it had been us called into Stanley Milgram’s Yale laboratory, we wouldn’t have kept pulling those levers too.

The basic findings of these studies are clear and widely accepted. They are also, unfortunately, often misinterpreted. It is easy to draw the conclusion after learning about Milgram’s studies or watching the video footage of them that people are uniformly callous and heartless, that within each of us lies a little Eichmann content to inflict terrible suffering on strangers. I certainly did when I first learned about the studies. But in fact, this is not at all what they show.

First of all, when you watch the video footage of the studies, it is obvious that the volunteers were anything but heartless. Even the ones who kept on shocking Mr. Wallace until the bitter end were visibly miserable. They paused and sighed gustily. They buried their heads in their hands, rubbing their foreheads before drying their sweaty palms on their pants. They chewed on their lips. They emitted nervous, mirthless chuckles. Between shocks, they implored the experimenter to let them stop. Milgram reported that at some point every participant either questioned the experiment or refused the payment he had been promised. When the experiment finally did stop and it was revealed that Mr. Wallace was only an actor, the participants looked shaky with relief. The major reason the studies are now considered ethically dubious is because of how much the volunteers themselves appeared to suffer.

Second, the volunteers’ responses weren’t uniform. True, fully half of the volunteers carried out all of the instructed shocks when Mr. Wallace was seated in a separate room from them. But at some point the other half refused to continue. Even more refused in a variation of the study in which all the men sat in the same room. On the other hand, many fewer refused when Mr. Wallace was sealed off in a separate room that left him totally inaudible to the volunteers. Milgram ran these and many other permutations of the study designed to make either the experimenter’s authority or Mr. Wallace’s suffering more or less obvious. The proportion of volunteers who continued carrying out the shocks fluctuated in each permutation, but never did the volunteers behave as a bloc. Inevitably, some continued following the experimenter’s orders while others refused—bucking authority to spare a stranger from harm.

It’s worth taking a moment to flip things around—to think about what motivated those who ultimately disobeyed the experimenter’s orders. After all, why not just keep on shocking Mr. Wallace? In theory, if people are uniformly callous, this is what they should all have done. It was the path of least resistance. There was no external reward for stopping. Nor was it likely that the volunteers feared punishment if they kept going—the experimenter repeatedly reassured them that he would take responsibility for Mr. Wallace’s fate. Did social norms constrain them? Probably not. In a situation so far out of the ordinary—leather arm straps, lab coats, a shock generator—exactly what social norms would have applied? So if our refuseniks neither anticipated reward nor feared punishment, and weren’t trying to adhere to some norm, what was left? What about compassion—simple concern for the welfare of someone who was suffering?

This seems the only likely explanation. The volunteers’ entreaties to stop the experiment always invoked Mr. Wallace’s welfare. Those who eventually stopped administering shocks said it was because they refused to cause him further suffering.

Even more striking, when you look across all the permutations of the study, it becomes clear that compassion is a stronger force than obedience. Think about it this way: When Mr. Wallace was seated in a separate room—invisible and audible only through the intercom—and the experimenter was in the room with the volunteer, the proportion of people who obeyed versus bucked authority was perfectly balanced. Milgram described the influences that the experimenter and Mr. Wallace exerted as analogous to fields of force. That an equal balance between these forces was achieved when the experimenter, but not Mr. Wallace, was standing right next to the volunteer suggests that the experimenter’s authority was a weaker force than Mr. Wallace’s suffering. To exert equal influence, the experimenter needed to be physically closer. When the experimenter and Mr. Wallace were in equal proximity to the volunteer—when both were in the room with the volunteer, or both were outside it—fewer than half the volunteers fully obeyed. The pull of compassion, on average, was stronger than the pull of obedience.

This is an oddly heartwarming message from a study not usually thought of as heartwarming: Milgram actually demonstrated that compassion for a perfect stranger is powerful and common. This is particularly interesting given that Mr. Wallace was hardly the world’s most compassion-inducing person. He was a portly, middle-aged man who wasn’t especially cute or cuddly-looking and who was a stranger to the volunteers in the study. They’d never met him before, they spoke with him only briefly before the study started, and they were unlikely to ever see him again. He never did anything for them. Why should they have cared about his welfare at all? And yet they did. They ultimately cared more about Mr. Wallace’s welfare than they cared about obeying authority, even though their obedience is what everyone remembers.

Now, you could argue that compassion that merely stops someone from zapping a stranger with painful shocks isn’t very impressive. More impressive would be compassion that moved volunteers to make some sacrifice to help Mr. Wallace—to give up their payment or undergo some risk to make the shocks stop. Or, even better, to offer to switch roles and receive the shocks in his place. Sadly, Milgram never thought to give his volunteers that chance. But someone else did. Although he is not as well known as Milgram, no social psychologist has uncovered more about the nature of human compassion than Daniel Batson.

Batson holds not one but two doctoral degrees from Princeton University: one in theology and one in psychology. He is linked to Milgram by only one degree of separation: his psychology graduate mentor was John Darley, famed for his studies of bystander apathy. Darley earned his doctoral degree from Harvard in 1965, when Milgram was on the faculty. Darley would probably have taken classes with Milgram, and he certainly crossed paths with him. Darley’s student Batson spent his academic career at the University of Kansas conducting research on spirituality, empathy, and altruism—including one study undoubtedly inspired by Milgram’s. But where Milgram used electric shocks to test how far obedience could push ordinary people to harm a stranger, Batson used them to investigate how far compassion would drive ordinary people to help one.

Batson recruited his volunteers for the study—all of whom were women—from an introductory psychology course. Each volunteer arriving in the lab was met by an experimenter who told her that the other subject in the study that day was running a little late and could she read a description of the study while they waited? Then the experimenter handed the volunteer a leaflet that described a study that was similar in many ways to Milgram’s. It explained that the study was investigating the effects of electric shocks on work performance. As in Milgram’s study, Batson’s volunteers believed that random chance dictated that the other volunteer would be receiving the shocks instead of themselves. But Batson’s volunteers would not be administering any shocks personally. They would merely be watching the other volunteer being shocked via closed-circuit television while they evaluated her performance.

Watching someone get shocked sounds much easier than actually giving someone shocks, and initially it probably was. The other “volunteer,” actually an actor posing as an undergraduate, eventually arrived, and the first volunteer watched her on-screen as she introduced herself to the experimenter as Elaine and was escorted into the shock chamber. There the experimenter explained the study to her and attached electrodes, much like those used by Milgram, to her arm. Elaine stopped her at one point to ask how bad the shocks would be. The researcher answered that the shocks would be painful but wouldn’t cause any “permanent damage.” After this less than reassuring response, the experiment began.

Elaine’s job was to remember many long series of numbers. Every so often, while she was in the middle of trying to recite the numbers, the experimenters would administer a strong shock to her arm. It was obvious to the volunteer watching through the monitor how much pain the shocks caused Elaine. With each one, her face contorted and her body jerked visibly. A galvanic skin response reading showed that her hands were sweating profusely. Her reactions grew stronger as the study progressed and the volunteer watched from the other room while trying to evaluate poor Elaine’s memory performance. You can imagine her relief when the experimenter eventually paused the experiment to ask Elaine if she was able to go on. Elaine replied that she could, but could they take a break so she could have a drink of water? When the experimenter returned with the glass, Elaine confessed that the experiment was bringing back memories of having been thrown by a horse onto an electric fence when she was a child, a traumatic experience that left her fearful of even mild shocks.

Hearing this, the experimenter protested that Elaine definitely should not continue with the study. Elaine backtracked, saying she knew the experiment was important and she wanted to keep her promise to complete it. The experimenter was briefly stumped. She thought for a moment, then suggested another option. What if Elaine switched places with the volunteer watching from the other room and they carried on with the experiment with the roles reversed?

The experimenter returned to the room where the volunteer sat. She closed the door behind her and explained the situation. The volunteer was completely free to make whatever choice she wanted, the experimenter emphasized: to switch places with Elaine or to continue as the observer. The experimenter even gave some volunteers an easy out—if they decided to continue as the observer, they could just answer a few more questions about Elaine and then they were free to go. They didn’t have to watch Elaine any more on the screen. Other volunteers were told that if they chose to continue as the observer, they’d have to watch Elaine get up to eight more shocks.

Put yourself in the volunteer’s place for a moment. You’ve been watching a stranger obviously suffering. Maybe you’ve been trying to tune out her reactions to the shocks, or maybe you’ve been thinking of asking the experimenter to stop the experiment. Maybe you’ve just been feeling relieved that it wasn’t you getting shocked. Then suddenly the experimenter appears and turns the tables: it’s up to you to decide what happens next. Would you let Elaine keep suffering, or would you be willing to suffer in her place? Would it make a difference if you had to keep on watching her suffer? Batson didn’t query any psychiatrists in advance about what they thought the volunteers would do, but perhaps you have a guess. Would any of these teenage women volunteer to receive painful electric shocks to spare a stranger from getting them? How many out of the forty-four volunteers? One or two? Half?

As in Milgram’s studies, the researchers varied several features of the experiment, each of which shaped the volunteers’ decisions to some degree. One important factor turned out to be how similar the volunteer perceived Elaine to be to herself. Volunteers who perceived Elaine as similar to themselves were twice as likely to help as those who didn’t. Whether they would have to watch Elaine continue suffering or whether they could flee also mattered, although less so. But across all the variants of the experiment, a whopping twenty-eight of the forty-four volunteers (a majority by nearly a two-to-one margin) said that they would prefer to take the rest of the shocks themselves rather than watch Elaine suffer through them anymore. Even when offered the chance to escape, over half the volunteers offered to take Elaine’s place. In no variation of the experiment was Elaine left high and dry. When asked how many of the remaining study trials they’d be willing to complete in Elaine’s place (with the most possible being eight; Elaine herself only completed two), the volunteers in some versions of the study offered to complete, on average, seven.

Stanley Milgram famously revealed that ordinary people are willing to give a stranger painful electric shocks when told to do so by an authority figure. Less famously, he also found that when the power of authority and compassion are pitted equally against each other, compassion ultimately wins. Recall that when the experimenter was in the room with a volunteer and Mr. Wallace was in an adjacent room, half of the volunteers continued shocking Mr. Wallace until the end of the experiment. But when the experimenter’s and Mr. Wallace’s proximity to a volunteer were equal—when both were in the room with the volunteer, or both outside it—obedience in Milgram’s studies dropped below 50 percent, suggesting that the pull of compassion is, on average, stronger than the pull of obedience. Even less famously (sadly!), Batson found that when people are able to choose freely, most will opt to receive electrical shocks themselves rather than let a suffering stranger continue receiving them. Taken together, the real message of these studies is that, when given the opportunity, some people will behave callously or even aggressively toward a suffering stranger—but more people will not. Compassion is powerful. And so is individual variability.

Together, these studies and others like them provided me with my first clues to help solve the mystery of my roadside rescuer. He and all the other drivers who encountered me that night on the freeway found themselves in an identical situation, and it was a situation ideally designed to minimize compassion. I was completely sealed off from the drivers who passed me—trapped inside my car, inaudible and perhaps invisible to them. They wouldn’t have known whether I was young or old, similar or dissimilar to themselves, one person or several. The many dangers they would face if they stopped to help would have been all too obvious. Escape from the situation was as easy as not hitting the brakes. Plus, the drivers had only the briefest of moments to decide what to do as they passed by. Under these circumstances, I would never conclude that any of the dozens of people who drove by me without stopping that night was incapable of compassion, any more than I would conclude that about the Milgram volunteers who could neither hear nor see Mr. Wallace and continued shocking him, or the Batson volunteers who left the experiment instead of volunteering to take the shocks for Elaine. The force of my suffering was too far removed from those who passed me to overcome the much stronger and more salient forces of self-preservation and easy escape—for most people. Thank God for variability. Even in these unpropitious circumstances, the drivers did not act as a bloc. One of them stopped to help. And one was all I needed.

Although heroes often argue otherwise, they do seem to be unlike other people in some important ways. Confronted by the same situation—a stranded motorist, a screaming man, a distressed young woman—they are moved to help rather than ignore or flee the situation. Thinking back to Milgram’s conception of fields of force, one possibility is that heroes are somehow impervious to the forces that work against heroism, like self-preservation. But this doesn’t seem to comport with the experiences of heroes like Cory Booker. He didn’t race through a burning building to save his neighbor because he was insensitive to the risks he faced. Far from it—he described himself as having been terrified for his life throughout the ordeal.

Another possibility is that heroes are more strongly affected by the “field of force” that promotes compassion. Perhaps the sight or sound, or even the idea, of someone suffering affects them more strongly than it affects the average person. This seems a critical insight, but unfortunately, neither Milgram’s nor Batson’s studies give us much information about why or how this might be. Although in some ways Milgram’s and Batson’s goals were polar opposites (with Milgram interested in forces like obedience that override compassion, and Batson interested in when compassion overrides other forces like self-interest), in other ways their goals were very similar. Both men were trained in social psychology, a discipline that has historically focused on how situations and external events affect people on average rather than focusing on variation among people. Social psychologists ask how events like orders from an authority affect people—on average. How are people influenced by the belief that a stranger is suffering—on average? The advantage of focusing on external events and situations like these is that they can be tweaked. An experimenter can seat a suffering stranger right next to a study volunteer, or choose to make the stranger audible only over an intercom, or to not make him audible at all. Tweak! When these tweaks are coupled with tight control of all the other features of the study—the testing room, the shocks, the experimenter’s instructions—you’ve got a true experiment. And what a true experiment gives you is the most satisfying kind of scientific power—the power to say that the thing you tweaked caused the thing you measured. Mr. Wallace’s audible cries caused more people to stop shocking him. Difficulty of escape caused more volunteers to take the shocks on Elaine’s behalf.

But the disadvantage of focusing on external events is that you don’t get much information about individual variability. The factors responsible for variability often can’t be tweaked. They include every biological and environmental force that has ever affected a research volunteer up until the moment he or she arrives in the lab; tweaking factors like these is usually impossible or unethical, or both. What kind of parenting Milgram’s volunteers received as children, their IQs, and their personalities could very well have played a role in how they responded. But scientists can’t tweak these things. They can’t remove babies from their homes and foist them on new parents to study parenting, or inflict IQ-altering brain damage, or give people personality-altering drugs. Experiments like these would be as horrific as Eichmann’s atrocities.

So we have to do the best we can without tweaking these fundamental features. We observe and measure parenting and intelligence and personality, try to control for extraneous variables, then statistically map possible causes onto possible effects, all with the knowledge that we may still be missing something. For instance, if an aggressive adult had harsh and punitive parents, perhaps the harsh parenting caused his aggression. Or perhaps not. People also share genes in common with their parents. So another possibility—one of many—is that observed correspondences between harsh parents and aggressive offspring result from common genetic factors causing them all to act out.

The Milgram volunteers who continued shocking Mr. Wallace all the way down the switchboard intentionally and knowingly caused him harm, which is the definition of aggression. Let’s say that the volunteers who tended to do this were raised by harsher than average parents. Even if this were true, we couldn’t say that harsh parenting caused the volunteers to act more aggressively because there are too many other possible alternatives. It comes back to the old trope about correlation not implying causation. Keep this in mind whenever you hear about developmental studies linking some behavior in parents—harsh discipline or breastfeeding or using complex language or anything else—to some outcome in children. Just because one event precedes another does not mean that it caused the other. Many of these studies aren’t designed to tease apart the roles of genetic and environmental factors, so they can’t clearly establish cause and effect.
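
To make the confound concrete, here is a minimal toy simulation, with entirely invented numbers and no pretense of modeling real data, in which a single genetic factor influences both how harshly a parent behaves and how aggressive the child is, while parenting itself has no causal effect on the child at all. The two measures still come out correlated.

```python
# A toy simulation of the confound described above: a genetic score drives
# BOTH harsh parenting and child aggression, with no causal path from
# parenting to the child at all, yet the two still end up correlated.
# All numbers are invented.
import random

random.seed(0)
n = 10_000
parenting, child_aggression = [], []
for _ in range(n):
    parent_genes = random.gauss(0, 1)
    child_genes = 0.5 * parent_genes + random.gauss(0, 0.87)    # the child inherits half
    parenting.append(parent_genes + random.gauss(0, 1))         # genes -> harsher parenting
    child_aggression.append(child_genes + random.gauss(0, 1))   # genes -> more aggression

def corr(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

print(f"parenting-aggression correlation: {corr(parenting, child_aggression):.2f}")
```

Run it and the correlation hovers around 0.25, even though in this toy world changing the parenting would change nothing about the child.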

Luckily, there are ways of getting around problems like this. One is to take advantage of natural experiments. Natural experiments result when a variable is tweaked by someone, or something, other than a scientist. They are rarely natural and are not true experiments, but they are invaluable nonetheless. One well-known example is adoption studies. It’s clearly unethical for scientists to take babies from their biological parents and give them to unrelated adults to raise, but it’s fine—admirable even—for adoption agencies to do the same thing. And what’s not unethical is for scientists to study children who have been adopted to untangle the effects of genes and parenting. Adoptions “naturally” disentangle genes and parenting by ensuring that one set of parents contributes genes and an unrelated set contributes only parenting. So scientists can study adopted children to learn about how genes and parenting contribute to nearly any outcome in children.

Another way to disentangle genetic and environmental effects is through studies of twins. Because identical twins share 100 percent of their genes whereas fraternal twins share on average only 50 percent (just as any other biological siblings do), the contributions of genes and the environment can be teased apart by studying similarities and differences between identical and fraternal twins. An even more powerful approach is to combine these methods and study identical and fraternal twins raised by their biological or adoptive parents. As a result of such studies we know, for example, that identical twins are very similar to one another across multiple physical and psychological indices, even when they are raised in different households. In fact, when raised apart they are sometimes more similar to one another—in terms of their IQs, say—than fraternal twins raised in the same household. A study like this provides compelling evidence for genetic contributions to intelligence.
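
For readers who want to see the arithmetic, here is a minimal sketch of the classic first-pass calculation (Falconer’s approximation) that turns twin correlations into rough estimates of genetic and environmental contributions. The correlations below are hypothetical, chosen only to illustrate the logic, not drawn from any particular study.

```python
# Illustrative sketch of the twin-study logic (Falconer's approximation).
# The correlations are hypothetical, not taken from any particular study.

def falconer_estimates(r_mz, r_dz):
    """Rough decomposition from identical (MZ) and fraternal (DZ) twin
    correlations into genetic, shared-environment, and unique-environment shares."""
    h2 = 2 * (r_mz - r_dz)   # heritability: MZ twins share twice the genetic overlap of DZ twins
    c2 = 2 * r_dz - r_mz     # shared (family) environment
    e2 = 1 - r_mz            # unique experiences plus measurement error
    return h2, c2, e2

# Hypothetical IQ correlations: identical twins 0.75, fraternal twins 0.50
h2, c2, e2 = falconer_estimates(r_mz=0.75, r_dz=0.50)
print(f"genes ~{h2:.0%}, shared environment ~{c2:.0%}, unique environment ~{e2:.0%}")
```

The key move is the comparison itself: identical twins share twice as much genetic overlap as fraternal twins, so doubling the gap between the two correlations gives a crude estimate of heritability.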

The results of twin and adoption studies show that some outcomes are almost entirely inherited. To no one’s surprise, for example, adopted children’s eye color corresponds more strongly to that of their biological parents than to that of their adoptive parents. The heritability of human eye color—meaning how much of the variation in eye color results from inherited factors rather than environmental factors—is about 98 percent. Environmental factors contribute almost nothing. This is how researchers could determine with near certainty the eye color (blue) of the English king Richard III, who died more than 500 years ago, using DNA samples extracted from his bones. The heritability of other physical features also tends to be high. Height is about 80 percent heritable, meaning that most variation in height results from genes. The other 20 percent largely reflects the effect of nutrition or illness. At least, this is how it works in typical modern, prosperous societies.

A caveat is that the heritability of some traits, like height, may fluctuate depending on the environment. For example, when food is scarce, the heritability of height decreases. This is because genes encode people’s maximum potential height—the height they can achieve with adequate health and nutrition during childhood. Food scarcity prevents people from reaching their potential, and the greater the deprivation, the greater the difference between their genetic potential and their actual adult height. In a malnourished population, those children who get 70 percent of the calories they need end up even shorter than the children who get 90 percent of what they need. As a result, widespread environmental factors like food availability account for much more than 20 percent of the differences in malnourished children’s heights. And as the proportion of the variability accounted for by the environment goes up, the proportion accounted for by genes inevitably goes down.
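
Put in arithmetic terms, heritability is simply the genetic share of the total variation, so anything that inflates environmental variation automatically deflates heritability, even if the genes themselves are doing exactly what they always did. A minimal sketch, with invented numbers:

```python
# Heritability as a variance share: invented numbers, purely to show how
# growing environmental variation shrinks the genetic share.

def heritability(var_genetic, var_environmental):
    return var_genetic / (var_genetic + var_environmental)

var_genes = 80.0  # variation in height due to genetic differences (arbitrary units)

print(f"well-fed population:     h2 = {heritability(var_genes, 20.0):.2f}")   # 0.80
print(f"malnourished population: h2 = {heritability(var_genes, 120.0):.2f}")  # 0.40
```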

When children are getting enough food, however, more of it will make no difference. Children who consume 100 percent of the calories they need—enough to compensate for the calories they expend through activity and growth—will not be shorter than children who consume 110 percent of the calories they need. Once you have enough food to reach your maximum potential height, getting more food has no further effects. This is why public health efforts aimed at improving a population’s well-being tend to focus on reducing poverty rather than increasing wealth: even small reductions in poverty can improve overall outcomes in a way that increases in wealth never will.

Even in prosperous populations, the heritability of other physical traits is somewhat lower than it is for height. For body weight, heritability hovers around 50 percent. This makes sense, in part because body weight has no maximum potential value, nor is it fixed by adulthood. So your parents’ choices and other environmental factors that shape your diet and lifestyle can more strongly shape your body composition than your height. But that 50 percent genetic contribution should not be ignored—body shape is not infinitely modifiable. No biological offspring of stocky parents will ever be rail-thin, even if the child is adopted and raised by paleo-vegan Pilates devotees. No diet could turn Kim Kardashian into Kendall Jenner, her lanky half-sister. Kendall carries the genes of tall, lanky Caitlyn Jenner, whereas Kim’s father was the short, stocky Robert Kardashian Sr. Biology is not destiny, but it does place limits on destiny.

This is true for essentially every complex human trait, from physical traits like body shape or facial appearance to psychological traits like aggressiveness or extraversion. According to the famed behavioral geneticist Eric Turkheimer, the first law of behavioral genetics is that all human behavioral traits are heritable. Like body composition, most psychological traits—our mental composition, you might say—are about 50 percent heritable. A massive study reported in the journal Nature Genetics showed that, across fifty years of studies of hundreds of thousands of pairs of twins, genes account for, on average, 47 percent of variance in cognitive traits like intelligence and memory and 46 percent of variance in psychiatric traits, including aggression. Parenting and other environmental factors undoubtedly shape outcomes as well, but genetic factors are at least as influential—and often more influential. This helps explain why twins adopted into separate families and later reunited find themselves tickled by the similarities they discover—from hair color to preferred hairstyle to preferred hobbies—despite their separate upbringings.

In Milgram’s era, most psychologists would have considered delving into the heritability of aggression, or any other personality variable in humans, a fool’s errand. From the early twentieth century and extending well into the 1960s, the tenets of a school of thought called behaviorism dominated psychology. Behaviorists like John Watson and B. F. Skinner viewed observable variations in animal and human behavior as primarily a result of their learning histories. If an organism—a pigeon, a rat, a monkey—had been previously rewarded (or “positively reinforced,” as the behaviorists termed it) for pushing a button, it would come to push the button more. If it had been punished for pushing the button, it would push it less. Two animals in adjacent cages that pushed their buttons different numbers of times must have experienced different prior outcomes for doing so. The behaviorists’ views were very influential—Skinner (yet another of the Harvard Psychology Department’s famous faculty) is today considered the single most influential psychologist of the last century.

Skinner’s experiments were beautifully designed and their results compelling. The ingenious “Skinner boxes” that he created to test his predictions were preserved in a lovingly curated exhibition in the basement of William James Hall that I used to pass on my way to my classes there. I marveled at the ingenuity of the little boxes festooned with elaborate arrays of tiny wires and pulleys and buttons and drawers. Of course, Skinner’s experiments used little metal boxes because all of his participants were rats and pigeons. In truth, the scope of his research was very narrow. He measured only simple behaviors that could be tested in one of his boxes, like lever-pulling and button-pecking; then, like other behaviorists, he extrapolated wildly from his findings, arguing that all variability in all animals’ behaviors—from rat aggression to human language and love—was best understood as resulting from learning histories. Skinner famously mused in his novel Walden Two, “What is love except another name for the use of positive reinforcement? Or vice versa.”

Likewise, the thinking went, two children in adjacent houses with different levels of aggression must simply have received differential reinforcement for aggression along the way—one rewarded for aggressive behaviors more than the other. The rewards in question needn’t be cookies or stickers. Only the world’s worst parents would reward aggression with literal prizes. But all sorts of other inadvertent behaviors on the part of parents could reward aggression, in theory. When aggression begets attention, even in the form of yelling or criticism, it can be more rewarding than no attention. A child who is ignored most of the time except when he hits his brother—in which case he gets yelled at—might actually prefer the yelling. Or if hitting his brother gets him something else he wants—his brother out of his bedroom, a toy his brother was holding—that’s a reward too. According to Skinner, if we could perfectly control the rewards and punishments that children receive from their rearing environments, we could eliminate undesirable behaviors like aggression entirely.
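
As a rough illustration of the behaviorist logic (not of any actual study), the toy simulation below starts two identical “children” with the same low propensity for aggression and differs only in how often aggressive acts happen to get rewarded; their behavior diverges accordingly.

```python
# Toy sketch of differential reinforcement: two simulated "children" start
# identical, but one is rewarded more often when it acts aggressively.
# Entirely hypothetical numbers.
import random

random.seed(1)

def simulate(reward_prob, trials=1000, learning_rate=0.05):
    """Propensity to act aggressively drifts upward after rewarded acts
    and decays slightly otherwise."""
    p_aggress = 0.10
    acts = 0
    for _ in range(trials):
        if random.random() < p_aggress:        # the child acts aggressively
            acts += 1
            if random.random() < reward_prob:  # the act gets attention, a toy, etc.
                p_aggress = min(0.95, p_aggress + learning_rate)
        else:
            p_aggress = max(0.05, p_aggress - 0.001)
    return acts

print("rarely rewarded:    ", simulate(reward_prob=0.05), "aggressive acts")
print("frequently rewarded:", simulate(reward_prob=0.50), "aggressive acts")
```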

But there is absolutely no evidence that this is true. Rewards clearly do influence behavior, as do punishments. But heritability studies prove without a doubt that they are not the only influences. The heritability of aggression is consistently found to be around 50 percent, and for some forms of aggression it is as high as 75 percent. If that much of the variability in children’s aggression can be predicted from genetic differences among them, genes must play a major role in promoting aggression.

What all of this means is that understanding aggression—and the compassion that can inhibit it—requires more than looking at people’s behaviors inside a laboratory, where various environmental variables can be tweaked to make people behave more or less compassionately. A complete understanding of the roots of aggression and compassion also requires looking at deeply rooted, inherited variables that affect how compassionately people behave—that produce the variation in how people respond to various tweaks. Perhaps the most infamous and compelling such variable is psychopathy.

Psychopathy (pronounced sigh-COP-a-thee) is a disorder that robs the human brain of the capacity for compassion. It is characterized by a combination of callousness, poor behavioral control, and antisocial behaviors like conning and manipulation. Psychopaths need not be violent, but they often are. Only about 1 or 2 percent of the American population could be classified as true psychopaths, but among violent criminals the number may be as high as 50 percent. Psychopaths are marked by their tendency to engage in proactive aggression—acts of violence and aggression that are deliberate and purposeful rather than hot-tempered and impulsive.

Psychopathy is also highly influenced by genes, with heritability estimates as high as 70 percent. This surprises many people I encounter, which reflects a common view of human aggression that is often, whether people know it or not, colored by the long shadow of behaviorism. Most people assume that violent, callous individuals must be the product of highly abusive or neglectful homes. But this simply isn’t true.

Take Gary Ridgway, the middle son of Mary Rita Steinman and Thomas Newton Ridgway. Gary and his brothers were raised in McMicken Heights, Washington, just north of where I grew up in Tacoma. The family was poor, no doubt: Thomas drove trucks off and on for money, and the family was crammed into a 600-square-foot house. Mary was a bossy and dominating mother—a “strong woman,” as her oldest son Greg later recalled. She and her husband had fights that sometimes turned violent—she once broke a plate across his head during a family dinner. But Gary also remembered her as a kindly figure who did jigsaw puzzles with him when he was a small boy and helped him with his reading. There was no sign of true abuse or household dynamics outside the range of normalcy for a family in the 1960s. And Gary’s brothers grew up to lead ordinary lives.

Not so for Gary, who grew up to become the Green River Killer, the most prolific serial murderer in American history. He is now serving a life sentence for forty-nine confirmed murders, and he has claimed that he committed dozens more. His first attempt at murder took place around 1963—right around the time, as it happens, that Milgram was completing his studies on obedience.

Gary was about fourteen years old and on his way to a school dance. Walking through a wooded lot, he ran across a six-year-old boy. Almost without thinking, he pulled the boy into the bushes and, using a knife he always carried with him, stabbed the boy in the ribcage, piercing his kidney. He quickly withdrew the knife and watched blood gush from the wound. Then Gary walked away, leaving the boy to die—or live. He wasn’t particularly concerned either way, except that he hoped that if the boy lived, he wouldn’t be able to identify him. (The boy did live, but never identified Ridgway as his assailant.) Later on, Ridgway couldn’t even pinpoint why he had done it. It had felt like it just happened, much as other bad things often seemed to just “happen” for him—gleaming rows of windows shattered by rocks, birds felled by a BB gun, a cat suffocated in a picnic cooler.

As Ridgway neared adulthood, things turned much darker. Relentless sexual urges awakened within him. Combined with the callousness and delight in the power of killing he already possessed, those urges turned Ridgway into an insatiable sexual sadist who raped and murdered at least forty-nine girls and women, most of them teenage runaways and adult sex workers around the town of SeaTac in the 1980s, while I was in elementary school some thirty miles to the south.

Ridgway was an unusually depraved personality even compared to other murderers—“a lean, mean killing machine,” as he called himself. Mary Ellen O’Toole, a famed FBI profiler and expert on psychopaths, spent many hours interviewing Ridgway, and she has told me that he is one of the most extreme, predatory psychopaths she has ever encountered.

Ridgway lured his victims into trusting him by showing them pictures of his young son Matthew, or leaving Matthew’s toys across the seat of his truck. After kidnapping them, he assaulted and killed the women and girls in ways that were often gruesome or bizarre, even by the standards of a culture inured to the horrors of CSI and True Detective. Most of his victims were suffocated or strangled, and all of them showed signs of sexual assault. Their arms and hands bore bruises and other injuries. Oddly shaped stones were sometimes found in their vaginas. Several of their bodies were festooned with branches or loose brush. One victim, a twenty-one-year-old named Carol Christensen, was found lying in the woods with a paper bag over her head, twine wrapped around her neck, and a wine bottle lying on her stomach. A trout was draped across her neck, and another lay on her shoulder.

People are hungry for details about psychopaths. I have learned that if I want to start an hour-long conversation with a stranger, I need only mention that I study psychopathy. (If I want to be left alone, I say I’m a psychology professor, which sends people running for the hills.) At least ten books have been written about Ridgway, including one by his defense attorney, Tony Savage, and another by Ann Rule, the queen of true crime. Why the fascination? I don’t fully understand it myself, but I think it is partly because psychopaths, especially the really ghastly ones like Ridgway, are simultaneously so terrifying and so hard to identify. Even psychopaths who commit strings of unimaginably awful serial murders are often shockingly normal on the surface. And not so-normal-they-seem-creepy normal. Actually normal. Wave-to-their-neighbors-on-the-way-to-work normal.

Tony Savage emphasized this in a 2004 interview with Larry King. “Larry,” he said, “I keep telling people, you could sit down and talk with this guy at a tavern and have a beer with him, and twenty minutes later, I’d come up and say, ‘Hey, this is the Green River monster,’ and you would say, ‘No way!’” If you think about it, this has to be true. If psychopaths were obviously creepy or “off,” they couldn’t commit long series of crimes. They wouldn’t be able to convince their victims to trust them or to evade detection for long.

Their seeming normalcy distinguishes psychopaths from murderers who are psychotic—a common confusion, but an important distinction. Psychosis is the inability to distinguish fantasy from reality. It is a common symptom of schizophrenia and bipolar disorder, and usually takes the form of delusional beliefs or hallucinations. People who are psychotic might believe that they are being followed by the CIA or sent secret messages through billboards or their televisions, and they might hear voices telling them to do terrible things, including, sometimes, to commit acts of violence. (Most people who are psychotic are not violent. But the results can be devastating when they are, sometimes because they are both psychotic and psychopathic—a truly awful combination.) Recent mass killers like Jared Loughner, who shot former congresswoman Gabrielle Giffords and eighteen others in a Tucson, Arizona, parking lot, and James Holmes, who shot eighty-two people in an Aurora, Colorado, movie theater, were psychotic. People who knew them found them odd and alarming, and even in photographs it is easy to see how disturbed they were. But mass shooters like Loughner and Holmes don’t need to convince anyone to trust them or evade detection, because they commit their crimes all at once and out in the open and often intend to die anyhow, either by self-inflicted or police-inflicted wounds.

As scary as mass killers are, serial killers are somehow scarier, perhaps because the most frightening kind of danger is the kind that cannot be predicted in advance. Not all serial killers are psychopaths, but a lot of them are. And if psychopaths genuinely come across as normal, there is no easy way to steer clear of them, making them that much more frightening. My guess is that the pervasive fascination with psychopathy in part reflects a desire for details that will somehow give psychopaths away—nonverbal “tells” like unusual patterns of eye contact, or signature biographical details like childhood bed-wetting or fire-setting. Maybe people think that if we can just find the clues that mark people as psychopaths, we can avoid them or round them up and lock them away. This could be why the myth that psychopaths result from abusive upbringings is so persistent. It seems plausible, it is sometimes true (Ted Bundy and Tommy Lynn Sells are two notorious psychopathic murderers who experienced terrible abuse as children), and it might be the kind of signature detail we could use to isolate the budding psychopaths among us.

Some of Ridgway’s biographers have fallen prey to just this temptation—trying to link his gruesome career as a serial murderer to his parents’ fighting, or the way his mother bathed him. But it’s just not that simple. Thousands of children witness their parents fighting, sometimes violently, every year. Many thousands more, sadly, are abused or neglected, sometimes horribly so. But (thankfully) we don’t have thousands of serial murderers running around in the aftermath of this mistreatment. If childhood mistreatment alone caused people to become psychopathic killers on the scale of Gary Ridgway, our society would make a zombie apocalypse look like Disneyland.

Without a doubt, the maltreatment of children is a terrible thing. Children who are abused or neglected or witness violence frequently experience all kinds of negative outcomes later in life. They often develop, not surprisingly, exaggerated sensitivity to potential threats or mistreatment, and they sometimes overreact aggressively to it. This is called reactive aggression—angry, hotheaded, impulsive aggression in response to being frustrated or provoked or threatened. If your significant other threatens to leave you and you throw your glass at him, this is reactive aggression. If someone bumps into you on the sidewalk and you turn around and shove him, this is reactive aggression. If a strange woman slaps you after you grab her ass and in response you haul back and punch her in the face—again, reactive aggression. This kind of aggression is relatively common, and it often crops up in people who are depressed or anxious or have experienced serious trauma.

But this is not the primary problem with psychopaths. Psychopaths can be quite impulsive and do often engage in reactive aggression, but recall that what really sets them apart is proactive aggression—the cool-headed, goal-directed kind of aggression, the seeking-out-vulnerable-women-to-rape-and-murder kind. Child abuse and neglect don’t seem to promote this kind of aggression. There is almost no evidence of any direct, causal link between parental maltreatment and the proactive aggression that sets psychopaths apart. It’s not for lack of looking, either; well-controlled studies simply don’t find one.

For example, one study conducted by psychopathy expert Adrian Raine and his colleagues at the University of Southern California looked at reactive and proactive aggression in more than 600 ethnically and socioeconomically diverse pairs of twins in the greater Los Angeles area. They tracked the twins over the course of their adolescence, which is the time when aggression tends to become most pronounced. The researchers found that genetic influences contributed about 50 percent to persistent reactive aggression across adolescence, with the rest resulting from environmental influences. But genes contributed a whopping 85 percent to persistent proactive aggression. And none of the remaining 15 percent was attributable to what are called shared environmental influences, which include any influences that affect children within a family similarly, like poverty, the type of house or neighborhood they live in, or having parents who fight or are neglectful. These shared environmental variables—even all added together—don’t seem to predict the course of proactive aggression in adolescents.
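For readers who want to see the arithmetic behind those percentages, here is a minimal sketch assuming the standard ACE twin model, in which total variation in a trait is partitioned into additive genetic (a²), shared environmental (c²), and nonshared environmental (e²) components that sum to one. The ACE framework is my assumption about the type of model used; the notation below is a convention of the field, and the only figures are the ones already quoted above.

```latex
% Minimal sketch of an ACE-style variance decomposition (assumed framework).
% a^2 = additive genetic, c^2 = shared environment, e^2 = nonshared environment.
\[
  a^2 + c^2 + e^2 = 1
\]
% Reactive aggression, as described in the text:
\[
  a^2 \approx 0.50, \qquad c^2 + e^2 \approx 0.50
\]
% Proactive aggression, as described in the text: the shared-environment
% component is essentially zero, so the nongenetic remainder is nonshared.
\[
  a^2 \approx 0.85, \qquad c^2 \approx 0, \qquad e^2 \approx 0.15
\]
```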

This of course leaves an urgent, open question: what does cause psychopathy? Through a series of very fortunate events, I got the opportunity to take part in seeking out the answers.

In 2004, I was nearing the completion of my PhD and finishing up my doctoral dissertation. What I needed next was a job. I knew I wanted to continue in academic research, but at twenty-seven, I wasn’t ready to begin a professorship. I had managed to secure a tenure-track offer from a small, selective, rural college, but I couldn’t bring myself to accept it. The school was too small and too rural, and I didn’t feel ready to face the slog that assistant professors face on their way to tenure. The obvious alternative was a postdoctoral fellowship, which is sort of the equivalent of a medical residency for PhDs. Postdoctoral fellowships provide doctoral graduates with a few extra years of training in the laboratory of an established investigator. Postdocs are a terrific way to acquire training in new research techniques and publish original research before tackling a professorship.

I started looking for a postdoctoral position based mostly on geography. I was engaged to be married, and my fiancé, Jeremy, whom I had started dating at Dartmouth, was a US Marine who had nearly completed his four years of service. Of all the cities in the country, the one with the most and best professional opportunities for a former Marine with a Dartmouth degree in government is Washington, DC. So I started looking there. The Washington area contains several major research universities, and better yet, the National Institutes of Health is in Bethesda, Maryland, just a few miles outside the District.

As an aside, of all American city names, “Bethesda” may be my favorite. It is named for Jerusalem’s Pool of Bethesda, the waters of which are described in the biblical Gospel of John as possessing extraordinary powers to heal. The Bethesda in Maryland may be less poetic, but it also possesses extraordinary healing powers. The NIH is far and away the biggest supporter of medical research in the world. The billions of dollars in grant funding it has awarded to researchers around the world over the last several decades have underwritten discoveries of treatments for diseases ranging from cancer to HIV to schizophrenia that have healed countless suffering people.

The NIH also supports a smaller number of scientists—about 6,000—who conduct research on its Bethesda campus. The “intramural researchers,” they are called. The intramural resources at NIH are abundant, and its location right outside Washington, DC, made it a perfect spot for me geographically. But what were the odds that I could find a position there? Most NIH researchers do medical research and have degrees in medicine or biology or chemistry. Even at the institute where most psychology and neuroscience research is conducted, the National Institute of Mental Health (NIMH), the researchers are mostly psychiatrists and clinical psychologists. Was there any place there for a social psychologist casting around for a postdoc?

I sought help from a former graduate school colleague, Thalia Wheatley, who was also a social psychologist and had recently started a postdoc at NIMH. Did she know of any researchers on campus who might have a postdoc position for me? She suggested a few names, the last of which was James Blair. “Oh, he’d be perfect for you!” she said. “You’re interested in empathy, and he studies psychopaths.”

“James Blair?” I repeated. “Wait, that’s not R. J. R. Blair, is it?”

R. J. R. Blair (alternately R. Blair, J. Blair, R. J. Blair, or J. R. Blair) was the researcher with the hard-to-pin-down initials whom I knew to be among the world’s foremost experts on the neural basis of psychopathy. I was very familiar with his work, having cited seven of his research papers in my dissertation, but the bylines on those papers said he was at University College London. His relocation to the NIMH was so recent that there was no scholarly record of it. Thalia laughed. “Yes, R. J. R. Blair is James Blair. And I think he is looking for a new postdoc. I have a meeting with him next week and I can find out.”

I was ecstatic. Thalia was right, this was perfect. This was better than perfect.

Although I was earning my degree in social psychology, where the focus is historically on how people as a whole respond to external influences, over the course of my graduate studies I’d been moving in the direction of studying differences among people. As I sought predictors of altruistic responding in my laboratory studies, I noticed that the individual differences were often more important than the laboratory manipulations that were my initial focus.

For example, one of my dissertation studies aimed to replicate an altruism paradigm developed by Daniel Batson. Batson’s primary focus was on the relationship between empathy and altruism. I should note that Batson used the term empathy to mean what most researchers now refer to as empathic concern or compassion or sympathy—namely, caring about others’ welfare. The term empathy is more commonly used to mean simple apprehension of another’s emotional state, or sometimes sharing that state. If you look frightened and I correctly detect how you are feeling and show physiological changes like an increased heart rate or sweating hands, or if I report feeling upset myself, we can say I’ve experienced empathy. If I also express the desire to alleviate your distress, that’s empathic concern or compassion. The processes are related but distinct.

Batson manipulated empathic concern by asking some volunteers to focus on the thoughts and feelings of a woman named Katie Banks, whose sad radio interview they were listening to. In it, Katie described the terrible hardships she was experiencing following the deaths of her parents, which had left her to care for her young siblings while trying to complete college. Other volunteers were asked to focus on the technical details of the broadcast. Batson reliably found that instructing volunteers to focus on Katie’s feelings caused them to offer more help to Katie afterward. I found this in my own research too. Volunteers listened to a similar radio interview and afterward were given the opportunity to pledge money or time volunteering to help Katie. (In my study, Katie was actually me putting my college theater training to good use while reading from the same transcript Batson had used.) The research assistants running the study gave the volunteers envelopes in which to seal their pledges so that their decisions would remain anonymous. Like Batson, we found that the volunteers instructed to focus on Katie’s feelings experienced more empathic concern and pledged more time volunteering to help her than did those asked to consider technical details of the broadcast.

But this manipulation was not the only, or the best, predictor of how much time people pledged. After the volunteers had listened to the broadcast, we gave them other forms to fill out and tests to complete. One of them was a test of facial expression recognition. Volunteers viewed twenty-four standardized photos of young adults posing expressions of anger, fear, happiness, and sadness and tried to identify each expression using a multiple-choice format. Some of the expressions were obvious, but others were subtle. One dark-haired woman’s fearful expression was betrayed by only the faintest elevation of her upper eyelids and slightly parted lips.

After the study, my research assistants and I tallied up each volunteer’s accuracy at recognizing the various emotions and plotted those scores against their pledges to help Katie. What we found surprised me a little. The ability to recognize happy expressions was actually a negative predictor: the volunteers who pledged the most time to help Katie were worse than average at recognizing happiness. But the most generous volunteers were better than average at recognizing fearful facial expressions. Even more surprisingly, fear recognition was a statistically stronger predictor of pledges than the empathy manipulation itself. When it came to pledges of money, the manipulation didn’t predict anything at all. The most powerful predictor of monetary donations to Katie, by a mile, was individual variation in the ability to recognize others’ fear.
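To make concrete what it means for one predictor to be statistically stronger than another, here is a minimal sketch, in Python, of the kind of regression one could run with both predictors entered at once. The data are simulated, and every name in it (condition, fear_acc, pledges) is hypothetical and invented for illustration; this is not the study’s actual analysis or data.

```python
# Hypothetical sketch: compare two predictors of pledged help in one regression.
# All variable names and numbers are invented for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120                              # illustrative sample size
condition = rng.integers(0, 2, n)    # 0 = focus on technical details, 1 = focus on feelings
fear_acc = rng.uniform(0.3, 1.0, n)  # proportion of fearful faces identified correctly

# Simulate pledges in which fear recognition carries most of the signal
pledges = 5 + 1.0 * condition + 20.0 * fear_acc + rng.normal(0, 5, n)

# Ordinary least squares with both predictors entered simultaneously
X = sm.add_constant(np.column_stack([condition, fear_acc]))
model = sm.OLS(pledges, X).fit()
print(model.summary())
```

In output like this, a reliably nonzero coefficient for fear_acc alongside a negligible one for condition would correspond to the pattern described above: the manipulation predicting little, and fear recognition doing the real work.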

I followed up this puzzling finding with more studies, which kept showing the same thing: the most reliable predictor of altruism, across different tests and groups of participants, was how well people could recognize fearful facial expressions. This was a better predictor than the ability to recognize any other facial expression, and a better predictor than other traits sometimes touted as promoting altruism, like gender, mood, and how empathic people report themselves to be. It was a weird result. I knew it at the time. It was later selected by the psychologists Simon Moss and Samuel Wilson as one of the “most unintuitive” psychology findings of 2007. It wasn’t an anomaly, though. Subsequent research has also linked sensitivity to fearful expressions to altruism and compassion in both adults and children across different cultures.

There was one set of data out there that could make sense of these findings. But it wasn’t data collected by a social psychologist—it had been published by none other than James Blair. And, lucky me, he did offer me that postdoc position. That meant I’d soon be working alongside him in his new NIMH lab, digging deeper into the brain basis of the capacity to care for others by conducting the first-ever brain imaging research on psychopathic teenagers.