7

CAN WE BE BETTER?

AS SHOULD BE clear by this point, an overwhelming body of scientific data supports the conclusion that human beings are in no way fundamentally selfish or callous. All the neural and cognitive tools needed to experience genuine concern for others’ welfare and the desire to help those in distress are part of our birthright as mammals who bear altricial, helpless infants in need of care and protection from parents and other adults alike. These tools include the ability to detect others’ distress, the tendency to feel concern when we do, and the desire to help, even when those in distress are unrelated to us. Naturally, people vary considerably in terms of both concern for others and desire to help them, but it is the rare person who is totally blind to others’ distress and completely without concern for their welfare. That gives most of us plenty of innate capacity to build on.

Thus, an obvious question arises: Why can people still be so awful to one another? Why is there violence and hatred and cruelty? Why are some 400,000 people around the world murdered every year? Why did the Holocaust happen? Why can untold millions of suffering refugees not find asylum within the many prosperous nations of the world?

When it comes to all manner of crime, cruelty, and callousness, it is clear that the 1 or 2 percent of true psychopaths among us are disproportionately responsible for much of it. But remember that this fact says nothing about “human nature.” In fact, as I’ve emphasized, the fact that psychopaths are so very different from other people only serves to highlight the average person’s capacity for genuine compassion and concern. That said, we clearly cannot blame psychopaths for all the cruelty and violence in the world, or even most of it. Among people incarcerated for violent crimes, for example, only about half are true psychopaths. A nation doesn’t invade another country or commit ghastly atrocities because the entire nation is made up of psychopaths. And in daily life, minor acts of cruelty and callousness are too widespread to all be the work of psychopaths. If it’s true that nature has built most humans so beautifully for compassion, how can this be?

Part of the answer is that nature has also built us beautifully for aggression and violence. There is nothing inherently contradictory about this. Consider the case of the lioness who slaughtered a baboon she intended to eat one minute, then tenderly retrieved and groomed that baboon’s baby the next—then savagely chased another lion away from “her” baby a moment later. Or consider the sheep who nurses and dotes on her own lamb one minute before callously butting away the hungry lamb of another ewe. Are these creatures really caring, or are they really cruel? It is both foolish and unnecessary to come down on one side or the other. Both capacities are equally real. Similarly, the question of whether humans are compassionate or cruel can never be answered—we are both. At least, we have the capacity to be. The real question is: when do we express compassion versus cruelty, and why, and to whom? A complete answer to this question requires understanding the essential, inexorable influence of culture on the basic biological processes at hand: how the physical and social environments we inhabit shape our views about, and treatment of, the other beings we encounter during our lives, and how our culture may ultimately enable us to expand our capacity for compassion and altruism.

This chapter delves into four considerations that should be kept in mind as we set out to understand how we might become more altruistic.

1. We are already so much better than we think we are.

It is easy to be misled by attention-grabbing atrocities, but try not to be. The actual numbers are clear: goodness is overwhelmingly common, and kindness is the norm, not the exception. Recall the World Giving Index, which is compiled from the results of massive Gallup polls of thousands of people around the world. The 2016 Index tracked how residents of 140 countries responded to three questions that, together, span a wide range of altruistic behaviors: (1) Have you, in the last month, given money to charity? (2) Have you engaged in volunteer work? (3) Have you given help to a needy stranger? All three forms of altruism can be motivated by a variety of forces, but the third question indexes the kind of generosity most likely to represent a spontaneous, caring response to another’s distress or need. This question is aimed at capturing direct acts of altruism like helping a lost stranger find their way, picking up something that was dropped, or giving to a needy person begging for help. Helping a needy stranger also happens to be by far the most common form of generosity in the world, according to the Index: over half of the world’s population report helping a needy stranger every month. Donating money and volunteering are also remarkably common. Every month over 1 billion people donate money to a charity. And over 1 billion volunteer their time. Every month.

The United States is a more generous country than nearly any other nation on earth, according to these three indices. Across the last five years, it has remained the second-most-generous country in the world. Americans donate hundreds of billions of dollars of their own money annually to charities, and spend over 7 billion hours volunteering to help members of their communities (and this number includes only formal volunteering through charitable organizations). And according to the Index, the United States is a particular standout in giving help to needy strangers. Extrapolating from the Index’s results, Americans offer help to hundreds of millions of strangers every year in countless unknown acts of direction-giving, belongings-collecting, change-offering, and the like. Their help also includes forms of altruism not assessed by the World Giving Index, like blood donation. Americans donate over 13 million units of their blood to sick and injured strangers annually, and many of these donations represent spontaneous responses to strangers’ suffering and distress. Blood donations reliably surge following publicized tragedies like mass shootings or terrorist attacks. Two months after 9/11, the number of Americans who reported having donated blood had increased by 50 percent. Thousands more Americans undergo painful medical procedures to give strangers the marrow from their very bones every year, and many millions more have volunteered to make these donations if asked. And of course, every year dozens of Carnegie Hero Fund awardees and hundreds of altruistic kidney donors take significant risks to save the lives of strangers.

And these numbers reflect only altruism toward humans. Americans also rescue hundreds of thousands of animals every year. When the National Wildlife Rehabilitators Association conducted a survey of animal rescues in 2007, respondents reported having treated over 64,000 rescued birds, 39,000 mammals, and 2,300 reptiles and amphibians that year alone. Numbers like these mean that, on any given day, hundreds of Americans are rescuing vulnerable, helpless creatures that they encounter. Not, of course, that Americans are alone in this regard. On a webpage that compiles the activities of international wildlife rehabilitation groups, you can scroll through a glorious, seemingly endless list of 153 nations, from Afghanistan to Zimbabwe, where organized groups of altruists are coming to the aid of the animals among them. And if my own experiences are any indicator, many of them are doing so despite personal risks or costs to themselves, and despite the absence of any personal gain.

All of this comports with laboratory findings showing that, when given the opportunity to be generous, most people will be, at least some of the time. The amount of goodness displayed freely and frequently by large proportions of the populace—large pluralities and even majorities in some instances, not just an elite few—is staggering. It’s overwhelming.

And not only are people good—they’re getting better. Across a wide range of time frames and definitions of altruism, the incidence of helping is continually rising. The World Giving Index shows that trends in all three forms of generosity it evaluates—donating to charity, volunteering, and helping needy strangers—are increasing year to year around the world, although official numbers go back only five years.

Other figures back these numbers up. Estimates of charitable giving in the United States over the last forty years show that it has increased steadily and significantly during this period. Per capita, Americans donated over three times as much money to charity in 2015 as they did in 1975, even after adjusting for inflation. Globally, blood donations are also increasing: 10.7 million more donations from voluntary, unpaid donors were made in 2013 than in 2008. Blood donation rates in the United States also continue to rise, as do bone marrow donations—over three times as many people received marrow from strangers in 2015 as in 1995. It’s anybody’s guess how current blood and bone marrow donation rates would compare to rates in the more distant past had the technology for widespread donation been available then, but it’s interesting to note that it was only in the 1970s that the United States even switched to an all-volunteer blood supply. Prior to that time, blood donors were paid. In other words, whereas 100 percent of blood donors today are altruistic donors, many or most were not fifty years ago. And of course, as recently as twenty years ago, altruistic organ donation did not exist and was widely viewed as unfathomable, even though it was medically feasible and desperately needed. Only in the last two decades have most people even been able to conceive of an act of such extraordinary generosity.

One possible exception to the overall trend may be volunteering in the United States. Bureau of Labor Statistics (BLS) estimates show that volunteering has generally either held steady or dropped slightly during the last twelve years. It’s hard to be certain how to interpret this pattern. It is possible that changes in volunteering reflect true reductions in the desire to expend time and effort helping others. Alternatively, because this is the only indicator of altruism that appears to be declining, the trend may instead reflect forces that affect volunteering specifically. It may, for example, reflect increases in the number of hours Americans spend working, which could cut into the time available to volunteer. Or it could reflect the general decline in various kinds of civic participation in the United States, from voting to joining clubs to attending church. Ongoing declines in religious affiliation may be particularly relevant, as religious organizations are the single biggest supporters of volunteering in America. It may be primarily formal volunteering through charitable organizations that is declining, rather than volunteering overall. This would be consistent with the fact that the World Giving Index, which uses a looser definition of volunteering, continues to record increasing, not decreasing, rates of volunteering in the United States.

Not only are people helping one another more, but they are also hurting one another less. In his terrific book The Better Angels of Our Nature, Steven Pinker has provided convincing evidence that the incidence of all manner of cruelty and violence has been steadily decreasing for centuries, regardless of the time frame or the type of cruelty under consideration: deaths in international wars, deaths in civil wars, murders, executions, child abuse, animal abuse, domestic violence—the list goes on and on. All of it falling, falling—not linearly, but persistently over time, all over the world. In Europe, the homicide rate today is a mere one-fiftieth of what it was during medieval times. Cruel practices like slavery and torture that were commonplace around the world for many thousands of years are now nearly extinct. Mauritania’s abolition of slavery in 1981 marked the first time in history that slavery was illegal everywhere in the world. Torture of even vicious criminals is now widely condemned, whereas it was once a standard punishment everywhere for crimes that today would be considered minor. I was recently stunned to learn that in the United States it was widespread and accepted for police officers to torture suspects up through the 1930s. Support for capital punishment has also been steadily dropping. In 2015, for the first time in fifty years, only a minority of polled Americans still favored the practice; this was also the first year in recorded history in which Europe was completely free of capital punishment. Although 2016 did not see this record matched (Belarus executed at least one person), that year had the distinction of being—once the Colombians ultimately agreed on a FARC peace treaty—the first year in recorded history in which there were no war zones anywhere in the Western Hemisphere.

All of this makes it very difficult to refute that, relative to any reasonable frame of reference, modern human societies are generous, peaceful, compassionate, and continually improving. We could only be considered selfish and violent in comparison to a utopian society in which no violence or cruelty takes place—a somewhat unfair comparison considering that there is no evidence that such a society has ever existed. A fairer comparison would be to all the actual alternative realities represented by the various forms that human societies have taken over the millennia. And relative to any of these actual alternative realities, the present era is one of overwhelming caring and kindness.

Not that you would ever know this by asking people. Despite all the numbers to the contrary, majorities of respondents polled in the United States and elsewhere believe that people are, as a rule, selfish, preoccupied by their own interests, and untrustworthy—and getting worse. In every year of the last decade, a majority of Americans have reported that crime increased relative to the previous year, when the precise opposite has nearly always been true. This imaginary dystopia is not just an American phenomenon. Although youth violence and delinquency have also plummeted in the United Kingdom during the last twenty years, majorities of Britons consistently believe that they have increased or stayed the same.

These striking discrepancies between actual reality and beliefs about reality reflect the fact that our brains are not very good at tabulating the actual state of the world. They weren’t built to be. Sure, our brains need to calculate the nature of the world around us accurately enough so that we don’t walk into walls or fall off precipices. But even our perceptions of the simplest physical surfaces we see in the world around us—a solid white wall, rough ground, a sharp edge—are more illusion than reality. The world as it really exists is a colorless, swirling soup of atomic particles made up nearly entirely of empty space. The rich, textured colors and shapes and feelings that we experience—like solid and white and rough and sharp—feel real, but they are not. They are a product of the interpretive machinery inside our brain. Eighty percent of the fibers entering the visual processing areas of the brain emanate from the rest of the brain, not from our eyes. That means the world we see is a warped interpretation of reality, not “reality.” And this inaccuracy bleeds into every facet of cognition. Inaccurate perceptions of the world lead to inaccurate memories of it, which lead to inaccurate predictions about the future.

And if our brains can so massively distort our perceptions of simple concrete objects like walls and edges, how badly do you think they distort abstract concepts like “human nature”? The answer is: quite badly.

It would be bad enough if our brains were merely randomly inaccurate in how they perceive, remember, and predict the world, but it’s even worse than that. They are also systematically biased toward perceiving, remembering, and predicting bad things. The reason is that, again, brains’ reason for existing isn’t accuracy—it’s survival. As a result, they are especially biased toward focusing on bad things that could threaten our survival over good things that would at best incrementally improve it, a phenomenon known as the negativity bias.

The negativity bias dictates that we generally pay more attention to bad events, encode their details with higher fidelity, and remember them better afterward. This asymmetry is as prevalent in the social domain as it is everywhere else. Negative comments from others have a stronger impact than positive comments, such that, for example, the relationship psychologist John Gottman has estimated that a romantic relationship must be marked by at least a five-to-one ratio of kind to unkind comments to be successful. Negative actions also stick in a way that positive ones do not; the worse the action, the more likely it is to be remembered and used to estimate what a person or group of people is really like, with particularly extreme negative actions, like overt cruelty, carrying disproportionate weight. Paradoxically, negative actions carry disproportionate weight in part because they are rare and unexpected, which makes them even more attention-getting and memorable when they do occur. As a result, even in a world full of people speaking and acting in overwhelmingly good ways—which they do—we notice and remember the small number of highly callous, selfish, and untrustworthy acts better, and we perceive them as far more representative of reality than is actually the case.

This problem is exacerbated by the fact that much of what we know about the wider world of “people” beyond the ones we know personally derives not from our own experiences, or even from secondhand reports from friends and family, but from the media. This is a problem because the media are also not motivated to represent the actual state of the world in an unbiased way. I’m not referring here to political bias, but to simple negativity bias. Most media outlets are ultimately profit-driven and need people to pay attention to them to sell copies and airtime and advertisements. Because of this, and because people are biased to pay more attention to bad things, the journalists who want us to read or watch or listen to their stories are biased toward telling us about bad things. Bad news sells, as the familiar saying goes. And so, by some estimates, the ratio of negative to positive events covered by popular news media is seventeen-to-one, a ratio that does not remotely reflect the ratio of negative to positive events in the actual world. And it’s not just any bad news that sells. As the aphorism goes, “dog bites man” is much less newsworthy than “man bites dog.” The more unusual or unexpected the bad news, the more likely it is to capture people’s interest and make it to press. Thus, once again, it is in part because cruelty and violence are becoming rarer that their newsworthiness continues to increase.

The resulting deluge of selectively reported news about objectively rare violence and cruelty feeds into the perception that we live in a world in which many more bad things happen than good, further fueling the mistaken but common belief that the world is dangerous and becoming ever more so and that people are cruel and callous and getting worse. Is it any wonder that people who consume more news media also tend to be unhappier, more anxious, and more cynical?

An example of this paradoxical process can be seen in the media spotlight that has recently been shining on certain crimes in the United States. For example, sexual assaults on college campuses receive vastly more media attention today than they did in the past. A Google Trends search finds over ten times as many news stories on campus sexual assault in 2016 as there were five years prior. The word “epidemic” frequently crops up in these articles, which probably contributes to the fact that four in ten Americans believe that the United States currently fosters a “rape culture” in which sexual violence is the norm; only three in ten disagree. But this is not remotely true. Sexual assault is no more common on college campuses than among same-aged adults who are not in college. And like most other kinds of crimes, rates of sexual assault are decreasing, not increasing, both on campuses and off, according to the Bureau of Justice Statistics. There is no epidemic, other than an epidemic of awareness. Now, this epidemic of awareness may be a good thing if it results in an epidemic of concern about these crimes that contributes to their continued decline—and I hope it does. But the media spotlight is not without downsides, one of which is a massively distorted public perception of the frequency of sexual assault, which in turn worsens broader perceptions about gender relations, law enforcement, and human nature itself.

Hardwired cognitive biases exacerbated by biased media coverage help to explain why people’s beliefs about human nature are mathematically incompatible with reality. Amazingly, people’s beliefs are even incompatible with the knowledge that they have about themselves, although you might assume that self-knowledge would be more resistant to distortion. (It’s not.)

In the illuminating Common Cause UK Values Survey, pollsters found, consistent with the results of other surveys, that negativity bias had successfully distorted respondents’ views of human nature. About half of the respondents reported that, in general, people place more importance on selfish values like dominance over others, influence, and wealth than on compassionate values like social justice, helpfulness, and honesty. At least, respondents believed these things to be true about other people. When asked about their own values, a substantial majority (74 percent) of these same respondents reported that they themselves placed more value on compassionate values than on selfish values.

Obviously, one of these two findings has to be wrong. It can’t be simultaneously true that most people value compassion more and that most people value selfishness more. So what should we believe: what people say about themselves, or what they say about others? The pollsters took several steps to reduce the odds that the discrepancy resulted from respondents merely bragging or puffing themselves up. They concluded that the primary source of the problem was respondents’ overly negative perceptions of others. Ultimately, after comparing people’s actual reported values with their perceptions of others’ values, the pollsters concluded that a whopping 77 percent of the sample underestimated how much their fellow Brits valued compassion.

Now, 77 percent is high, but it’s not 100 percent. Not all respondents had equally cynical views of human nature, and the likelihood that a given respondent would underestimate others’ compassion was not randomly distributed. A reliable predictor of cynicism about others’ values was the respondent’s own values: respondents who themselves valued compassion very little also perceived others as valuing compassion very little. Conversely, those who valued compassion more tended to believe that others valued compassion more as well. Psychologists call this pattern the false consensus effect, according to which people believe their own values and beliefs more closely reflect what the average person values and believes than is actually the case. As a result, people who are themselves highly selfish tend to believe that others are too, whereas those who are highly compassionate believe that others are as well. Think, for example, of Anne Frank, who concluded, despite all she had seen and experienced, that “people are truly good at heart,” or Nelson Mandela, who believed that “our human compassion binds us one to the other.” Or Martin Luther King Jr., who in his Nobel Prize acceptance speech said, “I refuse to accept the view that mankind is so tragically bound to the starless midnight of racism and war that the bright daybreak of peace and brotherhood can never become a reality.… I believe that unarmed truth and unconditional love will have the final word in reality.” Or Mahatma Gandhi, who proclaimed, “Man’s nature is not essentially evil. Brute nature has been known to yield to the influence of love. You must never despair of human nature.”

None of these people can possibly be accused of naïveté. All knew horrors beyond what any human being should have to witness or experience. But all were people whose compassion for others persisted despite their own experiences, and whose faith in the compassion of others remained undimmed.

By contrast, those who themselves are callous or cruel tend to falsely believe that their values represent the consensus. Compare the beliefs of Frank, Mandela, King, and Gandhi to the beliefs of Richard Ramirez, the notorious serial murderer—nicknamed the “Night Stalker”—who brutalized and killed thirteen people during the 1980s. Although his actions placed him far outside the bounds of normal human behavior, he viewed himself as relatively typical, once asking, “We are all evil in some form or another, are we not?” and claiming that “most humans have within them the capacity to commit murder.” The serial murderer Ted Bundy concurred, warning, “We serial killers are your sons, we are your husbands, we are everywhere.” Even Adolf Hitler framed his own horrific misdeeds as reflecting basic human nature, retorting, when questioned about his brutal treatment of Jews, “I don’t see why man should not be just as cruel as nature.” Perhaps Josef Stalin best demonstrated the relationship between the possession and perception of human vice: he once proclaimed that he trusted no one, not even himself.

I realize that many people will persist in believing that people are fundamentally and uniformly selfish and callous by nature, regardless of the objective evidence to the contrary. But the evidence also suggests that rigid adherence to this belief says much more about the person who espouses it than it does about people in general.

Resist the temptation, then, to believe only the most pessimistic messages about human nature. Consider the evidence I’ve given you, as well as the evidence of your own eyes. Next time you see or read or hear about a callous or terrible thing that some person or group of people has done—or hear someone bemoaning how awful people or just some group of people are—don’t succumb to negativity bias without a fight. Stop a moment to remember how genuinely variable people are and ask: Is that terrible thing really representative of people as a whole? Is it likely to even be representative of what that person or group of people is like? In some cases it may be—for example, when a psychopath commits a truly heinous crime. But such an act is the vanishingly rare exception, not the rule.

Don’t limit your stopping and thinking to the bad things either. The many acts of kindness and generosity that happen every day around all of us can fade into the scenery if we let them. When you see or hear or read about (or commit!) an act of genuine kindness or generosity, please take a moment to notice it and to remember how much goodness there is in the world.

There are many reasons that this approach is worthwhile; perhaps the most important is that trust in others can become a self-fulfilling prophecy, a fact that has been demonstrated using simulated social interactions. Perhaps the most famous such simulation is the Prisoner’s Dilemma. In this paradigm, a player is told that he and his partner each have two options in every round of the game. They can choose to cooperate with each other, in which case both will get a medium-sized reward of, say, $3. Alternatively, they can choose to defect. If both players defect, they both get only $1. Where things get interesting is if one player decides to cooperate and the other defects. In this case, the cooperator gets nothing at all and the defector gets $5. The hitch is that the players are not allowed to communicate while making their decisions. They must choose what to do—to trust or mistrust each other—before learning of their partner’s decision.

In any given round of the game, the payoff structure ensures that it is always more rational to defect than to cooperate. If his partner defects, a player will get $1 if he also defects, but $0 if he cooperates. If his partner cooperates, the player gets $5 if he defects, but $3 if he also cooperates. And yet, when people play this game, they overwhelmingly cooperate. Why would this be? It happens because the game typically involves multiple rounds—often an indeterminate number of them—and so each player’s partner has the opportunity to pay him back, for better or worse, as the game goes on. This makes the Prisoner’s Dilemma a good model of reciprocity-based altruism. Cooperating in any given round requires a player to make short-term sacrifices that benefit his partner under the assumption that the partner will reciprocate in the future. And in the Prisoner’s Dilemma, as in real life, they usually do.
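
For readers who want to see the incentive structure laid bare, here is a minimal sketch of the payoffs just described, written in Python (the dollar values are the ones from the example above; everything else is purely illustrative):

```python
# A sketch of the Prisoner's Dilemma payoffs described above.
# "C" = cooperate, "D" = defect; the value is my payoff in dollars.
PAYOFFS = {
    ("C", "C"): 3,  # mutual cooperation: both get $3
    ("C", "D"): 0,  # I cooperate, partner defects: I get nothing
    ("D", "C"): 5,  # I defect, partner cooperates: I get $5
    ("D", "D"): 1,  # mutual defection: both get $1
}

# In any single round, defecting pays more no matter what the partner does:
for partner_move in ("C", "D"):
    assert PAYOFFS[("D", partner_move)] > PAYOFFS[("C", partner_move)]
```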

Early studies found that the most successful strategy in the repeated Prisoner’s Dilemma is one called tit-for-tat: start out cooperating, then do whatever your partner did in the last round. If he cooperated, so do you; if he defected, defect right back. Those who use this strategy tend to win out in the long term. The fact that tit-for-tat entails cooperating on the opening move is key. It demonstrates that starting from the assumption that even perfect strangers are probably trustworthy is the more advantageous approach for everyone. Starting from an assumption of others’ trustworthiness usually leads to an upward spiral of cooperation and increasing trust.
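
To see how opening with trust plays out over repeated rounds, here is a short, illustrative simulation of tit-for-tat using the same payoffs (the ten-round horizon and the always-defect opponent are my own assumptions, chosen simply to show the dynamic):

```python
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(partner_history):
    """Cooperate on the first move; afterward, copy the partner's last move."""
    return "C" if not partner_history else partner_history[-1]

def play(strategy_a, strategy_b, rounds=10):
    """Play repeated rounds; each strategy sees only the other's past moves."""
    a_seen, b_seen = [], []  # moves each player has observed from the other
    a_total = b_total = 0
    for _ in range(rounds):
        a_move, b_move = strategy_a(a_seen), strategy_b(b_seen)
        a_seen.append(b_move)
        b_seen.append(a_move)
        a_total += PAYOFFS[(a_move, b_move)]
        b_total += PAYOFFS[(b_move, a_move)]
    return a_total, b_total

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual trust compounds
print(play(tit_for_tat, lambda h: "D"))  # (9, 14): exploitation pays only once
```

Note that two tit-for-tat players earn the most in total; the unconditional defector gains only on the first round, after which tit-for-tat stops being exploitable.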

Trust, in other words, becomes a self-fulfilling prophecy.

In my interactions with altruistic kidney donors, I have gotten a peek into the worlds that such a prophecy can create in real life. Like the most compassionate respondents in the Common Cause UK Values Survey, altruists’ own deep-seated compassion and kindness often lead them to assume the best of others, and to be fairly open with and trusting of others, even people they don’t know well. As one altruist averred: “I always say: Everybody helps people in some way or another. There are just different ways to do it.” Another concurred, saying: “I think generally people are good, and I think people would like to do the right thing.” For my research team and me, it’s been one of the most remarkable aspects of working with them—being treated with the trust and warmth of old friends by people we have just met.

I think this view of the world helps explain altruists’ decisions to donate as well. When I ask ordinary adults why they wouldn’t donate a kidney to a stranger, they often cite concern that the recipient might not truly deserve it—that person could be a criminal or a drug abuser or otherwise just not quite trustworthy. But altruistic kidney donors don’t seem to adopt this viewpoint. As one of them told us, “Everyone is going to live their own life and make their own decisions, and some of them are going to be bad and some of them are going to be good. But nobody is that bad to not deserve a normal life.” Or as another said, “Everyone’s life is equally valuable. There’s no reason to pick or choose.” The fact that altruists are willing to give a kidney to literally anyone means that they must start from the belief that whoever is selected to receive their kidney, it will be someone who deserves life and health and compassion.

You might be tempted to conclude that maybe altruists are just suckers, but that’s not it. In one computer simulation study we conducted, altruists were as willing as anyone else to penalize people who actually acted unfairly. But their default assumption—their starting point with people who are totally unknown to them—seems to be trust. This approach to the world and the people who populate it seems to result in more positive interactions than a mistrustful approach would, and over time these interactions reinforce altruists’ perceptions of the basic goodness of the people around them.

Who wouldn’t want to live in a world like that?

2. Caring requires more than just compassion.

Understanding that caring requires more than just compassion is a really important step in making sense of altruism. It suggests that a heightened capacity for compassion is not the only thing that fosters extraordinary altruism. What makes acts of extraordinary altruism—from altruistic kidney donations to my roadside rescue to Lenny Skutnik’s dive into the Potomac—extraordinary is that they are undertaken to help a stranger. Most of us would make sacrifices for close friends and family members—people whom we love and trust and with whom we have long-standing relationships—but these sorts of sacrifices can be easily accommodated by established theories like kin selection and reciprocity, which dictate that altruism is preferentially shown toward relatives and socially close others and that these forms of altruism are at least in part self-serving. When people violate this dictum by sacrificing for an anonymous stranger, however, their actions suggest that they possess the somewhat unusual belief that anyone is just as deserving of compassion and sacrifice as a close family member or friend would be. Think of it as alloparenting on overdrive.

Recent data we collected in my laboratory allowed us to mathematically model this feature of extraordinary altruism. The paradigm we used is called the social discounting task, which was originally developed by the psychologists Howard Rachlin and Bryan Jones. Rachlin and Jones were seeking to understand how people’s willingness to sacrifice for others changes as the relationship between them becomes more distant. In the task they created, respondents make a series of choices about sacrificing resources for other people. Each choice presents the respondent with two options. They can either choose to receive some amount of resources (say, $125) for themselves or split an equal or larger amount (say, $150) evenly with another person, in which case each person receives $75. In this example, choosing to share would result in sacrificing $50 ($125 − [$150 ÷ 2] = $50) to benefit the other person.

The identity of the other person varies throughout the task. In some trials, the respondent is asked to imagine sharing the resources with the person who is closest to them in their life, whoever that may be. Imagine the person closest to you in your life. Would you accept $75 instead of $125 so that this person could get $75 as well? You probably would—me too. In other trials, respondents are asked to imagine that the other person is someone more distant: their second- or fifth- or tenth-closest relationships, all the way out to their one-hundredth-closest relationship. Typically, the one-hundredth-closest person on anyone’s list is not remotely close at all and may be only barely familiar—perhaps a cashier at a local store, or someone seen in passing in the office or church. Now, would you settle for $75 instead of $125 so that someone this distant from you—someone whose name you might not even know—could receive $75? Maybe, maybe not.

Rachlin and Jones, and others as well, find that the choices people make during this task describe a very reliable hyperbolic decline as a function of social distance. This means that people reliably sacrifice significant resources for very close others, but their willingness to sacrifice drops off sharply thereafter. For example, most respondents, when given the choice to receive $155 for themselves or to share $150 with their closest loved one, choose to share. In other words, they will forgo getting an extra $80 for themselves so that their loved one can get $75 instead. This choice indicates that respondents place even more value on the sacrificed money when it is shared with a loved one—who otherwise would get nothing—than they would if they had kept it for themselves. But as the relationship in question moves from a respondent’s closest relationship to their second-closest relationship to someone in position 10 or 20, the average person’s willingness to sacrifice declines by about half. By positions 50 through 100, most people will sacrifice only about $10 to bestow $75 on a very distant other. This pattern holds up across multiple studies and subject populations and across disparate cultures. It also holds up whether the money in question is real or hypothetical. Rachlin and Jones’s term for this hyperbolic drop-off, social discounting, refers to the fact that people discount the value of a shared resource as the person with whom it is shared becomes more socially distant.
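
Rachlin and Jones capture this drop-off with a hyperbolic function, under which the value of a reward of size V shared with someone at social distance N falls roughly as V/(1 + kN), where the parameter k indexes how steep a given person’s discounting is. Here is a small sketch of that shape (the value of k below is invented to illustrate the curve, not taken from their data):

```python
def discounted_value(V, N, k):
    """Hyperbolic social discounting: the subjective value of a reward of
    size V when it is shared with the person at social distance N.
    A larger k means a steeper drop-off."""
    return V / (1 + k * N)

# Illustrative only: this k is made up to show the hyperbolic shape.
V, k = 75, 0.05
for N in (1, 2, 5, 10, 20, 50, 100):
    print(f"distance {N:>3}: sharing ${V} feels worth "
          f"${discounted_value(V, N, k):.0f}")
```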

Can social discounting help to explain the difference between extraordinary altruists, who really do make enormous sacrifices for very distant others, and everyone else? Obviously, money is not a kidney. Sharing it does not require undergoing general anesthesia or major surgery. But in other ways the task is not a bad parallel to donating a kidney. When a living donor sacrifices their own kidney for another person, their choice to give their extra kidney away means that they place even more value on it when it is shared with another person—who otherwise would have no functioning kidney at all—than they do on keeping it for themselves. Think back to kidney donor Harold Mintz’s question: if your mother was going to die of renal failure tomorrow and your kidney could save her, would you give it to her? If you answered yes, we can say that you would rather sacrifice half of your total renal resources than leave your mother with none. This is exactly the choice that thousands of living donors make every year. Now, what if the person who needs a kidney is your friend, or your boss, or a neighbor? Would you sacrifice half your renal resources so that they could have some rather than none? If this was a harder choice, you have just discounted the value of your shared kidney.

Our data suggest that social discounting helps us understand the real-life choices that altruistic kidney donors make. The kidney donors and controls in our study—who were matched on every variable we could think of, including gender, age, race, average income, education, IQ, even handedness—completed a version of Jones and Rachlin’s social discounting task. Over and over again they made choices about whether they would prefer to keep resources for themselves or share them with close and distant others. Tabulating the results, my student Kruti Vekaria and I first looked at how altruists responded when choosing to sacrifice for the people closest to them. We found that they looked almost exactly like our controls. The data for the two groups overlapped completely, with nearly everyone willing to sacrifice the maximum amount ($85, in this case) to share money with their loved one.

But as we plotted further and further out on the social distance axis, the two groups began to diverge. By their fifth-closest relationship, controls were willing to sacrifice only $65. But altruists hadn’t budged. They responded just as they had for their closest loved ones. By position 20, controls’ willingness to sacrifice had dropped by about half, to $45, following closely the arc predicted by Jones and Rachlin. But altruists’ discounting slope remained so shallow that they were choosing to sacrifice as much for their twentieth-closest relationship as controls were for their fifth-closest. And on and on it went, until the most distant relationship (one-hundredth-closest), by which point controls would sacrifice only about $23—roughly one-quarter as much as they would for a loved one. By contrast, altruists elected to sacrifice more than twice as much as controls—$46—to share $75 with a near-stranger. Their generosity had dropped by less than half.
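
To put the group difference in a single number, one can fit that same hyperbolic form to the approximate dollar amounts just quoted. The sketch below does so; the four data points per group are rounded values from the text, not the study’s raw data, so the fitted k values are illustrative only:

```python
import numpy as np
from scipy.optimize import curve_fit

def hyperbolic(N, V, k):
    """Amount sacrificed for a person at social distance N."""
    return V / (1 + k * N)

distances = np.array([1, 5, 20, 100])
controls  = np.array([85, 65, 45, 23])  # approximate amounts from the text
altruists = np.array([85, 85, 65, 46])  # approximate amounts from the text

for label, amounts in (("controls", controls), ("altruists", altruists)):
    (V, k), _ = curve_fit(hyperbolic, distances, amounts, p0=(85.0, 0.05))
    print(f"{label:>9}: fitted k = {k:.3f}")  # smaller k = shallower slope
```

On these rounded numbers, the altruists’ fitted k comes out several times smaller than the controls’, which is just a compact way of saying that their discounting slope is far shallower.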

These results suggest a simple reason why altruistic kidney donors find the decision to share such a precious resource—their own internal organ—with a stranger so intuitive: they don’t discount the welfare of strangers and near-strangers as much as the rest of us do. To them, it is nearly as worthwhile to make a sacrifice for someone whose name they don’t know or whom they have never even met as it would be for most of us to sacrifice for our closest friends and family. In the words of one donor we have worked with, “I see the world as one whole. If I do something for someone I loved, or for a friend… why would I not do it for someone I did not know?” This tendency really does seem to be alloparenting on overdrive, particularly because this generosity emerges even in the absence of vulnerability or distress cues—subjects in the social discounting study never saw or heard another actual person, but only imagined them.

These findings also reinforce the critical distinction between “can” and “does.” Altruism is not simply a matter of having the ability to experience compassion and provide care. Nearly everyone can be compassionate and caring—at least for some people. The real question is, what do you do with that capacity when the person in need of your compassion and generosity is a stranger?

This, of course, leads to another question: can the rest of us flatten out our discounting curves more? Can we become more like extraordinary altruists?

On one level, the answer is almost certainly yes. All the social changes that are already occurring prove it. If people are becoming less violent and more altruistic toward strangers all over the world, then we must all be coming to care more about strangers’ welfare than we used to. It would be impossible for any kind of widespread genetically rooted change in the capacity for compassion to have occurred during this period of time, so these changes must reflect cultural shifts instead. Somehow these shifts are causing us to place increasingly more value on the welfare of strangers and flatten our discounting curves—or, as the philosopher Peter Singer and others have put it, to expand our “circles of compassion.”

I think of discounting as a mountain on which the self stands at the pinnacle. The slopes of the mountain represent social discounting. If the mountain’s slopes are steep, like the Matterhorn, the person at the pinnacle values their own welfare high above that of others, and the welfare of their close friends and family high above the welfare of anyone more distant. Very distant others’ needs and interests are down in the foothills and can barely be seen through the haze. What factors might help to compress this mountain a little—flattening its slopes to more closely resemble the gentle silhouette of Mount Fuji—such that the welfare of more distant others is not so steeply discounted?

3. More self-control is not the answer.

Steven Pinker has suggested several possible reasons for ongoing declines in cruelty and violence over time. Some of them might influence social discounting, but others probably don’t. For example, one factor that may have contributed to declining violence—but not because it makes us fundamentally care more about strangers—is the rise of centralized governments. Centralized governments oversee the resolution of conflicts and the distribution of resources among individuals—and more importantly, among clans and tribes and nations. When a relatively impartial state mediates disputes, it interrupts the cycles of vengeance and retaliation that erupt when disputes must be resolved by the individuals involved in them. Later, during the Middle Ages, the severe punishments that the state meted out to criminals further reduced the appeal of criminal violence, while the rise of state-regulated trade and commerce increased the appeal of cooperation. According to Pinker, these changes reduced violence for two reasons. First, they shifted the incentives surrounding both cruel and cooperative behavior, rendering violent solutions to provocation or frustration both less necessary and less likely to yield desirable outcomes. Second, these changes may also have changed the social norms surrounding violence. As more wealth and status began to accrue to people who inhibited their aggressive impulses, the ability to exert self-control came to be viewed as more desirable.

Although changes in people’s ability or tendency to exert self-control may partly explain declining violence, such changes almost certainly cannot explain increasing care and altruism toward distant others, because altruism in response to others’ distress or need is fundamentally emotional, not rational. As is true for the most common form of aggression—the hot, reactive, and frustrated kind—altruistic urges emerge from deep, primitive emotional structures in the brain. This is clearly true of compassion-based altruism, but it is also true of much learned altruism, and probably of kin-based altruism as well. (Reciprocal altruism is the closest to being genuinely rational, although it too is supported by activity in a subcortical agglomeration, in this case the striatum, that drives reward-seeking.) Ancient subcortical brain structures respond quickly and intuitively to altruism-relevant social cues, like vulnerability and distress in the case of compassion-based altruism. This is probably why altruistic kidney donors overwhelmingly report that their decision to act bubbled up quickly, and in many cases unexpectedly, in response to learning about someone suffering or in need. As one altruist told us, when he first spotted a billboard about someone seeking a kidney, “It was just like I was compelled to do it. The only thing I can figure is God reached down, poked me in the side, and said, ‘Hey, go help your fellow man.’… It was just overwhelming, I just wanted to do it. I have no clue why.” Another decided to donate after wandering by a booth at a health fair and learning about the dire need for kidneys. She recalled thinking simply, I’m pretty healthy. I have two kidneys. You got anybody that needs one? Both Lenny Skutnik and Cory Booker recounted that their decision to act was a fast and impulsive response to another person’s distress. My own heroic rescuer’s decision must have been nearly instantaneous as well—he would only have had a second or so to decide whether to stop and help me. When altruism arises this way, from primitive, emotional processes, the only effect that self-control could possibly have is to suppress it, much as it suppresses aggression.

My colleague David Rand, a behavioral scientist at Yale University, has collected systematic data supporting the idea that generosity toward strangers results from fast and intuitive processes and that rational deliberation suppresses it. He and his students have amassed a wealth of data from experimental simulations, including the Prisoner’s Dilemma, showing that people who respond the most generously usually do so quickly and without a lot of thought. The more time people take to stop and reflect, the less generous or altruistic they will ultimately be.

Rand and his colleague Ziv Epstein have also examined the cases of dozens of real-life altruists who were awarded the Carnegie Hero Medal for confronting extraordinary risks to save another person’s life. (Lenny Skutnik is one of them.) They wanted to know whether, when facing real risks, people still leap into action first and only later stop to consider the danger, or whether they exert self-control to overcome their fear for their own safety. To answer this question, they combed through news archives to find interviews with people who had received Carnegie Hero Medal awards between 1998 and 2012 and extracted fifty-one heroes’ explanations of why they had acted. Among these explanations were statements like the following:

The minute we realized there was a car on the tracks, and we heard the train whistle, there was really no time to think, to process it.… I just reacted.

The researchers then had raters evaluate how fast and intuitive versus deliberative and rational each decision was. They also asked them to estimate, based on the details of each situation, how many seconds each rescuer had to act before it would have been too late. Finally, they ran all of the heroes’ descriptions through a software program that coded for certain kinds of language, like words and phrases associated with the exertion of self-control.

I’m sure you can guess the results from the descriptions I’ve given you. Nearly half of the heroes described themselves as having acted without thinking at all, and their descriptions received the highest possible “fast and intuitive” score. Altogether, 90 percent of these altruists received ratings on the “fast and intuitive” end of the scale rather than the “deliberative and rational” end. This was true even for those rescues that had allowed at least a little wiggle room in terms of time—perhaps a minute or two to contemplate whether to act or not. The researchers ultimately found no relationship between how much time was available and how intuitively the altruists responded, suggesting that intuitive responding was not the inevitable outcome of a fast-moving emergency. The computer algorithm confirmed these findings, showing that heroes’ descriptions of their decisions incorporated little language suggesting that they had attempted to exert self-control. Together, these findings reinforce the idea that, rather than being deliberate attempts to be noble, urges to care and cooperate are deeply rooted in parts of the mammalian brain that may drive us to act on others’ behalf before we fully understand what we are doing or why.

This fact has given me some pause about a growing movement called effective altruism, which is aimed at encouraging people to restrain their initial altruistic impulses in order to accomplish the greatest objective good. The movement was inspired by the work of the philosopher Peter Singer, and its advocates’ explicit aim is to convince people to donate to charity only after conducting comprehensive research into the objective impact that their donation will yield. The problem, in Singer’s view, is that we are prone to give to causes that happen to tug at our heartstrings—the GoFundMe campaign we saw on Facebook, our local animal shelter, a charity that collects toys for homeless children in our community—rather than rationally planning out altruistic giving to yield the greatest objective good. Instead of helping the GoFundMe family, the pets, and the homeless children, why not use that same money to buy bed nets for dozens or hundreds of families in Africa, reducing their risk of contracting malaria? Wouldn’t this result in objectively better outcomes, and wouldn’t that be preferable? (Remember how much more good it does to improve the lives of those who start out the worst off?)

I couldn’t possibly disagree with the idea of using charitable donations effectively. But I see two problems with the philosophy. First, I doubt that there is usually a way to determine what constitutes the greatest objective good. Many would agree that saving five children from malaria is more valuable than buying supplies for an animal shelter (although others would not), but is it more valuable than donating to a university to support malaria vaccine research? How about supporting research on diabetes, which affects more people at any given time than malaria? Or spending time that could otherwise have been spent fund-raising to prepare for and recover from a kidney donation to save a single person’s life? What if that person was a malaria vaccine researcher? Answering these questions requires making so many guesses and assumptions and subjective value judgments that any attempt to arrive at an answer using sheer rationality would quickly spiral into a vortex of indecision.

“A vortex of indecision” is, by the way, where people actually end up when, following a brain injury, they are forced to use only logic and deliberation to make decisions about the future. Such injuries leave IQ and reasoning ability intact, but they prevent those affected from incorporating emotional information from deep within the brain into their decision-making. It turns out that intellect and reasoning alone are not sufficient for making complex subjective decisions. People who cannot generate an intuitive feeling of caring about one outcome more than another struggle for hours to make decisions about even basic things like which day of the week to schedule a doctor’s appointment—the kind of decision, like so many others, for which there is no purely rational answer.

As he accepted his Nobel Prize for literature in 1950, the philosopher Bertrand Russell declared, “There is a wholly fallacious theory advanced by some earnest moralists to the effect that it is possible to resist desire in the interests of duty and moral principle. I say this is fallacious, not because no man ever acts from a sense of duty, but because duty has no hold on him unless he desires to be dutiful.” Ultimately, the gut-level, irrational feeling of just caring more about certain causes than others is what moves people to help. Desire, not reason, drives action. This is why even the most sophisticated computers don’t yet act of their own accord, despite having perfect reasoning abilities—they have no feelings or desires. Psychopaths are often highly rational, but this does not drive them to provide costly help to others, believe me, because they lack the emotional urge to do so. And those who do go to great lengths to help others overwhelmingly describe their motivations in terms of impulse and feelings. Consider the case of Robert Mather, the effective altruist and founder of the Against Malaria Foundation, which has been called the most “effective charity” in the world. In Mather’s own telling, he was first moved to devote his life to charity work, not by clear-eyed rationality, but because he stumbled upon the story of a single little girl who had been horribly burned in a fire, and whose story moved him to tears.

Even when people do describe their decisions in terms of clear-eyed rationality, their brains may tell a different story. One altruist who participated in our research described his decision to donate in beautifully utilitarian terms. Upon first reading a news article about altruistic donations, he said,

it clicked with me immediately, and then I thought: this is something that I could do, and something I was comfortable with. So at that point, I did some research on the web about side effects and mortality rates and the possibility of comorbidities afterwards, and I was very comfortable that the risks were low and acceptable to myself, and that the benefits to the patient—especially if they were already on dialysis—the improvements in lifestyle and improvement in life span and their ability to get back in their life, the benefits were great.

He conducted, in other words, a simple cost-benefit analysis. When I asked about any other thoughts or feelings he might have been having that contributed to his decision to donate, he replied, “I guess I would say I am super-rational—I do not get emotional about the decisions I make.” On one of the standard empathy scales we used in the study, he reported his own levels of empathy to be very low. I had no reason to doubt anything he said about himself. It was clear from his professional accomplishments in the technology sector and the way he described other decisions he had made that he was capable of sophisticated rational analysis. But our data revealed that rationality was not all that he was capable of.

When we first examined the scatterplots describing our brain imaging and behavioral data, one of the altruists had stood out from the rest. Of the nineteen we had tested, one scored nearly perfectly in terms of his ability to recognize fearful facial expressions—the top scorer of all our participants. This person also showed a robust amygdala response to fearful expressions during the brain scan—easily in the top half of the altruists. Who was this super fear-responder? None other than our self-described super-rational, low-empathy altruist. I believe this altruist considered himself to be low in empathy. And he may well have been relatively low in the sort of cognitive empathy that is linked to Theory of Mind and autism. But he also had remarkably high levels of the kind of empathy that is important for caring and altruism: sensitivity to others’ displays of vulnerability and distress.

Of course, this one data point cannot prove that this altruist’s heightened sensitivity to others’ fear was the cause of his extraordinary actions. But it does show that people’s self-reported empathy should never be taken at face value. Just as Daniel Batson’s subjects were led to believe that a placebo called Millentana had caused them to behave altruistically, so too can all of our brains easily mislead us about the causes of our own behaviors, feelings, and decisions.

Everything we know from the laboratory suggests that deliberation and rationality are not what ultimately drive people to care. Indeed, the more deliberatively and rationally people think about generosity, the more likely they may be to suppress their initial urges to help, and the less generous they ultimately become. Viewing people’s natural desires to help particular causes as a springboard to action, rather than as something to be suppressed or overridden, strikes me as a more genuinely effective approach than insisting on pure altruistic rationality.

4. Key cultural changes have made us more caring.

So if it’s not self-control that is leading to more caring and compassion in the world, what could it be? Another possibility is that a much more general change—one that has also been indirectly promoted by the rise of state governments quelling violence and promoting trade—is responsible: an increase in quality of life. People act better when they are themselves doing better.

The last millennium has been a period of extraordinary improvements in human prosperity, health, and well-being around the world. It’s not just deaths and suffering from violence that have decreased during this period—it’s deaths and suffering from all causes, including famine, injury, and disease. Global hunger has precipitously declined. Life expectancies have more than doubled over the last 200 years. Near-miraculous advances in medicine have eradicated or beaten back horrifying diseases, like smallpox, plague, polio, and measles, that once ravaged millions of people around the world. Do you know what scarlet fever is? I don’t, and neither do you, probably. But as recently as 150 years ago, epidemics of it killed tens of thousands of children every year—sometimes every child in a family in just a week or two. Two of Darwin’s children were killed by it, as was John D. Rockefeller’s grandson. It’s one of dozens of former scourges that are now all but gone. Only fifty years ago, one child of every five born around the world died before their fifth birthday. The rate is now less than one in twenty-five. It is easy to miss the significance of these changes because they have been so gradual and consistent. But the amount of human suffering and misery that has been alleviated in the last century is, in reality, staggering.

Education rates also continue to improve worldwide. Literacy was near 0 percent essentially everywhere in the world 500 years ago. As recently as 1980, barely half of the world’s population could read. But the global literacy rate is now around 85 percent, and across broad swaths of the world it is close to 100 percent, thanks in part to tremendous strides made in public schooling and in providing equal educational opportunities for boys and girls.

Wealth is increasing at astonishing rates as well. The economic historian Joel Mokyr has observed that in modern industrialized nations, middle-class families enjoy higher standards of living than emperors or popes did just a few centuries ago. The unequal distribution of wealth remains a serious concern, but the poor are also better off than they used to be. The proportion of people living in abject poverty continues to fall all over the world, dropping from around 90 percent of the global population in 1820 to just under 10 percent today. The World Bank estimates that in just the three-year span from 2012 to 2015, the number of people living in extreme poverty (defined as living on less than $1.90 per day) dropped by 200 million, bringing the percentage of people living in extreme poverty to below 10 percent of the global population for the first time. That is a remarkable amount of progress in a very short time. World Bank president Jim Yong Kim called it “the best story in the world today.”

There is every reason to believe that these increases in prosperity and quality of life have produced many other positive downstream effects—including ongoing trends toward greater generosity and altruism toward strangers, up to and including extraordinary altruism. That well-being has increased worldwide in tandem with various forms of generosity and altruism toward strangers is clear, although obviously this is merely a correlation, and a wildly confounded one at that. But my lab and others have conducted more targeted research showing that, even after controlling for many possible confounds, increasing levels of well-being are associated with increased altruism.

A few years ago, my student Kristin Brethel-Haurwitz and I were combing through national statistics on altruistic kidney donation when we noticed the incredible variation in rates of donation across the fifty US states. We wondered why this might be. Around the same time, the Gallup polling organization came out with its first-ever statistics on variations in well-being across the states. When we compared maps of altruistic kidney donations and well-being side by side, the similarities were striking. We ran a number of analyses to probe these similarities and found that even after we controlled for every difference we could possibly think of across the states—median income, health metrics, inequality, education, racial composition, and religiosity, to name a few—high well-being in a state remained a strong predictor of altruistic kidney donations.
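For readers curious about the mechanics, here is a minimal sketch of the kind of confound-controlled analysis described above, written in Python. It is not our actual analysis code; the data are simulated and every variable name and coefficient is hypothetical. The point is only to show what “controlling for” other state-level differences means in practice: the potential confounds enter the model as covariates, so the coefficient on well-being reflects its association with donation rates over and above those variables.

```python
# A minimal, purely illustrative sketch of a confound-controlled analysis.
# This is NOT the lab's actual code; all data and coefficients are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_states = 50

# Hypothetical state-level variables. Well-being is partly driven by income
# and health, and donation rates are partly driven by well-being: a setup in
# which naive correlations would be confounded.
income = rng.normal(55, 8, n_states)        # median income ($1,000s)
health = rng.normal(0, 1, n_states)         # composite health index
well_being = 0.05 * income + 0.3 * health + rng.normal(0, 1, n_states)
donations = 0.5 * well_being + 0.02 * income + rng.normal(0, 1, n_states)

df = pd.DataFrame({
    "donations": donations,    # altruistic kidney donations per capita
    "well_being": well_being,
    "income": income,
    "health": health,
})

# Including income and health as covariates asks whether well-being predicts
# donation rates over and above those potential confounds.
model = smf.ols("donations ~ well_being + income + health", data=df).fit()
print(model.params)   # coefficient on well_being, holding covariates fixed
```

In the real analyses the covariate list was much longer (inequality, education, racial composition, religiosity, and so on), but the logic is the same: if the coefficient on well-being stays large once the covariates are included, well-being “remains a strong predictor.”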

Well-being is more than just happiness—it’s life satisfaction, having a sense of meaning and purpose, and being able to meet basic needs. These are qualities shared by denizens of well-off states like Utah, Minnesota, and New Hampshire, which, though very different in some ways, all produce high proportions of altruistic kidney donors. In states like Mississippi, Arkansas, and West Virginia, on the other hand, both well-being and altruistic donations are very low. Kristin and I also found that although well-being is somewhat related to baseline variables like income and health, it’s even more strongly related to whether these indicators are improving. We found that, even after controlling for baseline income and health, increases in median income and health over a ten-year period were strong predictors of both well-being and extraordinary altruism.

In some ways we found this result surprising. It’s a common trope—embodied by super-wealthy fictional characters from Ebenezer Scrooge to Gordon Gekko to the Malfoys—that wealth and status lead to selfishness. But these tropes are not really relevant to our findings. “Wealth” in large population studies like the one we conducted doesn’t refer to people with butlers and mansions. The super-wealthy represent only a tiny fraction of the population, and their actions barely register in a data set like ours. Instead, our findings suggest that incremental increases in objective and subjective well-being across large groups of people also increase altruism. As people move out of poverty and into the middle class, or inch from below the median income to above it, the odds that they will opt to give a kidney to a stranger fractionally increase. This says more about the possible benefits of financial security and reductions in poverty than it does about stereotypical wealthy people.

Our findings are consistent with a large body of literature linking generosity to well-being. The psychologists Elizabeth Dunn and Mike Norton, as well as others, have conducted experimental and population-level studies that consistently find a positive relationship between well-being and various forms of generosity, such that people who report higher well-being, or whose well-being is experimentally boosted, tend to behave more generously. This is in keeping with the theory that flourishing promotes engagement in a variety of voluntary, beneficent activities. Our findings on kidney donations support this theory, as do a wide array of studies that have linked objective measures of well-being, including wealth, health, and education, to everyday generosity and altruism. One 2005 Gallup poll found a linear relationship between income and volunteering, donating money (of any amount), and donating blood. Individuals in households earning more than $75,000 per year were the most likely to engage in all three behaviors, followed by those in households earning more than $30,000 per year, then by those in households earning less than that amount. (Keep in mind, of course, that such studies tell us about population averages rather than the behavior of any one individual; plenty of less-well-off households are generous, and plenty of wealthier ones are not.) Another large study found similar results in Canadians: the best predictors of charitable donations, volunteering, and civic participation included higher income and more education. Large-scale naturalistic experiments reveal similar patterns. A field experiment in Ireland found that socioeconomic status was the best predictor of altruism in a “lost letter” paradigm (a method first created by Stanley Milgram, incidentally), in which stamped letters addressed to a child welfare charity were left on the ground for passersby to pick up and, if so inclined, deposit in a mailbox. As in similar previous studies in the United States and England, letters dropped in more deprived neighborhoods were less likely to be returned than letters dropped in less deprived neighborhoods. The positive relationship between well-being and altruism persists across cultures, from Taiwan to Namibia.

One reason for these patterns may be that the lower levels of well-being that follow from financial insecurity, poor health, or traumatic life events inhibit altruism by souring people’s view of the world, human nature included. Many decades of research on misanthropy find that this trait, which reflects cynical attitudes about human nature and a lack of faith in other people, is inversely related to most indicators of well-being. People who are experiencing hard times are much more likely to report dour views of others, such as that others are only looking out for themselves and cannot be trusted. This suggests that among the many positive sequelae of being wealthier (again, in the less-poor sense, not in the mansions-and-butlers sense), healthier, and higher in social status is a greater tendency to view others as generally trustworthy, kind, and generous.

To be fair, some recent studies conducted by psychology researchers whom I greatly respect have found conflicting results. For example, a study conducted by the psychologists Paul Piff, Dacher Keltner, and their colleagues found that people who drive luxury cars (who also tend to be wealthier) were less likely than other drivers to follow traffic laws and norms like yielding at a four-way stop or at a pedestrian crossing. In other studies, undergraduate students at the University of California–Berkeley who placed themselves higher on a ladder representing their overall standing in the community were less likely to share resources with strangers in a computer-based economic game. Parallel results were obtained using a national sample of adults from an email list maintained by a private West Coast university; of these adults, the wealthier and more educated ones were less generous in a computerized task. And in a sample of adults recruited from Craigslist, those who reported their social status to be higher were more likely to cheat in an online game of chance.

I myself was quite torn about these divergent sets of findings, being just as familiar as everyone else with the trope that wealth promotes selfishness. Then I encountered the biggest, most ambitious, and best-controlled study yet conducted on this topic, the results of which were so clear and so consistent that they convinced me that being better off may—again, on average—actually increase a wide variety of caring behaviors toward strangers. The study was conducted by the German psychologist Martin Korndörfer and his colleagues, who were familiar with the thinking that wealth and status tend to promote selfishness. So they conducted eight studies that aimed to examine the association in large samples. And I mean really large: their studies included upwards of 37,000 people. Importantly, these were also representative samples, which capture the behavior of an entire population, not just selective subsets of it. All else being equal, findings from larger and more representative samples are more likely to be accurate than findings from smaller, more selective samples.
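The logic behind that last claim is easy to demonstrate. The sketch below, again purely illustrative with invented numbers, simulates drawing samples of different sizes from a population in which 30 percent of people donate, and shows how the spread of the resulting estimates shrinks as samples grow. (Sheer size only tames sampling error; representativeness is what guards against selection bias, which no sample size can fix.)

```python
# Purely illustrative: why bigger samples give steadier estimates.
# The population and its 30 percent "donation rate" are invented.
import numpy as np

rng = np.random.default_rng(1)
population = rng.binomial(1, 0.30, size=1_000_000)  # 1 = donates, 0 = does not

for n in (50, 500, 5_000, 37_000):
    # Draw 1,000 random samples of size n; estimate the rate from each one.
    estimates = [rng.choice(population, size=n).mean() for _ in range(1_000)]
    print(f"n={n:>6}: mean estimate {np.mean(estimates):.3f}, "
          f"spread (SD) {np.std(estimates):.4f}")
```

Run it and the standard deviation of the estimates falls by roughly a factor of ten as the sample grows from fifty to 37,000, which is why small-sample findings bounce around while very large ones converge on the population value.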

The researchers were surprised to find the opposite of what they had expected. They started off looking at charitable donations in their native Germany and found that, as wealth and education and status rise, Germans donate proportionally more of their income to charity, not less, and that the proportion of households that donate increases as well, from around one-quarter of the poorest 10 percent of households to three-quarters of the wealthiest 10 percent. They next looked at Americans’ charitable donations and found exactly the same effects. They found similar patterns again for volunteering—wealthier, higher-status Germans and Americans were more likely to volunteer to help others, and they volunteered more frequently. Reinforcing the idea that “wealthier” in these studies is not referring to the super-wealthy, the researchers found that generosity increased consistently with wealth along the entire income spectrum, from the very poor to the slightly less poor to the middle class to the wealthiest, who were still nowhere near super-wealthy. (In the United States, the top 10 percent of households earn around $160,000 or more per year, which is well-off, to be sure, but hardly super-wealthy.)

The researchers also looked at everyday helping behaviors in large, representative US samples—behaviors like carrying a stranger’s belongings, or letting them go ahead of you in line, the forms of altruism that are most likely to be spontaneous reactions to another person’s need—as well as behavior in a controlled economic game in which resources could be freely given to a stranger. The pattern was always the same. Those who were relatively better off were more likely to give. Finally, consistent with the literature on misanthropy, the economic game also showed that wealthier and higher-status players were not only more trustworthy (giving more resources to the other player) but more trusting as well.

These findings contradict the possibility that wealth and status are correlated with generosity only because poorer, lower-status households have fewer resources to give away. If it were only the case that poor households donate less money than wealthier ones, this explanation would make sense. But the fact that both the proportion of income donated and the likelihood of donating anything at all continue to rise with every incremental increase in wealth and status does not fit as well with this explanation. Surely most middle-class families have at least some resources they could donate—but the likelihood that they will donate anything rather than nothing increases at every point along the spectrum of wealth and status. So too does the likelihood that they will volunteer their time, despite the fact that wealthier individuals generally have less leisure time, not more. Moreover, there is no clear reason why poverty would impede everyday helping behaviors like giving directions or helping someone carry their belongings.

Could these patterns be somehow unique to Germany and the United States? It appears not. When the researchers examined patterns of volunteering in twenty-eight other nations across five continents, they found the same results in twenty-two of them (interestingly, the exceptions included nations with strong social welfare systems, like France, Norway, and Sweden, where volunteering was roughly equal across incomes), and in no country was increased wealth associated with less generosity.