CHAPTER 14

OUR INNATE SENSE OF FAIRNESS

Fairness is what justice really is.

—POTTER STEWART, US SUPREME COURT JUSTICE

IF YOU HAVE TROUBLE SLEEPING OR ADJUSTING TO JET LAG, you may want to try a chemical known as 5-hydroxytryptamine, more commonly known as serotonin. Then again, you may want to be careful, because it turns out that serotonin directly affects your propensity to judge situations in a fair way—a crucial component of our social and personal life. Research conducted by Molly Crockett and her colleagues at the University of Cambridge and the University of California at Los Angeles shows that lower levels of serotonin in the brain cause people to be more intransigent about what they consider their unfair treatment at the hands of others.

The researchers used a variant of the so-called ultimatum game, in which subjects were asked to accept or reject a given offer to split a sum of money. Most people consider a 45/55 percent split, even if they get the short end of the stick, to be within the boundaries of fairness. When the split shifts to 30/70, subjects say the deal is unfair, and they think of it as highly unfair when the split is about 20/80. What Crockett and her collaborators did was to submit some of their subjects to an acute tryptophan depletion procedure, which interferes with the production of serotonin, temporarily lowering it significantly. To make sure that their results were not biased by the subjects’ knowledge of whether they had received the procedure or not, a placebo control was also used during the experiment. Moreover, to achieve the highest standard of scientific investigation, the whole thing was done using a double-blind protocol: the scientists analyzing the data did not know which subjects had received the treatment and which had received the placebo, which minimized the possibility of unconscious bias in the interpretation of the results.

The outcome was pretty clear-cut: there was no difference between treated and placebo subjects when the offer was fair, or even somewhat unfair; when the offer was unquestionably unfair, however (the 20/80 proposal), a much larger percentage of serotonin-depleted subjects rejected the offer when compared to the controls. Less serotonin circulating in your brain automatically (that is, without your conscious realization of it) lowers your threshold of tolerance for unfair treatment. (It is possible that more serotonin will make you temporarily more tolerant of unfairness, but the study conducted by Crockett and her colleagues did not address this possibility.) What makes this research particularly interesting is that it is not the only piece of neurobiological information we have about how the brain weighs fairness: it turns out that a similar behavior is observed in patients with lesions to the ventral prefrontal cortex. Researchers interpret this to mean that serotonin is probably achieving its effect through a modulation of the activity in the ventral prefrontal cortex. But that is not all: we also know that interference with a nearby brain area called the dorsolateral prefrontal cortex—for instance, by disrupting its activity using transcranial magnetic stimulation—achieves the opposite effect, making people more likely to accept unfair offers. It is as if the brain has two regions that work in tandem to weigh our reaction to the potential instances of unfairness in which we find ourselves. These findings are quite stunning and indicate that “fairness” is not just a cultural construct or a matter for theoretical philosophical discussions; in fact, we literally have a fairness calculator embedded in our brains!
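
For readers who like to see the logic laid out explicitly, here is a minimal sketch, in Python, of the responder’s decision in the ultimatum game as just described. The numbers are hypothetical placeholders: they encode only the qualitative pattern reported above (rejection becomes likely only for clearly unfair offers, and more likely still under serotonin depletion), not Crockett’s actual data.

    # A toy decision rule for the responder in the ultimatum game.
    # All thresholds and probabilities are hypothetical; they encode only
    # the qualitative pattern described in the text, not the study's data.

    def rejection_probability(responder_share, serotonin_depleted):
        """Illustrative probability of rejecting a given offer.

        responder_share: fraction of the pot offered to the responder (0.0 to 1.0).
        serotonin_depleted: True if the responder underwent tryptophan depletion.
        """
        if responder_share >= 0.45:      # roughly fair split: nearly always accepted
            return 0.05
        if responder_share >= 0.30:      # judged unfair, yet usually still accepted
            return 0.25
        # clearly unfair offers (around 20/80): depletion raises the rejection rate
        return 0.75 if serotonin_depleted else 0.45

    if __name__ == "__main__":
        for share in (0.45, 0.30, 0.20):
            print(share,
                  rejection_probability(share, serotonin_depleted=False),
                  rejection_probability(share, serotonin_depleted=True))

The only point the sketch makes is structural: the acceptance threshold is not fixed, and a single biochemical parameter can shift where it sits.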

Not surprisingly, the concept of fairness itself is a matter for serious discussion among philosophers, and as we shall see, some philosophical treatments of the matter mesh with the science significantly better than others—the sci-phi interface once again providing us with the best available picture on which to base our views and make our decisions.

Let us start with a very important concept in ethical and moral philosophy, the idea of “reflective equilibrium,” originally introduced by Nelson Goodman in 1955 (though he didn’t use this term) and eventually made famous by one of the most influential moral philosophers of all time, John Rawls. In essence, the method of reflective equilibrium, as the name implies, is a type of rational reflection that seeks to achieve an equilibrium among different notions, judgments, or intuitions we might have about a given ethical problem (or any other problem, for that matter). The goal is to continuously revise our judgments and reasons until they become as coherent as possible, thus allowing us to achieve said “equilibrium.” For instance, I may hold a starting position that abortion is to be avoided because it amounts to the termination of a potential human life. However, I also think that a woman should have as much control as possible over her own reproductive fate. Moreover, I think that the welfare of the mother should override that of the fetus when they are in conflict, because the former is a fully formed person with rights while the latter is a potential person with a more limited range of rights. A process of reflective equilibrium would force me to acknowledge all these different moral intuitions, work through why exactly I hold them, and highlight when they are in conflict with each other. I would then proceed (probably with the help of a friend who would function as a sounding board for my complex and perhaps contradictory thoughts on the matter) to reconcile as many of my different intuitions on abortion as possible, and even to consciously reject or modify some of those intuitions once I see more clearly what they consist of.

We shall see in the next chapter exactly how Rawls applied the method of reflective equilibrium to arrive at an ingenious device meant to guide us toward the establishment of a society that is as just as can be rationally conceived. For now, however, let us note that some of the other philosophical positions on ethics that we have encountered in our previous discussions are clearly at odds with the idea of reflective equilibrium—as sensible as the latter appears to be at first glance. For instance, utilitarians, who think we should maximize happiness for the greatest possible part of the population, have at least two problems with an approach to ethical decisions that seeks coherence among different ethical intuitions. On the one hand, they question whether we should be taking intuitions in matters of morality seriously at all: where do these intuitions come from, and why should we trust them? On the other hand, anyone seeking coherence in their ethical thinking is willing to question, revise, and possibly reject their own rules or priorities in matters of morality. Since utilitarianism is based on one such cardinal rule (the pursuit of the course of action most likely to benefit the largest number of people), obviously utilitarians feel uncomfortable with the whole idea of reflective equilibrium. The rejoinder to the second utilitarian objection is philosophical in nature, while a good response to the first objection comes from neurobiology (and, as the reader will surely have guessed, from the sort of evolutionary biology that we discussed when talking about the evolution of morality more broadly).

We will consider the philosophy first, then move back to science. As with everything else in life, we simply cannot afford the time, energy, and quite frankly the pain of continuously revising our assumptions about what we do and how we do it. It may be true, as Socrates famously said, that the unexamined life is not worth living, but it is also true that we had better spend most of our time actually living said life. This is why proponents of the reflective equilibrium approach make a distinction between a broad and a narrow application of the principle. To take a narrow approach to reflective equilibrium is to seek balance among accepted principles and moral intuitions, without going so far as to question the origin or reliability of said principles and intuitions. For instance, going back to our brief discussion of abortion, your moral intuition may well be that life has to be protected at all costs, and this intuition becomes an underlying assumption that guides your entire treatment of the issue. Even holding on to such an intuition, however, there may be significant room to achieve a reflective equilibrium among other elements of the problem—for instance, how to balance the right to life of the fetus and that of the mother. (After all, both have a right to life, so if the two are at odds we need to agree on whose right trumps the other’s.)

However, from time to time we may want to expand the debate to question some of the cardinal principles, such as the idea that life has to be protected no matter what. That principle may turn out to be in contradiction with other moral positions held by an individual—consider, for instance, someone whose strong moral intuition makes him favor the death penalty. If life is sacred—in either a religious or even a secular sense—then it would seem that being against abortion but in favor of the death penalty generates a tension between different moral intuitions. This is where a broad conception of reflective equilibrium comes into play: we are now expanding the circle, so to speak, and considering the possibility that perhaps one (or more) of our cardinal rules or fundamental intuitions is wrong or needs to be revisited. The point is that, contrary to the claims of utilitarian critics, the idea of reflective equilibrium is compatible with the tendency we have not to question all our principles all the time. Such questioning of cardinal principles goes on only when we take the broad approach to reflective equilibrium; it is not required by the narrow approach, which is certainly the more frequently useful of the two.

What about the other crucial objection to applying reflective equilibrium to ethics—that when we use “moral intuitions” of one sort or another we have no grounds to justify such usage at all because we do not know where these intuitions come from and whether they are valid? Here is where our discussion turns back toward science.

As we have seen, it turns out that we have a built-in “fairness calculator” in our brains. It appears that the calculator does not work along exclusively utilitarian lines, but rather by seeking an equilibrium (not reflective in this case, but unconscious) between different criteria. Recall (from Chapter 3) the work of Ming Hsu, Cédric Anen, and Steven Quartz, showing that three regions of the human brain are differentially engaged when we assess the fairness of a situation. The so-called putamen area seems to be in charge of assessing the efficiency of a situation (the brain’s version of utilitarianism), while the insula is concerned with assessing the degree of inequity inherent in a particular decision. When the two are at odds—as they often are—a third brain circuit, the caudate/septal subgenual region, mediates between the insula and the putamen area and comes up with the final decision.

The Hsu team uncovered this by subjecting people to brain scans while they were intent on making decisions about how to best allocate funds to orphans in Uganda. Their decisions involved taking away and redistributing meals to three groups of children, and the experiment was set up in a way that the subjects had to balance their perception of inequity with their assessment of the efficiency of the resulting allocations. The overall results of the study support the idea that people tend to make moral decisions while guided principally by their sense of fairness, even at the expense of efficiency of resource distribution (within limits). In other words, our brains do not work like utilitarian calculators, but rather seem to follow a sense of, as Rawls put it, justice as fairness. Moreover, these researchers’ results were congruent with the idea that this is achieved not through a method of rational deliberation but rather because of emotionally grounded moral intuitions. This finding may seem to go against the whole idea of a reflective equilibrium: after all, if we reach decisions by intuition, we don’t actually reflect on them! Indeed, Hsu and his colleagues explicitly mention in their article (which was published in Science, not in a philosophy journal) that this contradicts Rawls’s approach to some degree. But it doesn’t really. As we have seen, reflective equilibrium has to start with some assumptions and intuitions, and when used in the narrow mode it does not have to go as far as questioning them. Indeed, the work done by these researchers offers a partial insight into the issue of where these intuitions come from: apparently, they are embedded in the inner workings of our brains. Whether this is the result of acculturation or of a long period of evolution as social animals, or both, is another matter, which, just like any other nature-nurture issue in humans, is going to be difficult to settle.
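
To see in the abstract how such a weighing might work, here is a toy sketch in Python. It is emphatically not the model Hsu and his colleagues fitted: the scoring functions and weights are hypothetical stand-ins, meant only to illustrate how an “efficiency” term and an “inequity” term can be combined by a mediating trade-off.

    # A toy illustration (not the Hsu et al. model) of weighing efficiency
    # against inequity when allocating meals among three groups of children.
    # The weights are hypothetical; they merely express the reported tendency
    # for inequity to count for more than raw efficiency, within limits.

    from itertools import combinations

    def efficiency(allocation):
        """Total meals delivered: the utilitarian, 'putamen-like' term."""
        return sum(allocation)

    def inequity(allocation):
        """Mean absolute difference between groups: the 'insula-like' term."""
        pairs = list(combinations(allocation, 2))
        return sum(abs(a - b) for a, b in pairs) / len(pairs)

    def overall_score(allocation, fairness_weight=2.0, efficiency_weight=1.0):
        """Weighted trade-off playing the mediating, 'caudate-like' role."""
        return efficiency_weight * efficiency(allocation) - fairness_weight * inequity(allocation)

    if __name__ == "__main__":
        print(overall_score([10, 10, 0]))   # 20 meals delivered, but very unequal
        print(overall_score([6, 6, 6]))     # only 18 meals, yet perfectly equal

With these made-up weights, the smaller but equal allocation scores higher than the larger but lopsided one, which is the qualitative pattern the study reports.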

Despite the rather compelling evidence from neurobiology that our brains are wired to be “fairness calculators,” the question remains of why that should be in the first place. We have discussed the possible evolutionary origins of human morality to some extent (see Chapter 4), but another piece of the puzzle that has emerged from recent research on human developmental psychology will help us get a more informed perspective on fairness and its origins. Ernst Fehr, Helen Bernhard, and Bettina Rockenbach published a paper in Nature in which they investigated differences in social behavior between human children of different ages in order to gain a developmental insight into our evolved differences with our closest evolutionary relative, the chimpanzee.

The study’s setup was pretty straightforward. A number of children between the ages of three and eight were asked to make decisions about whether and how to share food in the form of some sweets. They could, for instance, decide either to share the candy equally with other children or to keep the candies all for themselves. The authors made sure that the recipients of the gifts were not present in the room and that the game was played only once. That way the children’s decision would be a better reflection of their propensity for fairness, not of (conscious or unconscious) calculations of future expected rewards or of the need to build a reputation in the group. (Note, however, that it is impossible to control all pertinent variables in this kind of study: for instance, the children’s behavior may have been affected by what they felt the adults administering the test thought of their decisions. It is really tricky to do experiments with conscious animals.)

The main result of this study was a pretty sharp distinction between the behavior of very young children (ages three to four) and that of slightly older ones (ages seven to eight): the older group had a significantly higher propensity to share their sweets, exhibiting what researchers refer to as “other-regarding” behavior. In other words, while young children tend to be self-centered, a concept of social fairness seems to emerge later on during the development of a human being. This is important because chimpanzees, for instance, do not show other-regarding preferences in their behavior and essentially always act like three- to four-year-old humans. The other interesting point to note here is that human children do engage in helping behavior when they are very young, but of an entirely different sort. Cognitive scientists refer to it as “instrumental helping”: humans as young as fourteen to eighteen months readily help others achieve certain goals, such as arranging objects or opening doors, even without receiving a reward. Importantly, chimpanzees also engage in instrumental helping, strengthening the parallel between our close evolutionary relatives and the very young of our own species.

Michael Tomasello and Felix Warneken, commenting on the paper by Fehr and his colleagues, pointed out that the picture emerging from developmental and evolutionary studies is of a natural progression toward more and more sophisticated social-ethical behaviors: we start with instrumental helping (shared with chimps), move to forms of other-regarding behavior (unique to humans), and finally engage in the sort of complex reciprocal altruism that is influenced by considerations such as reputation achieved within the group, as has been repeatedly shown by studies conducted on adult humans. There is, however, an important catch: typically, human beings’ other-regarding behavior is limited to members of one’s group and does not extend, or is extended only reluctantly, to members of other groups. This psychological mechanism is responsible for the racism and xenophobia that underlie major conflicts between human beings, both historically and in today’s increasingly multicultural society. It seems that biologically grounded social instincts only get us so far in expanding the circle of beings we are ready and willing to treat fairly (and this does not even begin to address, of course, the question of animal rights). To bootstrap our morality beyond that point, it seems that we need some more sophisticated philosophically based reflective equilibrium, which brings me to the topic of the next chapter: justice.