Chapter 31
Morality

Robert Kurzban and Peter Descioli

Introduction

In the prior version of the chapter on morality in this Handbook, Krebs (2005) took his task to be to “explain how mechanisms that give rise to moral and immoral behaviors can evolve” (p. 747). In this version of the Handbook, reflecting changes in scholarship surrounding morality over the past decade, we seek to explain not moral behaviors, but rather the evolved function of mechanisms that give rise to moral judgments, beliefs, and motives.

This crucial distinction is subtle and readily overlooked. One research question asks: What explains behaviors that are widely judged as morally right, such as altruism, honesty, and fairness? A second, very different, research question asks: What explains why humans judge any behaviors at all to be moral or immoral?

In focusing on moral behavior, Krebs followed in distinguished footsteps. Darwin (1871), in his two chapters on morality, developed an explanation for altruistic behavior, proposing that sympathy was the “foundation stone” of morality, motivating people to help others. Darwin built his account on group selection, the idea that moral behavior was selected because it facilitated success in between-group competition, an idea that waned (Williams, 1966) and waxed (Haidt, 2012) in subsequent years.

Research in moral psychology over the past two decades, especially proposals by scholars such as Shweder, Rozin, and Haidt (e.g., Haidt, Koller, & Dias, 1993; Shweder, Mahapatra, & Miller, 1987), has expanded the discussion beyond what Darwin, Krebs, and others took to be the quintessential puzzle in moral psychology: why humans deliver benefits to one another. Haidt, for example, emphasized that cross-culturally, moral issues include not only altruism but also sexual practices, intuitions about purity, and deference to authority, among other spheres of life.

In turn, this expansion has led to new explanations for “moral” behavior. Haidt (2012), for example, invoked kin selection (Hamilton, 1964), reciprocal altruism (Trivers, 1971), and pathogen avoidance to explain the array of behaviors that encompass morality. Importantly, these explanations continue to focus on moral behavior. Kin selection theory explains why people have systems designed to deliver benefits to close relatives. Reciprocity theories (Axelrod & Hamilton, 1981; Fehr, Fischbacher, & Gächter, 2002; Nowak & Sigmund, 2005; Trivers, 1971) explain why people have systems designed to aid those who have previously helped them (or others, in the case of indirect reciprocity). Theories of pathogen avoidance explain why people have mechanisms designed to cause them to avoid exposure to bacteria and viruses.

Although these theories do an excellent job of explaining many kinds of moral behavior, it is important to note what these theories do not explain. Although the selective advantage of avoiding pathogens, for instance, explains why individuals avoid decaying animals, it emphatically does not explain, in itself, why one individual can come to believe that other individuals should be punished for exposure to pathogens. Whereas many organisms have adaptations designed to resist pathogens, humans judge others for engaging in behaviors that they themselves avoid, believe that such behaviors are “wrong,” and (in at least some cases) are motivated to harm (i.e., punish) people who engage in such behaviors.

The balance of this chapter is aimed at explaining these features of human psychology. Far from leading to a disconnect with the previous Handbook chapter and related theories, explanations of judgment, beliefs, and motives dovetail felicitously with prior accounts. To the extent that people's beliefs about what is wrong and should be punished are explained, an additional explanation is provided for why people choose not to engage in behaviors so judged: to avoid the punishment others are motivated to mete out.

Moral Phenomena: Cutting Morality at the Joints

Throughout history, moral beliefs have motivated people to imprison, torture, and execute other people for behaviors such as premarital sex, witchcraft, endorsing religious beliefs, and scientific research. Morality continues to motivate hate crimes, mass incarceration, and terrorism (Atran, 2010). Moral condemnation of abortion kills 47,000 women per year and inflicts injuries on millions by causing societies to outlaw safe abortions (World Health Organization, 2011). In the United States in 2012, law enforcement reported 1,318 hate crimes motivated by anti-homosexuality attitudes (Federal Bureau of Investigation, 2012). Moral condemnation of drugs has had “devastating consequences for individuals and societies around the world,” including mass incarceration and funding organized crime (Global Commission on Drug Policy, 2011). On a smaller scale, people's moral judgments can interfere with close personal relationships, as when moral righteousness undermines compromises among friends.

These destructive moral phenomena are distinctively human. In contrast, many behaviors commonly judged as morally right, such as altruism, parental care, honest communication, monogamous mating, respect for property, and restraint of aggression are widely observed in nonhuman animal species (Davies, Krebs, & West, 2012). Importantly, members of these species do not make explicit moral judgments, communicate moral rules, debate which rules are best, punish violators, or espouse impartial judgment. There are then, across taxa, many causes of behaviors commonly judged “moral,” even in the absence of moral judgments and systems of moral rules.

Moreover, the human capacity for moral judgments does not necessarily systematically lead to moral behavior (Kurzban, 2010). Research on moral hypocrisy shows that people often engage in behavior that they themselves judge to be morally wrong (Batson & Thompson, 2001). In one set of experiments, most participants made someone else do an unpleasant task rather than do it themselves even though they said it was morally best to use a coin flip to decide (Batson & Thompson, 2001). Similarly, developmental research has found that children's moral judgment of other people's lies was unrelated to their own lying behavior (Talwar, Lee, Bala, & Lindsay, 2002). In short, there is a double dissociation between moral behavior and moral judgment that potentially points to different evolved functions, and, further, suggests that it is moral judgment that is most distinctive and least understood.

Researchers can, of course, define “morality” in many ways. However, we are not certain of the value of defining “morality” as all behaviors that humans might moralize: altruism, reciprocity, fairness, honesty, monogamy, fighting, parenting, black magic, supernatural beliefs, cryopreservation, cigarette smoking, and on and on. Similarly, extremely broad definitions such as prosocial or non-zero-sum behavior essentially include all social behavior and equate morality to sociality, making these two terms redundant. We therefore favor a narrower scientific definition, following moral philosophers such as Kant (1785/1993) or Moore (1903), who draw a sharp distinction between moral judgment and the behaviors that are morally judged. Evolved moral adaptations, on this view, are the cognitive programs that compute moral values for a diversity of actions, but are not the systems that produce the actions themselves.

For the remainder of this chapter, we use the term “morality” to refer to the observation that people, cross-culturally, judge some behaviors as “wrong” as opposed to “right” or “not wrong.” Our interest is in explaining this “moral sense,” as James Q. Wilson (1993) put it: people's experience of others' behavior as falling along a moral continuum. The balance of this section reviews the empirical features of moral beliefs and judgments that a theory of morality must explain.

Beyond Harm and Altruism

A critical advance in the study of morality was the idea that moral judgment does not focus only on preventing harm and promoting altruism. This is clear from the anthropological record, which shows a stunning diversity of rules about sex, food, violence, communication, property, trade, witchcraft, supernatural beliefs, and more. For instance, recently there has been debate in Iran about whether it is immoral to own a dog, currently a punishable offense (Fassihi, 2011).

Shweder, Much, Mahapatra, and Park (1997) interviewed participants from Bhubaneswar, India, about perceived moral violations such as a woman who ate rice with her husband and his elder brother, a son who addressed his father by his first name, or a widow who ate fish twice in a week. When asked why these and 30 other behaviors were morally wrong, participants' justifications referred not only to harm, but also to hierarchy, duty, divinity, purity, and other concerns.

Further, Haidt (2001) showed that even when people claim that consideration of harm drives their moral judgments, these claims are often post hoc justifications rather than the genuine causes of judgments. Haidt presented participants with harmless moral offenses and asked them why the offenses were morally wrong. Many participants referred to particular harms. Researchers then asked participants to imagine that these harms were hypothetically ruled out. When all harms were removed, many participants maintained their moral judgments even when unable to explain why, a phenomenon termed moral dumbfounding. Tetlock (2000) used a similar procedure and found the same results: Many participants continued to condemn practices such as markets for body organs even after potential harms were eliminated to their satisfaction. Related work shows that the harshness of moral judgments for violations (such as eating dogs, cleaning a toilet with a flag, etc.) is not predicted by participants' own assessments of harm (Haidt et al., 1993; Haidt & Hersh, 2001).

Haidt's (2012) moral foundations theory attempts to account for the diversity of moral rules. Haidt motivates each foundation—fairness, loyalty, authority, sanctity, and, of course, harm—with a different adaptive problem: parental care, reciprocity, coalitions, hierarchies, and contaminants, respectively.

A good theory of morality needs to account for diversity in the content of moral rules. At the same time, a good theory should also account for the many common features of moral cognition that cut across moral domains. Why, for instance, are violations in different domains all judged “wrong,” rather than each only having its own specific label as uncaring, unfair, disloyal, disobedient, or impure? Across domains, wrongness is associated with accusations, guilt, condemnation, gossip, punishment, and impartiality. These common properties can be seen in the process of moralization—when amoral actions are transformed into moral violations (Rozin, 1999). Rozin (1999) argues that the moralization of behaviors such as smoking or eating meat is associated with a suite of psychological features including prohibition, outrage, censure, overjustification, internalization, and facilitated social learning.

Nonconsequentialism

Explanations of moral judgments must, obviously, account for the broad empirical patterns of such judgments. Arguably one of the most important patterns is that moral judgments are often nonconsequentialist (cf. Sinnott-Armstrong, 2006). That is, as an empirical matter, people's judgments of whether a behavior is wrong do not depend only on their beliefs about the (actual, direct, and intended) consequences of the action (Hauser, 2006). Specifically, moral judgment is deontic, sensitive to the action chosen by the actor, rather than only the intended consequences.

The most common empirical example is judgments about the Trolley Problem, a vignette used to probe people's moral intuitions (Greene, Sommerville, Nystrom, Darley, & Cohen, 2001; Mikhail, 2007). In the Footbridge case, a runaway trolley will kill five innocents on the track unless a man with a large backpack is pushed off of a footbridge onto the tracks, killing the man but stopping the trolley. If people's moral intuitions were consequentialist, then people would judge pushing the man with the backpack as morally good, since the consequences of pushing lead to one death instead of five. Cross-culturally, in sharp contrast, a vast majority of subjects judge pushing to be wrong (Hauser, 2006). Similar evidence of nonconsequentialism is found in research on taboo tradeoffs (Tetlock, 2003) and protected values (Baron & Spranca, 1997); results of research in these areas show that moral cognition is particularly attuned to prohibited actions rather than only consequences.

These observations of nonconsequentialism are important because they contradict prominent explanations for moral judgment. Altruism models, for example, predict a primary focus on consequences. For instance, kin selected systems should be expected to guide behavior towards good (i.e., inclusive fitness maximizing) consequences. And, indeed, many organisms routinely harm or kill one close relative (sibling, offspring) in order to benefit more than one other close relative (Mock, 2004). In sharp contrast, 84% of people said it is wrong to push one brother to save five brothers (Kurzban, DeScioli, & Fein, 2012), even though many participants (47%) said they would push anyway, despite judging it wrong. If moral judgments were designed by kin selection, then people should judge pushing to be virtuous, rather than wrong. Nonconsequentialist judgments point to a different class of explanation. Moral judgment systems focus on how actions are performed as opposed to the consequences they bring about. This points to the possibility that they are solving a coordination problem, as discussed below (DeScioli & Kurzban, 2013).

Judgments Are Complex, Implicit, and Variable

Although some view moral judgments as products of simple heuristics (Baron, 1994; Gigerenzer, 2007; Sunstein, 2005), other researchers have found that moral judgments are complex, responding to many features of behavior and context. Returning to the Trolley Problem, discussed above, Mikhail (2007) found that moral judgments depend systematically on particular structural features of the behavior. In the Footbridge case, for instance, the person with the backpack is used as a means to an end—stopping the trolley. In the Switch case, by contrast, the death of the single person on the side track is a side effect of diverting the trolley, and most participants judge killing one to save five in this way to be morally permissible. Moral judgments track this distinction between means and side effects across a variety of moral offenses (DeScioli, Asao, & Kurzban, 2012).

Another important distinction that has received substantial attention is the act/omission distinction. Somewhat puzzlingly, people reliably evaluate an outcome as less wrong if it comes about as a result of inaction as opposed to action, holding both the outcome and intentions constant. For instance, withholding the antidote from someone is judged less wrong than poisoning someone, even when the intent in both cases is the person's death (Cushman, Young, & Hauser, 2006). This effect too occurs across a variety of moral domains; for instance, participants judge cannibalism to be more morally wrong if it is done through an action as opposed to inaction (i.e., not spitting out food after finding out that it is human flesh; DeScioli, Asao, & Kurzban, 2012).

Although moral judgments track these dimensions of behavior, participants are often unable to articulate the relevant factors behind their differing moral judgments (Haidt, 2012). Indeed, Haidt and others have argued that moral psychological judgments are frequently—though not always—implicit and intuitive as opposed to conscious and explicit. The source of moral judgments, then, is at least sometimes located in nonconscious systems, including emotional systems. These considerations have led Mikhail (2007), among others, to compare moral judgment to natural language insofar as such judgments involve complex and unconscious computations.

Finally, a key feature of moral judgments is that the actions that are moralized vary tremendously across time and across cultures (Haidt, 2012; Rozin, 1999; Shweder et al., 1987). This variability is perhaps most transparent in cases in which two different cultures moralize opposite behaviors. For instance, Western readers are familiar with property rights regimes in which the person who takes a resource is the perpetrator: Taking is moralized. However, in some moral regimes, in which property rights prioritize needs over who acquired the goods, refusing to give is moralized. For example, Fiske (1992) discussed communal sharing relationships in which individuals are expected to share resources with those in greater need. More generally, moralized categories of behavior in one culture often seem very peculiar to members of other cultures. Food taboos, clothing restrictions, and sexual mores offer many examples.

In short, while there are some cross-cultural similarities in moral rules—unprovoked intentional harm is frequently moralized—there is also a tremendous amount of variability.

Punishment

Moral judgments, once made, are accompanied by a cascade of emotions and motivations; in particular, moral infractions evoke anger and disgust (Rozin, Lowery, Imada, & Haidt, 1999), and, generally, the intuition that the actor should be punished (Robinson & Kurzban, 2007; Wiessner, 2005). The desire for punishment provides a clue to the function of moral judgments. For example, if moral judgments were built simply for choosing interaction partners (Baumard, André, & Sperber, 2013), then it is unclear why people would seek punishment rather than simple avoidance of perpetrators.

Further, the motive to impose costs is important because the motive itself could potentially carry costs. Because harming others provokes subsequent retaliation by the person harmed and their allies (e.g., Knauft, 1987), the motive to do so must, presumably, be offset by some gain to the individual.

Three other well-documented features of the motive to punish are potentially important. First, while the desire that perpetrators be punished is very common, people do not necessarily want to mete out the punishment themselves. Laboratory evidence indicates that when behavior is kept carefully anonymous, people do not engage in much costly punishment (Kurzban & DeScioli, 2013), suggesting the absence of a motive to punish per se, absent reputational benefits (Kurzban, DeScioli, & O'Brien, 2007). This distinguishes moralistic punishment from revenge, in which people seek to punish those who have harmed them directly rather than those who have violated a moral rule against someone else (McCullough, Kurzban, & Tabak, 2013).

Second, there is tremendous cross-cultural variability in how infractions are punished, ranging from informal sanctions (Hess & Hagen, 2002; Kaplan & Hill, 1985; Wiessner, 2005) to the intricate, culturally elaborated police and justice systems in industrialized societies. Third, and perhaps related, whereas the particular behaviors that are punished—and how much they are punished—vary tremendously, there is widespread agreement about the relative severity of many moral violations and, consequently, the severity of punishment they merit (Robinson & Kurzban, 2007).

Impartiality

A signal feature of evolved social behavior is favoritism, whether with respect to kin (Hamilton, 1964), prior interactants (Trivers, 1971), coalition members (Harcourt & de Waal, 1992), coethnics (Gil-White, 2001), and so forth. Favoritism allows organisms to direct social efforts toward partners who bring them greater benefits.

One feature of moral psychology, impartiality, presents a puzzle in this context. By impartiality we mean that a person is impartial to the extent that the person's judgment of another's moral wrongness is applied independently of the actor's identity (e.g., kinship, ingroup, ethnicity). Impartiality, then, refers to ignoring the very criteria that altruism systems commonly use to guide preferential behavior (see also Shaw, 2013).

The empirical data do not, of course, support the extreme claim that everyone is always impartial in their moral judgments of others' actions. The data do, however, support the narrower, weaker claim that people are sometimes impartial. That is, people will sometimes damage their valuable relationships when the violation of a moral rule is at stake. One study found, for instance, that more than half of American soldiers would report a member of their unit—generally extremely loyal groups—for committing violence against foreign civilians (Morgan, 2007).

Evidence from the laboratory is similarly suggestive. Lieberman and Linke (2007), in one of the few studies looking at the relationship between preexisting social ties and moral judgments, found that people's wrongness judgments did not depend on group membership or even kinship relations; kin were judged as harshly as strangers, though kin were seen as deserving less punishment.

Generally, impartiality as a communally valued aspect of moral judgment—illustrated by Lady Justice's blindfold—is both a puzzle and a clue. Set against the backdrop of adaptations for treating others differently depending on relationships—including loyalty, reciprocity, and nepotism—moral impartiality stands out as an important property to explain.

Moral Judgments Coordinate in Conflicts

The empirical patterns in moral judgment suggest that the underlying psychological mechanisms do not function to benefit kin, solidify groups, avoid pathogens, and so on. Instead, we argue that these patterns are best explained by a different function: choosing sides in disputes (DeScioli & Kurzban, 2009, 2013).

In human social life, people have conflicts over status, resources, and mates. Bystanders to these conflicts often must choose sides, particularly when both sides request support. In nonhuman animals, with a few exceptions, the adaptive problem of choosing sides does not exist. In some cases, this is simply because conflicts are never greater than dyadic in size: In many animal species, individuals do not team up (Harcourt, 1992). In other species, when bystanders intervene they always side with kin (e.g., baboons; Seyfarth & Cheney, 2012), so difficult side-taking decisions do not commonly occur. More rarely, individuals in some species, including chimpanzees, macaques, and dolphins, support nonkin (Connor, 2007; de Waal, 1982; Schülke, Bhagavatula, Vigilant, & Ostner, 2010).

Human conflicts often escalate beyond two individuals. Bystanders are sometimes loyal to long-term friends, but they also change sides, being flexible in their coalitions (Kurzban, Tooby, & Cosmides, 2001). So, when conflicts emerge, observers to the conflict—whom we hereafter refer to as “third parties”—might choose to intervene on one side or the other, in which case they must use some criterion for choosing sides.

One way that third parties might choose is based on dominance, taking the side of the more dominant individual involved in the conflict. We refer to this as a bandwagon strategy. Under such a choice regime, the most dominant individual would win all conflicts and would have a monopoly on power, as in linear dominance hierarchies (e.g., Holekamp, Sakai, & Lundrigan, 2007). Although some human social groups are rigidly hierarchical, with a despot at the top, most are not (Boehm, 1999).

A second way that humans might choose sides is based on the strength of preexisting relationships, backing the disputant who is closer in kinship, friendship, or group membership. This is choosing sides based on partiality or favoritism. As an empirical matter, people frequently show favoritism, but, crucially, they do not always do so. Departure from partiality, even if rare, is interesting given the central role that it plays in theories of altruism. Instead, third parties sometimes choose sides with a stranger against a friend, a friend against family, or with a foreigner against a compatriot. This sometimes happens, for instance, when the closer individual perpetrated unprovoked intentional harm on a more distant individual.

Choosing sides based on alliances can lead to costly escalated fighting (Snyder, 1984). Imagine a world in which conflicts emerge periodically and each third party always chooses the closer person to support. Any two individuals will tend to have their own family and friends to support them. The result is that fights will tend to be evenly matched. A key finding from the literature on animal contests is that evenly matched disputes are most likely to escalate because neither side is so outmatched that it is clearly best to back down (Arnott & Elwood, 2009; Davies et al., 2012; Enquist & Leimar, 1983; Mesterton-Gibbons, Gavrilets, Gravner, & Akçay, 2011; Parker, 1974). Because escalation is costly, alternative ways of choosing sides that reduce those costs might be able to invade.

In order to avoid escalated fighting, bystanders can try to choose the same side as everyone else—that is, to coordinate their side-taking decisions. Coordination problems occur in a wide variety of social contexts, such as avoiding collisions on the road, carrying furniture with housemates, meeting up at the same location, or negotiating a price for a trade, and this class of problems has been intensively studied (Camerer, 2003; Schelling, 1960; Thomas, DeScioli, Haque, & Pinker, 2014).

One way to accomplish coordination is for everyone to make their decisions contingent on a public signal (Schelling, 1960). This coordination strategy is referred to as a correlated equilibrium (Aumann, 1974). The most frequently used example of a correlated equilibrium is a traffic light. While any color of light could be used to mean “go” or “stop,” once this equilibrium has been selected, each individual driver does best by using the colors to make decisions. In coordination games, when other players are expected to make decisions based on an otherwise arbitrary signal, it is in each player's interest to make decisions contingent on that signal, maintaining the equilibrium.
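The traffic-light logic can be made concrete with a small computational sketch. The payoff numbers below are hypothetical illustrations, not from the chapter; the point is only that once a public signal assigns “go” to one driver and “stop” to the other, neither driver can gain by unilaterally ignoring the light—the defining property of a correlated equilibrium.

```python
# Payoffs for a two-driver intersection game (hypothetical numbers):
# both go -> crash (-10 each); both stop -> everyone waits (0 each);
# one goes while the other stops -> the mover gains (1), the waiter gets 0.
def payoff(a, b):
    if a == "go" and b == "go":
        return (-10, -10)
    if a == "go":
        return (1, 0)
    if b == "go":
        return (0, 1)
    return (0, 0)

def is_correlated_equilibrium():
    """Check that following the light's recommendation is stable: no driver
    benefits from deviating while the other driver obeys the signal."""
    for rec_a, rec_b in [("go", "stop"), ("stop", "go")]:
        base = payoff(rec_a, rec_b)
        for deviation in ["go", "stop"]:
            # Driver A deviates while B follows the light.
            assert payoff(deviation, rec_b)[0] <= base[0]
            # Driver B deviates while A follows the light.
            assert payoff(rec_a, deviation)[1] <= base[1]
    return True

print(is_correlated_equilibrium())  # True: obeying the signal is stable
```

The same check fails for an uncorrelated rule such as “always go,” which is exactly why an arbitrary public signal—a light, or a shared moral rule—can sustain coordination that individual preferences alone cannot.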

DeScioli and Kurzban (2013) proposed that moral contents serve this coordination function for bystanders choosing sides in conflicts. Moral cognition assigns moral wrongness to a set of actions and motivates people to debate and agree on which actions are morally wrong and on their wrongness magnitudes. When disputes arise, the moral side-taking strategy is to choose sides against the individual who has taken the action with the greatest moral wrongness. This decision rule might lead an observer to side against a friend or relative, but this cost must be set against the benefit of siding with other third parties.

This strategy works when third parties agree before conflicts break out—either explicitly or implicitly—how they will all make their choice should a conflict arise. A key point is that, just as in the traffic light case, if potential third parties agree in advance, then a given third party pays a big cost for deviating from this prior agreement because their side will be vastly outnumbered by nondeviating third parties. (Such third parties might nonetheless choose to support a friend or relative; the ultimate decision depends on all of the relevant costs and benefits.)
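The contrast between alliance-based and morality-based side-taking can be illustrated with a toy simulation. The parameters below (group size, dispute count, random loyalties) are hypothetical and chosen only for illustration: under the alliance rule, private loyalties split bystanders into roughly even sides—the escalation-prone case described above—whereas coordinating on a public “who violated the rule” signal produces unanimous, lopsided sides.

```python
import random

random.seed(0)

def simulate(n_bystanders, strategy, n_disputes=1000):
    """Return the average imbalance between the two sides, from 0 (evenly
    matched, escalation-prone) to 1 (unanimous side-taking)."""
    total = 0.0
    for _ in range(n_disputes):
        # Each bystander's private loyalty: +1 favors disputant A, -1 favors B.
        loyalties = [random.choice([1, -1]) for _ in range(n_bystanders)]
        # Public signal: which disputant violated the shared moral rule.
        violator_is_a = random.choice([True, False])
        if strategy == "alliance":
            # Side with whichever disputant you are personally closer to.
            side_a = sum(1 for x in loyalties if x == 1)
        else:  # "moral"
            # Everyone sides against the violator, regardless of loyalty.
            side_a = 0 if violator_is_a else n_bystanders
        side_b = n_bystanders - side_a
        total += abs(side_a - side_b) / n_bystanders
    return total / n_disputes

print(simulate(100, "alliance"))  # near 0: evenly matched sides
print(simulate(100, "moral"))     # 1.0: unanimous, fights fizzle quickly
```

The moral rule here plays the role of the traffic light: its content is arbitrary with respect to the coordination function, but conditioning on it lets third parties avoid the evenly matched standoffs that the alliance rule produces.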

Importantly, moral side-taking is only one coordination equilibrium among others, including bandwagon and alliance strategies. Each third party's best strategy depends on how other people choose sides, which can explain why morality is diminished in human groups that prioritize hierarchy (e.g., fascist regimes) or loyalty (e.g., ethnic conflict). For example, in extremely hierarchical societies, people routinely violate moral rules when they are directed by authorities to perform immoral acts such as murder or genocide. Relatedly, in ethnic conflicts, people engage in otherwise immoral behaviors such as murder or rape, given the support of their coethnics for doing so. In these social contexts, the motives to adhere to moral rules and to condemn moral violators are diminished because bystanders have coordinated on power or loyalty as the primary basis for choosing sides.

Further, because the function of moral beliefs is coordination as opposed to, say, cooperation, there is no particular reason that the consequences of actions must be central to moral beliefs. If everyone else is going to judge the person doing action X as “wrong,” then similarly judging action X to be wrong can be the best strategy even when doing X makes everyone involved better off. (Indeed, there are many examples; see section titled “Conflict and Agreement Over Moral Contents,” further on.) Relatedly, just as with traffic lights, what people agree in advance is “wrong” can be nearly anything and still successfully perform a coordination function. Just as many different combinations of phonemes can mean “cow” (i.e., the animal), it doesn't matter which combination means that particular animal as long as everyone has (roughly) the same belief about what the word “cow” refers to.

Note that this proposal explains beliefs as opposed to behavior. Observing a person behave in a way that violates a preexisting moral rule—don't steal—evokes the belief (judgment) that the action is “wrong” and the person ought to be sided against.

Coordination Explains Moral Phenomena

Components of Moral Representations

Joining the same side as other third parties requires prediction. The dynamic coordination model proposes that moral judgment—the representations (beliefs) that spontaneously arise to categorize particular actions as “wrong”—functions to predict which side other observers will choose.

One key aspect of these representations is that they include (at least) two agents, a perpetrator—the agent who committed the “wrong” action—and a victim—the agent who was wronged (Gray & Wegner, 2009; Gray, Young, & Waytz, 2012). These representations are necessary to guide behavior against the perpetrator and in support of the victim. The prototypical role of a victim leads to some unusual cases, such as suicide in which the same agent is both perpetrator and victim; indeed, people seem to invent victims as needed once they have made a wrongness judgment (DeScioli, Gilbert, & Kurzban, 2012).

Moral judgments, then, serve as a prediction about others' side-taking behavior and, in addition, guide behavior toward making the same choice. The concurrent motivation that the perpetrator be punished satisfies another functional requirement, signaling to other third parties that one is taking sides against and supports aggression toward the perpetrator. This idea might explain why some people seem eager to announce their moral condemnation of others' actions, ranging from expressions of anger or disgust to public comments in various social media to public demonstrations of outrage (Tooby & Cosmides, 2010). Beyond these signals, communicating the willingness to punish the perpetrator—by, for instance, actually doing so—is an even more reliable signal of support. Moralistic punishment, then, can be understood as a costly signal that facilitates third-party coordination.

Impartiality and Nonconsequentialism

Viewing moral judgments as coordination devices explains impartiality. To the extent that judgment depends on relationships to a perpetrator or victim, coordinating with other third parties—who will have different loyalties—is undermined. To function effectively, moral judgments must align with other people's moral judgments. This requires individuals to make judgments based on the disputants' actions per se instead of the individual's relationships to the disputants. To use the traffic light example, there is no advantage to believing that a red traffic light signals “go,” even if you prefer to continue driving through a red light, because coordination requires all drivers to agree, independent of their personal preferences. This entailment, an unbiased perception of actions, is the essence of impartiality. This is not to say that people will always behave impartially after evaluating others' acts; as discussed above in relation to research by Lieberman and Linke (2007), observers might evaluate acts as equally wrong when committed by friends and strangers, but still respond differently in the two cases, supporting punishment for strangers but not for friends and relatives. Third parties are expected to weigh the benefits of impartiality against the costs to their relationships. Consistent with this idea, Petersen (2013) found that people with fewer friends, and hence lower costs for impartiality, are more prone to moralization.

The coordination function explains nonconsequentialist judgments because coordination requires building the same representation as others build regardless of how consequences affect others' judgments. To the extent that people's judgments are driven by features of the behavior other than consequences—as the array of Trolley Problem results illustrates—people are best off aligning their own judgments similarly. This does not preclude the possibility that intended consequences can also affect moral judgments. Indeed, they do in many cases (Robinson & Darley, 1995). However, because specifying what is “wrong” is designed for coordination (as opposed to reducing harm), intended harm need not be the sole criterion for judging wrongness as, indeed, it is not (e.g., Haidt, 2012; Mikhail, 2007; Tetlock, 2003).

Related, to be effective at resolving conflicts, a group's vocabulary of moral rules—those behaviors judged as “wrong”—should include most actions that might initiate conflict. These actions include those pertaining to harm, property, contracts, sex, status, and so on. This requirement helps to explain why changing technologies inevitably lead to new moral rules being minted, such as rules and laws governing electronic property rights or Internet surveillance.

Conflict and Agreement Over Moral Contents

Although different moral rules might work equally well for coordinating side-taking, moral rules might have very different consequences for different people within a social group. For example, if some individuals have a mating strategy of pursuing multiple mates, then they are disadvantaged by moral rules against promiscuity or polygamy, relative to monogamous maters. To make local rules work to an individual's benefit, moral cognition might include adaptations for advocating moral rules that are in the individual's interest, leading to fights and debates over moral rules (Kurzban, Dukes, & Weeden, 2010; Tooby & Cosmides, 2010; Weeden, 2003; Weeden & Kurzban, 2014).

History is replete with illustrative examples. Acemoglu and Robinson (2012) argue that a particular kind of contract called the commenda in medieval Venice made some people rich and influential. Once established, such people banned the use of the commenda to prevent others from rising to compete with their power. Scholarship in criminal law has long recognized this process; the “conflict model” suggests there is “an on-going struggle between vested interest groups which seek to have their particular values legitimated and supported by the coercive power of the state” (Thomas, Cage, & Foster, 1976, p. 110).

An obvious modern example is digital music. Musicians are better off when duplication of their products is moralized and punished; consumers are in the reverse position. These incentives readily explain why Metallica and recording companies filed suit against Napster, the (now defunct) peer-to-peer file-sharing service.

Conflicts are not always obvious. Weeden (2003), for example, proposed that conflicts over the morality of abortion are really proxy battles over sexual strategies. People pursuing a more short-term mating strategy (Buss, 2006) are obstructed when practices facilitating promiscuity are moralized, banned, and punished. Applying this logic to abortion, Weeden (2003) found that people pursuing a strategy weighted toward mating effort and away from parenting effort were more likely to be pro-choice; people pursuing monogamous strategies, reciprocally, were more likely to be pro-life. Although people justify their moral views with reference to freedoms or religious texts, life history variables are, Weeden argues, driving people's positions on abortion.

Moral side-taking does not always lead to conflict but can also cause agreement about which actions are immoral because some rules affect everyone more or less the same. Rules that punish intended physical harm, for instance, protect everyone who can be physically harmed—that is, everyone—and therefore lead to roughly equivalent benefits to all. All else equal, the least conflict should be expected over these rules, which DeScioli and Kurzban (2013) refer to as Rawlsian because of their equal effects. Indeed, reflecting the Rawlsian nature of some rules, there are many cross-cultural commonalities in moral contents—such as rules surrounding unprovoked intentional killing (Mikhail, 2009).

This mix of conflict and agreement is expected to generate themes and variations in moral rules across cultures. Where there is conflict, variation is substantial and potentially influenced by the number of people affected and their ability to coordinate to advocate for their interests. Where there is agreement, cross-cultural themes emerge with some moral rules showing widespread adoption.

The dynamic coordination model described above makes few predictions about which cultures will adopt which moral rules. Because there are arbitrarily large numbers of equilibria, additional theory is needed to account for variation. One such account is from Haidt and colleagues' Moral Foundations Theory (MFT; Haidt, 2012), which proposes that disagreements over moral contents can be usefully divided into disagreements over the weight placed on six basic content areas of morality (harm/care, fairness/reciprocity, ingroup/loyalty, authority/respect, purity/sanctity, liberty/oppression). In the United States, for example, Democrats value notions of autonomy, individual rights, and fairness while Republicans place greater weight on purity (e.g., Graham, Haidt, & Nosek, 2009).

According to MFT, members of different groups try to bring about moral regimes within their society that reflect their emphasis on the respective foundations. In turn, these different emphases aid groups in coalescing around common moral commitments (Haidt, 2012). MFT views moral commitments less as strategic and more as alternative sets of norms around which to build coalitions and alliances, reflecting differences in the weight placed on the various moral foundations.

Note that if the dynamic coordination explanation for beliefs and motives is correct, then moral conformity is also partially explained. As Boyd and Richerson (1992) showed, in a world in which agents punish (i.e., impose costs on) those who X, there will be selection for choosing not to X, to avoid such punishment. In a social world in which betraying trust is punished, people are predicted not to betray trust absent sufficiently large offsetting incentives. Trustworthiness, then, can be partially explained by the presence of beliefs that such betrayals are “wrong” and motives to punish those who betray.

By viewing moral rules as points in a large equilibrium space—echoing work on cultural evolution (Boyd & Richerson, 2005)—the dynamic coordination model links research in moral psychology to work in cultural evolution. If moral rules are for coordination of side-taking decisions (as opposed to cooperation), it is far less surprising to find that groups have equilibrated at welfare-destroying rules, as they so often have historically (Acemoglu & Robinson, 2012; Diamond, 2005). Related, because groups compete with one another, those groups with “bad” (i.e., aggregate welfare-destroying) rules will, on average, be at a disadvantage to groups with “good” rules. This dynamic explains why good rules are common and should be expected to become even more so over time.

Moral Emotions

Historically, because the study of morality concentrated on prosocial behavior, researchers focused on “moral emotions” such as empathy, sympathy, and guilt. This dates at least as far back as Adam Smith's (1759) Theory of Moral Sentiments, with its focus on sympathy, echoed by Darwin more than a century later. This emphasis continues to some extent in modern approaches. Haidt (2003), for instance, plots the moral emotions along two axes: the prosociality of the associated behavior on the one hand and the degree to which the person's own interests are implicated on the other.

In contrast, research on moral condemnation, the primary topic here, focuses instead on two emotions associated with judging others' actions: anger and disgust (Hutcherson & Gross, 2011).

Anger

Observing a moral violation often triggers anger and outrage (Rozin et al., 1999). In general, the emotion of anger motivates and prepares an individual for aggression. This raises the question of why moral judgment is closely connected to anger and aggression.

Outside of the moral realm, people get angry when they, their allies, or relatives have been harmed (Fessler, 2010; Srivastava, Espinoza, & Fedorikhin, 2009). The functional logic of this response to harm is straightforward. Anger serves as a deterrent (McCullough, Kurzban, & Tabak, 2013; Tooby & Cosmides, 2008) or a mechanism for recalibration (Sell, Tooby, & Cosmides, 2009). If B knows A will reply to harm with retaliation, B will be less likely to harm A in order to avoid these subsequent costs (McCullough et al., 2013). The related recalibration function entails using the threat of harm to make other people engage in less harmful (or more beneficial) actions in the future. (See Sell et al., 2009, for a discussion.)

However, people also get angry when someone violates a moral rule, even when no harm has been done (DeScioli & Kurzban, 2009; Haidt, 2012; Rozin et al., 1999). Moreover, this occurs even when the violation does not directly affect the individual or their allies. This suggests that human anger expanded to include an additional input, not only harm but also someone else's choice of a morally prohibited action. Similar to basic anger, moral anger motivates aggression toward the perpetrator. However, the motivation might not be to aggress against the perpetrator per se, but rather to support others' aggression against the perpetrator. According to the dynamic coordination model described above, the behavior motivated by anger at moral violations signals that the person has judged an action as wrong and will side against the perpetrator. Under the proper conditions, it will further lead to imposing costs on the perpetrator (Kurzban et al., 2007).

The close connection between moral judgment and anger does not fit well with altruism models, which instead predict reliance on empathy. In fact, moral outrage reduces empathy toward perpetrators (Decety, Echols, & Correll, 2010). One reasonable possibility is that this reduction facilitates support for harming the violator. The reduction of empathy can be profound, as illustrated by historical examples of public support for draconian punishments of harmless offenses, such as executions for holding different supernatural beliefs (Levy, 1993) or for illicit sexual behavior (Appiah, 2010). Whereas these observations conflict with altruism models, they fit the side-taking model in which aggression toward the perpetrator plays a key role.

Disgust

Following Tybur, Lieberman, Kurzban, and DeScioli (2013), we distinguish two issues surrounding moral disgust. The first issue is the question of why many behaviors that elicit disgust, such as incest and eating particular foods, are, cross-culturally, frequently moralized. The second issue is why morally wrong acts that are not “disgusting” in the traditional sense—stealing candy from a baby—recruit the language of disgust.

We follow previous work holding that core disgust functions to avoid hidden risks such as pathogens and inbreeding costs (Tybur et al., 2013). In this light, the moralization of disgusting acts seems especially vexing. Moralizing actions disincentivizes them. However, disgusting actions are typically things people don't want to do anyway. Because disgusting actions are generally harmful to fitness, people are motivated to avoid them, and would, presumably, do so even if they were not moralized in their social group.

Haidt (2007, 2012) proposed one answer to this question—that the moralization of disgusting behaviors serves to “bind” people into cooperative groups. He proposes that human groups “circle around sacred values” (p. 31), arguing that moralizing actions, including disgusting actions, unite and unify.

A second possible answer to this puzzle, proposed by Tybur et al. (2013), rests on the possibility that the moral rules that are observed across cultures depend on who is willing to fight to support (or oppose) them, as discussed above. Generally, people only oppose rules preventing them from doing things they want to do. So, because people, by and large, don't want to do actions they view as disgusting, there should be the least resistance to rules that moralize disgusting actions, possibly explaining their prevalence.

As an empirical matter, eliciting disgust does seem to affect moral judgments. Participants who smell a disgusting odor, for instance, judge acts more morally wrong than controls (Schnall, Haidt, Clore, & Jordan, 2008). A parallel result has been obtained for taste (Eskine, Kacinik, & Prinz, 2011). Particularly intriguing, the cues that lead to sexual aversion toward opposite-sex siblings also predict the strength of one's moral opposition to incest by others (Lieberman, Tooby, & Cosmides, 2003, 2007).

The second question is why morally wrong actions are frequently described using the language of disgust. As an empirical matter, people do indicate that wrongful actions that have nothing to do with pathogens or sex, such as theft from a blind person, are “disgusting.” Related, when subjects are asked to nominate actions that caused them to be “disgusted,” they nominate times when a moral rule was violated (Curtis & Biran, 2001; Haidt, McCauley, & Rozin, 1994; Haidt, Rozin, McCauley, & Imada, 1997; Tybur, Lieberman, & Griskevicius, 2009).

Why this is the case remains the subject of debate. Hutcherson and Gross (2011), for instance, proposed that moral disgust functions to “mark” people who are threatening. This view suggests that labeling others' actions as disgusting will aid in avoiding those actors in the future. In contrast, Tybur and colleagues (2013) propose that using the language of disgust serves a coordination function. They suggest that showing the canonical disgust expression or using disgust metaphors signals to third-party observers that one opposes a particular action or perpetrator, facilitating third-party coordination against that individual. Additional work will be required to distinguish these possibilities for why actions perceived as morally wrong recruit the language of “disgust.”

Summary

Emotions have been increasingly incorporated into the study of morality. Research in this area is made more difficult by the fact that feelings of anger and disgust are frequently closely though not perfectly correlated (Russell & Giner-Sorolla, 2013). Empirically, people do frequently report strong affective reactions to moral violations, even in cases in which the violation does not harm themselves or a relative or ally. These emotions, in turn, appear to motivate the administration of—or support of—sanctions imposed on the perpetrator, though the details of the context exert important influences on the decision to do so.

Conclusions

We began with the observation that scholarship on the evolution of morality can be conveniently divided into two threads that turn on the distinction between doing and believing. Humans do many things that might be labeled “moral.” Many altruistic acts are so labeled. A related but distinct question surrounds a particular sort of belief, or mental representation. Many people have the belief that sex between full siblings, for instance, is wrong or immoral.

Explanations for why people do things are likely to be different from explanations for why people believe things. The previous incarnation of the morality chapter in this Handbook consisted in large measure of explanations for why people behave in particular ways. Krebs (2005) referred to explanations such as reciprocity, kin selection, group selection, and so on. Related, Haidt (2012) leans on these explanations to ground Moral Foundations Theory in evolution.

In contrast, we have discussed explanations for moral beliefs. Because theories such as reciprocity and kin selection are good for explaining (prosocial) behavior but not as apt for explaining judgments of moral wrongness, additional ideas are needed to supplement these powerful explanations. Thankfully, the past 20 years have seen a flourishing of research on moral psychology, and new conceptual tools are available to shed light on this perennially vexing issue. Debates continue, however, and it is clear that much work has yet to be done on the function of the cognitive systems that generate moral condemnation, and the panoply of human behaviors that moral condemnation motivates.

References

  1. Appiah, K. A. (2010). The honor code: How moral revolutions happen. New York, NY: Norton.
  2. Arnott, G., & Elwood, R. W. (2009). Assessment of fighting ability in animal contests. Animal Behaviour, 77, 991–1004.
  3. Atran, S. (2010). Talking to the enemy. London, England: Penguin.
  4. Aumann, R. J. (1974). Subjectivity and correlation in randomized strategies. Journal of Mathematical Economics, 1, 67–96.
  5. Axelrod, R., & Hamilton, W. D. (1981). The evolution of cooperation. Science, 211, 1390–1396.
  6. Baron, J. (1994). Nonconsequentialist decisions. Behavioral and Brain Sciences, 17, 1–10.
  7. Baron, J., & Spranca, M. (1997). Protected values. Organizational Behavior and Human Decision Processes, 70, 1–16.
  8. Batson, C. D., & Thompson, E. R. (2001). Why don't moral people act morally? Motivational considerations. Current Directions in Psychological Science, 10, 54–57.
  9. Baumard, N., André, J.-B., & Sperber, D. (2013). A mutualistic approach to morality: The evolution of fairness by partner choice. Behavioral and Brain Sciences, 36, 59–122.
  10. Boehm, C. (1999). Hierarchy in the forest. Cambridge, MA: Harvard University Press.
  11. Boyd, R., & Richerson, P. J. (1992). Punishment allows the evolution of cooperation (or anything else) in sizable groups. Ethology and Sociobiology, 13, 171–195.
  12. Boyd, R., & Richerson, P. J. (2005). The origin and evolution of cultures. New York, NY: Oxford University Press.
  13. Buss, D. M. (2006). Strategies of human mating. Psychological Topics, 15, 239–260.
  14. Camerer, C. (2003). Behavioral game theory. Princeton, NJ: Princeton University Press.
  15. Connor, R. C. (2007). Dolphin social intelligence: complex alliance relationships in bottlenose dolphins and a consideration of selective environments for extreme brain size evolution in mammals. Philosophical Transactions of the Royal Society B: Biological Sciences, 362, 587–602.
  16. Curtis, V., & Biran, A. (2001). Dirt, disgust, and disease: Is hygiene in our genes? Perspectives in Biology and Medicine, 44, 17–31.
  17. Cushman, F. A., Young, L., & Hauser, M. D. (2006). The role of reasoning and intuition in moral judgments: Testing three principles of harm. Psychological Science, 17, 1082–1089.
  18. Darwin, C. (1871). Descent of man, and selection in relation to sex. New York, NY: Appleton.
  19. Davies, N. B., Krebs, J. R., & West, S. A. (2012). An introduction to behavioural ecology (4th ed.). Hoboken, NJ: Wiley.
  20. de Waal, F. B. M. (1982). Chimpanzee politics. Baltimore, MD: Johns Hopkins University Press.
  21. Decety, J., Echols, S., & Correll, J. (2010). The blame game: The effect of responsibility and social stigma on empathy for pain. Journal of Cognitive Neuroscience, 22, 985–997.
  22. DeScioli, P., Asao, K., & Kurzban, R. (2012). Omissions and byproducts across moral domains. PLoS ONE, 7, e46963.
  23. DeScioli, P., Gilbert, S., & Kurzban, R. (2012). Indelible victims and persistent punishers in moral cognition. Psychological Inquiry, 23, 143–149.
  24. DeScioli, P., & Kurzban, R. (2009). Mysteries of morality. Cognition, 112, 281–299.
  25. DeScioli, P., & Kurzban, R. (2013). A solution to the mysteries of morality. Psychological Bulletin, 139, 477–496.
  26. Diamond, J. (2005). Collapse: How societies choose to fail or succeed. New York, NY: Penguin.
  27. Enquist, M., & Leimar, O. (1983). Evolution of fighting behaviour: Decision rules and assessment of relative strength. Journal of Theoretical Biology, 102, 387–410.
  28. Eskine, K. J., Kacinik, N. A., & Prinz, J. J. (2011). A bad taste in the mouth: Gustatory disgust influences moral judgment. Psychological Science, 22, 295–299.
  29. Fassihi, F. (2011, July 18). A craze for pooches in Iran dogs the morality police. Wall Street Journal.
  30. Federal Bureau of Investigation. (2012). Hate crime statistics 2012. Retrieved from http://www.fbi.gov/about-us/cjis/ucr/hate-crime/2012
  31. Fehr, E., Fischbacher, U., & Gächter, S. (2002). Strong reciprocity, human cooperation, and the enforcement of social norms. Human Nature, 13, 1–25.
  32. Fessler, D. M. (2010). Madmen: An evolutionary perspective on anger and men's violent responses to transgression. In M. Potegal, G. Stemmler, & C. Spielberger (Eds.), International handbook of anger (pp. 361–381). New York, NY: Springer.
  33. Fiske, A. P. (1992). The four elementary forms of sociality: Framework for a unified theory of social relations. Psychological Review, 99, 689–723.
  34. Gigerenzer, G. (2007). Gut feelings: The intelligence of the unconscious. New York, NY: Viking Press.
  35. Gil-White, F. (2001). Are ethnic groups biological “species” to the human brain? Current Anthropology, 42, 515–553.
  36. Global Commission on Drug Policy. (2011). Report of the Global Commission on Drug Policy. Retrieved from http://www.globalcommissionondrugs.org/wp-content/themes/gcdp_v1/pdf/Global_Commission_Report_English.pdf
  37. Graham, J., Haidt, J., & Nosek, B. A. (2009). Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology, 96, 1029–1046.
  38. Gray, K., & Wegner, D. M. (2009). Moral typecasting: Divergent perceptions of moral agents and moral patients. Journal of Personality and Social Psychology, 96, 505–520.
  39. Gray, K., Young, L., & Waytz, A. (2012). Mind perception is the essence of morality. Psychological Inquiry, 23, 101–124.
  40. Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293, 2105–2108.
  41. Haidt, J. (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108, 814–834.
  42. Haidt, J. (2003). The moral emotions. In R. J. Davidson, K. R. Scherer, & H. H. Goldsmith (Eds.), Handbook of affective sciences (pp. 852–870). Oxford, England: Oxford University Press.
  43. Haidt, J. (2007). The new synthesis in moral psychology. Science, 316, 998–1002.
  44. Haidt, J. (2012). The righteous mind. New York, NY: Vintage Books.
  45. Haidt, J., & Hersh, M. A. (2001). Sexual morality: The cultures and emotions of conservatives and liberals. Journal of Applied Social Psychology, 31, 191–221.
  46. Haidt, J., Koller, S. H., & Dias, M. G. (1993). Affect, culture, and morality, or is it wrong to eat your dog? Journal of Personality and Social Psychology, 65, 613–628.
  47. Haidt, J., McCauley, C., & Rozin, P. (1994). Individual differences in sensitivity to disgust: A scale sampling seven domains of disgust elicitors. Personality and Individual Differences, 16, 701–713.
  48. Haidt, J., Rozin, P., McCauley, C., & Imada, S. (1997). Body, psyche, and culture: The relationship between disgust and morality. Psychology and Developing Societies, 9, 107–131.
  49. Hamilton, W. (1964). The genetical evolution of social behaviour. Journal of Theoretical Biology, 7, 1–52.
  50. Harcourt, A. H. (1992). Coalitions and alliances: Are primates more complex than non-primates? In A. H. Harcourt & F. B. M. de Waal (Eds.), Coalitions and alliances in humans and other animals (pp. 445–471). New York, NY: Oxford University Press.
  51. Harcourt, A. H., & de Waal, F. B. M. (1992). Coalitions and alliances in humans and other animals. New York, NY: Oxford University Press.
  52. Hauser, M. D. (2006). Moral minds. New York, NY: HarperCollins.
  53. Hess, N. C., & Hagen, E. H. (2002). Informational warfare. Unpublished manuscript.
  54. Holekamp, K. E., Sakai, S. T., & Lundrigan, B. L. (2007). Social intelligence in the spotted hyena (Crocuta crocuta). Philosophical Transactions of the Royal Society B: Biological Sciences, 362, 523–538.
  55. Hutcherson, C. A., & Gross, J. J. (2011). The moral emotions: A social–functionalist account of anger, disgust, and contempt. Journal of Personality and Social Psychology, 100, 719–737.
  56. Kant, I. (1993). Grounding for the metaphysics of morals (J. W. Ellington, Trans.) Indianapolis, IN: Hackett. (Original work published 1785).
  57. Kaplan, H., & Hill, K. (1985). Food sharing among ache foragers: Tests of explanatory hypotheses. Current Anthropology, 26, 223–245.
  58. Knauft, B. M. (1987). Reconsidering violence in simple human societies: Homicide among the Gebusi of New Guinea. Current Anthropology, 28, 457–500.
  59. Krebs, D. (2005). The evolution of morality. In D. M. Buss (Ed.), The handbook of evolutionary psychology (pp. 747–771). Hoboken, NJ: Wiley.
  60. Kurzban, R. (2010). Why everyone (else) is a hypocrite: Evolution and the modular mind. Princeton, NJ: Princeton University Press.
  61. Kurzban, R., & DeScioli, P. (2013). Adaptationist punishment in humans. Journal of Bioeconomics, 15, 269–279.
  62. Kurzban, R., DeScioli, P., & Fein, D. (2012). Hamilton vs. Kant: Pitting adaptations for altruism against adaptations for moral judgment. Evolution and Human Behavior, 33, 323–333.
  63. Kurzban, R., DeScioli, P., & O'Brien, E. (2007). Audience effects on moralistic punishment. Evolution and Human Behavior, 28, 75–84.
  64. Kurzban, R., Dukes, A., & Weeden, J. (2010). Sex, drugs and moral goals: Reproductive strategies and views about recreational drugs. Proceedings of the Royal Society B: Biological Sciences, 277, 3501–3508.
  65. Kurzban, R., Tooby, J., & Cosmides, L. (2001). Can race be erased? Coalitional computation and social categorization. Proceedings of the National Academy of Sciences, USA, 98, 15387–15392.
  66. Levy, L. W. (1993). Blasphemy: Verbal offense against the sacred, from Moses to Salman Rushdie. New York, NY: Knopf.
  67. Lieberman, D., Tooby, J., & Cosmides, L. (2003). Does morality have a biological basis? An empirical test of the factors governing moral sentiments relating to incest. Proceedings of the Royal Society B: Biological Sciences, 270 (1517), 819–826.
  68. Lieberman, D., & Linke, L. (2007). The effect of social category on third party punishment. Evolutionary Psychology, 5, 289–305.
  69. Lieberman, D., Tooby, J., & Cosmides, L. (2007). The architecture of human kin detection. Nature, 445, 727–731.
  70. McCullough, M. E., Kurzban, R., & Tabak, B. A. (2013). Cognitive systems for revenge and forgiveness. Behavioral and Brain Sciences, 36, 1–15.
  71. Mesterton-Gibbons, M., Gavrilets, S., Gravner, J., & Akçay, E. (2011). Models of coalition or alliance formation. Journal of Theoretical Biology, 274, 187–204.
  72. Mikhail, J. (2007). Universal moral grammar: Theory, evidence and the future. Trends in Cognitive Sciences, 11, 143–152.
  73. Mikhail, J. (2009). Is the prohibition of homicide universal? Evidence from comparative criminal law. Brooklyn Law Review, 75, 497–515.
  74. Mock, D. W. (2004). More than kin and less than kind. Cambridge, MA: Harvard University Press.
  75. Moore, G. E. (1903). Principia ethica. New York, NY: Barnes & Noble.
  76. Morgan, D. (2007, May 4). U.S. Marines unlikely to report civilian abuse: Study. Reuters.
  77. Nowak, M. A., & Sigmund, K. (2005). Evolution of indirect reciprocity. Nature, 437, 1291–1298.
  78. Parker, G. A. (1974). Assessment strategy and the evolution of fighting behaviour. Journal of Theoretical Biology, 47, 223–243.
  79. Petersen, M. B. (2013). Moralization as protection against exploitation: Do individuals without allies moralize more? Evolution and Human Behavior, 34, 78–85.
  80. Acemoglu, D., & Robinson, J. A. (2012). Why nations fail. New York, NY: Crown.
  81. Robinson, P. H., & Darley, J. M. (1995). Justice, liability, and blame: Community views and the criminal law. Boulder, CO: Westview Press.
  82. Robinson, P. H., & Kurzban, R. (2007). Concordance and conflict in intuitions of justice. Minnesota Law Review, 91, 1829–1907.
  83. Rozin, P. (1999). The process of moralization. Psychological Science, 10, 218–221.
  84. Rozin, P., Lowery, L., Imada, S., & Haidt, J. (1999). The CAD triad hypothesis: A mapping between three moral emotions (contempt, anger, disgust) and three moral codes (community, autonomy, divinity). Journal of Personality and Social Psychology, 76, 574–586.
  85. Russell, P. S., & Giner-Sorolla, R. (2013). Bodily moral disgust: What it is, how it is different from anger, and why it is an unreasoned emotion. Psychological Bulletin, 139, 328–351.
  86. Schelling, T. C. (1960). The strategy of conflict. Cambridge, MA: Harvard University Press.
  87. Schnall, S., Haidt, J., Clore, G. L., & Jordan, A. H. (2008). Disgust as embodied moral judgment. Personality and Social Psychology Bulletin, 34, 1096–1109.
  88. Schülke, O., Bhagavatula, J., Vigilant, L., & Ostner, J. (2010). Social bonds enhance reproductive success in male macaques. Current Biology, 20, 2207–2210.
  89. Sell, A., Tooby, J., & Cosmides, L. (2009). Formidability and the logic of human anger. Proceedings of the National Academy of Sciences, USA, 106, 15073–15078.
  90. Seyfarth, R. M., & Cheney, D. L. (2012). The evolutionary origins of friendship. Annual Review of Psychology, 63, 153–177.
  91. Shaw, A. (2013). Beyond “to share or not to share”: The impartiality account of fairness. Current Directions in Psychological Science, 22, 413–417.
  92. Shweder, R. A., Mahapatra, M., & Miller, J. G. (1987). Culture and moral development. In J. Kagan & S. Lamb (Eds.), The emergence of morality in young children (pp. 1–83). Chicago, IL: University of Chicago Press.
  93. Shweder, R. A., Much, N. C., Mahapatra, M., & Park, L. (1997). The “Big Three” of morality (autonomy, community, and divinity), and the “Big Three” explanations of suffering. In A. M. Brandt & P. Rozin (Eds.), Morality and health (pp. 119–169). New York, NY: Routledge.
  94. Sinnott-Armstrong, W. (2006). Consequentialism. Stanford Encyclopedia of Philosophy. Retrieved from http://plato.stanford.edu/entries/consequentialism/
  95. Smith, A. (1759). The theory of moral sentiments. London, England: A. Millar, A. Kincaid, & J. Bell.
  96. Snyder, G. H. (1984). The security dilemma in alliance politics. World Politics, 36, 461–495.
  97. Srivastava, J., Espinoza, F., & Fedorikhin, A. (2009). Coupling and decoupling of unfairness and anger in ultimatum bargaining. Journal of Behavioral Decision Making, 22, 475–489.
  98. Sunstein, C. (2005). Moral heuristics. Behavioral and Brain Sciences, 28, 531–543.
  99. Talwar, V., Lee, K., Bala, N., & Lindsay, R. C. L. (2002). Children's conceptual knowledge of lying and its relation to their actual behaviors: Implications for court competence examinations. Law and Human Behavior, 26, 395–415.
  100. Tetlock, P. E. (2000). Coping with trade-offs: Psychological constraints and political implications. In S. Lupia, M. McCubbins, & S. Popkin (Eds.), Elements of reason: Cognition, choice, and the bounds of rationality (pp. 239–263). Cambridge, England: Cambridge University Press.
  101. Tetlock, P. E. (2003). Thinking the unthinkable: Sacred values and taboo cognitions. Trends in Cognitive Science, 7, 320–324.
  102. Thomas, C. W., Cage, R. J., & Foster, S. C. (1976). Public opinion on criminal law and legal sanctions: An examination of two conceptual models. Journal of Criminal Law and Criminology, 67, 110–116.
  103. Thomas, K. A., DeScioli, P., Haque, O. S., & Pinker, S. (2014). The psychology of coordination and common knowledge. Journal of Personality and Social Psychology, 107, 657–676.
  104. Tooby, J., & Cosmides, L. (2008). The evolutionary psychology of the emotions and their relationship to internal regulatory variables. In M. Lewis, J. M. Haviland-Jones, & L. F. Barrett (Eds.), Handbook of emotions (3rd ed., pp. 114–137). New York, NY: Guilford Press.
  105. Tooby, J., & Cosmides, L. (2010). Groups in mind: The coalitional roots of war and morality. In H. Høgh-Olesen (Ed.), Human morality and sociality: Evolutionary and comparative perspectives (pp. 91–234). New York, NY: Palgrave MacMillan.
  106. Trivers, R. L. (1971). The evolution of reciprocal altruism. Quarterly Review of Biology, 46, 35–57.
  107. Tybur, J. M., Lieberman, D., & Griskevicius, V. (2009). Microbes, mating, and morality: Individual differences in three functional domains of disgust. Journal of Personality and Social Psychology, 97, 103–122.
  108. Tybur, J. M., Lieberman, D., Kurzban, R., & DeScioli, P. (2013). Disgust: Evolved function and structure. Psychological Review, 120, 65–84.
  109. Weeden, J. (2003). Genetic interests, life histories, and attitudes towards abortion (Unpublished doctoral dissertation). University of Pennsylvania, Philadelphia.
  110. Weeden, J., & Kurzban, R. (2014). The hidden agenda of the political mind. Princeton, NJ: Princeton University Press.
  111. Wiessner, P. (2005). Norm enforcement among the Ju/'hoansi Bushmen: A case of strong reciprocity? Human Nature, 16, 115–145.
  112. Williams, G. C. (1966). Adaptation and natural selection. Princeton, NJ: Princeton University Press.
  113. Wilson, J. Q. (1993). The moral sense. New York, NY: Free Press.
  114. World Health Organization. (2011). Unsafe abortion: Global and regional estimates of the incidence of unsafe abortion and associated mortality in 2008 (6th ed.). Geneva, Switzerland: Author.