12 The retribution heuristic

Stephen Koppel1 and Mark R. Fondacaro2

Introduction

Among the four generally accepted justifications of punishment—retribution, rehabilitation, incapacitation, and deterrence—retribution is aberrant in that it rejects consequential reasoning. Although there are various conceptions of retribution,3 each holds that wrongdoing should be punished proportionately, regardless of practical consequences. “Do justice though the heavens may fall,” as a proponent of retributivism would have it. Despite efforts to root criminal justice policy in empiricism, this peculiar feature of retribution has been left largely unchallenged. This is surprising, since retribution’s indifference to consequences places it beyond empirical scrutiny: without a concern for consequences, there is no way to measure the effectiveness of retributive punishment. For some perspective on how unusual this is in the realm of policy-making, consider your reaction to the following proposals: a health care intervention wholly unconcerned with medical outcomes; a fiscal plan with no tangible economic indicators of success; a military intervention whose purpose is not to improve national security but to give another country what it deserves, no matter the result. But perhaps retribution’s abiding relevance owes to the fact that retributive urges—the rush of emotion a person feels when he or she has been wronged—are so familiar to us all. That these emotions are universal, however, also points to the possibility that retribution is more concerned with consequences than it at first seems.

Cognitive heuristics are mental shortcuts that enable quick and efficient decision-making. Several converging lines of research suggest that retribution reflects one of these rule-of-thumb heuristics—what we term a retribution heuristic—that answers the question of how to respond to wrongdoing. Such findings belie the claim that retribution is unconcerned with consequences: Like all hard-wired cognitive heuristics, the retribution heuristic can be assumed to reflect an adaptive logic that at one time enhanced survival. What’s more, though such cognitive heuristics can generally be relied upon to produce sound judgments, research shows that they can lead us to systematically commit errors of judgment known as cognitive biases. Perhaps this is because many rules of thumb were forged under environmental conditions that differ significantly from today’s. This raises the question: Can the retribution heuristic also be shown to produce errors of judgment?

Section I of this chapter will provide neurobiological and psychological evidence for a retribution heuristic. Section II will explore recent discoveries in psychology showing that a number of cognitive heuristics produce errors of judgment. Section III will situate the retribution heuristic among these other cognitive biases and show its tendency to produce fundamental retribution errors that are shaped and influenced by cultural and social factors over time. Section IV will draw conclusions about the link between the retribution heuristic and the imposition of overly harsh, unjust, and ineffective criminal sanctions.

I The retribution heuristic

In Sociobiology: The New Synthesis, E. O. Wilson, a renowned evolutionary biologist, advances the controversial hypothesis that our sense of morality is rooted in biology and has been shaped by natural selection (1975). But if biology lies behind morality, then what of the great cerebrations of ethical philosophy? Wilson argues that ethical philosophy is limited in that it treats the human mind as a “black box” and then assumes that emanations from it reflect a “direct awareness of right and wrong that it can formalize by logic.” The problem with this, he claims, is that ethical philosophers fail to consider the “biological or ecological” implications of their arguments. His solution: snatch ethics temporarily from the hands of philosophers and “biologize” it. While philosophers continue to cogitate about right and wrong, scientists have been making progress toward prying open the black box to reveal morality’s neurobiological and psychological underpinnings.

The language of disgust is commonly used to describe moral transgressions: Immoral acts leave us feeling dirty; unfair price tags are obscene; the legal doctrine requiring a plaintiff to enter a suit free of unfair conduct is “clean hands.” Could this be something more than metaphor? Hypothesizing that the clean/dirty metaphor indicates a connection between moral disgust and more primitive forms of disgust related to toxicity and disease, researchers compared the facial motor activity elicited by unpleasant tastes, by basic disgust at photographs of contaminants, and by moral disgust at unfair treatment in an economic game. They found that all three evoked an identical facial response: activation of the levator labii muscle region of the face (Chapman et al. 2009).

The results suggest a biological link between disgust and moral judgments. Researchers believe that this can be explained by an evolutionary phenomenon known as exaptation, whereby a physical trait previously shaped by evolution is co-opted for a new use (Gould and Vrba 1982). A classic example of exaptation is the shift in use of bird feathers from temperature regulation to their present role in flight. Evolutionary biologists posit that exaptations have also occurred at the psychological level. The exaptation of language out of brain structures that previously served a different function is one plausible candidate (Gould 1991). That the faculty for moral evaluation appears to have been exapted from disgust at environmental toxins suggests that the old machinery now plays a similar protective role in a new context—judging conduct to be right or wrong in order to avoid future harm (Hoffman 2014). This raises the question: If through evolutionary pressures we have become hard-wired to recoil from certain environmental toxins, are we also hard-wired to have moral aversions to certain conduct?

With advanced neuroimaging technology, cognitive psychologists have been able to probe the underlying biology of moral decision-making. One such investigation, carried out by Joshua Greene and Jonathan Cohen, was designed to reveal the parts of the brain engaged when a person faces a moral dilemma: the runaway trolley problem. The trolley problem generally includes two scenarios: Scenario #1, in which subjects are asked whether they would pull a lever to divert a trolley that fatally threatens five people onto another track where it threatens only one; and Scenario #2, which is identical except that subjects are asked whether they would physically push a person in front of the trolley for the same purpose. Typically, people approve of pulling the lever in order to save five lives in exchange for one but disapprove of pushing a person despite the identical result. What accounts for the difference? When asked, participants have difficulty justifying their inconsistent responses.

With the aid of neuroimaging, though, researchers can examine the neurobiology that underlies such moral reasoning. By doing so, they have identified several areas of the ventromedial prefrontal cortex (VMPFC), the medial frontal and posterior cingulate gyri, and the insula—regions associated with emotion—that are significantly more active during contemplation of Scenario #2 (Greene et al. 2004). That these areas of the brain are more engaged in Scenario #2 suggests that the thought of pushing a person triggers an intuitive moral aversion that overrides the cost-benefit reasoning displayed in Scenario #1. The researchers theorized that “the thought of pushing someone in front of a trolley elicits a prepotent, negative emotional response that drives moral disapproval.” In the absence of a prepotent emotional response, though, they found that “utilitarian reasoning” prevails. Further support for this view came from the same experiment conducted on individuals with frontotemporal dementia (FTD), a disorder characterized by damage to the emotional centers of the brain (Mendez, Anderson, and Shapira 2005). Like healthy individuals, subjects with FTD approved of pulling the lever in order to save five lives. However, unlike healthy individuals, they exhibited little reluctance to push a person in order to achieve the same goal: Without the countervailing brain areas responsible for moral aversion, cost-benefit reasoning prevailed.

In research conducted with babies, cognitive psychologists have found evidence that humans enter the world with a sophisticated moral schema. Though babies clearly lack the capacity to communicate moral judgments, their preferences can be discerned indirectly by measuring attention. In a study conducted by a team of Yale psychologists, babies were presented with a puppet show in which a climbing puppet struggled to ascend a hill (Hamlin, Wynn, and Bloom 2007). On some occasions a helper puppet came along and pushed the climber up; on others a hinderer puppet came along and swatted the climber down. Then, researchers placed the helper and hinderer puppets in front of the babies. When babies expressed their preference by way of choosing a puppet to play with, they were significantly more likely to prefer the helper puppet. The results, according to the investigators, suggest that the capacity of individuals to express moral preferences on the basis of social interactions is “universal and unlearned.”

To determine whether a mental process is rational and under conscious control, as opposed to intuitive, automatic, and outside conscious awareness, cognitive psychologists use a robustly validated experimental method called cognitive loading. Under a heavy cognitive load—such as holding a string of arbitrary numbers in working memory—people show a diminished capacity for controlled mental processes. Thus, if performance suffers while carrying a cognitive load, controlled processes can be inferred; if performance remains unchanged, an automatic process is implicated. In an experiment designed to test whether moral judgment is affected by cognitive load, researchers found no difference in participants’ performance (Haidt 2012). These results suggest an important role for automatic mental processes in the evaluation of moral problems.

In the struggle for survival among animals, what can be characterized as punishment is routinely imposed by animals that are subjected to harm or threatened with harm. From an evolutionary perspective, this type of eye-for-an-eye strategy makes sense. In certain species, though, punishment goes beyond eye-for-an-eye retaliation: It’s imposed by third parties in the presence of moral transgressions. In humans, this model of third-party punishment parallels formal systems of criminal justice, where members of the community (judges and/or jurors) sit in judgment of a defendant who has committed an offense against some other member of society or a social norm. Evolutionary biologists observe that third-party punishment, where “animal A punishes animal B for violating norm C,” occurs exclusively in highly cooperative species. For example, chimpanzees attack allies that do not support them in third-party conflicts, and queen naked mole rats will attack workers that they judge to be lazy. Punishment of this kind, referred to as altruistic punishment, has been identified as a key factor accounting for the extraordinary degree of cooperation displayed by humans (Seymour, Singer, and Dolan 2007). According to game theorists, this is because altruistic punishment mitigates the free-rider problem—caused by individuals reaping the rewards of cooperation without contributing—and thereby increases the likelihood that a cooperative equilibrium will emerge (Yamagishi and Sato 1986).
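
The game-theoretic logic can be made concrete with a toy simulation. The sketch below is our own illustration, with payoff parameters chosen for clarity rather than taken from the studies cited above: it plays one round of a public goods game among four players, with and without costly punishment of free-riders.

```python
# Toy public goods game. The payoff parameters are illustrative
# assumptions, not values from the studies cited in the text.

ENDOWMENT = 10     # each player's stake per round
MULTIPLIER = 1.6   # growth factor applied to the pooled contributions
PUNISH_COST = 1    # what a cooperator pays to sanction one free-rider
PUNISH_FINE = 4    # what the sanctioned free-rider loses

def play_round(strategies, altruistic_punishment):
    """strategies: list of 'C' (cooperate) or 'D' (defect). Returns payoffs."""
    n = len(strategies)
    pot = sum(ENDOWMENT for s in strategies if s == "C")
    share = pot * MULTIPLIER / n
    # Free-riders keep their endowment yet still collect the shared return.
    payoffs = [(0 if s == "C" else ENDOWMENT) + share for s in strategies]
    if altruistic_punishment:
        for i, s in enumerate(strategies):
            for j, t in enumerate(strategies):
                if s == "C" and t == "D":
                    payoffs[i] -= PUNISH_COST  # the punisher pays a cost...
                    payoffs[j] -= PUNISH_FINE  # ...to impose a larger fine
    return payoffs

group = ["C", "C", "C", "D"]
print(play_round(group, altruistic_punishment=False))  # [12.0, 12.0, 12.0, 22.0]
print(play_round(group, altruistic_punishment=True))   # [11.0, 11.0, 11.0, 10.0]
```

Without punishment, defection strictly dominates cooperation; once cooperators can sanction at a personal cost, defection no longer pays. This is the sense in which altruistic punishment makes a cooperative equilibrium more likely.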

Using a laboratory task designed to elicit acts of altruistic punishment among human volunteers, cognitive psychologists appear to have identified the complex processes driving this behavior (Knutson 2004). During the task, subjects played a series of one-shot games involving real money, each with a different partner. In each game, subjects would give their partners money, which was then quadrupled by the investigators. The recipients were then given a chance to reciprocate. If they failed to do so, subjects could choose to administer a monetary punishment against the non-cooperator. During the act of imposing punishment, the subjects’ brains were scanned. The researchers found that imposing punishment activates a subcortical region of the brain called the striatum. The striatum has been shown to be associated with reward-processing in similar studies looking at the areas of the brain activated during anticipation of non-social rewards such as monetary gains and pleasant tastes. Thus, the researchers interpreted the findings to indicate that punishing a defector activates brain regions related to feeling good about punishment, rather than bad about a violation.
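
To see why such punishment is called altruistic, consider the payoff structure of a single round. The sketch below is a simplified reconstruction: the quadrupling of transfers follows the task described above, while the endowments and the one-to-two punishment exchange rate are our illustrative assumptions.

```python
# Simplified round of the trust game with costly punishment. The
# quadrupling of transfers follows the task described above; the
# endowments and punishment exchange rate are illustrative assumptions.

ENDOWMENT = 10      # assumed starting amount for each player
QUADRUPLE = 4       # transfers to the partner are quadrupled
COST_PER_POINT = 1  # assumed cost to the punisher per punishment point
FINE_PER_POINT = 2  # assumed deduction to the partner per point

def trust_round(sent, returned, punish_points=0):
    """Returns (subject payoff, partner payoff) for one round."""
    partner_pot = ENDOWMENT + sent * QUADRUPLE
    subject = ENDOWMENT - sent + returned - punish_points * COST_PER_POINT
    partner = partner_pot - returned - punish_points * FINE_PER_POINT
    return subject, partner

# The partner defects, keeping the entire quadrupled transfer.
print(trust_round(sent=10, returned=0))                    # (0, 50)
# Punishing the defector leaves the subject strictly worse off.
print(trust_round(sent=10, returned=0, punish_points=10))  # (-10, 30)
```

Materially, the punisher only loses by punishing; the reward-related striatal activity observed during the act is what makes the behavior intelligible.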

What are the implications of these findings? The picture that begins to emerge is of a highly complex, biologically hard-wired network of regions in the brain, shaped by evolution so as to produce automatic moral assessments, and when a moral wrong has occurred, to engender retributive emotions that promote a punishment response. Like all other products of evolutionary pressure, the utility of this intuitively guided behavior is readily identifiable: to increase fitness for survival by deterring future harm and fostering cooperation.

II Folk reasoning and inevitable bias and mistake in judging others

Without understanding Newton’s laws of physics, a center fielder in pursuit of a fly ball can quickly calculate its trajectory. This intuitive grasp of physical laws is an example of folk physics. Folk physics serves us quite well. In many domains, though, our intuitions about the physical world fail us miserably. In fact, quantum mechanics runs so counter to intuition that one of its leading scientists famously quipped, “If you think you understand quantum mechanics, then you don’t understand quantum mechanics” (Dawkins 2009). And yet it works: Quantum mechanics has been demonstrated through empirical testing to be one of the most reliably accurate theories in the history of science.

A growing body of evidence suggests a similar phenomenon in the psychological domain of common sense or folk reasoning. Research suggests a gap between folk reasoning and reality. Like folk physics, folk reasoning appears to be useful—up to a point. In some situations, however, it has been shown to predictably produce errors of judgment. The following are a number of empirically studied examples of cognitive heuristics that are based on folk reasoning and can lead to erroneous and biased judgments.

The first example is the confirmation bias, which results from the tendency to search for or interpret information in a way that confirms one’s preconceptions. Put differently, people tend to focus on results that support their position and ignore the rest. For example, researchers demonstrated a confirmation bias in subjects who held strong views on capital punishment (Lord, Ross, and Lepper 1979). In that study, subjects were divided into groups based on whether they favored or opposed capital punishment and were then given descriptions of two studies—one supporting and one opposing capital punishment. They were then asked whether their opinions had been swayed by the studies. Next, they were given a more detailed account of each study, including whether it supported or opposed capital punishment.

Though subjects were initially influenced by the descriptions, their original preconceived opinions re-emerged upon learning more about the research. When asked to justify their views, the subjects invoked the supporting research. In response to questions regarding contradictory research, though, the subjects were apt to cast doubt on its validity.

A similar study was conducted during the 2004 presidential election (Westen et al. 2006). Subjects were given sets of three statements, one set for each of three individuals: the Republican candidate, George Bush; the Democratic candidate, John Kerry; and Tom Hanks. Within each set, the first two statements were patently contradictory, and the third attempted to reconcile the conflict between them. For example, the first statement by George Bush indicated support for Kenneth Lay; the second was a clear criticism of Kenneth Lay; the last attempted to give a reason for the contradiction, for instance, that George Bush was betrayed by Kenneth Lay or was sincerely disappointed by his alleged conduct at Enron. The subjects were significantly more likely to find the statements of the candidate from the opposing party inconsistent, whereas statements from the candidate of the party with which they identified were seemingly reconciled by the third statement.

Another cognitive heuristic with the potential to bias decision-making is the framing bias. Research in this area has shown how presenting identical options in different formats can significantly affect the decisions people make. Experiments performed by Tversky and Kahneman, for instance, showed that framing influenced participants’ responses to questions regarding disease prevention (1981). In that experiment, subjects were informed that an outbreak of a rare Asian disease was imminent and would result in the death of 600 people. They were then asked to choose between two alternative programs to combat the disease. The first group was presented two programs: option 1, in which 200 people will be saved; or option 2, in which there is a one-third probability that 600 people will be saved and a two-thirds probability that no one will be saved. The second group was presented two programs: option 1, in which 400 people will die; or option 2, in which there is a one-third probability that no one will die and a two-thirds probability that 600 people will die.

Plainly, saving 200 lives is the same as 400 people dying. Even so, when participants were asked to choose a program, option 1 was favored over option 2 by 72 percent of participants when it was presented with positive framing (“saves lives”), but by only 22 percent when the same choice was presented with negative framing (“people will die”). Simply framing a decision in either positive or negative terms had a significant effect on the outcome.
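
A quick expected-value check makes the equivalence explicit:

```python
# Expected outcomes in Tversky and Kahneman's disease problem, computed
# over the 600 lives at risk. The two frames describe identical options.

TOTAL = 600

saved_opt1 = 200                      # positive frame, certain option
saved_opt2 = (1/3) * 600 + (2/3) * 0  # positive frame, gamble: 200 expected

dead_opt1 = 400                       # negative frame, certain option
dead_opt2 = (1/3) * 0 + (2/3) * 600   # negative frame, gamble: 400 expected

assert saved_opt1 == TOTAL - dead_opt1  # the certain options are identical
assert saved_opt2 == TOTAL - dead_opt2  # the gambles are identical in expectation
```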

The overconfidence bias is another common phenomenon anchored in folk reasoning, in which individuals subjectively overestimate their abilities relative to objective criteria. This bias has been confirmed in various ways; for example, in one survey in which drivers were asked to rate their driving competence relative to others, nearly 93 percent rated themselves above average (Svenson 1980).

The availability bias is a cognitive tendency in which people estimate the frequency of an event by the ease with which examples come to mind. For example, in a study in which subjects were asked to judge whether a word randomly chosen from an English text was more likely to begin with K or to have K as its third letter, significantly more participants believed that there are more words that begin with the letter K (Tversky and Kahneman 1973). According to the researchers, this is because words like kid, kick, and kiss come to mind more readily than ask, bike, and joke, and so we are biased toward thinking that there are more of them. Tversky and Kahneman have shown that this availability heuristic produces errors of judgment in a variety of settings.
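
The intuition is easy to audit against a word list. A minimal sketch, assuming the standard Unix dictionary at /usr/share/dict/words (any large English word list would do):

```python
# Count words beginning with K versus words with K as the third letter.
# Assumes the standard Unix word list is present at this path.

with open("/usr/share/dict/words") as f:
    words = [w.strip().lower() for w in f if len(w.strip()) >= 3]

k_first = sum(1 for w in words if w[0] == "k")
k_third = sum(1 for w in words if w[2] == "k")
print(f"K first: {k_first}; K third: {k_third}")
```

Tversky and Kahneman report that typical English text contains roughly twice as many words with K in the third position as words beginning with K; a word list typically points in the same direction.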

Finally, the fundamental attribution error (FAE) is the tendency, when forming impressions of others, to overestimate the importance of dispositional factors (e.g., aggressiveness) in the person being judged and to underestimate the importance of the situation in which the observed behavior occurs (Ross 1977). Using a mock TV quiz show, Ross and his colleagues demonstrated this tendency (Ross, Amabile, and Steinmetz 1977). In that study, participants were assigned to play the role of questioner, contestant, or spectator. In front of the spectators, the questioners were instructed to come up with 10 questions from their own general knowledge to pose to the contestants. Contestants answered roughly 40 percent of the questions correctly. When spectators were asked to rate the questioners’ and the contestants’ general knowledge, they rated the questioners as above average and the contestants as below average. In doing so, they attributed the results to an essential characteristic rather than to the important fact that the questioners held a distinct advantage—naturally, they knew the answers to their own questions.

The systematic bias in judgment detailed above raises concerns regarding the consequences of folk decision-making. If folk reasoning is so shot through with bias, then what type of bad outcomes might it produce? And what happens when people learn of the error in their judgment? Do they simply self-correct? One reason to believe that the implications of folk reasoning are even worse than they appear comes from the now famous chicken-claw experiment conducted on split-brain patients: people whose corpus callosum, the channel of communication between the two brain hemispheres, is severed. In the experiment, researchers showed a split-brain patient two pictures: The patient’s left hemisphere saw a chicken claw, and his right saw a snow scene. Then, from an assortment of pictures that were visible to both hemispheres, the patient was asked to choose two that matched. Appropriately, he chose a chicken to go with the claw and a shovel to go with the snow. Next, the researchers asked him to explain why he chose those items. “The chicken goes with the chicken claw,” he replied. This was no surprise given that his left hemisphere, the region responsible for language, had seen the chicken claw and was able to explain the decision to choose a chicken to go along with it. The left hemisphere, though, was completely unaware of the snow scene, which was seen only by the right hemisphere. Still, when asked to articulate why he chose the shovel, the man’s left hemisphere simply fabricated an answer: “And you need a shovel to clean out the chicken shed.” His brain had done something astonishing: It created a coherent story in order to justify the decision it had unconsciously made. For this reason, Dr. Gazzaniga, the scientist who designed the experiment, dubbed the region of the left hemisphere responsible for the story the “interpreter” (Gazzaniga 1989).

On the basis of his work with split-brain patients, Gazzaniga characterizes the left hemisphere as the brain’s “story-teller,” whose role is to create a coherent narrative to support one’s perception of behaving intentionally and purposefully, even when that’s not the case. These stories that people tell themselves, he notes, are likely to be based on “post-hoc explanations using post-hoc observations with no access to non-conscious processing” (Gazzaniga 2011). With regard to the various cognitive heuristics, this ability of the interpreter to read intentionality into decisions and justify them would seem to make honest self-assessment and correction unlikely.

What’s more, research by Heider and Simmel (1944) suggests a similar tendency to read intentionality into the behavior of other people. For example, they found that when subjects were asked to describe the movement of geometric figures (two triangles, one large and one small, and a small circle), they typically characterized the figures as animated persons. And when asked to interpret the movement of the figures as human actions, they typically ascribed internal motives, saying things like the big triangle is “bullying” the little triangle. Thus, like the stories the “interpreter” tells about oneself, there seems to be a tendency for people to read intentionality into others’ behavior.

Together, these two tendencies—to concoct intentional and coherent stories about one’s own decisions and those of others—raise serious concerns with respect to the retribution heuristic and the judgments to which it leads. That’s because one way to determine proportional punishment—and the approach our modern criminal justice system has adopted—is to measure culpability by retrospectively reading the mental state of the accused at the moment of the crime. Specifically, the question of criminal liability under this approach turns on whether a person’s mental state (“mens rea”) actuated a criminal act (“actus reus”). The Model Penal Code (“MPC”), for instance, ascribes criminal responsibility only if it can be shown that a criminal act was carried out with a negligent, reckless, knowing, or purposive mind, with each step up representing an increase in the amount of punishment an individual deserves. Thus, under the MPC the difference between a crime committed negligently and one committed recklessly or purposefully can amount to years of incarceration. That we are prone to read intentionality into others’ behavior raises serious doubts about the accuracy and reliability of such judgments of criminal responsibility.4 And that we are prone to tell ourselves make-sense stories about such judgments, blind to the non-conscious processes behind them, raises concerns about our capacity to identify and correct our own automatic and, at times, erroneous decision-making. What type of errors might we expect the retribution heuristic, which underlies this assessment of criminal responsibility, to produce? It is to this question that we turn in the following section.

III Fundamental retribution errors

As we have seen, punishment appears to be an intuitive response to moral transgressions (Darley 2009). Given that intuitive folk reasoning has been shown to produce various errors of judgment, the following concern arises: Does folk reasoning lead us to make fundamental retribution errors? On retribution’s own terms, establishing an error of judgment is difficult. That’s because the retributive justification is facially indifferent to punishment’s effects, so there is no empirical criterion against which to judge outcomes. And the only remaining criterion by which to judge the appropriateness of retributive punishment—whether punishment is commensurate with a crime—is vague and difficult, if not impossible, to assess with an acceptable degree of confidence. Nevertheless, if it can be shown that punishment severity is influenced by factors wholly irrelevant to any reasonable assessment of proportional deserts, this would constitute persuasive evidence of fundamental retribution errors (Dripps 2003). The following research, showing how irrelevant factors can influence judgments of punishment, provides such evidence.

Relying on evidence that the cognitive process underlying moral disgust stems from physical disgust, researchers hypothesized that moral judgment could be affected by taste perception. In a study testing this idea, participants were given a sweet beverage, a bitter beverage, or water and then asked to rate a variety of moral transgressions (Eskine, Kacinik, and Prinz 2011). Results showed a relationship between taste and moral judgment: participants who drank the bitter beverage judged the transgressions significantly more harshly. In a similar study, researchers sprayed a nearby trash can with commercially available fart spray to induce disgust in their participants (Schnall et al. 2008). Investigators found that participants exposed to strong and mild stink conditions made harsher judgments on a variety of moral transgressions (e.g., sex between first cousins) than did participants in a no-spray condition.

Using the framework of terror management theory, which posits that reminders of mortality cause individuals to invest in and defend their cultural beliefs, the effect of mortality salience on punishment severity has been investigated in several studies. Postulating that law constitutes a central part of a judge’s worldview, one study conducted with 22 municipal court judges examined the effect of mortality salience on judgments against a transgressor (Rosenblatt et al. 1989). The judges were presented with a hypothetical defendant held on a charge of prostitution. Following a mortality salience induction, the judges imposed an average bond of $455 on the defendant, compared to an average bond of $50 from judges in the control group.

A similar study was conducted in which students were first given an assessment of their attitude toward prostitution and then asked to set a bond amount for the alleged prostitute (Rosenblatt et al. 1989). Among those who were subjected to heightened mortality salience, only those with a relatively negative view of prostitution recommended particularly high bonds.

Factors that influence neurotransmitter levels can also affect the strength of individuals’ retaliatory responses to a perceived transgression. For example, serotonin, a neurotransmitter involved in communication between brain cells, is thought to play a significant role in emotional responses to unfairness. Researchers posit that serotonin does so by modulating impulsivity via emotional regulation mechanisms. The function of these emotional mechanisms during social interactions has been studied with the Ultimatum Game (Nowak, Page, and Sigmund 2000). The rules of the game are simple: One player (the offeror) is given a sum of money and proposes a split with a second player (the responder). If the responder accepts, both players are paid according to the proposed split; if the responder rejects, neither player is paid.

Researchers investigated the effects of manipulating serotonin on rejection behavior in the Ultimatum Game. Using a procedure called acute tryptophan depletion, investigators temporarily lowered the serotonin levels of the subjects, who then played the role of responder. The researchers found that simply by temporarily lowering participants’ serotonin levels they could increase the rate at which participants rejected unfair offers, heightening their retaliatory response to perceived unfairness.
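
A minimal sketch of the game makes the retaliatory logic explicit. Modeling the responder as applying a rejection threshold, and a serotonin dip as raising that threshold, is our illustrative assumption rather than the researchers' model:

```python
# Ultimatum Game with a threshold responder. Treating lowered serotonin
# as a raised rejection threshold is an illustrative assumption.

def ultimatum(pot, offer, rejection_threshold):
    """Returns (offeror payoff, responder payoff)."""
    if offer >= rejection_threshold:
        return pot - offer, offer  # accepted: both players are paid
    return 0, 0                    # rejected: neither player is paid

print(ultimatum(pot=10, offer=2, rejection_threshold=2))  # (8, 2): accepted
print(ultimatum(pot=10, offer=2, rejection_threshold=4))  # (0, 0): rejected
```

Rejecting a low offer costs the responder money; the willingness to pay that price is the retaliatory response that the tryptophan-depletion manipulation appeared to strengthen.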

Evidently we do judge books by their covers. For example, researchers have consistently found an attractiveness-leniency effect: people recommend less severe punishment for defendants with physically attractive features (Sigall and Ostrove 1975). This effect has been found in experimental studies, where attractiveness is manipulated through the use of defendants’ photos, as well as in the courtroom, where observers were asked to rate the attractiveness of defendants standing trial (Stewart 1985). Similarly, researchers have found that people recommend harsher punishments for defendants perceived to have stereotypically black facial features. In one study, participants were asked to rate the degree to which images of black defendants who were convicted of murdering white victims had stereotypically black faces. After controlling for other factors, researchers found that defendants whose appearance was perceived as more stereotypically black were more likely to have been executed: only 24 percent of black defendants classified in the bottom half of the black-stereotype scale were executed, whereas 58 percent of those classified in the top half were (Eberhardt et al. 2006).

With legal realism in mind—particularly the idea that a judge’s decision is likely to turn on something as arbitrary as what he or she ate for breakfast—investigators sought to test whether a judge’s food breaks influenced his or her decision to grant or deny parole (Danziger, Levav, and Avnaim-Pesso 2011). Relying on earlier research demonstrating that repeated judgments tend to deplete executive functioning, investigators hypothesized that food breaks would restore executive decision-making capacity. Granting a request for release was further hypothesized to require heightened executive functioning compared to a continuation of the status quo (i.e., ongoing detention). In the study, two food breaks were used to divide a judge’s day into three deliberation sessions. Researchers found that during the course of a typical session the percentage of rulings to grant parole dropped from roughly 65 percent to zero toward the end of a session and then returned abruptly to 65 percent after a break.

Glitchy computers and websites are not held morally responsible for the psychic harm that they wreak. That is because they lack the capacity to choose. Humans, on the other hand, are held morally responsible for their actions for the opposite reason: We are perceived to have free will. But as recent discoveries in neuroscience cast more and more doubt on humans’ capacity for free choice, researchers have begun to question whether moral determinations of punishment are influenced by the strength of a person’s conviction in the notion of free will. To test the relationship between free-will belief and retributive punishment, researchers compared the severity of punishment chosen by participants after reading a passage in which free will is rejected in favor of a mechanistic view of human behavior against that chosen by a control group that read a neutral passage (Shariff et al. 2014). Researchers found that the participants whose belief in free will had been weakened by the anti-free-will passage recommended roughly half the length of imprisonment recommended by the control group.

Judgments of morality and punishment are susceptible to influence by extraneous factors. Contrary to the perception of such judgments as based on a scrupulous weighing of relevant factors, the evaluation process works more like an erratic scale: now adding because of a foul odor, now subtracting on account of a fetching face, now adding because of a reminder of mortality, now subtracting on account of a muffin, now adding due to a drop in serotonin, now subtracting because of a loss of belief in free will. Susceptibility to such factors would not be so problematic if there were a feedback mechanism to separate the good reasons from the bad. Decisions by scientists in the lab, to be sure, are also subject to any number of extraneous influences. The difference is that the scientific method compels them to put their ideas to the empirical rack to test their validity. Proponents of consequential justifications of punishment can do the same by testing, say, whether a rehabilitative program reduces recidivism and by how much, or whether a sanction deters crime and by how much. The results could then inform decisions to continue or discontinue such policies. Justifications rooted in retribution, by contrast, are beyond the reach of empiricism. As we have seen, this is because retribution is rooted in a cognitive heuristic whose consequential aims are shrouded in retributive reasoning.

IV Conclusion

What are the effects of relying on a cognitive heuristic to dispense justice? As a recent report by the US National Research Council observed, retribution is one of the “main impulses” behind the United States’ punishment problem: perennially the highest rates of incarceration in the industrialized world; incarceration patterns that suggest invidious discrimination; the use of anachronisms like capital punishment (Travis, Western, and Redburn 2014). Nonetheless, the Council goes on to endorse the “normative values” that underlie retributive incarceration. And though the Council rightly criticizes recent criminal justice policy discourse premised on cost-benefit calculations that “mask strong but hidden normative assumptions,” it fails to recognize the opposite problem: normative assumptions that mask strong but hidden, and often erroneous, calculations that occur outside the conscious mind.

The research presented in this chapter sheds light on these normative values. It suggests that judgments of morality and punishment have neural correlates in the emotional centers of the brain, and that the behaviors produced by these areas are largely biological, automatic, and apt to have at one time played a key adaptive role in avoiding harm and improving cooperation. Like folk physics, having a quick, intuitive response to questions of crime and punishment has proven useful. But research also shows that intuition can often lead us astray. Cognitive psychologists have demonstrated that intuitive folk reasoning can lead to erroneous judgment in a wide range of contexts and, in thinking about crime and punishment particularly, to fundamental retribution errors. The potential harm of punishment policy based on such faulty intuitions should prompt us to ask why retribution is still taken seriously at all.

Several arguments in favor of retribution can be anticipated. First, Robinson and Kurzban purport to find empirical support for retribution (2006). But their work, which they have dubbed “empirical deserts,” merely demonstrates that intuitions with regard to punishment are roughly consistent among lay people. This is no surprise, since retribution appears to be rooted in our common biology and our common cultural milieu. But in no way does this undermine the argument made here that retribution is not empirical in the sense that it fails to produce testable hypotheses about the effects of punishment. To draw an analogy from astronomy: by their definition of “empirical,” a poll showing that most people believed the earth to be at the center of the universe would provide support for a geocentric universe.

Second, Cass Sunstein, though he agrees that retribution stems from a cognitive heuristic, contends that retribution should not be dismissed because it may nonetheless be the right approach (2013). This argument fails to appreciate that science is a corrigible and collective enterprise that depends on individuals showing their work. Sure, as far back as ancient Greece there were people who intuited the process of evolution. But real progress in our understanding of evolution did not get underway until Darwin carefully collected data to support the theory, which was later corroborated and fine-tuned by countless other researchers. In the same way, even if retribution is the right approach, the inability to operationalize it would prevent us from examining and improving upon punishment policies.

Finally, Steven Pinker defends retribution on the grounds that, if the criminal justice system were to become too narrowly consequentialist, people would learn to “game it” (2003). Retribution, he argues, may ultimately serve as a deterrent against this contingency. But there is no reason an exclusively consequentialist criminal justice system that fails to deter crime couldn’t self-correct for exclusively consequentialist reasons—in fact, it would have every reason to do so—without resorting to the notion of retribution. Pinker’s argument, it seems, equates consequentialism with leniency, which needn’t be the case.

One final thought. Careful empirical research represents our best tool for overcoming the cognitive limitations outlined here. In the case of physics, the shift from intuition to empiricism was pivotal; without it our understanding of the physical universe might still be stalled in the fifteenth century. Criminal justice punishment policy, on the other hand, does in many ways seem to be stuck in another epoch. One of the reasons for this is that the principal justification for punishment—retribution—still lies beyond the reach of empirical scrutiny. In order to move criminal justice policy forward, it will therefore be necessary to de-legitimate retribution as a justification of punishment. Otherwise, we are liable to continue to equate doing justice with dispensing harsh punishment, though the heavens—and the criminal justice system—may collapse.

Notes

1 Stephen Koppel, JD is a PhD candidate in the Doctoral Program in Criminal Justice at the CUNY Graduate Center and John Jay College of Criminal Justice.

2 Mark R. Fondacaro, JD, PhD is Professor and Director of the Psychology & Law Doctoral Training Area at John Jay College of Criminal Justice and the Graduate Center of the City University of New York.

3 For a discussion of several conceptions of retribution, see: Robinson (2008). ‘Competing conceptions of modern desert: vengeful, deontological, and empirical’. The Cambridge Law Journal, 67(1): 145–75. Robinson distinguishes three types of retribution based on their metric of proportional punishment: Vengeful retribution measures proportional punishment based on the extent of harm; deontological retribution on the degree of moral blameworthiness; and empirical deserts on a community’s shared intuitions of justice. Since this paper is concerned with reliance on retribution in contemporary criminal justice policy, its focus is the deontological form. This is because considerations of moral blameworthiness figure prominently in penal codes such as the Model Penal Code; for instance, the assignment of varying degrees of criminal responsibility on the basis of mental states perceived to evince more or less evil. Also, note that Robinson justifies “empirical deserts” on the basis of consequential reasons, such as shaping societal norms and promoting compliance. The challenge to retribution posed in this paper thus does not apply to Robinson’s conceptualization of empirical deserts. That said, while we agree that, contrary to other conceptions of retribution, empirical deserts is in principle testable, we disagree that evidence of shared intuitions regarding punishment provides support for retributive punishment. For that, Robinson would have to offer evidence that punishment effectively advances empirical desert’s stated purposes. A second way conceptions of retribution differ is in their view of whether wrongdoing may or must be punished. Mandatory, positive, and pure retribution all hold that wrongdoing must be punished; permissive retribution holds that it may be punished.

4 Researchers have found that, whether or not given legal instructions on mental state definitions, laypeople were generally unable to agree on their judgments of past mental state, with the exception of the difference between the least (criminal negligence) and most (criminal intent) culpable states (Severance, Goodman, and Loftus 1992). In a more recent study of the mental states used in the Model Penal Code, researchers found that, although subjects were generally able to distinguish among purposeful, negligent, and blameless conduct, they fared poorly in distinguishing knowing from reckless conduct (Shen et al. 2011).

Bibliography

Chapman, H., Kim, D., Susskind, J. and Anderson, A. (2009) ‘In bad taste: evidence for the oral origins of moral disgust’, Science, 323: 1222–26.

Danziger, S., Levav, J. and Avnaim-Pesso, L. (2011) ‘Extraneous factors in judicial decisions’, Proceedings of the National Academy of Sciences, 108(17): 6889–92.

Dawkins, R. (2009) The God Delusion, New York, NY: Random House.

Darley, J.M. (2009) ‘Morality in the law: the psychological foundations of citizens’ desires to punish transgressions’, Annual Review of Law and Social Science, 5(1): 1–23.

Dripps, D.A. (2003) ‘Fundamental retribution error: criminal justice and the social psychology of blame’, Vanderbilt Law Review, 56: 1383.

Eberhardt, J.L., Davies, P.G., Purdie-Vaughns, V.J. and Johnson, S.L. (2006) ‘Looking deathworthy: perceived stereotypicality of black defendants predicts capital-sentencing outcomes’, Psychological Science, 17(5): 383–86.

Eskine, K., Kacinik, N. and Prinz, J. (2011) ‘A bad taste in the mouth: gustatory disgust influences moral judgment’, Psychological Science, 22: 295–99.

Gazzaniga, M. (1989) ‘Organization of the human brain’, Science, 245: 947–52.

Gazzaniga, M. (2011) Who’s in Charge? Free Will and the Science of the Brain, New York, NY: Ecco.

Gould, S.J. (1991) ‘Exaptation: a crucial tool for evolutionary psychology’, Journal of Social Issues, 47: 43–65.

Gould, S.J. and Vrba, E.S. (1982) ‘Exaptation—a missing term in the science of form’, Paleobiology, 8(1): 4–15.

Greene, J., Nystrom, L., Engell, A., Darley, J. and Cohen, J. (2004) ‘The neural bases of cognitive conflict and control in moral judgment’, Neuron, 44: 389–400.

Haidt, J. (2012) The Righteous Mind: Why Good People are Divided by Politics and Religion, New York, NY: Vintage Books.

Hamlin, J., Wynn, K. and Bloom, P. (2007) ‘Social evaluation by preverbal infants’, Nature, 450: 557–60.

Heider, F. and Simmel, M. (1944) ‘An experimental study of apparent behavior’, American Journal of Psychology, 57: 243–59.

Hoffman, M.B. (2014) The Punisher’s Brain: The Evolution of Judge and Jury, Cambridge: Cambridge University Press.

Knutson, B. (2004) ‘Sweet revenge?’, Science, 305: 1246–47.

Lord, C., Ross, L. and Lepper, M. (1979) ‘Biased assimilation and attitude polarization: the effects of prior theories on subsequently considered evidence’, Journal of Personality and Social Psychology, 37: 2098–2109.

Mendez, M., Anderson, E. and Shapira, J. (2005) ‘An investigation of moral judgment in frontotemporal dementia’, Cognitive and Behavioral Neurology, 18: 193–97.

Nowak, M., Page, K. and Sigmund, K. (2000) ‘Fairness versus reason in the ultimatum game’, Science, 289: 1773–75.

Pinker, S. (2003) The Blank Slate: The Modern Denial of Human Nature, New York, NY: Penguin.

Robinson, P.H. and Kurzban, R. (2006) ‘Concordance and conflict in intuitions of justice’, Minnesota Law Review, 91: 1829.

Rosenblatt, A., Greenberg, J., Solomon, S., Pyszczynski, T. and Lyon, D. (1989) ‘Evidence for terror management theory: the effects of mortality salience on reactions to those who violate or uphold cultural values’, Journal of Personality and Social Psychology, 57: 681–90.

Ross, L. (1977) ‘The intuitive psychologist and his shortcomings: distortions in the attribution process’, Advances in Experimental Social Psychology, 10: 173–220.

Ross, L., Amabile, T. and Steinmetz, J. (1977) ‘Social roles, social control, and biases in social-perception processes’, Journal of Personality and Social Psychology, 35: 485–94.

Schnall, S., Haidt, J., Clore, G. and Jordan, A. (2008) ‘Disgust as embodied moral judgment’, Personality and Social Psychology Bulletin, 34: 1096–1109.

Seymour, B., Singer, T. and Dolan, R. (2007) ‘The neurobiology of punishment’, Nature Reviews Neuroscience, 8: 300–11.

Shariff, A.F., Greene, J.D., Karremans, J.C., Luguri, J.B., Clark, C.J., Schooler, J.W., … and Vohs, K.D. (2014) ‘Free will and punishment: a mechanistic view of human nature reduces retribution’, Psychological Science, 0956797614534693.

Shen, F.X., Hoffman, M.B., Jones, O.D., Greene, J.D. and Marois, R. (2011) ‘Sorting guilty minds’, New York University Law Review, 86: 1306.

Sigall, H. and Ostrove, N. (1975) ‘Beautiful but dangerous: effects of offender attractiveness and nature of the crime on juridic judgment’, Journal of Personality and Social Psychology, 31(3): 410.

Stewart, J.E. (1985) ‘Appearance and punishment: the attraction-leniency effect in the courtroom’, The Journal of Social Psychology, 125(3): 373–78.

Sunstein, C.R. (2013) ‘Is deontology a heuristic? On psychology, neuroscience, ethics, and law’, unpublished manuscript (August 1, 2013).

Svenson, O. (1980) ‘Are we all less risky and more skillful than our fellow drivers?’, Acta Psychologica, 47: 143–48.

Travis, J., Western, B. and Redburn, S. (eds.) (2014) The Growth of Incarceration in the United States: Exploring Causes and Consequences, Washington, DC: National Academies Press.

Tversky, A. and Kahneman, D. (1973) ‘Availability: a heuristic for judging frequency and probability’, Cognitive Psychology, 5: 207–32.

Tversky, A. and Kahneman, D. (1981) ‘The framing of decisions and the psychology of choice’, Science, 211: 453–58.

Westen, D., Blagov, P., Harenski, K., Kilts, C. and Hamann, S. (2006) ‘Neural bases of motivated reasoning: an fMRI study of emotional constraints on partisan political judgment in the 2004 U.S. presidential election’, Journal of Cognitive Neuroscience, 18: 1947–58.

Wilson, E.O. (1975) Sociobiology: The New Synthesis, Cambridge, MA: Harvard University Press.

Yamagishi, T. and Sato, K. (1986) ‘Motivational basis of the public goods problem’, Journal of Personality and Social Psychology, 50: 67–73.