6

Brain Sciences Are Exploring How and
Why We Are (and Are Not) Ethical

    Here are two of the biggest questions in moral psychology: (1) Where do moral beliefs and motivations come from? (2) How does moral judgment work? All questions are easy . . . once you have clear answers to these questions.

JONATHAN HAIDT AND FREDERIK BJORKLUND

Introduction: Ethical Naturalism with a Nod to Free Choice

Philosophers and scientists have begun to collaborate to design experiments to investigate moral psychology. They’ve begun to think together about how the new information about the brain’s workings changes how we must think about why and when people act morally and why and when they don’t. A three-volume compendium, Moral Psychology, came out in 2008 with essays by many of the foremost scientists and philosophers working in the field.1 Collected in these volumes are articles describing cutting-edge work; each article also has appended to it responses from others in the field, which lay out the current debates. The first volume is The Evolution of Morality: Adaptations and Innateness, the second is The Cognitive Science of Morality: Intuition and Diversity, and the third is The Neuroscience of Morality: Emotion, Brain Disorders, and Development. It is safe to say that all the philosophers and scientists represented in the three volumes—more or less the major players in the field of rethinking philosophical ethics in the light of the new brain sciences—advocate some version of ethical naturalism. This is the view that what we can learn from the sciences has some bearing (though there is a range of opinion on what that bearing might be) on the origin and content of morals and on how and why we enact them or fail to enact them. Many other philosophers, however, hold the standard view that only by investigating our moral concepts through philosophical or conceptual analysis can we come to understand our human moral nature and engagements. On that view, science can explain only the “wetware” underlying our concepts; it cannot contribute to any understanding of what ethics is, for other material mechanisms could underlie those concepts and it would make no substantive difference in human moral life. That view is challenged in these volumes. In their introductory essay for the three volumes, “Naturalizing Ethics,” Owen Flanagan, Hagop Sarkissian, and David Wong set out what they mean by the phrase “naturalizing ethics.” Central to that endeavor is getting rid of any appeal to God or the divine—or to a faculty of free will:

    Let us call an individual a scientific naturalist if she does not permit the invocation of supernatural forces in understanding, explaining, and accounting for what happens in this world. . . .

        A naturalist cannot accept . . . the notion found in Kant that humans have metaphysical freedom of the will. . . . The twentieth-century philosopher Roderick Chisholm (1966) puts the point . . . this way: “If we are responsible . . . then we have a prerogative some would attribute only to God: each of us when we act, is a prime mover unmoved. In doing what we do, we cause certain things to happen, and nothing—or no one—causes us to cause those events to happen.” . . . This sort of free will violates the basic laws of science, so the naturalist must offer a different analysis.2

Flanagan, Sarkissian, and Wong make absolutely clear in this essay that they hold that “there is no such thing as ‘will’ and there is no such thing as ‘free will.’ . . . There is no faculty of will in the human mind/brain.”3 While they grant that there is no such thing as a free will in the Cartesian-Kantian sense and that scientifically no such organ or natural capacity as a “will” can be found by empirical investigation into the components and operations of the brain, they argue, nevertheless, that we still choose and make choices in a way that originates our actions:

    Persons make choices. Typically they do so with live options before them. If new reasons present themselves, they can change course. . . . Persons experience themselves choosing, intending, and willing. Ethics sees persons as choosing and thus . . . look[s] for . . . voluntary action that involves reasoning, deliberation, and choice.4

These reasons, resulting from conscious thinking and deliberation, have causal power or efficacy, the authors say; they are reasons that can be causes of our actions, in fact, originating causes of our actions.5 And so the authors call themselves “neocompatibilists,” which is philosophical jargon for saying they believe that human free choice—that is, originating choice—is compatible with naturalist determinism, the view that all actions are situated within and emerge from layers and systems of causes, both internal and environmental. Standard philosophical compatibilism serves to carve out interior consciousness as a realm apart, one immune both to scientific explanation and to multicausal contextual assignment of responsibility. The authors maintain, however, that we can and ought to hold both that we choose and act freely via rational thinking, originating reasons, and at the same time that we are fully determined from the standpoint of natural lines and webs of causation, both material and psychosocial.

Flanagan, Sarkissian, and Wong reject the common sop that somehow the indeterminism of quantum physics helps us out here. First, there is no evidence that the neurons of the brain are subject to indeterminacy in the way, say, the behavior of electrons is (and in fact there is much evidence against it); even if that were the case, however, they point out that the indeterminacy of some outcomes in the brain would not help with establishing personal causal origination of actions. For randomness would in fact make us more rather than less subject to unexpected turns of fate, as the Epicureans were well aware with their theory of the occasional random swerve of atoms. Only an open or closet theist would benefit from such hoped-for indeterminacy in the brain, because space would then be opened for a divine hand to intervene—but not a human one. Moreover, human free choice would not be made possible by neuronal randomness in any case, because no conscious human choice could ever operate to refashion neural networks directly at the neuronal level. Neural networks change through experience, not through will. One can’t just say, “I think I’ll connect my love of chocolate with my fear of heights and see if I can get myself to fear chocolate.” We do not have direct access to neurons and their patterns of firing any more than we have the capacity for direct internal intervention into the functioning of our liver, even if the liver were sometimes to function randomly. And, as a colleague once proposed to me in defending free will, where would the “I” come from that is other than that of the neurons, the “I” of the gaps between them, if such indeterminacy were the case? But that’s magical thinking, and the authors of this essay reject it. Only divine intervention could work like that—the hand of the biblical God reaching down miraculously and intervening in nature, as in the rescue of the Israelites from the Egyptians at the Sea of Reeds. But divine intervention is, of course, exactly what all forms of naturalism preclude.

Flanagan, Sarkissian, and Wong opt for a kind of two-truth theory or perspective. The human perspective of experiencing ourselves as choosers and deciders, originators of our actions, and hence alone and individually morally responsible for them, is valid because it is phenomenologically true or real—it is a true (and universal) account of how we experience ourselves as human beings, they propose. At the same time, they hold that the naturalist perspective of our actions and choices as within and resultant from the complex interactions of various biological, psychological, social, cultural, historical, quantum, and cosmological causal systems is also true. Nevertheless, the authors claim that by giving up the idea that there is a faculty in our minds that is a will, free or otherwise (as the philosopher Hume did in the eighteenth century), science can be left to its kind of explanation—causal determinism—and at the same time, a second description from the insider human subjective perspective of experience can also be true. The insider subjective description of ourselves as originating our actions from ourselves alone is compatible with the scientific causal explanation, they say, because it makes no claims about a faculty in the natural brain that can be located and proven to exist or proven not to exist. One explanation is about natural causes, and the other is about presumably universal human subjective experience. The human and the natural are separate self-contained systems of explanation. The authors have saved as much of the Cartesian-Kantian philosophical tradition of a subjective turn inward as they possibly can while giving some opening to an account of human beings as products of nature.6 But do the facts bear out the authors’ saving operation? Can any facts challenge this viewpoint or is it beyond any empirical challenge? If the latter, is this in the end a metaphysical question and thus a kind of article of faith?

Some of the discoveries made by the new brain sciences point toward an answer to this question—a plausible answer if not a definitive one, for how could one ever definitively prove or disprove divine intervention or an article of faith? The validity of compatibilism (neo- or otherwise) and hence of the unassailable truth of the internal perspective comes into question because there is quite a bit of evidence suggesting that we harbor an illusion that we originate our actions. The evidence—I’ll describe the relevant experiments in some detail in subsequent chapters—suggests that we do not have conscious access to the internal psychological causes of our actions. Much evidence indicates that we become consciously aware of decisions or choices only after we make them and begin enacting them. This surprising discovery has been confirmed in experiment after experiment, and the evidence is quite robust.7 So the reasons to which we attribute our actions are, or tend to be, ex post facto reconstructions (“confabulations,” psychologists call them)—that is, illusions. (At least, this is generally true, although it is perhaps possible for some few people to be trained to some extent in rigorous introspection. Later I’ll say a lot more about how the psychology of decision making works.) Philosophers would certainly hold that a belief’s arising from an illusion does not in and of itself prove that the belief is false—being paranoid does not preclude the possibility that there are in fact people out to get you. Nevertheless, the explanation of special access and hence the unassailability of our internal perspective no longer seems to be the most plausible and parsimonious explanation if it can be shown that we tend toward such an illusion.

Another source of evidence against the universality of the presupposition of free choice comes from cultural anthropology. For example, one can point to the particular Buddhism of the Khmer, whose belief in karma has a profound inhibiting effect on their conception of the capacity to take action. They hold that a person’s current character as well as situation and fate are entirely predetermined and merited by actions in a prior life and hence outside a person’s control. So parents generally wait to see what a child’s character and fate seem to be before urging him or her, however slightly, toward positive actions and engagements. This Cambodian cultural myth generally engenders a kind of quietism, to the point that some Cambodians feel that they merited the genocide of the Khmer Rouge.8 This view is in stark contrast to the Western one, which would seem to encourage a radical kind of self- and world-making, as if no cultural background, biological conditions, psychological inheritance, or situational context could in the end be thought too significant to overcome. That this philosophic move has as marked a religio-cultural flavor and provincialism as the Khmer belief does in its cultural context can be seen in the particular Augustinian cast to the Western conception and implications of theological predestination/predeterminism in comparison with the Khmer notion. Calvinist predeterminism assigns to the divine will all power, in analogy to the Khmer doctrine’s assignment of all power to the operation of a karmic moral rule of fate. Moreover, the conceptual underpinnings of divine predestination are not naturalistic despite their determinism (as I pointed out in relation to Augustine), for the reduction of all nature to the intervention of divine fiat is exactly the opposite of assigning all causes to the working out of independently functioning and necessary laws. Yet in stark contrast with Khmer quietism, the Calvinist myth functions to encourage a wild activism, in order to prove to the world that one is in fact saved rather than damned.

So Calvinism, paradoxically, functions pretty much the way the myth of free choice and free will does to elicit concerted and even frenzied action in pursuit of the afterlife. Such is the power of provincial cultural presuppositions to dominate thought across generations and to do so beneath awareness. The Khmer doctrine of karma, while superficially similar to Calvinist predestinarianism, ends up driving an entirely different theological anthropology and ethics. The brain science discovery of motivational amnesia suggests that such cultural myths, the Western Augustinian as well as the Khmer, come about because they fill a gap left by our actual ignorance of the causes of our actions—a finding that Spinoza anticipated. Our still dominant and pervasive Latin Christian culture fills the gap with a divine-like human freedom, while the Khmer culture fills the gap with a divine reward system that is retrospective rather than prospective. The latter myth justifies and rationalizes one’s self and the present situation and sets them in stone, so to speak, whereas the former makes the present and oneself completely open to personal (divine-like) control, hence our notion of “freedom.” We are enacting our own provincialism and calling it “universal.” Our unconscious subjectivity, at least in the case of some of our deepest presuppositions, turns out to be in fact cultural and hence not universal at all. And the cultural filling in of aspects of the deepest self would appear to take place just at those junctures where self-transparency fails us. So the universal feature that we share as human beings is not the freedom of our interior consciousness from natural causal determination (as we are brought up to believe) but instead the general inaccessibility to our conscious minds of the causes of our actions. So the architecture of our brains is actually driving our tendency to cultural myth making here, both theirs and ours. We are most unfree when we think ourselves most free—and there’s neuroscientific evidence for that, too.

It is hardly surprising, then, to find that while Flanagan, Sarkissian, and Wong end their essay with a plea for ethics as a kind of evolutionary way of seeing human beings as within and part of their natural and social worlds, they also return to their vision of the freedom available to us to choose our values and lives. On one hand, they write, “If ethics is like any science or is part of any science, it is part of human ecology, concerned with saying what contributes to the well-being of humans, human groups, and human individuals in particular natural and social environments.”9 And they remark on the local and contingent nature of many moral values as relative to flexible notions of healthy community and social and individual flourishing. They call themselves “pluralistic relativists” and “pragmatic human ecologists.” Yet they conclude their essay with a plea for voluntarism: they insist that we must express our freedom to choose our lives, our identities, and our actions. They call upon each of us to choose to “deploy our critical capacities in judging the quality and worth of alternative ways of being . . . [by] deploying our agentic capacities to modify ourselves by engaging in identity experimentation and meaning locations within the vast space of possibilities that have been and are being tried by our fellows.”10 We have landed very close to home: right back in the Augustinian tradition, dressed up a bit in contemporary and scientific language and filled in with a larger and more accurate range of how human beings live and what they value.

William Casebeer, in his response to Flanagan, Sarkissian, and Wong, reads the brain science data as allowing “executive functions” to take the place of free choice in ethical decision making. Casebeer aims to revive and revise Aristotle’s biological conception of the human person in terms of natural and inherent aims and the dynamic operation of organic systematicity. Flourishing can be understood from a biological standpoint as “proper functioning,” yet still incorporate a range of flexibility and various human possibilities and capacities, he says. Casebeer proposes that these proper human functions are “natural facts” that are normative because they involve an internal standard in the way physical health does. You know when your heart function is optimal, and you certainly know when it isn’t. So, too, for the body as a whole and the person as a whole, Casebeer argues. He proposes that moral statements can be reduced to statements about functions, that human beings have multiple functions, and that these functions are relative to environments and have co-evolved with those environments. So the state of various types of functioning can be determined and evaluated. Casebeer supports a version of compatibilism, he says, whereby the controlling or executive functions of the brain can be assessed and responsibility assigned in terms of the proper operation of the directing cognitive systems. He grounds this claim in a distinction between “well-ordered and disordered cognitive systems” as a basis for “maintain[ing] attributions of responsibility.”11 He remarks that he follows the argument of Patricia Churchland that the distinction between well-functioning and poorly functioning brain systems of executive control “might allow us to salvage understandings of moral responsibility which are generally compatible with those required by traditional moral theory.” I take this to mean that we are responsible for our decisions as originating actions in some sense (even though our cognitive capacities are embedded in webs of causes) because they issue from a decision-making capacity that produces action. So despite the nod to Aristotle’s biological systems theory, we are still working with a definition of moral agency as conscious originating (free) choice—with cognitive systems doing the choosing. That Casebeer offers his theory as a version of compatibilism suggests he is arguing that we derive an internal feeling of the origination of our actions because their proximate cause (that is, the last cause in the series) is within the executive areas of the brain. Yet he is not claiming that executive functions and decisions are themselves uncaused or that their relative state of good functioning (flourishing) is self-originating. (I will return to the discussion of executive capacities in subsequent chapters to determine whether they can be recruited to offer a scientific warrant for conscious originative choice or even a consistent feeling of originative choice.) So Casebeer, too, is making a valiant effort to salvage a version of the Augustinian tradition of free choice as underlying and necessary for moral agency and responsibility—by assigning it to a natural brain mechanism.

The Scientific Search for the Sources of Moral Agency:
An Overview and a Sampling

The first volume, The Evolution of Morality, includes essays on whether human morality is innate and the direct result of evolutionary processes. What could have led to moral sentiments, beliefs, and rules? they ask. Are these distinct, hardwired capacities in the brain? Moreover, what do claims of an evolutionary history mean for the prescriptivity, the authority, of morals? Several essays propose that specific moral rules (in one form or another) are innate, and the authors speculate on evolutionary scenarios that could have given rise to these hardwired moral injunctions. A good example is an essay by Debra Lieberman of the University of Hawaii about the incest taboo. Lieberman argues that a negative moral sentiment has come to trigger the avoidance of sexual relations among near genetic relatives as the evolutionary result of a history of the detrimental consequences of inbreeding.12 Geoffrey Miller of the University of New Mexico argues in another chapter that sexual selection works to favor mates with moral virtues and hence their predominance in the gene pool. Other chapters argue for and against particular sets of innate moral rules and principles. One recurring argument is that moral rules are analogous to the innate natural universal grammar that Noam Chomsky claimed every particular language instantiates in its own cultural idiom. (Chomsky’s claim of the innateness of grammar is no longer universally accepted, however; more on this later.) Most of the authors argue that if morality is a direct result of evolution, it has to be innate and content specific—in other words, a particular evolutionary history resulted in a specific innate moral sentiment or rule. By contrast, Chandra Sekhar Sripada argues in the first volume that the innateness of a moral structure of some kind in the brain would not have to prescribe particular moral norms but instead could produce tendencies or innate (moral) biases toward certain moral norms. Sripada’s theory of innate biases is intended to help account for cultural differences and changes in moral sentiments and values over time. The theory of a combination of innate moral modules plus a sociocultural overlay occurs in a number of different versions in the three volumes.

A number of the cognitive scientists who contributed to volume 2, The Cognitive Science of Morality, approach the investigation of the moral capacity by observing patterns in the cognitive processes involved in forming moral judgments, emotions, and actions, and they also invoke a Chomskian model of an innate and specific human language ability as the relevant analogy.13 Marc D. Hauser, Liane Young, and Fiery Cushman, in “Reviving Rawls’s Linguistic Analogy: Operative Principles and the Causal Structure of Moral Actions,” argue that all human beings have a moral faculty and not just a moral capacity. The authors make their case by citing Chomsky’s theory of a discrete, universally human, and unique linguistic faculty. Hauser, Young, and Cushman point out that they regard many “domain-general” cognitive systems as providing inputs constitutive of moral judgment. Nevertheless, they are “committed to the existence of some cognitive mechanisms that are specific to the domain of morality.” These constitute, they say, a “moral faculty”:

    These [dedicated moral] systems are not responsible for generating representations of actions, intentions, causes, and outcomes; rather they are responsible for combining these representations in a productive fashion, ultimately generating a moral judgment. Our thesis is that the moral faculty applies general principles to specific examples, implementing an appropriate set of representations. We refer to these principles as an individual’s “knowledge of morality” and, by analogy to language, posit that these principles are both unconsciously operative and inaccessible.14

A moral faculty, according to this approach, is an innate universal specialized system of moral judgment causally responsible for moral appraisal; it is discrete and hence independent of (other) reasoning and also of emotion. The authors propose a research program to test whether several identifiable and universal moral principles are operative in moral action and justification, in the same way grammatical principles are held to be universally operant in human linguistic performance. The arguments both for and against such a moral faculty turn on the available evidence for such a specialized discrete and localized unitary mechanism. The team has begun to develop a battery of paired dilemmas to pinpoint principles, to determine whether these principles guide moral judgment, and to explore whether subjects refer to these principles in their moral justifications. So far they have been engaged in exploring three moral principles: (1) harm by commission is worse than harm by omission; (2) harm intended as the means to a goal is worse than unintended harm in pursuit of a goal; and (3) harm involving physical contact is worse than the same harm without physical contact.
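
To make the logic of such a battery concrete, here is a minimal sketch in Python (my own illustration, not the team’s materials or model) of how the three principles could be expressed as increments of judged severity over a structured representation of an act, in the spirit of a faculty that combines representations of actions, intentions, and outcomes. All names and numbers here are invented for the example.

```python
# A toy rendering (hypothetical, for illustration only) of the three
# principles as increments of judged severity over a structured
# representation of a harmful act.

from dataclasses import dataclass

@dataclass
class Act:
    harmful: bool
    by_commission: bool     # acted, rather than merely allowed harm
    harm_as_means: bool     # harm intended as a means, not a side effect
    physical_contact: bool  # harm delivered through physical contact

def judged_severity(act: Act) -> int:
    """Higher score = judged morally worse; each principle adds a unit."""
    if not act.harmful:
        return 0
    score = 1
    score += act.by_commission     # (1) commission worse than omission
    score += act.harm_as_means     # (2) means worse than foreseen side effect
    score += act.physical_contact  # (3) contact worse than action at a distance
    return score

# A paired dilemma differs in exactly one feature, isolating one principle:
act_pair = (
    Act(harmful=True, by_commission=True,  harm_as_means=False, physical_contact=False),
    Act(harmful=True, by_commission=False, harm_as_means=False, physical_contact=False),
)
print([judged_severity(a) for a in act_pair])  # commission variant scores worse
```

Each paired dilemma in such a design holds everything constant except the single feature a principle governs, so a difference in subjects’ judgments isolates that principle.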

Jonathan Haidt and Frederik Bjorklund, in “Social Intuitionists Answer Six Questions About Moral Psychology,” also present a Chomskian case for moral judgment as produced by a discrete hardwired faculty, yet they allow for a social process that nuances it.15 According to them, the moral faculty produces quick, automatic intuitions below the level of conscious thought, reasoning, and decision making.16 They argue that moral intuitions are products of human evolution. Haidt and Bjorklund count themselves as Humeans on the origin of morals: they say that morals arise as immediate sentiments (of right and wrong) that are universal to the human species. They make the case for morality as originating in what they call “a small set of innately prepared, affectively valenced moral intuitions.” They also put the point in more colloquial terms: these are evolved “quick intuitions, gut feelings, and moral emotions.”17 They argue for a version of modularity: “Most psychologists,” they write, “accept Fodor’s (1983) claim that many aspects of perceptual and linguistic processing are the output of modules which are informationally encapsulated special purpose processing mechanisms. Informational encapsulation means that the module works on its own proprietary inputs. Knowledge elsewhere in the mind will not affect the output of the module.” Fodor himself dismisses modularity when it comes to “higher cognition”—a position that, if true, would invalidate not only Haidt and Bjorklund’s model but that of Hauser and colleagues as well as many others. Nevertheless, Haidt and Bjorklund maintain that for their own theory, “all we need to say is that higher cognitive processes are modularized ‘to some interesting degree’” and that “there can be bits of mental processing that are to some degree module-like.”18

Haidt and Bjorklund also cite evidence for what they call the “inescapably affective mind,” by which they mean that the mind is always judging everything we encounter on a scale of good and bad. They call this our ever-present “like-ometer.” They argue that research has shown the brain is composed of two systems of ongoing valuation: an evolutionarily ancient, automatic, very fast, affective system and a “phylogenetically newer,” slower, motivationally weaker, cognitive one. They cite studies by Gazzaniga of split-brain patients that expose the left side of the brain as the “interpreter” offering a post hoc running commentary on behavior. Psychologists call this kind of rationalizing “confabulation.” Conscious verbal reasoning, Gazzaniga says, is thus not at all the command center of action but instead “more like a press secretary whose job it is to offer convincing explanations for whatever the person happens to do.”19 And other research has given support to this contention, demonstrating that our everyday thinking largely serves to bolster our already favored opinions.20 (I’ll have a lot to say about the self-servingness of our beliefs in a later chapter.)

So a set of hardwired intuitions makes morals possible and also sets constraints for the social nuancing of them, Haidt and Bjorklund argue. They regard the social contribution to morals as extremely important, and they hypothesize that it works in the following way: moral intuitions are immediate, evolutionarily set beliefs and motivations, but ex post facto reasoning comes in to justify, explain, specify, and extend those insights. That reasoning is socially influenced and transmitted, so when someone has a (hardwired) moral intuition, he or she interprets it through the prism of the social reasoning and understanding that he or she has heard from others. Reasoning is important but occurs and is influential almost entirely in social transmission rather than in an internal personal and individual reasoning process that gives rise to a moral judgment. While moral reasoning is rarely if ever the source of an individual’s moral judgments, they maintain, “moral reasons passed between people have a causal force” nevertheless.21 So one’s own moral reasoning is fundamentally an exercise in rationalizing one’s intuitions. This is true across the board, they propose, except in highly specialized contexts such as the philosophy classroom. They raise this question:

    Can anyone ever engage in open-minded, non–post hoc reasoning [i.e., moral reasoning that does not consist in rationalizations after the fact] in private? Yes . . . [but that is] hypothesized to occur somewhat rarely outside of highly specialized subcultures such as that of philosophy, which provides years of training in unnatural modes of human thought.22

Moral intuitions are set by a process of evolution plus social exchanges, they say, hence their term for it: a social intuitionist model (or SIM). It is a species-wide model offering moral insights that evolved for and in the species, so it offers what they regard as true or real morals for all human beings while also allowing for cultural nuances.

Haidt and Bjorklund see the unconscious and conscious systems as generally in competition, but the edge is almost always given to the unconscious quick one. “Modern social cognition research,” Haidt and Bjorklund remark, “is largely about the disconnect between automatic processes, which are fast and effortless, and controlled processes, which are slow, conscious, and heavily dependent on verbal thinking.”23 Their hypothesis about how SIM works is that in the great majority of morally “eliciting situation[s],” a person’s unconscious and primitive “like-ometer” settles on a moral judgment via the quick and automatic route. Then either more recently evolved conscious cognitive processes (reason) rationalize the already made moral judgment or social persuasion intrudes to modify the quick automatic judgment toward that of social partners. This social persuasion also generally operates, they maintain, via automatic, unconscious influences rather than via conscious reasoning and rational discourse.24 In rare cases, two other routes of moral judgment are possible: in the first, consciously reasoning together with others, such as in a philosophy class, results in a moral judgment; in the other, private conscious rational reflection results in such valuation. But these cases, they say, are unusual.
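
Since the model is essentially a flow of links, a minimal sketch in code may help fix it in mind. The following Python fragment is my own simplification, not Haidt and Bjorklund’s formalization; the function names and the toy “intuition” are invented for illustration.

```python
# A toy sketch (my simplification, not the authors' formalization) of the
# SIM's routes to moral judgment. Defaults model the common case; the
# reasoned route is the rare exception the authors describe.

from collections import Counter
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    judgment: str = ""
    justification: str = ""

def fast_intuition(situation: str) -> str:
    # Stand-in for the automatic "like-ometer": an instant good/bad valence.
    return "wrong" if "harm" in situation.split() else "acceptable"

def judge(agent: Agent, situation: str, peers=(), private_reflection=False) -> Agent:
    # Default link: the quick, automatic intuition issues the judgment.
    agent.judgment = fast_intuition(situation)
    # Social persuasion links: peers' judgments tug the verdict toward the
    # group, largely beneath conscious reasoning, on the authors' account.
    if peers:
        majority, _ = Counter(p.judgment for p in peers).most_common(1)[0]
        agent.judgment = majority
    # Rare route: private reasoned reflection can override the intuition.
    if private_reflection:
        agent.judgment = "suspended pending deliberation"
    # Post hoc link: conscious reasoning supplies a rationale for whatever
    # judgment now stands (Gazzaniga's "press secretary").
    agent.justification = f"reasons found to support '{agent.judgment}'"
    return agent

solo = judge(Agent("alice"), "a harmless taboo violation")
swayed = judge(Agent("alice"), "a harmless taboo violation",
               peers=[Agent("bob", judgment="wrong"), Agent("eve", judgment="wrong")])
print(solo.judgment, "->", swayed.judgment)  # acceptable -> wrong
```

The design choice worth noticing is that reasoning enters only after a judgment already stands, or else through other people; that is the model’s central claim.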

Thus moral judgment, in Haidt and Bjorklund’s estimation, turns out to be generally, and in almost all cases other than highly specialized ones, the result of the primitive, automatic affective system. Higher cognition comes in largely as ex post facto rationalization, on one hand, and as social persuasion that merely nuances the primitive response, on the other. As the authors suggest, this is a contemporary version of Hume’s notion of reason as slave to the passions, and hence of how (primitive) emotions drive morals.25 Moral judgment can be stopped in its tracks, however—for example, by other intuitions or by socially modified and mediated valuations. They offer an example of a prejudiced immediate feeling toward someone based on race or sex or something of that kind. A person can block that fast judgment in the light of other valuations; in their view, most of these other valuations would be obtained at least initially via social persuasion. Nevertheless, they maintain that “moral reasoning is . . . usually engaged in after a moral judgment is made, in which a person searches for arguments that will support an already-made judgment.”26 Haidt and Bjorklund conclude by identifying a set of innate moral modules that they believe are species-wide and develop in children on a set schedule if all goes well. These are innate intuitions that become externalized, rather than external social values that become internalized on society’s schedule, they say. Haidt and Bjorklund do not take a hard-and-fast position on the degree of innate modularity they are arguing for. They say it is somewhere on a continuum between simple preparedness and a discrete modularity according to which capacities are hardwired by evolution to meet the long-enduring structures of the environment in which human beings evolved and live.27

Nevertheless, Haidt and Bjorklund express a particular affinity for a version of modularity in which each capacity is unconnected to the others: “Because cognitive modules are each the result of a different phylogenetic history, there is no reason to expect them all to be built on the same general pattern and elegantly interconnected.”28 Whatever the precise degree of innateness and discrete modularity turns out to be, they believe that five innate human moral modules can be identified, and children come to express them in stages: (1) sensitivity to harm and expressions of caring, (2) fairness and reciprocity, (3) recognition of hierarchy and respect for authority, (4) concern for purity and sanctity, and (5) recognition of in- and out-groups and the boundaries between them. Society helps that externalization develop by, for example, teaching certain games that help kids accomplish a given moral developmental stage. The authors cite the example of the game of “cooties,” which they say is played all over the United States: it is based on the purity foundation and involves recognition of status and in- and out-groups. Eight-to-ten-year-olds play it, they say, because that is the time when the fourth developmental stage is coming online.29 Society nuances the expressions of these basic modules, since the modular capacities underdetermine the precise morals and virtues a society develops along each of the trajectories. Virtues are thus “constrained social constructions.”30 Cultures and societies both nuance the basic moral insights and selectively emphasize some over others.31 Finally, Haidt and Bjorklund add to their innate moral modules approach the explanation that people differ in their moral performance because of differences in innate temperament.32 Yet they make clear that temperament does not affect any global moral traits, since global moral tendencies, in contrast to situational virtues, have been ruled out by contemporary research in social psychology. So in the end they characterize their view of morals or virtues as “a set of skills needed to navigate the social world” that are “finely tuned automatic reactions to complex social situations.”33 It would seem that what is proposed here are hardwired global responses that are perhaps nuanced to specific situations via the social transmissions that are taken in as unconscious skills.
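
A rough way to see what “constrained social constructions” amounts to is as a fixed set of trajectories that each culture fills in differently. The sketch below is my own illustration; the five foundations are the authors’, but the cultures and virtues are invented examples.

```python
# A toy sketch of "constrained social constructions": the five innate
# foundations fix the available trajectories; each culture (these entries
# are invented examples, not the authors' data) elaborates its own virtues.

FOUNDATIONS = [
    "harm/care",
    "fairness/reciprocity",
    "hierarchy/authority",
    "purity/sanctity",
    "ingroup/outgroup",
]

# Hypothetical cultural elaborations, keyed by foundation.
cultural_virtues = {
    "culture_A": {"harm/care": "charity", "purity/sanctity": "dietary law"},
    "culture_B": {"harm/care": "nonviolence", "hierarchy/authority": "filial piety"},
}

def virtue_profile(culture: str) -> dict:
    """A culture may elaborate or downplay any foundation, but cannot add
    virtues outside the five trajectories: that is the 'constraint'."""
    elaborated = cultural_virtues.get(culture, {})
    assert set(elaborated) <= set(FOUNDATIONS), "virtue outside the foundations"
    return {f: elaborated.get(f, "(unelaborated)") for f in FOUNDATIONS}

print(virtue_profile("culture_A"))
```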

The responses to Haidt and Bjorklund raise the question of whether the innate and social aspects of the theory can be held together and exactly in what ways different aspects of ethics are assigned to each dimension. One of the critical responses to this essay suggests that the social persuasion dimension of the SIM is problematic because it introduces into intuitionism a social reasoning process driven by bias (rationalizing). The basic virtue of moral intuitions seems to be that, as purported products of human evolution, they are presumed to be universally hardwired action prompts that are of objective prosocial benefit to the species. The social enculturation dimension fits uneasily with this innate intuitionism. The social suasion link seems in fact to vitiate the purported objectivity and reliability of the intuitions, since it is based on shared rationalizations and on inducing an overriding social conformity.34 It appears that at the end of the essay, where the authors are trying to address objections that have been raised about the SIM in the past, they introduce both far more work for the social nuancing dimension of the model and also some new theoretical dimensions that don’t seem to fit with it comfortably.

Haidt and Bjorklund’s commitment to hardwired innate moral modules seems to be driven by their presumption that if moral intuitions evolved over the existence of the species to contribute to its perpetuation, they would be “real moral truths,” if only species-wide. It is wrong to murder or commit incest (as another version of this argument goes) because we have a species-wide innate taboo against murder or incest. But we have all kinds of capacities, including both prosocial ones and aggressive competitive ones. Why would or should the prosocial ones carry more moral authority and justification for us? Are innate tendencies that hoodwink our moral intentions rather than serve them, such as self-deception, any less real, less human, or less the products of evolution? Another essay commenting on this paper makes the point that a theory aimed at describing how morals come about has somehow segued into a theory of moral justification, an account of why these intuitions are true and authoritative for humans.35 Walter Sinnott-Armstrong in his essay later in volume 2 also challenges the widespread version of moral intuitionism that holds that the immediacy of moral belief not only is descriptive of how moral beliefs arise psychologically but also serves as justification for such beliefs.36 I would extend this critique to raise the question of what justifies presuming that prosocial human tendencies are a natural category, “ethics,” that can be identified as different from other innate human psychological tendencies and therefore has some special normative status, whereas other innate tendencies (such as some cognitive framing, for example) are considered to interfere with ethics rather than contribute to it.

What Does It Mean to “Naturalize” Ethics?

More important to my purpose is the problematic character of the implicit assumption that the naturalizing project is the search for hardwired, discrete morals (whether rules, beliefs, or sentiments). The authors’ presumption that ethics is “natural” thus entails that it is broadly cross-cultural, always the same beneath the cultural overlays. Naturalizing ethics in this way implies that ethics is hardwired and hence to be contrasted with the conventional and cultural, which is either left out of the picture or brought in as an overlay that adds a small nuance. This view of what counts as nature conceives the natural as marking out an arena that excludes anything that is a product of nurture. But the very contrast between nature and nurture itself is a cultural artifact, for it is a claim with a specific cultural history and provincialism, namely Augustine’s two cosmic principles, as I showed in the chapter on the Augustinian legacy. So we are still in an Augustinian cultural world, one that drives the presuppositions of what we are looking for when we search for ethics in nature. The approach Haidt and Bjorklund take, which involves searching for the hardwired module that culture nuances, reappears in many versions in the essays in the three volumes and beyond. I’ll give a few examples. In volume 1, Peter Tse complicates and nuances how moral innateness could possibly work by proposing that morals need not be directly written into the brain, so to speak, but instead could have arisen as a consequence of the evolution of symbolic thought. Symbolic thought is generally held by brain scientists to result from the “conceptual binding” of many modules, enabling the formation of a single representation ranging over many domains. Binding also makes it possible for one thing to stand for something else—which is the basis of abstraction (including categorical abstractions), symbolization, and metaphorical thinking, among other cognitive capacities. Tse proposes that moral categories arise from this kind of process: conceptual binding is influenced by experience, but what is bound together are hardwired modules resulting from evolutionary history. He speculates that a category of evil, for example, might have arisen from binding together tokenism, sadism, and rejection of the body.

A salient and influential example of the explanatory strategy of searching to identify some form of nurture overlying hardwired moral modules that are the product of evolution is Ursula Goodenough and Terrence W. Deacon’s “From Biology to Consciousness to Morality.” The authors propose a speculative evolutionary scenario that has some similarities to both Tse’s and Haidt and Bjorklund’s approaches.37 Like Tse, they credit wide-ranging connective representational blending and the sign capacity, by which one thing can stand for and point to another, for the way that morals emerge from the evolutionary inheritance. Yet they also introduce into the mix what they regard as a uniquely human capacity for self-reflection. They propose that uniquely human morals arise from self-reflection upon hardwired modular (that is, discrete) social abilities inherited from humans’ primate ancestors. This human moral capacity was made possible by language, they argue, since they regard self-reflection as a product of symbolic language. It is implied that language and the flexibility it offers introduce a chasm between lower primates and humans. The human moral capacity is thus bootstrapped onto a capacity Goodenough and Deacon regard as uniquely human: language. (The claim that self-reflection is a product of language is controversial and not universally accepted, however.) Goodenough and Deacon propose that each individual engages in self-reflection upon four evolutionarily inherited hardwired primate prosocial capacities: empathy, strategic reciprocity, nurture, and hierarchy. These primate protomorals result in a set of universally human, self-discovered virtues that bubble up in each of us as subjective moral intuitions. They believe that each of us discovers versions of these four virtues within our consciousness as more or less unanalyzable (yet culturally nuanced) basic moral experiences, and thereby we each lay claim to our universal human moral nature as a feeling of inner compulsion.38 Goodenough and Deacon, like Tse, admit that this evolutionary approach to the nature and origin of morals can be and must always remain speculative: “The [evolutionary] scenario is by definition a speculation (what actually happened may never be known), but we find the scenario heuristic, helping us to focus on what is distinctive about human mentality.”39

Yet the speculative character of the “evolution of morality” scenarios makes them far too open to being blank slates upon which we project cultural or other prejudices and then attribute those projections to the very makeup of the brain. What are these other than just-so stories to justify whichever selves we wish to justify? Male over female? Western over non-Western? Human over animal? Should we now embrace hierarchy as a true value for us rather than strive for equality, because hierarchy has an evolutionary history? What about fairness, which also often appears in the catalogue of traits inherited from primate kin? It is hardly a surprise that prosocial behaviors in primates and other animals had survival value, as Darwin pointed out. A Hobbesian view of relentless competitive struggle was not Darwin’s assessment of evolution but a social Darwinist misreading of both nature and Darwin. Morality is not a mere veneer of sociality over brutish, ruthless nature, nor is it merely Freud’s social restraints on aggressive and sexual drives.40 Goodenough and Deacon have divided up the human brain and assigned certain traits (empathy, hierarchy, etc.) to its animality, and others to a unique humanity (self-reflection borne of another allegedly uniquely human capacity, language). So we get a human version of prosocial emotional traits shared across species with a cognitive ability supplied by a unique human evolutionary inheritance.

Too Much Philosophy—and Too Little

Goodenough and Deacon’s model is a kind of hybrid of a Kantian criterion for autonomous moral judgment as self-reflection (that is, being able to reflect upon one’s own motives and behavior and articulate the reasons for them) and a Humean notion of innate moral sentiments that are natural virtues, functioning beneath the level of thought and spanning humans and our close animal kin. So what we seem to have here is an implicit and presupposed Humean-Kantian hybrid model of morals read into—or out of—the evolutionary evidence. The implication is that self-reflection (via language) offers an evolutionary basis for self-origination, that is, free choice influenced by moral sentiments of evolutionary origin, but not, apparently, compelled by them. What this amounts to is that we are free human beings strapped onto (or sitting atop) evolutionarily determined bodies harboring set patterns of emotional response. There is a veiled appeal to unarticulated assumptions—some version of the Augustinian free choice tradition—about how moral agency works, so that the philosophic search is directed at identifying underlying brain mechanisms that can help explain the presupposed way that moral agency is held to operate (namely, some version of rational free choice). This is surely reading our provincial religio-cultural legacy into nature, rather than vice versa. Those who fail to toe the line are often subject to withering criticism. A case in point is the primatologist Frans de Waal and his investigation of sympathy and other prosocial behaviors in primates. De Waal’s philosophical critics largely presuppose a Kantian morality in which free, conscious, rational decision making in conformity to universal impartial moral principles is the sine qua non of true ethics, and they roundly fault de Waal for not holding such Kantian presuppositions about moral agency and for not judging nonhuman primates by that standard.41

So an overarching problem with the various studies is that the authors generally fail to raise the issue of how to frame or define the problem of moral psychology. As a result, there is something rather haphazard as well as culturally narrow about the projects. The haphazardness can be detected in the isolated character of some of the topics of focus, for example, incest as a potentially hardwired moral taboo. The question of what relationship a purported innate feeling of the moral repulsiveness of incest might have to the larger moral domain, if there is such a domain, is not raised. Is the human moral capacity a collection of such discrete hardwired feelings of universal moral repulsiveness or moral approbation? Goodenough and Deacon’s moral experience argument presupposes that morals are both innate and an evolutionary inheritance, and hence also individual and a subjective human category of experience that we impose upon the world. Should we just accept this Kantian presupposition? Why should we think with Flanagan and others that to have ethics at all we have to freely choose (whatever that means) our identities and life paths? If sexual selection favors reproductive partners who are morally virtuous, what version of virtuousness would that be? Is this a reproductive preference for cooperation over competition? For fair play over deceit? Moreover, what would such a claim, if true, actually explain? Why is this a claim of the evolutionary transmission of moral values rather than of merely prosocial psychological traits (if traits are global in that way, which other research has shown they in fact are not)? What is systematically lacking is a sense of a consciously articulated and rationally and empirically defended common domain or set of phenomena as the subject of exploration, rather than a hodgepodge of presupposed ethical examples whose status is ambiguous.

Unexamined Culturally Narrow Presuppositions
and the Hope of a Remedy

If unexamined and unconscious cultural presuppositions are driving the search for ethics—and hence at least in part the way the evidence is selected and interpreted—what would happen if different cultural assumptions guided the search into the discoveries in the new brain sciences? Why not search instead among our evolutionary kin for protoexamples of the classical Greek notion of ethics as the cultivation of beauty and the beautiful life, as harmony and balance in self and relationships? Why take for granted that ethics consists in moral principles that are innate ideas that come to us in flashes of insight, as Descartes did, or that they are innate and universal sentiments of sympathy beneath the level of thought, as Hume did, rather than habits developed through repetition and experience, as the standard Western reading of Aristotle had it (and as Casebeer and the character education advocates purport to embrace while in the end appealing to some form of free choice)? Or why not presuppose that ethics consists in intellectual honesty and clarity of moral vision, while moral evil is a form of ignorance and intellectual benightedness, as the ancient Greek dramatists, Plato, and the Arabic Aristotle did? Why not presuppose that ethics is a type of health and general well-being and good functioning, while immorality is a diseased state of pain and impaired functioning that requires remedy, as the Stoics did? (Casebeer starts off in this direction but then superimposes upon it a free will compatibilist theory.) These are all metaphors we live by, as George Lakoff puts it. We are too prone to reading our deep and unconscious cultural metaphors back into nature, discovering within it, lo and behold, our own assumptions and impositions. It would be better to have on hand multiple possible models of what ethics is about and to hold them lightly so as not to narrow the search from the get-go and presuppose its findings or stuff them into a ready-to-hand bell jar. The following chapters will try to do just that: I will broaden the search for the human moral capacity by extending the cultural bounds of what we are looking for to include (but not be confined to) Aristotelian-Spinozist understandings.

Nevertheless, Goodenough and Deacon, Haidt and Bjorklund, and Tse have the advantage, over claims of the simple inheritance of evolutionarily hardwired moral rules or virtues (for example, Lieberman’s and Miller’s), of a more nuanced insight into what they are looking for. Claiming, for example, that an incest taboo is simply a hardwired modular (isolated) emotional response to a triggering situation has the disadvantage of emptying it of moral content. If morals consist of automatic and isolated responses—like our knee reflexes—the one thing they can’t be is chosen (an Augustinian model) or contextually meaningful and shaped (an Aristotelian habituation or an Aristotelian-Spinozist intellectualist model). In guiding what to look for, the implicit hardwired modular model jettisons nurture and reduces nature to material causes alone, thereby rendering immoral actions and differences in the behavior of the same people in different situations largely inexplicable. Thus we are back at the Augustinian reduction of nature to the material—yet another unexamined Western cultural presupposition implicitly framing the search for a science of ethics. There is too little self-awareness of the cultural location and history driving these studies. The search for ethics in human and other primate minds is still too beholden to narrow unconscious cultural presuppositions and a very specific philosophical history, and the theological tradition from which it emerged, both about what ethics looks like and what the science can contribute.

I hoped to avoid the problem of what counts as ethics by preceding this chapter and the next with three chapters on what ethics actually looks like on the ground, beginning with an initial chapter on our American view of ethics. Then in the following two chapters I attempted to show why our standard American way of thinking about ethics got it mostly wrong because it can’t explain the examples of ethics from the Holocaust and the social psychology studies of ethics that I introduced in Chapters 2 and 3. There is just too much presumed confidence that we all know what we are referring to when we talk about ethics—that we at least know what the problem is, if not the solution. But I am trying to argue from a number of different standpoints that that is not the case. Instead we harbor unwarranted assumptions, some of which are culturally provincial to Americans and Europeans. Moreover, not only the presuppositions about ethics but also the science we have so far seen brought in as explanatory often has a piecemeal flavor to it, rather than being a more systematic and holistic look at what evolutionary biology generally now thinks can be said about the brain and what that might inform us about human agency generally—and not just about the narrower domain of moral agency. Perhaps we ought to be focusing our attention on a very general human tendency for the emergence of norms and for the demand or desire for normative performance of self and others, instead of on a presupposed narrower domain of the purportedly legitimately “ethical.” Where does the striving for normativity come from both in evolution and in the brain, and how does it operate as a mechanism? What is the range of such normative demands—what do they look like in all kinds of practices, thought, institutions, social contexts, relationships, and cultures?

In the next chapter I will present an overview of some relevant current research on neuroplasticity, the vast flexibility and changeability of the brain/mind through experience. Scientists are just beginning to explore and grasp the extent of the brain/mind’s plasticity. If we take that plasticity seriously, there are some discoveries about the brain from even further afield that I believe need to be taken into account when rethinking ethics. My tack will be to further complicate the analysis of ethics by introducing some other discoveries about the mind/brain and hence to try to situate ethics within an even larger range of human engagements. By going further afield I hope to contextualize ethics more broadly in culture, in social and psychoanalytic and neuropsychoanalytic psychology, in political psychology, in neurochemistry, and even in open adaptive systems theory and the like. I shall propose, among other things, that we need to think of ethics in terms of the overall nature and life path of a “self.” I will not, of course, try to resurrect the Cartesian substantive self, the self as “thinking thing.” Instead I will point to the growing evidence for a notion of the self as a neurological and unconscious process of self-mapping—where our limbs are, how we are feeling today, what hurts, how pollution is affecting us, et cetera—and also as emerging from basic emotional systems that are homologous in all mammals. Both neurological mapping and our basic emotions give us a feeling of self in ongoing and remembered responses to the environment, and especially the social environment.

Breaking Out of the Box

At present some of those engaged in research in the new moral psychologies, in contrast to Hauser and colleagues, deny the likelihood of a moral faculty that is a specific innate brain system dedicated to morality.42 Jesse Prinz argues that all of the data cited by Hauser and colleagues can be explained with reference to general-purpose emotion systems and socially transmitted rules. In my next chapter I will begin a discussion of neuroplasticity that introduces a seminal essay by the affective neuroscientists Panksepp and Panksepp in which they argue strongly against the claim that higher functions are either hardwired or encapsulated (modular).43 I think we should leave it to the neuroscientists who study the actual mechanisms of moral cognition and affect to settle the question, or at least weigh in.

In another essay Jesse Prinz challenges claims of moral innateness: he questions not only whether there is an innate moral faculty but even whether there is any innate moral content (rules, principles, or sentiments). His argument poses a challenge not only to strong innateness arguments such as Lieberman’s about an inherited hardwired incest taboo but also to softer innateness arguments such as Sripada’s innate moral biases, Tse’s category of moral evils, and Goodenough and Deacon’s universal moral intuitions based on self-reflection upon inherited primate prosociality. Prinz argues that if morals were innate (and hence hardwired or the result of hardwiring), the moral capacity would be a localized and fixed brain area or system rather than a broadly distributed and plastic set of connections resulting from how experience variously shapes the brain’s neural networks. Innateness, he says, generally presupposes both functional and anatomical modularity. Modularity means that a capacity is localized in the brain and processes information specific to its domain without access to other such modules. “To be anatomically modular is to be located within proprietary circuits of the brain,” he writes.44 But Prinz provides evidence that this is not the case when it comes to morals. The evidence supports the idea, he says, that “moral stimuli recruit domain-general emotion regions and regions associated with all manner of social reasoning.” Hence “there is no strong evidence for a functional or anatomical module in the moral domain.”45 Without modularity, it is hard to argue for the innateness of morality. Morals are more likely a result of the recruitment of other capacities, especially of broader social cognition and emotions, for uses different from the purposes for which those capacities evolved. Prinz concludes that if neuroplasticity is dominant in human behavior, then morality cannot be directly innate but must instead result indirectly from evolutionary processes of environmental fit acting on other mechanisms that come to have effects upon, or can be recruited for use in, moral life. The evidence, he argues, points toward this latter alternative: morality is a “by-product” of faculties that evolved for other purposes.

An important bone of contention in the debate between those who favor the innateness and modularity of the moral capacity and those who reject it is the distinction between the violation of conventional rules and the violation of moral norms. Many of those who argue for innateness hold that the distinction between convention and morality supports the hardwired modularity of a universal moral core of sentiments and beliefs, which can be contrasted with merely social conventions that differ across cultural contexts. Morals, they say, are therefore natural, unlike social and cultural conventions. Prinz shows that evidence from recent experiments designed to test just that hypothesis contests the distinction. A crucial piece of evidence comes from studies of psychopaths, who some scientists maintain cannot distinguish between moral and conventional rules. Yet evidence going back decades and confirmed again and again suggests that psychopaths do not have a specific, “selective” moral deficit but instead a diminished capacity to feel negative emotions or recognize them in others. As a result, they do not respond normally to fear conditioning. They have diminished startle responses, little depression, high pain thresholds, and difficulty in recognizing facial expressions of sadness, pain, and disgust. Prinz writes, “Without negative emotions, psychopaths cannot undergo the kind of conditioning process that allows us to build up moral rules from basic emotions. Psychopathy is not a moral deficit but an emotional deficit with moral consequences.”46 Brain imaging studies of healthy people viewing pictures of morally fraught scenes involving physical assault, abandoned children, and the like point in the same direction, Prinz says, because these scenes (in contrast to scenes that were merely disgusting or disturbing but raised no moral issues) activated areas of the brain devoted to general social cognition. Hence, he contends, evidence for a specifically moral brain module, in either anatomy or function, is lacking.47 Moral sentiment and thinking seem to draw on social cognition and emotion generally.

Prinz’s view of the moral capacity as a human construction rather than an innate inheritance, while a minority view among the contributors to the three volumes, nevertheless sets the stage for a broader and more integrative view of the nature and origins of the moral capacity. Still, adequate evidence has not yet been marshaled to show that morals are an outgrowth of other capacities rather than a unique and modular capacity in their own right. Prinz makes just this point in his response to the critique of his essay. He admits that he “need[s] to show that the data are rich enough to allow the acquisition of moral competence without a domain-specific learning mechanism.” He proposes that “we should be open to the possibility that moral competence, like religion, tool use, and the arts is a byproduct of more general psychological resources.”48 The jury still seems to be out, the evidence as yet too thin on both sides. Prinz’s own view is that morals are sentimental norms of moral praise and blame.49 The debate between those who argue that ethics is fundamentally an emotional capacity with cognitive effects and those who think it is essentially cognitive with emotional consequences is also visible in many of the papers.

Muddying the Waters I: Cognitive Heuristics

Gerd Gigerenzer begins his essay in volume 2 with a description taken from Christopher Browning’s book about Reserve Police Battalion 101 (which I discussed at length in Chapter 2).50 Gigerenzer analyzes the moral failure there in terms of heuristics, rapid cognitive processes that take place below the level of consciousness. Heuristics are rules of thumb that we employ in making decisions, but they operate beneath awareness and run automatically. The unconscious rule that drove the men of Police Battalion 101, according to Gigerenzer, was “Don’t break ranks.” That, he claims, is a “social heuristic” that drives behavior. The next example of a social heuristic that Gigerenzer introduces comes from a comparison of the number of Americans willing to be organ donors with the numbers in various European nations. Twenty-eight percent of Americans are willing to be organ donors, compared to 17 percent of the British and more than 99 percent of the French and Hungarians. What do these statistics show—that the Americans and British are selfish and the French and Hungarians are of a different moral stripe? Not so, according to Gigerenzer. Rather, a simple heuristic rule of thumb accounts for these startling discrepancies. In the United States and Britain, a citizen has to opt in to an organ donor program, whereas in France and Hungary one has to opt out. Such a small and seemingly insignificant variable, Gigerenzer says, makes all the difference, for doing nothing (not opting out) is far easier than doing something (opting in). This is the “default rule.” The available evidence, Gigerenzer contends, indicates that the deciding factor here is not preference or moral principle but a seemingly morally irrelevant one: the way the choice is constructed rather than its substance. Nevertheless, more people in the United States violate the default rule by opting in than violate it in France by opting out.
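
The mechanics of the default rule are simple enough to spell out. The toy sketch below is my illustration, not Gigerenzer’s; the function and the 10 percent switching rate are assumptions chosen only to show how one and the same population, with the same distribution of motivation, yields opposite donor rates under opposite defaults:

    # Toy model of the "default rule" heuristic: most people stay with
    # whatever the default is; only a motivated minority pays the cost
    # of switching. The 10 percent switching rate is an assumption.
    def donor_rate(default_is_donor, switch_rate=0.10):
        """Fraction of a population registered as donors under a policy."""
        if default_is_donor:            # opt-out regime (France, Hungary)
            return 1.0 - switch_rate    # only the motivated opt out
        return switch_rate              # opt-in regime (U.S., Britain)

    print(donor_rate(True))   # 0.9: high donor rate under opt-out
    print(donor_rate(False))  # 0.1: low donor rate under opt-in

Identical preferences, opposite outcomes: the choice architecture, not the chooser, does the work.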

Having used the notion of unconscious cognitive heuristics to throw a monkey wrench into the possibility, and certainly the reliability, of morally intentional action, Gigerenzer then proposes to reconstruct ethics on a sounder basis. He characterizes heuristics as both fast and, because they involve decisions based upon little information, “frugal.” From the perspective of fast and frugal heuristics, morals cannot be relied upon to function at the level of the individual but instead must be induced through situational, social, and institutional nonmoral mechanisms. So heuristics represent a psychological phenomenon that poses a challenge to the existence of hardwired moral emotions, which are claimed to steer all of us away automatically and unconsciously from incest (Lieberman), for example, or toward empathy (Goodenough and Deacon). Heuristics have to be taken into account, Gigerenzer maintains, and the situation and the environment must be constructed carefully so that they shape action toward positive ends. They cannot be ignored or discounted without moral danger, as the example of Reserve Police Battalion 101 illustrates. Gigerenzer contends that it is the structure of the context that must be manipulated to produce morally desirable action from individuals, instead of relying on individual moral choice or moral training. Neither individual moral emotional motivation nor cognitive moral judgment can be relied upon to override the operation of seemingly trivial cognitive heuristics, Gigerenzer concludes. Cognitive heuristics are morally neutral but can be recruited toward morally better or worse ends—and therein lie both the danger and the hope.
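
As a decision procedure, a fast and frugal heuristic can be stated in a line or two. Here is a minimal sketch (my formalization, not Gigerenzer’s) of “Don’t break ranks” as a one-reason rule that consults a single cue, what one’s peers are doing, and ignores private inclination altogether:

    # "Don't break ranks" as a fast and frugal social heuristic: the
    # decision uses one cue (peer behavior) and no other information.
    def dont_break_ranks(my_inclination, peer_actions):
        majority = max(set(peer_actions), key=peer_actions.count)
        return majority  # the private inclination never enters in

    # A reluctant member of a unit whose peers all comply still complies:
    print(dont_break_ranks("refuse", ["comply"] * 9))  # -> comply

The moral valence of the outcome depends entirely on what the group happens to be doing, which is precisely the danger and the hope Gigerenzer identifies.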

The science of heuristics, Gigerenzer says, focuses on three questions: (1) What is in the adaptive toolbox—that is, what are the various heuristics? (2) In which environments does a given heuristic succeed or fail? (3) How can heuristics and environments be designed together to solve specific human problems? Gigerenzer’s approach transforms ethics into a range of ecological problems to be addressed at the systems level rather than at the level of the individual mind and individual choices and actions. Gigerenzer calls the moral thinking involved “ecological rationality” and describes the solution as cognitive environmental engineering—the design of environments to fit the human mind. The heuristic analysis of Reserve Police Battalion 101, for example, focuses on the context and situation rather than on personality traits (such as the authoritarian personality) or culture (anti-Semitic prejudice, for example). The approach is sensitive to context rather than focused on the individual. “Heuristics,” Gigerenzer concludes, “provide explanations of actual behavior; they are not normative ideals. Their existence, however, poses normative questions.”51 Social heuristics pose a challenge to all the varieties of Western moral theories:

    If moral action is based on fast and frugal heuristics, it may conflict with traditional standards of morality and justice. Heuristics seem to have little in common with consequentialist views that assume that people (should) make an exhaustive analysis of the consequences of each action, nor with the striving for purity of heart that Kant considered to be an absolute obligation of humans. And they do not easily fit a neo-Aristotelian theory of virtue or Kohlberg’s sophisticated postconventional moral reasoning.52

The advantages of heuristics are clear: they actually explain (some) behavior in terms of its causes, and that explanation can serve as the basis for interventions that produce morally desirable results. Knowledge of heuristics makes changes in behavior possible and controllable. The cost is to identifiably moral action performed for its own sake, out of moral motives. Gigerenzer also challenges the assumption of so many brain scientists that there is a discrete moral domain or capacity to be discovered within the mind’s architecture or functioning. Instead, the notion of heuristics absorbs the moral domain into a larger social domain and puts the moral focus on the group and the context rather than on the individual. As Gigerenzer puts it, “Heuristics that underlie moral actions are largely the same as those underlying behavior that is not morally tinged.”53 Moreover, the causal explanation of moral action by heuristics bypasses the debate about whether moral actions are motivated by emotions or by cognition. The relevant distinction becomes instead whether an action is motivated by unconscious or by conscious reasons. As I have previously noted, the evidence that conscious reasons actually cause behavior is in dispute; indeed, there is growing evidence that what we take to be our reasons for our actions largely cash out as ex post facto rationalizations of our decisions and actions. The absence of conscious awareness of the cognitive heuristics that drive many of our actions makes them like other decision making rather than unlike it, Gigerenzer points out. Decision making both within and outside the moral domain is largely unconscious. (More about our sources and levels of awareness or consciousness will be introduced later on.) But heuristics, unlike many other kinds of unconscious causes of decisions, are easily made accessible to consciousness and can be taught.

Heuristics are not just products of our individual brains but are often present as the socially embedded unconscious rules of institutions, rules that often conflict with the purported public principles of those institutions, Gigerenzer says. He offers as an example the British legal system, in which magistrates are supposed to follow due process in bringing a defendant to justice but instead follow a heuristic that protects their own institution from being held liable if a defendant out on bail commits another crime. Hence they err on the side of jailing rather than allowing bail. The facts on the ground thus show that the institution rewards institutional self-protection over justice for defendants and for the public. It can do so because there is no feedback mechanism that provides evidence of how well it is actually protecting either. He calls this and similar cases “split-brain institutions,” likening them to split-brain patients, in whom the left side of the brain serves only to rationalize and confabulate what the right side is doing. Gigerenzer remarks that medical institutions are particularly prone to the heuristic of protecting the employees and the institution instead of the patients, those whom the institution is ostensibly designed to benefit. Again, this is so because there is no feedback mechanism to provide evidence of how well the institution’s aim is being carried out, while there is plenty of blame for the employees if a miss occurs. So the employees act to avoid the possibility of blame (and potential lawsuits) rather than in pursuit of the ostensible purpose of the institution. Part of the heuristic is to keep what is really happening out of consciousness and to give the practice a public name and face that claims an ideal which is in fact avoided in the interest of the institution’s self-protection. As a consequence, overtreating and overmedicating become standard practice. Individual self-deception and institutional collusion keep the whole thing going. Gigerenzer remarks that although there is a general consensus among many in the field that “heuristics are always second-best solutions, which describe what people do but do not qualify as guidelines for moral action,” he thinks heuristics in some cases, such as ecological ones, can be prescriptive as well as descriptive.54

The evidence from heuristics points to several hypotheses about ethics. First, the locus of ethics here is the interaction of mind and environment. It is in this in-between area that ethics should be sought, not in the mind alone, and especially not in the individual mind alone. Second, moral decision making is to be sought within unconscious cognitive processes, not just, or perhaps not even primarily, in conscious thinking. I will return in later chapters to elaborate on the evidence for these two points.

Muddying the Waters II: Cognitive Framing

Does ethics consist in moral intuitions, as those who advocate hardwired moral modules in one form or another contend? Do we have strong moral beliefs that arise immediately from an objective grasp of a situation? In his essay “Framing Moral Intuitions,” Walter Sinnott-Armstrong challenges the claim that moral beliefs arise as innate, basic, and discrete human responses to a set of clearly discernible universal and repeated situations. Those who advocate innate moral intuitions hold that there are intuitive moral responses that are not derived from other beliefs, or justified in terms of other beliefs, but are instead singular responses that feel right or wrong just in themselves. Sinnott-Armstrong provides evidence that we have no such basic moral intuitions. His data come from studies of cognition that expose how seemingly morally irrelevant factors in the framing of moral questions affect moral judgments in ways they would not if intuitionism were true. For example, modifying word choice and context, without changing the basic situation described or its meaning, can systematically affect moral intuitions.55 (These findings are consistent with those of the heuristics experiments discussed previously.) Sinnott-Armstrong first cites a famous experiment by Tversky and Kahneman, who were the first to study framing effects. Subjects were given one of two versions of a hypothetical story. The two versions amounted to the same thing but were framed differently. This was the first version:

    Imagine that the U.S. is preparing for an outbreak of an unusual Asian disease which is expected to kill 600 people. Two alternative programs to fight the disease, A and B, have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows: If program A is adopted, 200 people will be saved. If program B is adopted, there is a ⅓ probability that 600 people will be saved, and a ⅔ probability that no people will be saved. Which of the two programs would you favor?

The second version presented the same story but worded the choice between the alternatives in the following way:

    If program C is adopted, 400 people will die. If program D is adopted there is a ⅓ probability that nobody will die and a ⅔ probability that 600 people will die.

Clearly, program A is the same as C, and program B is the same as D. The two versions differ merely in their wording, the first referring to how many people will be saved, the second to how many will die. The results, however, did not reflect this equivalence. While 72 percent of the subjects chose program A over B, only 22 percent chose program C (the equivalent of A) over D (the equivalent of B). The language of saving versus dying appears to have made all the difference. But perhaps the issue is more substantive, Sinnott-Armstrong suggests: what we are seeing may be not merely a difference in wording but a real difference in connotation between a saving intervention and its absence. Saving sounds worth doing no matter what.
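
The equivalence is a matter of simple arithmetic, worked out below in a short computational restatement (mine, not Sinnott-Armstrong’s or Tversky and Kahneman’s). A and C name the identical certain outcome, and B and D the identical gamble, so every program leaves an expected 200 survivors of the 600:

    # Expected number of survivors (out of 600) under each program:
    A = 200                                # 200 saved with certainty
    B = (1/3) * 600 + (2/3) * 0            # gamble over lives saved
    C = 600 - 400                          # 400 die with certainty
    D = (1/3) * (600 - 0) + (2/3) * (600 - 600)
    print(A, B, C, D)                      # 200 200.0 200 200.0

Rationally, then, the 72 percent preference for A and the 22 percent preference for C should not diverge at all.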

In another experiment two groups were given hypothetical scenarios of the classic moral story about a trolley car that has lost its brakes and is hurtling down the tracks toward five innocent people, who will lose their lives if the trolley continues straight ahead; only one person will lose his life if the trolley is diverted onto a side track by activating a switch. Imagine you are a bystander who can flip the switch. Most people agree that it is not morally wrong to flip the switch and cause the death of one person to save five. The experimenters tested students to see whether the same moral conclusion held across different wordings. Half of the questionnaires used the word kill: the students were asked to make a moral choice between throwing the switch, which would result in the death of one innocent person, and doing nothing, which would result in the deaths of five innocent people. The second group received questionnaires that used the word save: the choice was between throwing the switch, which would result in five innocent people being saved, and doing nothing, which would result in one innocent person being saved. Responses slightly favored action when the question was posed in the save version and slightly opposed it when posed in the kill version. The experimenters found that the wording effect accounted for about a quarter of the total variance. Nothing had been changed—not consequences, intentions, or facts—except the wording.

In a further pair of experiments the change was even less substantive: only the order in which the scenarios were presented varied. In the first experiment the change in order had no effect; in the second it did: a seemingly irrelevant and negligible change had a significant effect on the moral judgments people gave. In the first experiment 180 students were asked how strongly they agreed or disagreed, on a scale of +5 to −5, with each of several moral dilemmas set out on a form. There were three pairs of forms. Form 1 posed three moral problems. The first was the trolley problem just mentioned. The second gave a hypothetical case of saving five dying people by scanning the brain of a healthy person, who would die as a result. In the third problem, the only way to save five dying people was to transplant the organs of one healthy person, who would as a result die. All of the scenarios were described in the language of saving. An alternative version of the form had exactly the same scenarios but in reverse order. Here no difference was noted—that is, no framing effect was found.

A second experiment, however, did show framing effects. In this experiment the trolley problem was set out first. Then a variant of the trolley problem was introduced as the second moral dilemma: according to this new scenario, a button could be pushed that would cause the train to jump the tracks, saving the five but causing the death of one. The third moral problem was also a trolley scenario: in this one, only pushing a very large person onto the track in front of the trolley would stop it from killing the five. With this set of forms, the order in which the three problems were presented had a significant effect on the moral judgments. There were two findings: first, people approved of a moral action far more when it appeared first in the sequence rather than last; second, when the button scenario followed the original trolley problem, it received far more approval than when it followed the pushing-the-person dilemma. The experimenters concluded that when dilemmas are more homogeneous (as in the second experiment), the context has an effect. Maintaining consistency with the initial judgment seemed to play a role in the subsequent moral evaluations. Other (and more realistic) cognitive framing experiments on moral evaluation exhibited a similar ordering effect and also a general tendency for later ratings to be more severe than earlier ones. Subjects showed a general tendency toward increasing blame over time. In all the cases neither the facts nor the consequences, knowledge, or intentions changed across the hypothetical situations presented. Yet the valuations of the same situations did change, exhibiting an ordering effect.56 Sinnott-Armstrong interprets the data in terms of his presupposition that morals consist in unchanging real truths and that cognitive framing interferes with our access to those truths. In his view, since moral truth is always the same, the cognitive framing effect interferes with accurate moral “perception.” He concludes that the evidence of a framing effect shows that discrete and ubiquitous innate moral intuitions are not adequately reliable, so moral intuitions are not justified or justifiable without further inferences. “Framing effects,” he says, “signal unreliability.”57 He uses these examples to defeat the case for moral intuitionism as an accurate and universally human innate ability on the model of language, since the “studies show that moral intuitions are subject to framing effects in many circumstances.”58

The framing effect poses a challenge to the many versions of the claim that there are hardwired discrete moral mechanisms. Haidt and Bjorklund’s assumption that there are fixed ethical situations that are readily apparent and have clear moral meanings that trigger set moral responses is a case in point. They offer a diagram of the SIM with a single circle containing the eliciting situation, from which an arrow points to everything else that follows. Sinnott-Armstrong’s cognitive framing effects make such initial clarity and univocality of meaning in the ethical situation highly unlikely, for even small changes in wording or context seem to affect the meaning of any given situation, moral or otherwise. In the next chapter I will cite an array of research on how large-scale cognitive framing is ubiquitous and includes the many different cultural, social, historical, and personal psychological meanings we attribute to our worlds and to the array of situations within them. The possible meanings and interpretations of situations, it will become apparent, are almost infinite. Hence Sinnott-Armstrong’s conclusion, that our “perceptions” of situations are merely error-prone, is not the only way to read the data he presents.

But why does Sinnott-Armstrong presuppose that cognitive framing is merely a source of error in moral reasoning and decision making rather than evidence for how human moral thinking actually operates? Cass Sunstein, in contrast to Sinnott-Armstrong, argues for a larger role for cognitive framing in ethics. He proposes that the discussion of cognitive shaping of moral decisions needs to go beyond unconscious fast heuristics and focus more generally on cognitive frames, particularly on the cognitive framing of moral situations. Cognitive frames are largely unconscious, yet they are not fast heuristic rules of thumb. They consist in implicit gestalt (global) interpretations of situations.

In the next chapter I will elaborate on the importance and ubiquity of unconscious sociocultural cognitive framing driving the interpretation of situations. Cognitive moral framing implies that ethics is not merely about action, choosing and adhering to principles or having moral sentiments, but instead is embedded in the interpretations given to situations. So rather than merely playing the role of interference, cognitive framing has a large positive role to play in ethics. What Sinnott-Armstrong has identified as errors in ethical valuation may be instead just what ethics as practiced looks like.

Muddying the Waters III: Conscious and Unconscious
Choices and Decisions

Let us grant for the moment Haidt and Bjorklund’s presupposition that innate morals, if they could be found, are moral truths for all human beings. Yet even the intuitionist’s “ethical facts” version of the project to naturalize ethics runs aground. For Haidt and Bjorklund base their claim of innate moral intuitions on their prior claim that research has shown a strong line of demarcation between unconscious, automatic, rapid mental processes and conscious, highly verbal, slow ones. The assumption of that division in the mind presupposes that, in principle, there could be evolved moral intuitions that in some respects remain segmented and isolated (modular) from other areas of mental functioning like culture and language; they bubble up and then are nuanced with contextual factors. But in fact there is increasing evidence that is erasing the distinction between unconscious and conscious processes that the authors depend upon—which, in effect, is a line they are attempting to draw between nature and nurture. While I agree with the authors about the central importance for rethinking ethics of the discovery that “consciousness is at the level of a choice that has already been made,” there is now a growing body of evidence that unconscious processes encompass and govern most executive, that is to say highly cognized, decision-making processes as well as more primitive automatic ones.59 This new research indicates that not just primitive intuitions are unconscious, but so is much of the culturally rich and complex thinking involved in judgments and decisions. Haidt and Bjorklund should perhaps welcome this new direction, for giving up the presupposition of innate, modular moral intuitions would make it possible for them to reconcile their claims of social skills and situationism with their claims to automaticity and the unconsciousness of moral judgment and action. So a smoother and more complex interaction of the innate with the social and cultural (what’s learned), even at the most unconscious level, appears likely in the light of research on neuroplasticity and on cognitive framing. Moreover, the presumption that “animal” equals “hardwired” and “human” equals “softwired,” and hence that our animal evolutionary inheritance and human capacities can be neatly divided and assigned discrete levels of function, is also being challenged.

Darcia Narvaez makes the point that it is likely that moral perceptions of situations—really, interpretations of them (as well as responses to them)—are part of, and beholden to, large-scale complex systems of meaning derived from experience and “softwired” into the brain. She calls for a “biopsychosocial” approach of “ecological contextualism” to be brought to bear upon understanding ethics in the light of the increasing scientific acceptance of the neuroplasticity of the neocortex. Narvaez suggests that the unconscious automaticity of moral judgment comes not from its hardwired evolutionary origins (as Haidt and others presume) but instead from the transition of a consciously learned skill into developed unconscious expertise. This is the way we all experience the learning process—in driving a car or riding a bicycle, for example. The novice-to-expert paradigm, she says, is ubiquitous in cognitive science research. And she calls attention to the social contexts in which we develop cognitive schemas; we are not mere passive perceivers.60 Narvaez introduces a model of “cognitive-affective-action schemas,” which are products of both human construction and cultural inheritance, as a possible model for rethinking ethics in a deeply and broadly contextualized way.61 Yet she is careful not to reduce morals to mere social conformity, and she accuses Haidt and Bjorklund of doing just that. Narvaez suggests that we can avoid that trap if we pay attention to our own role in the development of our morals. Citing the dangers of Nazi Germany and of Cambodia under Pol Pot, she remarks, “It is shocking to read Haidt and Bjorklund assert that ‘a fully enculturated person is a virtuous person.’”62 Is Narvaez’s reliance on our “active” role in creating and adhering to morals a way of falling back on some version of free will yet again? We can choose to opt out of the immoral, she says, a view that once again seems to make moral action and responsibility depend upon the mystery of human freedom of choice. I think there are better, less magical ways out of this mess. Narvaez has nevertheless aptly captured the need for contextual explanation and turned our attention to the moral dangers of social context. She has brought us back to real moral life and away from the contrived, decontextualized hypothetical moral problems of so much philosophical and psychological research, typified by the trolley problem.

Narvaez’s cognitive-affective-action schemas (and the work of the cognitive linguist George Lakoff on cognitive framing in morals and politics, to which I will turn in the next chapter) suggest that we must go beyond the SIM’s rationalization/reason model of ethics to bring in culture and narrative, self and society and situation. Not only the initial eliciting moral situation but also the ethical action appears to be embedded within cultural traditions and narratives. I will argue that looking at cognitive framing can expose how cultural and other contextual interpretations and narratives in fact drive a foregone moral conclusion. (This supports the classical Greek and Spinozist positions that ethics is about how intellectual understanding shapes desire and directs motivation, and it highlights how moral virtue lies in honesty, completeness, and ultimately independence from immediate social meanings and pressures.) Brain research by Michael Gazzaniga and others indicates that human beings have a tendency toward persistence in belief over truth, tradition over innovation, and the status quo over revolution. The resistance to conceptual rethinking documented in Thomas Kuhn’s famous Structure of Scientific Revolutions is now explicable by neuroscience. So all kinds of theories that presuppose discrete and localized capacities, including Haidt and Bjorklund’s SIM, may have to be questioned and rethought in far more broadly contextualized and flexible ways if the evidence eventually pushes us definitively toward nonmodular, nonlocalized, neuroplastic explanations of thinking and action. A naturalistic account of ethics could then be based on a more integrative and plastic account of the brain and mind, one in which nature includes nurture rather than being defined in contrast with it or as an overlay upon it.

Muddying the Waters IV: Anthropological and Sociological
Evidence for the Cultural Variability of Morals

John M. Doris and Alexandra Plakias further deepen our sense of the importance of taking social context into account in coming to understand how ethics operates. They draw upon a growing body of evidence from anthropology and other social science research to introduce salient examples of substantive cultural differences in morals. These differences, they argue, make it unlikely that we can identify a set of universal moral principles or intuitions that are hardwired products of evolution lying beneath all the cultural variation.63 The intractability of moral disagreements also tells against the existence of real “moral facts” (although, of course, not everyone agrees that such disagreements are substantially irresolvable). In the late 1970s the philosopher J. L. Mackie argued from the persistence of moral disagreement against the existence of objective values, a position that Doris and Plakias contend is now supported by empirical research. Moral judgments, Mackie suggested, are better explained as reflecting ways of life than as perceptions of moral facts, in the way that, say, perceptions of objects reproduce them mentally.64 There are no moral facts, on this argument, to make moral judgments true or false in the way that there are physical objects whose descriptions can be compared with the things described.

Doris and Plakias call to our attention the work of the philosopher Michele Moody-Adams, who examined the ethnographic literature to see whether a universal ethic could be found beneath the cultural diversity. While Moody-Adams is wary of drawing conclusions from an anthropological literature that has had little sensitivity to philosophical nuance, she nevertheless embraces a strong view of the situational meaning of behavior and language. According to Doris and Plakias, she holds that “all institutions, utterances, and behaviors have meanings that are peculiar to their cultural milieu, so that we cannot be certain that participants in cross-cultural disagreements are talking about the same thing.”65 Moody-Adams questions whether cross-cultural differences can even be identified, let alone confirmed. She also doubts that cultures have the kind of systematic internal consistency in morals that would lend itself to cross-cultural comparison. An early (1954) ethnographic study, Hopi Ethics, by the philosopher Richard Brandt, found moral diversity between Hopis and white Americans. The salient evidence for substantive moral disagreement between the two groups concerned attitudes toward children inflicting pain on pet animals: the Hopis felt no moral repulsion at it, whereas white Americans almost uniformly did. Yet given both the paucity of reliable evidence and the disagreements among philosopher-anthropologists, Doris and Plakias regard the anthropological literature as leaving the question open, and they turn instead to evidence from a large body of cultural psychology research on similarities and differences in emotional and cognitive responses.

Doris and Plakias discuss at some length a study by Richard Nisbett and his colleagues of regional patterns of violence among non-Hispanic whites in the American South (the old Confederate states) and in the American North. Nisbett and colleagues applied the principles and tools of cognitive social psychology to the problem.66 The American South, they concluded, exhibits persistent patterns common to what anthropologists call “cultures of honor”: an emphasis on male reputations for strength and on quick retaliation for insult or other violation. This culture of honor, they suggest, explains a disparity in violence that persists between North and South. The South has a greater percentage of homicides resulting from arguments, but not of homicides committed in the course of robbery or other felonies; white southerners are more likely than northerners to view violence as justified in response to insult and as a sign of “manhood”; and legal statistics show that southern states permit more violence in defense of property and person than northern states do. The Nisbett group not only examined statistics but also conducted an experiment in which job application letters were sent to employers all over the country from a fictitious applicant who revealed in the letter a single blemish on his record—he had been convicted of manslaughter for accidentally killing a rival in a barroom brawl after the rival had boasted of sleeping with his fiancée. In another letter, the fictitious applicant admitted that he had been convicted of stealing a car when his family was short of money. No regional differences were found in the second scenario, but southern employers were more sympathetic than northern employers to the first applicant’s violent defense of his honor.67

In another experiment by Nisbett and colleagues, northern and southern male subjects were unsuspectingly tested for hormonal responses to seemingly inadvertent insults. The subjects were told they were having their blood sugar measured as they performed certain tasks. After a control sample was taken, each subject walked down a narrow corridor, where he was bumped into by a confederate of the experimenters who also called him an “asshole.” Saliva samples taken immediately afterward were analyzed for levels of cortisol, which indicates stress, anxiety, and arousal, and testosterone, which indicates aggression and dominance. Southern subjects showed large increases in these hormones, while northern subjects showed small ones. Nisbett and colleagues concluded that southerners exhibit a stronger response to insult than northerners.68 Doris and Plakias propose that this case shows a substantive difference in moral values down to the bone—to the hormonal level. Hence the cultural difference is fundamental and issues in a substantive and irresolvable moral disagreement about the permissibility of violence between individuals. This, they add, is a case in which we cannot even imagine ideal conditions under which the disagreement would disappear. So, Doris and Plakias conclude, Nisbett’s group has produced an identifiable and solid case of fundamental and irresolvable disagreement in morals—a case that challenges the likelihood of hardwired moral intuitions and basic moral facts.

Brian Leiter, in commenting on Doris and Plakias’s essay, points out that the partiality we all feel toward our own family, nation, and other groups amounts to familiar and ubiquitous evidence against the possibility of universal solutions to all moral problems. That impartiality should be the norm, he quips, may be exactly the moral value on which no agreement can be found.69 Another commentator finds the ubiquity of disagreement unproblematic. He proposes turning to a medical model of ethics: real values are to be found on the analogy of health rather than as discrete hardwired principles in our brains. He specifies that he means a model of physical health rather than of psychological health, which he regards as too fraught to be useful.70 With this, he leaves us hanging.

Muddying the Waters V: Moral Responsibility Does Not Depend on Free Will or Free Decision

Julia Driver presents evidence from empirical studies that free causal agency in performing an action is not in fact the basis people use for holding others morally responsible for their actions, as we in the West presume. Interestingly, even in our own cultural world, in which a strong linkage of causal origination and responsibility is presumed and institutionalized in law and elsewhere, the way people actually assign responsibility in many cases does not follow the cultural rule.71

Driver first points out that we often call something the “cause” of an event by picking it out of a large array of other factors, which we then label “background conditions” or “causal factors.” In fact, what we call the cause often simply marks the assignment of responsibility. Driver offers a classic case: a man carefully stored his firewood in his basement; a pyromaniac entered the cellar and burned the house down. While we would call the home owner’s storing of the wood a background condition, we assign full causal responsibility to the pyromaniac. Both are causal factors, yet it is the unusual and reprehensible action that gets called the cause of the fire. In a contrasting case, a home owner who leaves his house unlocked and has his spoons stolen is assigned some of the causation in the theft. So the concept of “cause” here is a stand-in for the assignment of responsibility. From a strictly causal perspective, all kinds of factors contributed to what happened in each case, including, in the first, the presence of oxygen in the atmosphere, which enabled the wood to burn. So it is not really “cause” in the strict sense that is being talked about or assigned; responsibility is smuggled in as if it were a neutral description of what simply happened. Driver says other cases reveal that we assign responsibility even in the “complete and total absence of causal connection.”72

From these thought experiments Driver now turns to empirical studies of how people assign moral responsibility and blame in order to determine how the attribution of cause enters in. Generally people are reluctant to say that someone is a cause of something bad if that person is not blameworthy, she says. Driver describes a study by M.D. Alicke from 1992 in which subjects were asked to assign “primary causation” in various scenarios. The experiment went as follows:

    John was driving over the speed limit (about 40 mph in a 30 mph zone) in order to get home in time to . . .

Socially desirable motive

    . . . hide an anniversary present for his parents that he had left out in the open before they could see it.

Socially undesirable motive

    . . . hide a vial of cocaine he had left out in the open before his parents could see it.

Other cause

        Oil spill: As John came to an intersection, he applied his brakes but was unable to stop as quickly as usual because of some oil that had spilled on the road. As a result, John hit a car that was coming from the other direction.

        Tree branch: As John came to an intersection, he failed to see a stop sign that was covered by a large tree branch. As a result, John hit the car that was coming from the other direction.

        Other car: As John came to an intersection, he applied his brakes but was unable to avoid a car that ran through a stop sign without making any attempt to slow down. As a result, John hit the car that was coming from the other direction.

Consequences of accident

        John hit the driver on the driver’s side, causing him multiple lacerations, a broken collarbone, and a fractured arm. John was uninjured in the accident. Complete the following sentence: The primary cause of the accident was . . .73

Alicke found that subjects given the scenario with the socially undesirable motive considered John the “primary cause” of the accident far more frequently than did subjects given the scenario with the socially desirable motive. Joshua Knobe of the University of North Carolina points out that causal attributions do not work in the neutral, descriptive way that social scientists have assumed. Instead, moral and other normative valuations slip into the descriptions. He suggests that people most often follow a particular sequence: first they judge that someone has acted wrongly, then they attribute causal primacy to that person, and finally they assign blame to that person.74 So the assignment of primary cause is the effect of a moral judgment already made; the cause is not identified independently of moral valuation and the judgment then drawn from it, but rather the assignment of cause depends upon the moral judgment already in effect and is an expression of it. Driver concludes that “normative considerations influence causal attributions.”75
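
Knobe’s proposed sequence can be rendered as a toy procedure. The sketch below is my formalization, not Knobe’s; the data structure and labels are illustrative assumptions, and the point is only the order of operations: wrongness is judged first, and causal primacy and blame follow from it:

    # Toy rendering of Knobe's sequence: (1) judge wrongness, then
    # (2) attribute causal primacy, then (3) assign blame.
    def attribute(agents):
        """Map each agent name to an attribution, given wrongness flags."""
        result = {}
        for name, acted_wrongly in agents.items():
            if acted_wrongly:                           # step 1: moral judgment
                result[name] = "primary cause; blamed"  # steps 2 and 3
            else:
                result[name] = "background condition"
        return result

    # Alicke-style contrast: the same driving, different motives.
    print(attribute({"John, hiding cocaine": True,
                     "John, hiding a present": False}))

On this model the causal description is computed from the moral verdict, not the other way around, which is just what Alicke’s subjects appear to have done.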

In addition, causal primacy is often attributed to an act that, in contrast to others, is outside the norm or the ordinary course of things. Consequently, it is not necessarily the action closest to the outcome that is considered the primary cause. Driver cites Joel Feinberg’s assessment of the 1962 Cuban missile crisis as “caused” by the Soviet construction of missile bases in Cuba, since the introduction of those bases was a radical departure from previous standard practice. Here it is the unusualness that is the salient feature drawing the attribution of primary cause. But does the unusualness criterion cover the moral cases? If so, it would seem to bring us back to more morally neutral descriptions. Driver says the jury is still out on this. Yet she thinks that the evidence of Alicke’s empirical study and the thought experiments of Knobe and others provide solid support for the view that the assignment of cause is consequent upon a prior moral judgment, and against the standard view that the moral judgment of responsibility results from an independent and objective analysis of causes. Joshua Knobe and Ben Fraser, in their comments on Driver’s essay, provide further evidence that it is not, in fact, the atypicality of an action that suggests to people its causal primacy but its moral value.76 In an experiment Knobe and Fraser conducted, subjects were given a story about a philosophy department. The receptionist in the department kept her desk stocked with pens; administrative assistants were allowed to take pens from her desk, but professors were supposed to buy their own. Invariably, however, professors, and not just administrative assistants, took the pens. One Monday an administrative assistant and Professor Smith walked by the receptionist’s desk and both took pens. Later that day the receptionist had to take an important phone message, and there were no pens left. Subjects were asked whether they agreed or disagreed with the following two statements: “Professor Smith caused the problem” and “The administrative assistant caused the problem.” Subjects overwhelmingly agreed that the professor had caused the problem, even though both behaviors were typical. Only the professor’s action was morally wrong. Because both actions were equally typical, Knobe and Fraser concluded, atypicality cannot be the operative factor here; the prior moral assignment of responsibility seems to have driven the assignment of cause. These results and those of other relevant experiments led them to conclude that “moral considerations . . . play a fundamental role in the way people think about causation.”77 We could extrapolate that the theological, philosophical, and cultural claim that moral responsibility depends upon the capacity to (causally) originate one’s actions rests on a conceptual and linguistic confusion, as well as upon its political and social utility as a myth.

The second commentary on Driver’s essay addresses legal responsibility. For someone to be criminally responsible for an action, John Deigh points out, he or she must have acted from a guilty mind. This is different from the assignment of cause, for someone can be the immediate cause of a death without being blameworthy, as a postman who delivers a bomb may be the immediate cause of a death but certainly is not responsible for it. Deigh raises a further objection to the causal account of responsibility by citing the principle of complicity: someone who joins a criminal enterprise is responsible for the enterprise’s harmful consequences whether those consequences are caused by the person’s own actions or by those of his or her accomplices. Hence being criminally responsible is detached from having one’s actions actually cause the event. Both commentaries go further than Driver in uncoupling moral responsibility from cause. Deigh ends his essay by citing Hart and Honoré’s observation, from 1959, that causation in the law either refers to an outcome that would not have occurred but for the defendant or simply means that the defendant is morally responsible. So the very notion of being the cause collapses into that of moral responsibility, both in the law and in common parlance.78

We have come full circle, back to the problem of free will in ethics. Freedom of the will or of decision presupposes that to be responsible a person has to have fully or at least mostly originated an action. But it turns out that that is not the way we actually assign moral responsibility. According to the research just presented, we determine the moral responsibility and then assign the cause. So the causal claim, the free will claim, is actually mere shorthand for saying “You’re morally responsible for having done X.” So why do we think we need to be “free” to be morally responsible? What does that freedom mean and entail? Why do Westerners hold on for dear life to the notion of free will or decision as if our civilization would collapse without it, whereas ancient Greeks, Cambodians, and so many others do not have such a notion yet have morals and assign moral responsibility? And the facts on the ground even in our own Western practice seem to highlight that problem. We have just seen that we don’t need to assume a person has free will or choice to hold that person morally responsible. Instead we seem to assign free will, the causal origination of a (culpable) action, to just that person we have already decided is morally responsible. So we don’t really mean exactly what we say—we’re misdescribing, misunderstanding our own words and actions. And that misunderstanding has all kinds of cultural, social, political, legal, and personal repercussions, not least of which is an overwhelming tendency to blame the individual (it’s his free will, after all) and to let the group and the institution off the hook.