I’m going to tell you a brief story. Pause after you read it and decide whether the people in the story did anything morally wrong.
A family’s dog was killed by a car in front of their house. They had heard that dog meat was delicious, so they cut up the dog’s body and cooked it and ate it for dinner. Nobody saw them do this.
If you are like most of the well-educated people in my studies, you felt an initial flash of disgust, but you hesitated before saying the family had done anything morally wrong. After all, the dog was dead already, so they didn’t hurt it, right? And it was their dog, so they had a right to do what they wanted with the carcass, no? If I pushed you to make a judgment, odds are you’d give me a nuanced answer, something like “Well, I think it’s disgusting, and I think they should have just buried the dog, but I wouldn’t say it was morally wrong.”
OK, here’s a more challenging story:
A man goes to the supermarket once a week and buys a chicken. But before cooking the chicken, he has sexual intercourse with it. Then he cooks it and eats it.
Once again, no harm, nobody else knows, and, like the dog-eating family, it involves a kind of recycling that is—as some of my research subjects pointed out—an efficient use of natural resources. But now the disgust is so much stronger, and the action just seems so … degrading. Does that make it wrong? If you’re an educated and politically liberal Westerner, you’ll probably give another nuanced answer, one that acknowledges the man’s right to do what he wants, as long as he doesn’t hurt anyone.
But if you are not a liberal or libertarian Westerner, you probably think it’s wrong—morally wrong—for someone to have sex with a chicken carcass and then eat it. For you, as for most people on the planet, morality is broad. Some actions are wrong even though they don’t hurt anyone. Understanding the simple fact that morality differs around the world, and even within societies, is the first step toward understanding your righteous mind. The next step is to understand where these many moralities came from in the first place.
I studied philosophy in college, hoping to figure out the meaning of life. After watching too many Woody Allen movies, I had the mistaken impression that philosophy would be of some help.1 But I had taken some psychology courses too, and I loved them, so I chose to continue. In 1987 I was admitted to the graduate program in psychology at the University of Pennsylvania. I had a vague plan to conduct experiments on the psychology of humor. I thought it might be fun to do research that let me hang out in comedy clubs.
A week after arriving in Philadelphia, I sat down to talk with Jonathan Baron, a professor who studies how people think and make decisions. With my (minimal) background in philosophy, we had a good discussion about ethics. Baron asked me point-blank: “Is moral thinking any different from other kinds of thinking?” I said that thinking about moral issues (such as whether abortion is wrong) seemed different from thinking about other kinds of questions (such as where to go to dinner tonight), because of the much greater need to provide reasons justifying your moral judgments to other people. Baron responded enthusiastically, and we talked about some ways one might compare moral thinking to other kinds of thinking in the lab. The next day, on the basis of little more than a feeling of encouragement, I asked him to be my advisor and I set off to study moral psychology.
In 1987, moral psychology was a part of developmental psychology. Researchers focused on questions such as how children develop in their thinking about rules, especially rules of fairness. The big question behind this research was: How do children come to know right from wrong? Where does morality come from?
There are two obvious answers to this question: nature or nurture. If you pick nature, then you’re a nativist. You believe that moral knowledge is native in our minds. It comes preloaded, perhaps in our God-inscribed hearts (as the Bible says), or in our evolved moral emotions (as Darwin argued).2
But if you believe that moral knowledge comes from nurture, then you are an empiricist.3 You believe that children are more or less blank slates at birth (as John Locke said).4 If morality varies around the world and across the centuries, then how could it be innate? Whatever morals we have as adults must have been learned during childhood from our own experience, which includes adults telling us what’s right and wrong. (Empirical means “from observation or experience.”)
But this is a false choice, and in 1987 moral psychology was mostly focused on a third answer: rationalism, which says that kids figure out morality for themselves. Jean Piaget, the greatest developmental psychologist of all time, began his career as a zoologist studying mollusks and insects in his native Switzerland. He was fascinated by the stages that animals went through as they transformed themselves from, say, caterpillars to butterflies. Later, when his attention turned to children, he brought with him this interest in stages of development. Piaget wanted to know how the extraordinary sophistication of adult thinking (a cognitive butterfly) emerges from the limited abilities of young children (lowly caterpillars).
Piaget focused on the kinds of errors kids make. For example, he’d put water into two identical drinking glasses and ask kids to tell him if the glasses held the same amount of water. (Yes.) Then he’d pour the contents of one of the glasses into a tall skinny glass and ask the child to compare the new glass to the one that had not been touched. Kids younger than six or seven usually say that the tall skinny glass now holds more water, because the level is higher. They don’t understand that the total volume of water is conserved when it moves from glass to glass. He also found that it’s pointless for adults to explain the conservation of volume to kids. The kids won’t get it until they reach an age (and cognitive stage) when their minds are ready for it. And when they are ready, they’ll figure it out for themselves just by playing with cups of water.
In other words, the understanding of the conservation of volume wasn’t innate, and it wasn’t learned from adults. Kids figure it out for themselves, but only when their minds are ready and they are given the right kinds of experiences.
Piaget applied this cognitive-developmental approach to the study of children’s moral thinking as well.5 He got down on his hands and knees to play marbles with children, and sometimes he deliberately broke rules and played dumb. The children then responded to his mistakes, and in so doing, they revealed their growing ability to respect rules, change rules, take turns, and resolve disputes. This growing knowledge came in orderly stages, as children’s cognitive abilities matured.
Piaget argued that children’s understanding of morality is like their understanding of those water glasses: we can’t say that it is innate, and we can’t say that kids learn it directly from adults.6 It is, rather, self-constructed as kids play with other kids. Taking turns in a game is like pouring water back and forth between glasses. No matter how often you do it with three-year-olds, they’re just not ready to get the concept of fairness,7 any more than they can understand the conservation of volume. But once they’ve reached the age of five or six, then playing games, having arguments, and working things out together will help them learn about fairness far more effectively than any sermon from adults.
This is the essence of psychological rationalism: We grow into our rationality as caterpillars grow into butterflies. If the caterpillar eats enough leaves, it will (eventually) grow wings. And if the child gets enough experiences of turn taking, sharing, and playground justice, it will (eventually) become a moral creature, able to use its rational capacities to solve ever harder problems. Rationality is our nature, and good moral reasoning is the end point of development.
Rationalism has a long and complex history in philosophy. In this book I’ll use the word rationalist to describe anyone who believes that reasoning is the most important and reliable way to obtain moral knowledge.8
Piaget’s insights were extended by Lawrence Kohlberg, who revolutionized the study of morality in the 1960s with two key innovations.9 First, he developed a way to quantify Piaget’s observation that children’s moral reasoning changed over time. He created a set of moral dilemmas that he presented to children of various ages, and he recorded and coded their responses. For example, should a man named Heinz break into a drugstore to steal a drug that would save his dying wife? Should a girl named Louise reveal to her mother that her younger sister had lied to the mother? It didn’t much matter whether the child said yes or no; what mattered were the reasons children gave when they tried to explain their answers.
Kohlberg found a six-stage progression in children’s reasoning about the social world, and this progression matched up well with the stages Piaget had found in children’s reasoning about the physical world. Young children judged right and wrong by very superficial features, such as whether a person was punished for an action. (If an adult punished the act, then the act must have been wrong.) Kohlberg called the first two stages the “pre-conventional” level of moral judgment, and they correspond to the Piagetian stage at which kids judge the physical world by superficial features (if a glass is taller, then it has more water in it).
But during elementary school, most children move on to the two “conventional” stages, becoming adept at understanding and even manipulating rules and social conventions. This is the age of petty legalism that most of us who grew up with siblings remember well (“I’m not hitting you. I’m using your hand to hit you. Stop hitting yourself!”). Kids at this stage generally care a lot about conformity, and they have great respect for authority—in word, if not always in deed. They rarely question the legitimacy of authority, even as they learn to maneuver within and around the constraints that adults impose on them.
After puberty, right when Piaget said that children become capable of abstract thought, Kohlberg found that some children begin to think for themselves about the nature of authority, the meaning of justice, and the reasons behind rules and laws. In the two “post-conventional” stages, adolescents still value honesty and respect rules and laws, but now they sometimes justify dishonesty or law-breaking in pursuit of still higher goods, particularly justice. Kohlberg painted an inspiring rationalist image of children as “moral philosophers” trying to work out coherent ethical systems for themselves.10 In the post-conventional stages, they finally get good at it. Kohlberg’s dilemmas were a tool for measuring these dramatic advances in moral reasoning.
It is often said that “to a man with a hammer, everything looks like a nail.” Once Kohlberg developed his moral dilemmas and his scoring techniques, the psychological community had a new hammer, and a thousand graduate students used it to pound out dissertations on moral reasoning. But there’s a deeper reason so many young psychologists began to study morality from a rationalist perspective, and this was Kohlberg’s second great innovation: he used his research to build a scientific justification for a secular liberal moral order.
Kohlberg’s most influential finding was that the most morally advanced kids (according to his scoring technique) were those who had frequent opportunities for role taking—for putting themselves into another person’s shoes and looking at a problem from that person’s perspective. Egalitarian relationships (such as with peers) invite role taking, but hierarchical relationships (such as with teachers and parents) do not. It’s really hard for a child to see things from the teacher’s point of view, because the child has never been a teacher. Piaget and Kohlberg both thought that parents and other authorities were obstacles to moral development. If you want your kids to learn about the physical world, let them play with cups and water; don’t lecture them about the conservation of volume. And if you want your kids to learn about the social world, let them play with other kids and resolve disputes; don’t lecture them about the Ten Commandments. And, for heaven’s sake, don’t force them to obey God or their teachers or you. That will only freeze them at the conventional level.
Kohlberg’s timing was perfect. Just as the first wave of baby boomers was entering graduate school, he transformed moral psychology into a boomer-friendly ode to justice, and he gave them a tool to measure children’s progress toward the liberal ideal. For the next twenty-five years, from the 1970s through the 1990s, moral psychologists mostly just interviewed young people about moral dilemmas and analyzed their justifications.11 Most of this work was not politically motivated—it was careful and honest scientific research. But because it relied on a framework that predefined morality as justice while denigrating authority, hierarchy, and tradition, the research was bound to support worldviews that were secular, questioning, and egalitarian.
If you force kids to explain complex notions, such as how to balance competing concerns about rights and justice, you’re guaranteed to find age trends because kids get so much more articulate with each passing year. But if you are searching for the first appearance of a moral concept, then you’d better find a technique that doesn’t require much verbal skill. Kohlberg’s former student Elliot Turiel developed such a technique. His innovation was to tell children short stories about other kids who break rules and then give them a series of simple yes-or-no probe questions. For example, you tell a story about a child who goes to school wearing regular clothes, even though his school requires students to wear a uniform. You start by getting an overall judgment: “Is that OK, what the boy did?” Most kids say no. You ask if there’s a rule about what to wear. (“Yes.”) Then you probe to find out what kind of rule it is: “What if the teacher said it was OK for the boy to wear his regular clothes, then would it be OK?” and “What if this happened in another school, where they don’t have any rules about uniforms, then would it be OK?”
Turiel discovered that children as young as five usually say that the boy was wrong to break the rule, but that it would be OK if the teacher gave permission or if it happened in another school where there was no such rule. Children recognize that rules about clothing, food, and many other aspects of life are social conventions, which are arbitrary and changeable to some extent.12
But if you ask kids about actions that hurt other people, such as a girl who pushes a boy off a swing because she wants to use it, you get a very different set of responses. Nearly all kids say that the girl was wrong and that she’d be wrong even if the teacher said it was OK, and even if this happened in another school where there were no rules about pushing kids off swings. Children recognize that rules that prevent harm are moral rules, which Turiel defined as rules related to “justice, rights, and welfare pertaining to how people ought to relate to each other.”13
In other words, young children don’t treat all rules the same, as Piaget and Kohlberg had supposed. Kids can’t talk like moral philosophers, but they are busy sorting social information in a sophisticated way. They seem to grasp early on that rules that prevent harm are special, important, unalterable, and universal. And this realization, Turiel said, was the foundation of all moral development. Children construct their moral understanding on the bedrock of the absolute moral truth that harm is wrong. Specific rules may vary across cultures, but in all of the cultures Turiel examined, children still made a distinction between moral rules and conventional rules.14
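Turiel’s probe sequence amounts to a simple decision rule, and it may help to see it spelled out. Here is a minimal sketch in Python; the field names and the three-way outcome are my own illustrative labels, not Turiel’s actual coding scheme:

```python
# A sketch of Turiel's moral/conventional probe logic. The record fields
# and labels are illustrative assumptions, not Turiel's published coding.
from dataclasses import dataclass

@dataclass
class ProbeResponses:
    wrong_overall: bool            # "Is that OK, what the boy did?" (No = wrong)
    ok_if_authority_permits: bool  # "What if the teacher said it was OK?"
    ok_where_no_rule_exists: bool  # "What if another school had no such rule?"

def classify_rule(r: ProbeResponses) -> str:
    """Return 'permissible', 'conventional', or 'moral'."""
    if not r.wrong_overall:
        return "permissible"
    # Conventional rules dissolve when the authority or the local rule is
    # removed; moral rules are judged binding regardless.
    if r.ok_if_authority_permits or r.ok_where_no_rule_exists:
        return "conventional"
    return "moral"

# The uniform story: wrong, but contingent on the school's rule.
print(classify_rule(ProbeResponses(True, True, True)))    # conventional
# The swing-pushing story: wrong even with permission, everywhere.
print(classify_rule(ProbeResponses(True, False, False)))  # moral
```

The two probes do all the work: only a judgment that survives both the authority probe and the other-school probe gets counted as a moral rule.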
Turiel’s account of moral development differed in many ways from Kohlberg’s, but the political implications were similar: morality is about treating individuals well. It’s about harm and fairness (not loyalty, respect, duty, piety, patriotism, or tradition). Hierarchy and authority are generally bad things (so it’s best to let kids figure things out for themselves). Schools and families should therefore embody progressive principles of equality and autonomy (not authoritarian principles that enable elders to train and constrain children).
Kohlberg and Turiel had pretty much defined the field of moral psychology by the time I sat in Jon Baron’s office and decided to study morality.15 The field I entered was vibrant and growing, yet something about it felt wrong to me. It wasn’t the politics—I was very liberal back then, twenty-four years old and full of indignation at Ronald Reagan and conservative groups such as the righteously named Moral Majority. No, the problem was that the things I was reading were so … dry. I had grown up with two sisters, close in age to me. We fought every day, using every dirty rhetorical trick we could think of. Morality was such a passionate affair in my family, yet the articles I was reading were all about reasoning and cognitive structures and domains of knowledge. It just seemed too cerebral. There was hardly any mention of emotion.
As a first-year graduate student, I didn’t have the confidence to trust my instincts, so I forced myself to continue reading. But then, in my second year, I took a course on cultural psychology and was captivated. The course was taught by a brilliant anthropologist, Alan Fiske, who had spent many years in West Africa studying the psychological foundations of social relationships.16 Fiske asked us all to read several ethnographies (book-length reports of an anthropologist’s fieldwork), each of which focused on a different topic, such as kinship, sexuality, or music. But no matter the topic, morality turned out to be a central theme.
I read a book on witchcraft among the Azande of Sudan.17 It turns out that witchcraft beliefs arise in surprisingly similar forms in many parts of the world, which suggests either that there really are witches or (more likely) that there’s something about human minds that often generates this cultural institution. The Azande believed that witches were just as likely to be men as women, and the fear of being called a witch made the Azande careful not to make their neighbors angry or envious. That was my first hint that groups create supernatural beings not to explain the universe but to order their societies.18
I read a book about the Ilongot, a tribe in the Philippines whose young men gained honor by cutting off people’s heads.19 Some of these beheadings were revenge killings, which offered Western readers a motive they could understand. But many of these murders were committed against strangers who were not involved in any kind of feud with the killer. The author explained these most puzzling killings as ways that small groups of men channeled resentments and frictions within the group into a group-strengthening “hunting party,” capped off by a long night of communal celebratory singing. This was my first hint that morality often involves tension within the group linked to competition between different groups.
These ethnographies were fascinating, often beautifully written, and intuitively graspable despite the strangeness of their content. Reading each book was like spending a week in a new country: confusing at first, but gradually you tune up, finding yourself better able to guess what’s going to happen next. And as with all foreign travel, you learn as much about where you’re from as where you’re visiting. I began to see the United States and Western Europe as extraordinary historical exceptions—new societies that had found a way to strip down and thin out the thick, all-encompassing moral orders that the anthropologists wrote about.
Nowhere was this thinning more apparent than in our lack of rules about what the anthropologists call “purity” and “pollution.” Contrast us with the Hua of New Guinea, who have developed elaborate networks of food taboos that govern what men and women may eat. In order for their boys to become men, they have to avoid foods that in any way resemble vaginas, including anything that is red, wet, slimy, comes from a hole, or has hair. It sounds at first like arbitrary superstition mixed with the predictable sexism of a patriarchal society. Turiel would call these rules social conventions, because the Hua don’t believe that men in other tribes have to follow these rules. But the Hua certainly seemed to think of their food rules as moral rules. They talked about them constantly, judged each other by their food habits, and governed their lives, duties, and relationships by what the anthropologist Anna Meigs called “a religion of the body.”20
But it’s not just hunter-gatherers in rain forests who believe that bodily practices can be moral practices. When I read the Hebrew Bible, I was shocked to discover how much of the book—one of the sources of Western morality—was taken up with rules about food, menstruation, sex, skin, and the handling of corpses. Some of these rules were clear attempts to avoid disease, such as the long sections of Leviticus on leprosy. But many of the rules seemed to follow a more emotional logic about avoiding disgust. For example, the Bible prohibits Jews from eating or even touching “the swarming things that swarm upon the earth” (and just think how much more disgusting a swarm of mice is than a single mouse).21 Other rules seemed to follow a conceptual logic involving keeping categories pure or not mixing things together (such as clothing made from two different fibers).22
So what’s going on here? If Turiel was right that morality is really about harm, then why do most non-Western cultures moralize so many practices that seem to have nothing to do with harm? Why do many Christians and Jews believe that “cleanliness is next to godliness”?23 And why do so many Westerners, even secular ones, continue to see choices about food and sex as being heavily loaded with moral significance? Liberals sometimes say that religious conservatives are sexual prudes for whom anything other than missionary-position intercourse within marriage is a sin. But conservatives can just as well make fun of liberal struggles to choose a balanced breakfast—balanced among moral concerns about free-range eggs, fair-trade coffee, naturalness, and a variety of toxins, some of which (such as genetically modified corn and soybeans) pose a greater threat spiritually than biologically. Even if Turiel was right that children lock onto harmfulness as a method for identifying immoral actions, I couldn’t see how kids in the West—let alone among the Azande, the Ilongot, and the Hua—could have come to all this purity and pollution stuff on their own. There must be more to moral development than kids constructing rules as they take the perspectives of other people and feel their pain. There must be something beyond rationalism.
When anthropologists wrote about morality, it was as though they spoke a different language from the psychologists I had been reading. The Rosetta stone that helped me translate between the two fields was a paper that had just been published by Fiske’s former advisor, Richard Shweder, at the University of Chicago.24 Shweder is a psychological anthropologist who had lived and worked in Orissa, a state on the east coast of India. He had found large differences in how Oriyans (residents of Orissa) and Americans thought about personality and individuality, and these differences led to corresponding differences in how they thought about morality. Shweder quoted the anthropologist Clifford Geertz on how unusual Westerners are in thinking about people as discrete individuals:
The Western conception of the person as a bounded, unique, more or less integrated motivational and cognitive universe, a dynamic center of awareness, emotion, judgment, and action organized into a distinctive whole and set contrastively both against other such wholes and against its social and natural background, is, however incorrigible it may seem to us, a rather peculiar idea within the context of the world’s cultures.25
Shweder offered a simple idea to explain why the self differs so much across cultures: all societies must resolve a small set of questions about how to order society, the most important being how to balance the needs of individuals and groups. There seem to be just two primary ways of answering this question. Most societies have chosen the sociocentric answer, placing the needs of groups and institutions first, and subordinating the needs of individuals. In contrast, the individualistic answer places individuals at the center and makes society a servant of the individual.26 The sociocentric answer dominated most of the ancient world, but the individualistic answer became a powerful rival during the Enlightenment. The individualistic answer largely vanquished the sociocentric approach in the twentieth century as individual rights expanded rapidly, consumer culture spread, and the Western world reacted with horror to the evils perpetrated by the ultrasociocentric fascist and communist empires. (European nations with strong social safety nets are not sociocentric on this definition. They just do a very good job of protecting individuals from the vicissitudes of life.)
Shweder thought that the theories of Kohlberg and Turiel were produced by and for people from individualistic cultures. He doubted that those theories would apply in Orissa, where morality was sociocentric, selves were interdependent, and no bright line separated moral rules (preventing harm) from social conventions (regulating behaviors not linked directly to harm). To test his ideas, he and two collaborators came up with thirty-nine very short stories in which someone does something that would violate a rule either in the United States or in Orissa. The researchers then interviewed 180 children (ranging in age from five to thirteen) and 60 adults who lived in Hyde Park (the neighborhood surrounding the University of Chicago) about these stories. They also interviewed a matched sample of Brahmin children and adults in the town of Bhubaneswar (an ancient pilgrimage site in Orissa),27 and 120 people from low (“untouchable”) castes. Altogether it was an enormous undertaking—six hundred long interviews in two very different cities.
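The tally behind those six hundred interviews is worth making explicit. A minimal sketch, assuming the “matched” Bhubaneswar Brahmin sample mirrored the Hyde Park composition (the text describes it only as matched, without an exact breakdown):

```python
# Sample tally for Shweder's study, as described above. The Brahmin
# sample size is an assumption inferred from the word "matched."
hyde_park = 180 + 60        # children + adults in Hyde Park, Chicago
brahmin_matched = 180 + 60  # assumed to mirror the Hyde Park sample
low_caste = 120             # subjects from low ("untouchable") castes
print(hyde_park + brahmin_matched + low_caste)  # 600 interviews
```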
The interview used Turiel’s method, more or less, but the scenarios covered many more behaviors than Turiel had ever asked about. As you can see in the top third of figure 1.1, people in some of the stories obviously hurt other people or treated them unfairly, and subjects (the people being interviewed) in both countries condemned these actions by saying that they were wrong, unalterably wrong, and universally wrong. But the Indians would not condemn other cases that seemed (to Americans) just as clearly to involve harm and unfairness (see middle third).
Most of the thirty-nine stories portrayed no harm or unfairness, at least none that could have been obvious to a five-year-old child, and nearly all Americans said that these actions were permissible (see the bottom third of figure 1.1). If Indians said that these actions were wrong, then Turiel would predict that they were condemning the actions merely as violations of social conventions. Yet most of the Indian subjects—even the five-year-old children—said that these actions were wrong, universally wrong, and unalterably wrong. Indian practices related to food, sex, clothing, and gender relations were almost always judged to be moral issues, not social conventions, and there were few differences between the adults and children within each city. In other words, Shweder found almost no trace of social conventional thinking in the sociocentric culture of Orissa, where, as he put it, “the social order is a moral order.” Morality was much broader and thicker in Orissa; almost any practice could be loaded up with moral force. And if that was true, then Turiel’s theory became less plausible. Children were not figuring out morality for themselves, based on the bedrock certainty that harm is bad.
Actions that Indians and Americans agreed were wrong:
• While walking, a man saw a dog sleeping on the road. He walked up to it and kicked it.
• A father said to his son, “If you do well on the exam, I will buy you a pen.” The son did well on the exam, but the father did not give him anything.
Actions that Americans said were wrong but Indians said were acceptable:
• A young married woman went alone to see a movie without informing her husband. When she returned home her husband said, “If you do it again, I will beat you black and blue.” She did it again; he beat her black and blue. (Judge the husband.)
• A man had a married son and a married daughter. After his death his son claimed most of the property. His daughter got little. (Judge the son.)
Actions that Indians said were wrong but Americans said were acceptable:
• In a family, a twenty-five-year-old son addresses his father by his first name.
• A woman cooked rice and wanted to eat with her husband and his elder brother. Then she ate with them. (Judge the woman.)
• A widow in your community eats fish two or three times a week.
• After defecation a woman did not change her clothes before cooking.
FIGURE 1.1. Some of the thirty-nine stories used in Shweder, Mahapatra, and Miller 1987.
Even in Chicago, Shweder found relatively little evidence of social-conventional thinking. There were plenty of stories that contained no obvious harm or injustice, such as a widow eating fish, and Americans predictably said that those cases were fine. But more important, they didn’t see these behaviors as social conventions that could be changed by popular consent. They believed that widows should be able to eat whatever they darn well please, and if there’s some other country where people try to limit widows’ freedoms, well, they’re wrong to do so. Even in the United States the social order is a moral order, but it’s an individualistic order built up around the protection of individuals and their freedom. The distinction between morals and mere conventions is not a tool that children everywhere use to self-construct their moral knowledge. Rather, the distinction turns out to be a cultural artifact, a necessary by-product of the individualistic answer to the question of how individuals and groups relate. When you put individuals first, before society, then any rule or social practice that limits personal freedom can be questioned. If it doesn’t protect somebody from harm, then it can’t be morally justified. It’s just a social convention.
Shweder’s study was a major attack on the whole rationalist approach, and Turiel didn’t take it lying down. He wrote a long rebuttal essay pointing out that many of Shweder’s thirty-nine stories were trick questions: they had very different meanings in India and America.28 For example, Hindus in Orissa believe that fish is a “hot” food that will stimulate a person’s sexual appetite. If a widow eats hot foods, she is more likely to have sex with someone, which would offend the spirit of her dead husband and prevent her from reincarnating at a higher level. Turiel argued that once you take into account Indian “informational assumptions” about the way the world works, you see that most of Shweder’s thirty-nine stories really were moral violations, harming victims in ways that Americans could not see. So Shweder’s study didn’t contradict Turiel’s claims; it might even support them, if we could find out for sure whether Shweder’s Indian subjects saw harm in the stories.
When I read the Shweder and Turiel essays, I had two strong reactions. The first was an intellectual agreement with Turiel’s defense. Shweder had used “trick” questions not to be devious but to demonstrate that rules about food, clothing, ways of addressing people, and other seemingly conventional matters could all get woven into a thick moral web. Nonetheless, I agreed with Turiel that Shweder’s study was missing an important experimental control: he didn’t ask his subjects about harm. If Shweder wanted to show that morality extended beyond harm in Orissa, he had to show that people were willing to morally condemn actions that they themselves stated were harmless.
My second reaction was a gut feeling that Shweder was ultimately right. His explanation of sociocentric morality fit so perfectly with the ethnographies I had read in Fiske’s class. His emphasis on the moral emotions was so satisfying after reading all that cerebral cognitive-developmental work. I thought that if somebody ran the right study—one that controlled for perceptions of harm—Shweder’s claims about cultural differences would survive the test. I spent the next semester figuring out how to become that somebody.
I started writing very short stories about people who do offensive things, but do them in such a way that nobody is harmed. I called these stories “harmless taboo violations,” and you read two of them at the start of this chapter (about dog-eating and chicken- … eating). I made up dozens of these stories but quickly found that the ones that worked best fell into two categories: disgust and disrespect. If you want to give people a quick flash of revulsion but deprive them of any victim they can use to justify moral condemnation, ask them about people who do disgusting or disrespectful things, but make sure the actions are done in private so that nobody else is offended. For example, one of my disrespect stories was: “A woman is cleaning out her closet, and she finds her old American flag. She doesn’t want the flag anymore, so she cuts it up into pieces and uses the rags to clean her bathroom.”
My idea was to give adults and children stories that pitted gut feelings about important cultural norms against reasoning about harmlessness, and then see which force was stronger. Turiel’s rationalism predicted that reasoning about harm is the basis of moral judgment, so even though people might say it’s wrong to eat your dog, they would have to treat the act as a violation of a social convention. (We don’t eat our dogs, but hey, if people in another country want to eat their ex-pets rather than bury them, who are we to criticize?) Shweder’s theory, on the other hand, said that Turiel’s predictions should hold among members of individualistic secular societies but not elsewhere. I now had a study design. I just had to find the elsewhere.
I spoke Spanish fairly well, so when I learned that a major conference of Latin American psychologists was to be held in Buenos Aires in July 1989, I bought a plane ticket. I had no contacts and no idea how to start an international research collaboration, so I just went to every talk that had anything to do with morality. I was chagrined to discover that psychology in Latin America was not very scientific. It was heavily theoretical, and much of that theory was Marxist, focused on oppression, colonialism, and power. I was beginning to despair when I chanced upon a session run by some Brazilian psychologists who were using Kohlbergian methods to study moral development. I spoke afterward to the chair of the session, Angela Biaggio, and her graduate student Silvia Koller. Even though they both liked Kohlberg’s approach, they were interested in hearing about alternatives. Biaggio invited me to visit them after the conference at their university in Porto Alegre, the capital of the southernmost state in Brazil.
Southern Brazil is the most European part of the country, settled largely by Portuguese, German, and Italian immigrants in the nineteenth century. With its modern architecture and middle-class prosperity, Porto Alegre didn’t look anything like the Latin America of my imagination, so at first I was disappointed. I wanted my cross-cultural study to involve someplace exotic, like Orissa. But Silvia Koller was a wonderful collaborator, and she had two great ideas about how to increase our cultural diversity. First, she suggested we run the study across social class. The divide between rich and poor is so vast in Brazil that it’s as though people live in different countries. We decided to interview adults and children from the educated middle class, and also from the lower class—adults who worked as servants for wealthy people (and who rarely had more than an eighth-grade education) and children from a public school in the neighborhood where many of the servants lived. Second, Silvia had a friend who had just been hired as a professor in Recife, a city in the northeastern tip of the country, a region that is culturally very different from Porto Alegre. Silvia arranged for me to visit her friend, Graça Dias, the following month.
Silvia and I worked for two weeks with a team of undergraduate students, translating the harmless taboo stories into Portuguese, selecting the best ones, refining the probe questions, and testing our interview script to make sure that everything was understandable, even by the least educated subjects, some of whom were illiterate. Then I went off to Recife, where Graça and I trained a team of students to conduct interviews in exactly the way they were being done in Porto Alegre. In Recife I finally felt like I was working in an exotic tropical locale, with Brazilian music wafting through the streets and ripe mangoes falling from the trees. More important, the people of northeast Brazil are mostly of mixed ancestry (African and European), and the region is poorer and much less industrialized than Porto Alegre.
When I returned to Philadelphia, I trained my own team of interviewers and supervised the data collection for the four groups of subjects in Philadelphia. The design of the study was therefore what we call “three by two by two,” meaning that we had three cities, and in each city we had two levels of social class (high and low), and within each social class we had two age groups: children (ages ten to twelve) and adults (ages eighteen to twenty-eight). That made for twelve groups in all, with thirty people in each group, for a total of 360 interviews. This large number of subjects allowed me to run statistical tests to examine the independent effects of city, social class, and age. I predicted that Philadelphia would be the most individualistic of the three cities (and therefore the most Turiel-like) and Recife would be the most sociocentric (and therefore more like Orissa in its judgments).
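The arithmetic of that design is easy to verify. Here is a minimal sketch enumerating the cells (the group labels simply restate the description above):

```python
# Enumerate the 3 x 2 x 2 factorial design described above.
from itertools import product

cities = ["Philadelphia", "Porto Alegre", "Recife"]
social_classes = ["high", "low"]
age_groups = ["children (10-12)", "adults (18-28)"]
PER_CELL = 30  # thirty subjects in each group

cells = list(product(cities, social_classes, age_groups))
assert len(cells) == 12       # twelve groups in all
print(len(cells) * PER_CELL)  # 360 interviews in total
```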
The results were as clear as could be in support of Shweder. First, all four of my Philadelphia groups confirmed Turiel’s finding that Americans make a big distinction between moral and conventional violations. I used two stories taken directly from Turiel’s research: a girl pushes a boy off a swing (that’s a clear moral violation) and a boy refuses to wear a school uniform (that’s a conventional violation). This validated my methods. It meant that any differences I found on the harmless taboo stories could not be attributed to some quirk about the way I phrased the probe questions or trained my interviewers. The upper-class Brazilians looked just like the Americans on these stories. But the working-class Brazilian kids usually thought that it was wrong, and universally wrong, to break the social convention and not wear the uniform. In Recife in particular, the working-class kids judged the uniform rebel in exactly the same way they judged the swing-pusher. This pattern supported Shweder: the size of the moral-conventional distinction varied across cultural groups.
The second thing I found was that people responded to the harmless taboo stories just as Shweder had predicted: the upper-class Philadelphians judged them to be violations of social conventions, and the lower-class Recifeans judged them to be moral violations. There were separate significant effects of city (Porto Alegreans moralized more than Philadelphians, and Recifeans moralized more than Porto Alegreans), of social class (lower-class groups moralized more than upper-class groups), and of age (children moralized more than adults). Unexpectedly, the effect of social class was much larger than the effect of city. In other words, well-educated people in all three cities were more similar to each other than they were to their lower-class neighbors. I had flown five thousand miles south to search for moral variation when in fact there was more to be found a few blocks west of campus, in the poor neighborhood surrounding my university.
My third finding was that all the differences I found held up when I controlled for perceptions of harm. I had included a probe question that directly asked, after each story: “Do you think anyone was harmed by what [the person in the story] did?” If Shweder’s findings were caused by perceptions of hidden victims (as Turiel proposed), then my cross-cultural differences should have disappeared when I removed the subjects who said yes to this question. But when I filtered out these people, the cultural differences got bigger, not smaller. This was very strong support for Shweder’s claim that the moral domain goes far beyond harm. Most of my subjects said that the harmless-taboo violations were universally wrong even though they harmed nobody.
In other words, Shweder won the debate. I had replicated Turiel’s findings using Turiel’s methods on people like me—educated Westerners raised in an individualistic culture—but had confirmed Shweder’s claim that Turiel’s theory didn’t travel well. The moral domain varied across nations and social classes. For most of the people in my study, the moral domain extended well beyond issues of harm and fairness.
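For the methodologically inclined, the harm control behind the third finding amounts to a filter-then-recompare step. Here is a minimal sketch with an invented record format; the actual interviews were coded by hand, not like this:

```python
# Sketch of the harm-control analysis: drop every response in which the
# subject said someone was harmed, then recompute moralization rates per
# group. The record format below is invented for illustration.
from collections import defaultdict

responses = [
    # (city, social_class, judged_morally_wrong, perceived_harm)
    ("Philadelphia", "high", False, False),
    ("Recife", "low", True, False),
    ("Recife", "low", True, True),  # dropped: subject saw a victim
    # ... one record per subject per story
]

def moralization_rates(records):
    """Share of 'morally wrong' judgments within each (city, class) group."""
    wrong, total = defaultdict(int), defaultdict(int)
    for city, cls, judged_wrong, _harm in records:
        total[(city, cls)] += 1
        wrong[(city, cls)] += judged_wrong
    return {group: wrong[group] / total[group] for group in total}

# Keep only the responses in which nobody was said to be harmed.
harm_free = [r for r in responses if not r[3]]
print(moralization_rates(harm_free))
```

If hidden victims had been driving the judgments, the group differences should shrink after this filter; in the data they grew.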
It was hard to see how a rationalist could explain these results. How could children self-construct their moral knowledge about disgust and disrespect from their private analyses of harmfulness? There must be other sources of moral knowledge, including cultural learning (as Shweder argued), or innate moral intuitions about disgust and disrespect (as I began to argue years later).
I once overheard a Kohlberg-style moral judgment interview being conducted in the bathroom of a McDonald’s restaurant in northern Indiana. The person interviewed—the subject—was a Caucasian male roughly thirty years old. The interviewer was a Caucasian male approximately four years old. The interview began at adjacent urinals:
INTERVIEWER: Dad, what would happen if I pooped in here [the urinal]?
SUBJECT: It would be yucky. Go ahead and flush. Come on, let’s go wash our hands.
[The pair then moved over to the sinks]
INTERVIEWER: Dad, what would happen if I pooped in the sink?
SUBJECT: The people who work here would get mad at you.
INTERVIEWER: What would happen if I pooped in the sink at home?
SUBJECT: I’d get mad at you.
INTERVIEWER: What would happen if you pooped in the sink at home?
SUBJECT: Mom would get mad at me.
INTERVIEWER: Well, what would happen if we all pooped in the sink at home?
SUBJECT: [pause] I guess we’d all get in trouble.
INTERVIEWER: [laughing] Yeah, we’d all get in trouble!
SUBJECT: Come on, let’s dry our hands. We have to go.
Note the skill and persistence of the interviewer, who probes for a deeper answer by changing the transgression to remove the punisher. Yet even when everyone cooperates in the rule violation so that nobody can play the role of punisher, the subject still clings to a notion of cosmic justice in which, somehow, the whole family would “get in trouble.”
Of course, the father is not really trying to demonstrate his best moral reasoning. Moral reasoning is usually done to influence other people (see chapter 4), and what the father is trying to do is get his curious son to feel the right emotions—disgust and fear—to motivate appropriate bathroom behavior.
Even though the results came out just as Shweder had predicted, there were a number of surprises along the way. The biggest surprise was that so many subjects tried to invent victims. I had written the stories carefully to remove all conceivable harm to other people, yet in 38 percent of the 1,620 times that people heard a harmless taboo story, they claimed that somebody was harmed. In the dog story, for example, many people said that the family itself would be harmed because they would get sick from eating dog meat. Was this an example of the “informational assumptions” that Turiel had talked about? Were people really condemning the actions because they foresaw these harms, or was it the reverse process—were people inventing these harms because they had already condemned the actions?
I conducted many of the Philadelphia interviews myself, and it was obvious that most of these supposed harms were post hoc fabrications. People usually condemned the actions very quickly—they didn’t seem to need much time to decide what they thought. But it often took them a while to come up with a victim, and they usually offered those victims up halfheartedly and almost apologetically. As one subject said, “Well, I don’t know, maybe the woman will feel guilty afterward about throwing out her flag?” Many of these victim claims were downright preposterous, such as the child who justified his condemnation of the flag shredder by saying that the rags might clog up the toilet and cause it to overflow.
But something even more interesting happened when I or the other interviewers challenged these invented-victim claims. I had trained my interviewers to correct people gently when they made claims that contradicted the text of the story. For example, if someone said, “It’s wrong to cut up the flag because a neighbor might see her do it, and he might be offended,” the interviewer replied, “Well, it says here in the story that nobody saw her do it. So would you still say it was wrong for her to cut up her flag?” Yet even when subjects recognized that their victim claims were bogus, they still refused to say that the act was OK. Instead, they kept searching for another victim. They said things like “I know it’s wrong, but I just can’t think of a reason why.” They seemed to be morally dumbfounded—rendered speechless by their inability to explain verbally what they knew intuitively.29
These subjects were reasoning. They were working quite hard at reasoning. But it was not reasoning in search of truth; it was reasoning in support of their emotional reactions. It was reasoning as described by the philosopher David Hume, who wrote in 1739 that “reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.”30
I had found evidence for Hume’s claim. I had found that moral reasoning was often a servant of moral emotions, and this was a challenge to the rationalist approach that dominated moral psychology. I published these findings in one of the top psychology journals in October 199331 and then waited nervously for the response. I knew that the field of moral psychology was not going to change overnight just because one grad student produced some data that didn’t fit into the prevailing paradigm. I knew that debates in moral psychology could be quite heated (though always civil). What I did not expect, however, was that there would be no response at all. Here I thought I had done the definitive study to settle a major debate in moral psychology, yet almost nobody cited my work—not even to attack it—in the first five years after I published it.
My dissertation landed with a silent thud in part because I published it in a social psychology journal. But in the early 1990s, the field of moral psychology was still a part of developmental psychology. If you called yourself a moral psychologist, it meant that you studied moral reasoning and how it changed with age, and you cited Kohlberg extensively whether you agreed with him or not.
But psychology itself was about to change and become a lot more emotional.
Where does morality come from? The two most common answers have long been that it is innate (the nativist answer) or that it comes from childhood learning (the empiricist answer). In this chapter I considered a third possibility, the rationalist answer, which dominated moral psychology when I entered the field: that morality is self-constructed by children on the basis of their experiences with harm. Kids know that harm is wrong because they hate to be harmed, and they gradually come to see that it is therefore wrong to harm others, which leads them to understand fairness and eventually justice. I explained why I came to reject this answer after conducting research in Brazil and the United States. I concluded instead that:
• The moral domain varies by culture. It is unusually narrow in Western, educated, and individualistic cultures. Sociocentric cultures broaden the moral domain to encompass and regulate more aspects of life.
• People sometimes have gut feelings—particularly about disgust and disrespect—that can drive their reasoning. Moral reasoning is sometimes a post hoc fabrication.
• Morality can’t be entirely self-constructed by children based on their growing understanding of harm. Cultural learning or guidance must play a larger role than rationalist theories had given it.
If morality doesn’t come primarily from reasoning, then that leaves some combination of innateness and social learning as the most likely candidates. In the rest of this book I’ll try to explain how morality can be innate (as a set of evolved intuitions) and learned (as children learn to apply those intuitions within a particular culture). We’re born to be righteous, but we have to learn what, exactly, people like us should be righteous about.