In much of current discussion, morality is said to be outside of the purview of science because science is descriptive, not prescriptive. Simply said, this means that science cannot tell you what to do. Although there is certainly a clear and important difference between description and prescription, the implications of the science of decision-making for morality are more complex than they would seem at first glance.
Everyone agrees that there is a fundamental philosophical difference between description and prescription. In the famous words of David Hume, there is a difference between “is” and “ought to be.”1 Description is about what will happen. (If you drop that heavy rock, it will fall. If you drop it on your foot, it will hurt.) Prescription is about what you should do. (Don’t drop that rock on your foot!)
This distinction is very clear in modern medicine. Description is about the effect of the treatment. (Antibiotics are ineffective against viruses. Bacteria evolve in response to antibiotics.) Prescription is about what we should do to treat the disease. (If you have a bacterial infection, antibiotics are a good idea. If you have a viral infection, don’t use antibiotics because you are only helping the bacteria in the world evolve resistance to them.)
Science is fundamentally descriptive. Science is a method to determine the causal structure of the world through prediction and replication.A We need a different term for how to make prescriptive decisions between individuals and groups who disagree. These are policy questions, and the best term, I think, is politics.B Saying that prescriptive decisions are fundamentally political questions is not meant to imply that they depend on one’s party allegiance (capital-P Politics) or the specifics of what gets written into laws (morality is not legality)—we are talking small-p politics here: How do humans interact with other humans and their societies? As we will see below, humans spontaneously organize into small-p political structures to handle their social interactions, and much of that small-p politics consists of unwritten social constructions.3
As we will see, the role of small-p politics becomes clear when we disagree about goals. If one kid wants to play basketball with dad outside and the other kid wants to play a computer game with dad inside, some negotiation has to occur to settle a fair distribution of dad’s time. Similarly, a company may decry regulations that are preventing it from opening new stores and new markets, while others demand those regulations to ensure a locally desired principle. Or one may need to make a decision about whether to risk prospecting for oil in fragile shrimp-farming grounds. Notice that I have been very careful in these examples not to make any claims about right and wrong; I have only shown examples where goals differ and negotiation is required. Right and wrong in each of these examples will likely depend on the details of the situation. For example, it makes a big difference if those regulations are preventing the company from dumping toxic waste into a neighboring city’s drinking water or if those regulations state that only white people are allowed to own companies.
The statement that goals differ does not imply that both sides are morally equivalent (the members of one side may say that their goal is to exterminate all people of a certain ethnicity, while the other side may say that they want both groups to live in peace). Similarly, the statement that negotiation is required does not mean that an accommodation has to be made—in the American Civil War and World War II, the “negotiation” between two opposing moral codes (slavery or not, fascism and genocide or not) entailed the two sides fighting a war.
There are three literatures that have tried to claim control of the prescriptive magisterium of morality: religion, philosophy, and economics.
In the separation of “is” from “ought to be,” morality is often seen as the purview of religion and philosophy.4 Stephen Jay Gould famously argued for what he called “NOMA,” or non-overlapping magisteria, and tried to define the difference between science and religion as being of different “magisteria” (different realms of thought)—religion could remain about what we should do, while science could remain about what happens when we do something. Religion could be about meaning, while science could be about the physical world. Religion could be about why, while science could be about how.
Although many people have argued that morality is the purview of religion, I will argue here that this is incorrect.C In fact, religion is also descriptive, not prescriptive. The prescription of religion is based on creating a descriptive theory in which the prescription becomes obvious—God says this is what you must do. If you do this, you will be rewarded. The reward is usually some sort of heaven, while punishment is hell.6 (Even religions without gods describe reward and punishment for correct behavior, such as the ideas that appropriate behavior will allow one to reach nirvana, or that good deeds will lead to reincarnation into a better life in the next cycle.) But reward can also be promised in this life, as in the prosperity theology movement that has arisen recently in the United States.
Notice that this is fundamentally a descriptive statement: If you do the right thing, then you will be rewarded. The fact that this is descriptive can be seen in Pascal’s famous wager in which he has to decide whether or not to believe in God, and decides to do so because the cost of belief if God does not exist is small, while the cost of disbelief if God does exist is large.7 The bet is laid down on the accuracy of the description. The prescription comes from the goal of attaining reward (Heaven) and avoiding punishment (Hell).
The separation of prescription from religion can be seen in George Bernard Shaw’s play Man and Superman, where several of the characters reject the goal (rejecting prescription but not description) and decide they would rather go to Hell than suffer in Heaven. Similarly, in Milton’s Paradise Lost, Lucifer famously says, “Better to reign in Hell, than serve in Heaven.” We can say that these people are misguided, that they have made the wrong choices (in Milton that’s clear, in Shaw not so much), but the fact that a change in goals changes the morality implied by religious belief makes clear the inadequacy of religion as the prescriptive magisterium.
Some nonsupernatural moral codes are often referred to under the rubric of religion, such as Confucianism, Taoism, and Utilitarianism.8 However, these are really examples of optimization functions—Confucianism’s stated goal is to maximize stability within a society, while Taoism aims for “inner peace” and harmony with one’s environment, and Utilitarianism attempts to maximize total happiness. These are prescriptive statements—they define a goal to optimize and say that we “ought” to do things that maximize that goal. The discussion of optimization functions leads us first into philosophy and then to economics.
Historically, philosophers have been searching for a categorical imperative, a simple statement that can be used to define morality from first principles.9 The classic example is of course the Golden Rule, “Do unto others as you would have them do unto you,” with versions in the Torah, the New Testament, and the Analects of Confucius.10 Interestingly, versions differ in whether they are phrased in positive terms (Do unto others what you would have them do unto you) or negative ones (Do not do unto others what you would not have them do unto you).
Many such categorical imperatives have been proposed, but, as pointed out by Patricia Churchland in her book BrainTrust, it has always been possible to find extreme situations that undercut these imperatives. Applying the rule to real situations demands exceptions, and we end up with a hodgepodge of rules and regulations. For example, the statement that we should increase the total happiness in the world suggests that one would have to sacrifice food for one’s children to feed starving children elsewhere. While we might celebrate someone who works tirelessly for starving children, there aren’t many people who would be willing to let their own child starve to death for another’s, even for two others. Similarly, the statement that we should do unto others what we would like to have done to us if we were that other person leads to problematic situations in the face of fanatical prejudices. Imagine someone who says they would rather die than live as a paraplegic. Obviously, we would not want that person to carry out their imagined wishes on someone else who is paralyzed. In fact, paraplegic (and even tetraplegic) patients who become paralyzed from an accident tend to return to the same happiness levels they had before their accident.11 (Happiness readjusts allostatically to the new situation.)
Again, we come back to the political question of How do we negotiate different goals? John Rawls put forward an elegant proposal based on his theory of the original position, in which laws are chosen behind a “veil of ignorance.”12 Rawls suggests that laws and justice should be defined as the laws one would want given that one does not know, before writing them, where in society one will end up. So, you would not want to make a law endorsing slavery because you would not know beforehand whether you were going to be born a slave or free. Rawls’ proposal is a philosophical description of a political solution. He explicitly states that he starts from the assumption of a constitutional democracy, from the assumption that all men (all people) are created equal.
The problem with this theory is, of course, that people are not always willing to assume that they could have started anywhere in society. Aristocracies, religious selectionism, and differences in individual talent can lead to an unwillingness to accept Rawls’ premise. People are not equally talented. Some people are smarter than others. Some people are stronger. Some people are more deliberative and others more emotional. Although these differences are influenced by genetics, they are clearly an interaction between those genetics and societal influences.13
As pointed out by Stephen Jay Gould in The Mismeasure of Man, these genetic effects are notoriously difficult to measure accurately, are dangerously susceptible to cultural influences, and tend to get swamped by variability in those cultural influences. Nevertheless, it is easy for a successful individual (who is usually the one writing the law) to look at an unsuccessful individual and say that the reason for the difference in success is a moral failing. Rawls’ proposal is fundamentally dependent on empathy, the ability to put oneself in another’s place (a critical component of both religious and philosophical moral codes14).
It can be difficult to say, “There but for the grace of God, go I.” Amoral individuals have explicit difficulties with empathy.15 One of the things we will see later, when we directly examine what people do, is that people tend to be less willing to share if they think they’ve worked for their reward.16 Even if we believe in Rawls’ premise, we will still need to understand how to negotiate with individuals who do not.
One field that has, in fact, made “scientific” claims about morality is economics. These claims have been implicit rather than explicit, but by our separation between descriptive and prescriptive, a remarkable number of economic results are prescriptive rather than descriptive. This is because they are defined under an optimization function.
The most common optimization function used in economics is that of efficiency, which is defined as an economy with no missed opportunities.17 Imagine that country A can grow carrots for $10 a bushel and celery for $20 a bushel, while it costs country B $20 to grow a bushel of carrots but only $10 a bushel to grow celery. This means that if each country grows all of its own vegetables, the two countries are wasting money—they are inefficient. It would be more efficient for country A to grow only carrots and country B to grow only celery and for country A to trade some of its carrots to country B for some of the celery grown in country B. This would be more efficient even if there were a small cost to transporting carrots and celery across the border. If transportation cost $1 a bushel, then country A could effectively obtain carrots for $10 and celery for $11. Similarly, country B could effectively obtain celery for $10 and carrots for $11. Economists usually discuss this in terms of monetary costs, but we get the same results if we think of this in terms of person-power, how many hours it takes how many people to accomplish something. Country A grows carrots with less work than country B, while country B grows celery with less work than country A. Together, they can grow more food with less work. This is the advantage of specialization.
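To make the arithmetic concrete, here is a minimal sketch in Python (my own illustration, not from the text) that totals the cost of self-sufficiency versus specialization and trade using the numbers above; the assumption that each country needs one bushel of each vegetable is mine, chosen only to keep the arithmetic simple.

```python
# A minimal sketch of the carrot-and-celery example above. The assumption
# that each country needs exactly one bushel of each vegetable is for
# illustration only.

COST = {                  # cost to grow one bushel, by country and crop
    'A': {'carrots': 10, 'celery': 20},
    'B': {'carrots': 20, 'celery': 10},
}
TRANSPORT = 1             # cost to ship one bushel across the border

# Self-sufficiency: each country grows one bushel of each vegetable itself.
self_sufficient = sum(COST[c]['carrots'] + COST[c]['celery'] for c in ('A', 'B'))

# Specialization: A grows both countries' carrots, B grows both countries'
# celery, and each ships one bushel to the other.
specialized = 2 * COST['A']['carrots'] + 2 * COST['B']['celery'] + 2 * TRANSPORT

print(self_sufficient)    # 60: $30 per country growing everything itself
print(specialized)        # 42: $10 + $11 effective cost per country, i.e., $21 each
```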
This all sounds well and good, but it assumes that the goal of an economy is to minimize the “missed opportunities.” The effect of this free-trade agreement between countries A and B is that the celery growers in country A are now out of work, as are the carrot growers in country B. If your goal is to maximize jobs, then this deal is a bad one. The problem is that economists use words laden with morality like efficiency or rationality.18;D An inefficient car that gets poor gas mileage is obviously a bad thing. Is an inefficient economy also a bad thing?
The assumption that efficiency is the only goal is a strange one.20;E Sometimes inefficiency is an obvious problem, such as when the costs and purchasing prices are too far out of balance and the economy grinds to a halt, or with corruption that pulls economic production out of an economy;22 but, as with a car that gets poor gas mileage, efficiency is really only one element of a more complex optimization function.23 Lots of people buy inefficient cars because they like the safety features or the ability to carry large objects (like lots of kids) or for their style and their look.
In fact, these economic descriptions of maximizing efficiency are poor descriptors of what humans actually do, particularly at the microeconomic (individual) level, and require extensive additional components to properly describe behavior.24
At a microeconomic level, these questions of interpersonal interactions can be captured in the context of game theory,25 which assumes that two people are playing a game. This game is assumed to be specified as a sequence of choices (actions and their consequences) leading to a final set of payoffs. Each situation is a node in a tree, and each action takes the two players from one node to another. (Notice how this concept of situation and action mirrors the decision-making definitions we’ve been using throughout this book.) A classic example would be the game tic-tac-toe (Figure 23.1). At the start, the situation has an empty board. Player X has nine options. When player X puts an X in one of the squares, player O now has eight options. At the end of the game, there are three possible payoffs: Player X wins and O loses, Player O wins and X loses, or there is a tie. (It can be shown that if both players play optimally, tic-tac-toe will always end in a tie.) Strategy is defined as the set of choices that a player would make at each point in the sequence. (What should Player X do at the start of the game? [The center is optimal.] What should Player O do when Player X has two in a row? [Block it.])
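To make the game-tree idea concrete, here is a minimal minimax sketch in Python (my own illustration, not from the book) that walks every position reachable from the empty tic-tac-toe board and confirms the claim that optimal play by both players ends in a tie.

```python
# A minimal minimax sketch for tic-tac-toe. The board is a 9-character string;
# each empty square is a possible action, and each filled-in board is a state.

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    lines = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, to_move):
    """Value of the game to X under optimal play: +1 X wins, -1 O wins, 0 tie."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, square in enumerate(board) if square == ' ']
    if not moves:                      # board full with no winner: a tie
        return 0
    values = [minimax(board[:i] + to_move + board[i+1:],
                      'O' if to_move == 'X' else 'X') for i in moves]
    return max(values) if to_move == 'X' else min(values)

# With optimal play by both players, the empty board is worth 0: a tie.
print(minimax(' ' * 9, 'X'))           # prints 0
```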
If neither player can improve his or her strategy by changing something, then the players are in a Nash equilibrium.26;F The problem is that this breaks down in cooperative games. (Nash’s proof technically applied only to non-cooperative games.28) As we will see later, in cooperative games (such as the prisoner’s dilemma, below), a player can improve his or her gain by not cooperating, but if both players don’t cooperate, then they both do worse. In fact, as we will see below, people are remarkably cooperative, even when it is explicitly stated that games will only be played once and when no one knows who you are, so that there is no opportunity for reciprocity to affect decision-making.29
Figure 23.1 GAME THEORY AND TIC-TAC-TOE. Each configuration of the tic-tac-toe board is a state. The action of each player changes the state. Tic-tac-toe is a zero–sum game: if player X wins, then player O loses, and vice versa. Compare the prisoner’s dilemma game (Figures 23.2 and 23.3).
The rest of this chapter will ask the question What do humans do in their interactions? Rather than address the prescriptive question, I’m going to concentrate on what we know about human behavior (descriptively). Just as knowing the correct description improves prescription in medicine (see the antibiotic example earlier), we can hope that a better description of human behavior may help us know what prescriptions are possible.G Just as we sidestepped the question of whether taking drugs is right or wrong in Chapter 18, and instead asked Why do people take drugs? and How do we change that (if we want to)?, I am going to sidestep the question of what is moral and instead ask What do people do when faced with moral (social, personal interaction) questions?
Fundamentally, the things that we call “morality” are about our interactions with other people (and other things, such as our environment). Our daily interactions rest on an underlying societal trust (civilization).32 As noted in the previous chapter (where we discussed what makes us human), these groups spontaneously organize, whether it be for mutual protection, for societal stability, or because well-organized groups out-compete poorly organized groups. Simply stated, if two armies face each other and one works together, but the other is made up of individually selfish members, it is pretty clear which group will most likely win the battle.33
So what does that mean for morality? How can science address morality? What does it mean to study morality? We know humans can be gloriously moral (Martin Luther King, the Freedom Riders, soldiers sacrificing their lives for their country, firefighters running into a burning building, donations to charity). We know humans can be gloriously immoral (Nazis, the KKK, genocide in Rwanda, Serbia, and Darfur). The first question we will need to ask is What do humans do when faced with a moral situation? What do humans do, and what are the parameters that drive their action choices? If we understand how humans think about morality, then we will be able to decide whether we want to change it, and if so, how to change it.
Fundamentally, morality is about our interaction with each other. As with the other questions we’ve been addressing in this book, in order to study morality, we need a way to take those interactions and bring them into the laboratory, where we can use controlled experiments to study them. Early attempts at morality studies were based on role-playing games, in which confederates played parts so as to structure a pretend world in a way that would drive certain behaviors. Two very famous early experiments on morality were the Milgram obedience experiment and the Zimbardo prison experiment.34
In the Milgram experiment, a subject was told that he or she was to play the role of “teacher” while another played the role of “learner.” (The subject thought the learner was another subject also participating in the experiment and did not know that the learner was an actor.) The two were separated by a wall, so they could not see each other. The teacher provided questions to the learner (arbitrary word-pairs that the learner had to memorize). The “learner” gave wrong answers to many of the pairs. Whenever the learner got it wrong, the teacher was to push a button that would shock the learner. As the learner made more errors, the voltage was increased to the point that the learner would scream, bang on the wall, beg to stop, and finally, go silent. In reality, there were no shocks. The “learner” was an actor playing a part. But the descriptions from the subjects (playing the role of “teacher”) make it very clear that they believed the shocks were real.
The Milgram experiment is usually taken as demonstrating the human unwillingness to question authority; however, although many subjects continued to deliver shocks to the maximal levels, many of those subjects asked to stop and continued only after being assured that it was OK, even necessary, to go on. Other subjects refused to deliver shocks beyond certain levels. Over the subsequent years, Milgram ran dozens of variations of the experiment trying to identify what would make people more or less likely to continue. The numbers varied dramatically based on the physical and emotional connection between the teacher and the learner and on the authority inherent in the experimenter assuring the subject that the experiments were necessary. Recent re-examinations of the original data suggest that the assurance that the experiments were “necessary” was a critical component driving the subjects’ willingness to continue.35 Most likely, a combination of both is needed—an authority figure who can convince one that the crimes are necessary for an important cause. This is a good description of many of the worst evils perpetrated by humans on each other (genocide to “purify the race,” torture to “extract information,” etc.). The key to the Milgram experiment may have been an authority figure stating that the “experiment requires that you continue” and implying that not continuing would ruin an important scientific endeavor.
In the Zimbardo experiment, two groups of college students were arbitrarily assigned roles of guards and prisoners in a mock jail in the basement of a Stanford University psychology building. Over the course of less than a week, the guards became increasingly violent, abusive, and even sadistic. It was so bad that the experiment was stopped after only six days—more than a week before it was supposed to end. The Zimbardo experiment is usually taken as demonstrating how power structures drive dehumanization and mistreatment.
The problem with both of these experiments is that they address extremes and they are one-off experiments. They are very hard to replicate, and they are very hard to use to get at the underlying mechanisms of human behavior. Nevertheless, both of these effects clearly occur in real life. For example, the recent cases of torture in Abu Ghraib prison are examples of both the Milgram and Zimbardo effects. (What limited investigations have been done suggest that the soldiers were pushed by interrogator authority figures to create negative-impact situations and then went beyond that, reveling in the torture as the moral authority deteriorated.36)
Although extremely important, the Milgram and Zimbardo studies tell us what we fear we already knew—humans are often susceptible to authority (particularly an authority cloaked in a cause), and humans given power will often misuse it. But most people don’t live their lives in these extremes. These studies, and the horrors of Rwanda or Abu Ghraib, are extremes. Certainly, these situations do occur, but the question with situations like the Milgram and Zimbardo experiments (or Rwanda, the Holocaust, Nanking, Abu Ghraib) is never Are they wrong? They are obviously wrong; the question is how to prevent reaching them in the first place. The contributions of the Milgram and Zimbardo studies have been to ask When are humans susceptible to authority figures? and What makes them able to question that authority? (Remember that some subjects in the Milgram experiment did question the experimenter and stop the experiment and that other subjects were clearly uncomfortable and asked if the other person was OK and continued only after assurance from the authority figure that the experiment was necessary. One soldier was uncomfortable enough with the situation at Abu Ghraib to leak the photos to the media.)
The revulsion that we feel at what is clearly evil (abuse of children by trusted authority figures in the Catholic church or the abuse of prisoners [including innocent children] by soldiers at Abu Ghraib or the murder of millions of innocents in genocide) identifies that evil exists in the world and that humans can do evil, but it doesn’t really help us get at the mechanism of that evil. In fact, our institutions have a demonstrated inability to deal with evil of that magnitude (e.g., the Catholic church’s inability to come to terms with pedophile priests, or the U.S. government’s inability to bring the originators of the Abu Ghraib incidents to justice, or the inability of the United Nations to stop genocides in Rwanda, Bosnia, or Darfur). The problem isn’t Can we identify evil? We reliably identify true evil. The problem with true evil is dealing with it.
Instead, the more difficult question is that of morality in our everyday lives.H In our daily lives, we are given opportunities to do good and bad—whether or not to give a dollar to the homeless person on the street, whether or not to cheat on a test or on our taxes. For most people in the world, our normal lives do not resemble either Milgram’s or Zimbardo’s horrors. In her chapter in the book Moral Markets, Lynn Stout points out that most of our lives are spent in “passive altruism”—we don’t randomly beat people up on the street and steal their money.38
Some have suggested that this is due to having laws and that people would devolve into violent “savages”I without them, as depicted, for example, in the famous (and deeply flawed) book Lord of the Flies by William Golding. But actually, small communities generally do not look anything like Lord of the Flies. They are full of neighbors helping each other and sharing food, with extensive examples of both passive and active altruism.41
When we see our neighbors’ car stuck in the snow (or even a stranger’s car, as happened outside my house in the big snowstorm we had last winter), we don’t stand laughing, we don’t shoot the person and take their car, we all bring out our shovels and snowblowers and clear out the car and get the person safely back on their way. Firefighters, police officers, and ordinary citizens rush toward danger to help those in need. When the 35W bridge in Minneapolis fell during rush hour in August 2007, throwing cars into the Mississippi River near the university, our first responders were on the scene within minutes, and college students, faculty, and medical personnel from the local hospital ran to help. Even from a bridge carrying rush-hour traffic, only 13 people died that day, due in large part to the remarkable work by the first responders and the help they got.42 Even ruthless pirate ships operated under (written and often very detailed) community rules. The Invisible Hook by Peter Leeson gives a vivid description of how the “lawless” pirates used carefully constructed internal rules to ensure a viable community. We don’t live in a Lord of the Flies situation. So, then, what makes communities work? If it’s not the laws, what is it?
Throughout this book, we’ve seen the importance of breaking the mechanism of decision-making into its component parts. We’d like to be able to understand the mechanisms of morality in a way that is repeatable and can be studied reliably in the context of a laboratory. It would be really useful to be able to study moral questions in such a way that we can ask how the brain represents these issues, perhaps by studying them while observing brain activity using fMRI. To get at these questions in a more controlled, more scientific (and less disturbing) setting, we will turn back to game theory. However, instead of trying to determine what the optimal choice is under some normative (prescriptive) assumption, we will examine what choices humans actually make. These choices become particularly interesting when we make the games non-zero–sum.
A zero–sum game is one in which the total wins and losses cancel out. That is, if one player wins, then another must lose. In physics, energy within a closed system can be neither created nor destroyed. If two billiard balls hit each other, and one speeds up, then the other must slow down. Life, however, is not zero–sum. Many human jobs take a tremendous amount of training and practice. I would prefer to pay an expert to rebuild my roof so that it will be done right rather than try to learn how to do it myself and probably do a mediocre job of it. This is the advantage of specialization. Each person becomes better at his or her task, able to produce both more and better products, which we can trade between each other, thus increasing the total amount of goods and services we have in the system. Our economy has become more efficient.J
Thus, we need laboratory experiments in which we have non-zero–sum games. Several such games have been developed for studying moral questions.43 The simplest one is the prisoner’s dilemma, which introduces the issue of cooperation and defection into our discussion. In the prisoner’s dilemma, a pair of thieves have been caught robbing a store, but the police can’t find the hidden loot. Without the loot, the police have only enough evidence to get them on breaking and entering. But if the police can get one prisoner to confess, they can convict the other one. The police tell each one that if he confesses and the other doesn’t, they’ll throw the book at the other one and let the confessing prisoner off free. If they both confess, then the prisoners will both get lighter sentences (presumably because the police won’t need to go to trial). Basically, we can write this as a table with four conditions: if player 1 confesses but player 2 doesn’t, then player 1 gets 0 years but player 2 gets 10 years. Similarly, if player 2 confesses and player 1 doesn’t, player 2 gets 0 years but player 1 gets 10 years. If neither confesses, they each get 1 year. If they both confess, they each get 5 years (Figure 23.2).
The key to the structure of this game is that cooperatingK when the other player defects against you plays you for a sucker and you lose big. However, if both players defect, then both players lose. We can write this in a positive form as well as in a negative one.L Simply put, imagine two players; each one can either “cooperate” or “defect.” If both cooperate, both win $50. If one cooperates and the other defects, that defecting player gets $80 and the other player gets nothing. If both defect, neither gets anything (Figure 23.3). As with the negative version, the key is that if player 1 cooperates, then it is better for player 2 to defect than to cooperate, but both players defecting is worse than both players cooperating.
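Here is a minimal sketch in Python (my own illustration, not from the text) of the positive-form payoff table, with two checks of the incentive structure just described.

```python
# The positive form of the prisoner's dilemma (see Figure 23.3).
# Payoffs are (player 1, player 2) in dollars.
PAYOFFS = {
    ('cooperate', 'cooperate'): (50, 50),
    ('cooperate', 'defect'):    (0, 80),
    ('defect',    'cooperate'): (80, 0),
    ('defect',    'defect'):    (0, 0),
}

# If player 1 cooperates, player 2 does better by defecting...
assert PAYOFFS[('cooperate', 'defect')][1] > PAYOFFS[('cooperate', 'cooperate')][1]
# ...but mutual defection leaves both players worse off than mutual cooperation.
assert PAYOFFS[('defect', 'defect')] < PAYOFFS[('cooperate', 'cooperate')]
```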
Of course, the key to a community is that we don’t tend to play these games only once; we play what is more generally called the repeated (or “iterated”) prisoner’s dilemma.46 In this version, the two players keep playing again and again. Imagine two communities trading goods between them. Your community grows wheat and the other grows fruit. In year one, you bring wheat to the other community and they give you fruit for it. In year two, they bring you fruit and you give them wheat for it. Every other year, you put yourself at risk, bringing lots of wheat to their community where they have all their people and they could steal it from you and send you home without fruit. Of course, if they did that, you wouldn’t come back. This leads to the simple concept of “tit-for-tat.”
| | Prisoner 1 “cooperates” (stays silent) | Prisoner 1 defects (confesses) |
| Prisoner 2 “cooperates” (stays silent) | Prisoner 1 gets 1 year; Prisoner 2 gets 1 year | Prisoner 1 gets 0 years; Prisoner 2 gets 10 years |
| Prisoner 2 defects (confesses) | Prisoner 1 gets 10 years; Prisoner 2 gets 0 years | Prisoner 1 gets 5 years; Prisoner 2 gets 5 years |
Figure 23.2 The prisoner’s dilemma: negative version.
In the early 1980s, Robert Axelrod, a scientist interested in game theory and cooperation, ran a series of computer tournaments in which players submitted strategies for cooperation and defection.47 Each player would play a sequence of 200 prisoner’s dilemma games with each other player. (That is, each player would play an iterated prisoner’s dilemma with 200 repetitions against each other player.) What this meant was that a player could punish or trick another player. Although lots of very complicated strategies were submitted, the consistent winner was a very simple strategy called “tit-for-tat” submitted by Anatol Rapoport, a mathematician who studied game theory. The strategy was to cooperate on the first play, and from then on to do whatever the other player did. If the other player started defecting, tit-for-tat would defect, but as long as the other player cooperated, tit-for-tat would cooperate.
One of the things that Axelrod discovered in his series of tournaments is that a community of tit-for-tat players was remarkably stable. If one simulated evolution among the players (that is, the players with the largest wins replicated a larger proportion of that strategy into the next generation, while players with the smallest wins died off), then tit-for-tat generally filled out the population and became evolutionarily stable—other strategies could not break into the community. Remember that two tit-for-tat players playing against each other would both cooperate. Anyone defecting against tit-for-tat would win big once, but would then be punished by it and would start losing. Although the tit-for-tat player would lose against that one defector, it would still do well cooperating with the other tit-for-tat players, while the defector would do poorly against everyone.
The problem with tit-for-tat, particularly when playing against another tit-for-tat player, is that if there are any errors, then defections begin to propagate through the future. Imagine that one time, on your way to take your wheat to the other community, you are set upon by bandits and all your wheat is stolen. The other community thinks you defected and, ignoring all your pleas and explanations, refuses to trade with you again. Axelrod found that when errors are included in the game, the optimal strategy was “tit-for-tat with forgiveness”: even in the middle of a series of defections, the player would occasionally try cooperating. That might be just enough to get both players cooperating again.
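Here is a minimal simulation sketch in Python (my own illustration, not Axelrod’s tournament code) of two tit-for-tat players facing occasional errors, using the dollar payoffs of Figure 23.3; the 5% error rate and 10% forgiveness probability are assumptions chosen only for illustration.

```python
# Two tit-for-tat players in a noisy iterated prisoner's dilemma. With
# forgive_prob = 0, this is strict tit-for-tat; with forgive_prob > 0, a
# player occasionally cooperates even after the other defected.
import random

PAYOFF = {('C', 'C'): (50, 50), ('C', 'D'): (0, 80),
          ('D', 'C'): (80, 0), ('D', 'D'): (0, 0)}

def play(forgive_prob, error_prob=0.05, rounds=200, seed=0):
    """Combined earnings of two tit-for-tat players over one noisy match."""
    rng = random.Random(seed)
    last1, last2 = 'C', 'C'            # tit-for-tat starts by cooperating
    total = 0
    for _ in range(rounds):
        # Each player repeats the other's last move, unless it forgives a defection.
        move1 = 'C' if (last2 == 'C' or rng.random() < forgive_prob) else 'D'
        move2 = 'C' if (last1 == 'C' or rng.random() < forgive_prob) else 'D'
        # Noise: an intended cooperation occasionally comes across as a defection.
        if move1 == 'C' and rng.random() < error_prob:
            move1 = 'D'
        if move2 == 'C' and rng.random() < error_prob:
            move2 = 'D'
        p1, p2 = PAYOFF[(move1, move2)]
        total += p1 + p2
        last1, last2 = move1, move2
    return total

def average(forgive_prob, trials=200):
    return sum(play(forgive_prob, seed=s) for s in range(trials)) / trials

# Averaged over many matches, the forgiving pair recovers from errors and earns
# a higher combined total than strict tit-for-tat, which gets stuck trading defections.
print("strict tit-for-tat:          ", average(0.0))
print("tit-for-tat with forgiveness:", average(0.1))
```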
The key to all of these issues is that two players cooperating produces more advantage than alternating defections (see Figure 23.3). Strategies in a game world in which this is true evolve to produce communities that cooperate.
Fundamentally, we are social creatures. We live in just such communities. Several studies have shown that if you have two groups (think of this as two tribes or two villages), one of which contains a lot of altruistic cooperators (think of this one as the high-tax, high-collaboration village, where people volunteer to help each other out) and one of which contains a lot of selfish defectors (think of this as the “I got mine” village, where people do their best to get their own from the community), then the altruistic village will succeed as a group better than the selfish village.48 The people in the altruistic village will spend more time in the cooperate–cooperate entry of the interaction table, while people in the selfish village will spend more time in the defect–defect entry of the interaction table.
| | Player 1 cooperates | Player 1 defects |
| Player 2 cooperates | Player 1 gets $50; Player 2 gets $50 | Player 1 gets $80; Player 2 gets $0 |
| Player 2 defects | Player 1 gets $0; Player 2 gets $80 | Player 1 gets $0; Player 2 gets $0 |
Figure 23.3 The prisoner’s dilemma: positive version.
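As a concrete illustration of the two-village comparison above, here is a minimal sketch in Python (my own illustration, not one of the cited studies) using the payoffs of Figure 23.3; the village sizes and the mix of cooperators and defectors are assumptions chosen only for illustration.

```python
# Two villages of 20 people each, differing only in how many are cooperators.
# Villagers are repeatedly paired at random within their own village;
# cooperators always cooperate and defectors always defect.
import random

PAYOFF = {('C', 'C'): (50, 50), ('C', 'D'): (0, 80),
          ('D', 'C'): (80, 0), ('D', 'D'): (0, 0)}

def village_total(num_cooperators, num_defectors, rounds=1000, seed=1):
    """Total wealth generated inside one village after many random pairings."""
    rng = random.Random(seed)
    villagers = ['C'] * num_cooperators + ['D'] * num_defectors
    total = 0
    for _ in range(rounds):
        a, b = rng.sample(villagers, 2)      # pick two distinct villagers
        pa, pb = PAYOFF[(a, b)]
        total += pa + pb
    return total

# The mostly altruistic village spends most of its time in the
# cooperate-cooperate cell and generates far more total wealth.
print("altruistic village (18 C, 2 D):", village_total(18, 2))
print("selfish village    (2 C, 18 D):", village_total(2, 18))
```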
This is a form of group selection, in that even though the determination of selective success is completely based on the individual, certain groups do better than other groups. Groups that are successful may grow large enough to split into two groups, while groups that are not successful may destroy themselves from the inside. The evolutionary implications of this group-selection process are laid out most clearly in the book Unto Others, by Elliott Sober and David Sloan Wilson.
This theory, however, does predict that one is likely to be more cooperative with members of one’s own village or group than with members of other villages or groups. This difference is called in-group altruism and out-group xenophobia. In his book Darwin’s Cathedral, David Sloan Wilson argues that one of the keys to humanity is the flexibility and complexity with which we define our groups. He argues that many of our societal constructs (religion, nationalism, culture, even sports franchises) are ways of defining who is in the in-group and who is not. Wilson explicitly examines the development of Calvinism in the 1530s, where exile and ostracism were the main tools for ensuring in-group cooperative behavior.
In his book Historical Dynamics, Peter Turchin uses the term asabiya, which means “cohesion” or “group unity.” Turchin argues that the success of dynasties, tribes, nation-states, and groups depends on the willingness of individuals to sacrifice for the group, and that such groups follow a cycle: they arise with a lot of asabiya (in-group cooperation), take over (because they are stronger together than other groups that cannot work together), but then lose that asabiya over a couple of generations (defectors spread within the group), which makes them vulnerable to the next group with asabiya.
This effect that Turchin is describing occurs because although everyone in the altruistic village will do better on average, a selfish person in the altruistic village is going to do even better than the altruists in that village.49 (That person will spend more time in the I-defect-you-cooperate entry in the interaction table. Think of the person who rides the government-built subway system to protest paying taxes.50) This sets up a conflict between levels: at the level of the villages, altruistic villages do better than other villages, but at the individual level, selfish defectors within that village do better than the altruists in that (mostly altruistic) village. There are two ways to deal with this problem (called the free-rider problem51): exile and altruistic punishment.
Exile entails identifying those defectors and explicitly pushing them out of the community. One of the major purposes of gossip seems to be the identification of such defectors.52 Exile forces the person out of the community, thus removing the person from the cooperate–cooperate entry in the table. Many groups use ostracism or exile as a devastating punishment, one that is clearly strong enough to keep communities in line.53
A great example of in-group altruism as critical for group-by-group success can be seen in the development of shipboard rules on pirate ships.54 It would be hard to imagine more selfish individuals than pirates, but being trapped on a ship together in the middle of the open ocean, facing battle together, certainly defines a very specific in-group. The key factor described by Leeson in The Invisible Hook is that of punishment for stealing from the crew or for not pulling one’s weight. Not cooperating with the rest of the crew led to the rest of the crew punishing the selfish pirate, who would then change his ways or else.
Altruistic punishment entails the community identifying and explicitly punishing defectors. In a sense, a group of cooperators comes together and each member sacrifices a small amount to ensure that the advantage a defector gains is minimal.
The simplest game that gets at this question of altruistic punishment is the ultimatum game, a laboratory experiment that directly tests a person’s willingness to sacrifice for fairness.55 In the ultimatum game, one player (the donor) gets $20 to split with another player (the recipient). The recipient then gets to decide whether to “take it or leave it.” If the recipient accepts the distribution, both players get the money, as divided by the donor. If the recipient rejects the distribution, neither player gets anything. In a perfectly “rational” world, the recipient should take whatever he or she gets. The donor should thus give the recipient the minimum amount (say $1 or 1¢) and the recipient should take it. But if you look at your own reaction to this, you will probably have a visceral, emotional reaction to being offered only $1 or 1¢ and will want to throw it back in the donor’s face. In practice, recipients reject anything below about a third of the money, and donors (knowing this) divide the money in about a 60/40 split.M
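Here is a minimal sketch in Python (my own illustration, not a model from the experimental literature) of the ultimatum game’s logic, using a stylized rejection threshold of about one third of the pot as described in the text; the specific threshold and offers are assumptions chosen only for illustration.

```python
# A stylized ultimatum game: the donor proposes a split of a $20 pot, and the
# recipient rejects any offer below roughly one third of it.
POT = 20.0

def recipient_accepts(offer, threshold=POT / 3):
    """A stylized recipient: reject anything below about a third of the pot."""
    return offer >= threshold

def outcome(offer):
    """Return (donor payoff, recipient payoff) for a proposed offer."""
    if recipient_accepts(offer):
        return POT - offer, offer
    return 0.0, 0.0                     # rejection: neither player gets anything

# A purely "rational" recipient would accept even $1, but with the stylized
# threshold, lowball offers are rejected and leave the donor with nothing;
# an offer near the observed 60/40 split gets accepted.
for offer in (1.0, 5.0, 8.0, 10.0):
    donor, recipient = outcome(offer)
    print(f"offer ${offer:4.1f} -> donor gets ${donor:5.2f}, recipient gets ${recipient:5.2f}")
```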
Interestingly, other animals, even highly intelligent animals like chimpanzees, do not seem to show this propensity for altruistic punishment; they take whatever is given to them.58 There are some differences in how the chimpanzees are run through the experiments (experimenters have to run multiple trials because one can’t tell a chimpanzee how to play the game). And chimpanzees are sensitive to inequality and unfairness, particularly among compatriots.59 Nevertheless, there seem to be clear differences between humans and our closest genetic relative. Although chimpanzees are our closest genetic relatives, they have very different social interactions and do not live in the same village-based, close-knit communities that humans do.60 Other animals that do live in such communities (such as wolves) do seem to show altruistic punishment.61 This suggests that the difference may be less an issue of humanity than an issue of community.
The point here is that part of our brain demands fairness. Cooperation, trust, and the lack of them produce changes in the activation of emotional reward circuitry.62 fMRI results have identified that altruistic punishment (rejecting an unfair offer) is associated with increases in blood flow in the amygdala and the insula, two structures involved in emotional (Pavlovian) reactions to negative results. In fact, rejection of unfair offers is correlated with the emotions of anger and disgust, and punishment of unfair players is associated with the emotion of satisfaction. Punishment of unfair players produces increases in blood flow in positive emotional structures such as the ventral medial prefrontal cortex and the ventral striatum. The satisfaction of punishing unfair players occurs even when the punisher is a third party—that is, player A defects against player B, who cooperated (and thus got played for a sucker), and player C spends some of his or her own money to punish player A. Players observing an unfair player receiving painful shocks expressed satisfaction (a desire for revenge achieved) and showed activity in reward-related areas such as the ventral striatum. When we evaluate goals, some of those goals are complex and social.
Variations of the ultimatum game have been examined as well. If the recipient thinks the other person is actually a computer program offering choices randomly, that recipient is more likely to take unfair offers63 (implying that the rejection of an unfair offer is about punishing other players). Donors who think they “earned” the money (through some effort, say by performing a task prior to playing the game) offer less than if the money is just given to them. The harder the donor worked for the money (the harder the task was), the less they offer.64;N
As noted above, altruistic punishment may be the reason for gossip, shame, and reputation. One of the human interactions that I have always found particularly interesting is the success of Amnesty International. One of the main things Amnesty International does is write letters to dictators, governments, and human-rights abusers, asking them to free dissidents and to stop those human-rights abuses. They don’t threaten. They don’t have an army. They don’t even invoke religious “burn-in-Hell” threats. They merely write letters—and the remarkable thing is that this often works.
A similar issue relates to that of forgiveness. As we saw in the tit-for-tat experiments, the most resistant strategy to errors and noise is tit-for-tat with forgiveness. Although some crimes may be unforgivable, forgiving (possibly after completing an appropriate punishment) is a tool to rehabilitate a defector. If the person cooperates from then on, he or she can again become part of the community. In one of the most remarkable stories of our time, after the end of apartheid in South Africa, Archbishop Desmond Tutu led the Truth and Reconciliation Commission, where the criminals were asked to provide a full description, disclosure, and apology.
The science of morality is still in its infancy. At this point, it is concentrating on the descriptive question of What do humans actually do? The evidence seems to be that we have evolved a host of “moral” judgments that allow us to live in (mostly) peaceful communities. These interactions are not always simple and depend on a complex balance of mechanisms that encourage cooperation and reduce the advantages of defection.
What is particularly interesting is that humanity agrees in large part on morality, particularly that of social interactions. The Golden Rule is recognized the world over, as part of most religions and as part of most nonreligious moral tracts. Specific prohibitions (gay marriage, abortion, eating pork) vary from religion to religion and from culture to culture, but the primary social and moral interactions (be fair in your dealings with others, treat others as you would like to be treated, help when you can) do not.
When a nonbeliever pressed Rabbi Hillel (c. 110 BCE–10 CE) to describe his entire religion while standing on one foot, promising to convert if Hillel could do so, Rabbi Hillel replied, “That which is hateful to you, do not do to others. All the rest is just commentary. Go and learn.”67 The Golden Rule is a first step toward cooperation, but it alone is not enough. The commentary is long and complicated, and still being discovered.
• Elliott Sober and David Sloan Wilson (1998). Unto Others: The Evolution and Psychology of Unselfish Behavior. Cambridge, MA: Harvard University Press.
• David Sloan Wilson (2002). Darwin’s Cathedral: Evolution, Religion, and the Nature of Society. Chicago: University of Chicago Press.
• Robert Axelrod (1984/2006). The Evolution of Cooperation. New York: Basic Books.
• Patricia S. Churchland (2011). BrainTrust. Princeton, NJ: Princeton University Press.
• Sam Harris (2010). The Moral Landscape. New York: Free Press.
• Paul J. Zak [Editor] (2008). Moral Markets: The Critical Role of Values in the Economy. Princeton, NJ: Princeton University Press.