4

GAME THEORY, EMOTIONS, AND THE GOLDEN RULE OF ETHICS

THE PRISONER’S DILEMMA IS PERHAPS THE MOST OVERUSED PARADOX in all of the social science literature, but it continues to fascinate anyone, whether a professional researcher or not, who finds himself caught within its slippery traps. Can introducing rational emotions into the equation help us to find our way out of the paradox?

Let’s briefly review the elements of the Prisoner’s Dilemma. Two suspected bank burglars are caught and arrested. The police, however, lack sufficient evidence. Without a confession from at least one of the suspects, the police will have no choice but to release them.

Each prisoner is held in isolation in a separate cell. The police interrogator calls each one in turn into the interrogation room and offers him the following deal: if one of the two of you confesses while the other refuses to confess, the confessing one will be released. The refusing one will be convicted and serve a five-year sentence in prison. If both of you confess, you will both be convicted, but in return for your confessions we will go easy on you, and you will be sentenced to only four years. The prisoners also know that if neither of them confesses, the police will be unable to charge them with burglary but only with reckless driving during the chase, which will lead to a one-month prison sentence.

Each of the two prisoners must decide how to respond to this offer—but without having any opportunity to coordinate in any way with the other prisoner locked away in the separate cell. Will the prisoners confess or not?

Place yourself in the shoes of one of the prisoners, and you will quickly realize that it is always in your best interest to confess, regardless of what you expect your partner in crime will do. If she confesses, then your confession along with hers reduces your prison term by one year (from five years to four years). If she refuses to confess, your confession buys you an immediate release to freedom.

This result, however, is paradoxical. The two prisoners both find themselves using rational and selfish considerations to conclude that they should confess to the crime, leading to four-year prison terms for each of them. But if they were both to refuse to confess instead, they would both be much better off, serving only minor one-month sentences.
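The dominance argument can be checked mechanically. Here is a minimal sketch in Python (the table of sentences is just the payoffs described above, measured in years of prison to be minimized):

```python
# Years in prison for prisoner A, indexed by (A's choice, B's choice).
# The game is symmetric, so the same table works for either prisoner.
sentences = {
    ("confess", "confess"): 4,
    ("confess", "refuse"): 0,       # the confessor goes free
    ("refuse", "confess"): 5,
    ("refuse", "refuse"): 1 / 12,   # one month for reckless driving
}

actions = ("confess", "refuse")

# Whatever the partner does, find the choice that minimizes A's sentence.
for partner in actions:
    best = min(actions, key=lambda a: sentences[(a, partner)])
    print(f"If the partner chooses {partner!r}, the best response is {best!r}")
```

Confessing comes out as the best response in both cases, a dominant strategy, even though mutual refusal (one month each) is far better for both prisoners than mutual confession (four years each).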

The Prisoner’s Dilemma is not an idle intellectual amusement—it is a core concept in game theory. Game theory is, in essence, the study of interactive decisions. A “game,” in professional jargon, is any situation in which the actions of one person affect the situation of another person. Economic competition, violent conflict between nations, and even interactions within families can all be modeled using game theory.

The Prisoner’s Dilemma is often called the social dilemma game by social science researchers, because it succinctly describes a wide range of social and economic situations, including environmental pollution, tax evasion, military draft dodging, and even cutting in line at the bank. In all of these cases there is an action that is preferable from the perspective of each isolated individual. However, if all (or even a majority) of the participants do undertake that action, then everyone ends up suffering. In the real world, how should we resolve such dilemmas? What makes people prefer to cooperate in such situations even when there is no way to impose cooperation?

One answer to this question was given by Robert Aumann in a series of research papers for which he was awarded the Nobel Prize in economics in 2005 (the prize was co-awarded to Aumann and Thomas Schelling that year).1 Social variations on the Prisoner’s Dilemma are frequently “repeated games,” in which the same interactive situation is repeated a large number of times involving the same group of players. The repetition makes choosing a selfish action potentially very costly: people remember your behavior from past occasions. Thus, a player who acts selfishly (such as by confessing in the classic Prisoner’s Dilemma) is liable, when the same situation arises again, to be punished by the other players, who will then also choose actions that are favorable to their purely selfish interests (such as confessing themselves).

Aumann constructed a mathematical model of repeated games and showed that in repeated situations it is possible to achieve cooperation through rational considerations. Aumann’s theory deserves a full discussion on its own, and it will be presented in greater detail in the next chapter. There is, however, another possible answer based on my own research.2 Understanding it requires introducing one of the central concepts of game theory, “Nash equilibrium,” named after John Nash, who won the Nobel Prize in economics in 1994 and became world famous as the central figure in the film A Beautiful Mind. Nash first had the idea for his equilibrium concept in the early 1950s; it eventually became an extremely important concept that is used throughout the social sciences.

To explain the idea behind Nash equilibrium, we will concentrate on games involving two players. Each player has a list of actions (or strategies) that he or she may make use of. A pair of such actions, one chosen by each player from the list of available actions, determines a payoff for each player. An equilibrium is achieved if the action chosen by each player is the “best reply” to the action chosen by the other player. In other words, neither player can improve his or her payoff by unilaterally switching to a different action.

To take a more specific example, consider a game called the battle of the sexes: you and your spouse need to decide where to go out tonight. There are two possibilities, a ballet production or a boxing match. Unfortunately, you and your spouse have divergent preferences: you insist on an evening at the ballet, while your spouse refuses to give up the opportunity to enjoy a good boxing match.

After a lot of fruitless deliberation you decide that the choice will be determined in the following manner. Each of you will write down either “ballet” or “boxing” on a slip of paper, without knowing what the other wrote down or discussing the matter between you. The slips of paper will be handed to your neighbor, Mrs. Brown, promptly at 7 p.m. Mrs. Brown will then read aloud what was written on the slips. If both of you wrote down the same activity for the evening, then that is what both of you will do together. On the other hand, if you wrote down different activities, you will both stay home, missing out on a night out. Suppose now that you each rate your preferred activity as worth $200, while you consider your less-preferred activity as worth $100. Staying at home is the worst option from your perspective, worth $0. What is an equilibrium in this game?

There are exactly two equilibria here: either both of you write down “ballet” or both of you write down “boxing.” If you both insist on writing down your own preferred activity, you will end up staying at home, and either one of you could then do better by switching to match the other. It follows that the only way to improve the situation is if one of you gives in and agrees to go to his or her less-preferred activity. But this is where the “trap” in this example lies: if both you and your spouse decide to be magnanimous and give in to the other’s preferences, you will still end up staying at home (recall that you are not permitted to discuss with each other what you are going to write down).
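The same check can be automated. The sketch below is a minimal illustration (with you cast as the ballet fan and the dollar values assumed above): it tests every pair of choices and keeps those from which neither player can gain by unilaterally deviating.

```python
# Payoffs (you, spouse) in dollars; you prefer ballet, your spouse boxing.
payoffs = {
    ("ballet", "ballet"): (200, 100),
    ("boxing", "boxing"): (100, 200),
    ("ballet", "boxing"): (0, 0),   # mismatch: both stay home
    ("boxing", "ballet"): (0, 0),
}
actions = ("ballet", "boxing")

def is_equilibrium(you, spouse):
    """Neither player can improve by unilaterally deviating."""
    my_pay, their_pay = payoffs[(you, spouse)]
    no_better_for_you = all(payoffs[(a, spouse)][0] <= my_pay for a in actions)
    no_better_for_them = all(payoffs[(you, a)][1] <= their_pay for a in actions)
    return no_better_for_you and no_better_for_them

equilibria = [(y, s) for y in actions for s in actions if is_equilibrium(y, s)]
print(equilibria)  # only the two matching pairs survive
```

The mixed pairs fail the test because either player, facing a mismatch worth $0, would gain by switching to match the other's choice.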

Can our couple improve on their chances of arriving at an identical choice of activity that will guarantee an evening out? Of course! For example, the boxing fan might place a boxing glove on the dining room table as a sharp hint that she has no intention at all of backing down from her preferred activity, whatever the consequences may be. That might well have the effect of persuading the ballet aficionado that unless he wants to spend the evening at home, he has no choice but to go along with his wife’s preferences, increasing the probability that he will write down “boxing” as his choice.

Alternatively, the husband might wish to preempt such a move by his wife by loudly playing the music from Tchaikovsky’s Swan Lake in the living room to signal that he is sticking to ballet as his choice, come what may, thus increasing the chances that his wife will buckle and match his choice by writing down “ballet” as her choice.

Lacking the option of speaking to each other directly, the couple may indeed resort to such signals as a way of improving the chances of arriving at an equilibrium in the battle of the sexes game. But how is this related to emotions?

Emotions are, in effect, a signaling mechanism that enables us to coordinate our actions and arrive at an equilibrium in a wide range of games in which we participate on a daily basis. Emotions also enable us to create new equilibria that could not exist in a world of pure thinking and reason. In many cases they improve our social situations through this mechanism.

To understand this important point, let’s go back to the Prisoner’s Dilemma and show how emotions can create a cooperative equilibrium even when the game is played only once. For this purpose we will describe the Prisoner’s Dilemma in a slightly different way:

Imagine that you and a complete stranger participate in an experiment. You are each initially given $100. You are then asked to choose one of two possible actions, “take” or “be generous” (the two of you have no opportunity to discuss your choices prior to making them). If one of you chooses take while the other chooses be generous, then the one who chooses be generous is required to transfer the entire sum of $100 to the taker. If both of you choose take, then each of you will be required to return $50 to the experimenter. Finally, if both of you choose be generous, you will each receive an extra $50 from the experimenter, and both of you will go home $150 richer.

Note the resemblance of this game to the game Split or Steal mentioned in the previous chapter. Here, as in that game, if all you care about is attaining the highest possible monetary reward, then you should always choose take. That choice will always give you more money, regardless of what the other participant chooses.

Now let’s add emotions to this game. Suppose that in addition to the monetary payoff that you receive in the game, you also place a value on being a decent and fair individual on the one hand and on not being “a pushover” on the other. If you choose take while the other player chooses be generous, then you will feel ashamed at your greediness; that sense of shame has some negative value, which we choose to describe here as equivalent to losing $100. On the other hand if you choose be generous while the other player chooses take, then you will feel a sense of insult and anger, which we can also say is equivalent to being penalized $100. If both of you choose take or both of you choose be generous, then you will experience a neutral emotional reaction.

Now, assuming that the other player has identical emotional reactions in these situations, with identical monetary values attached to them, the analysis of the game changes significantly. If the other player chooses be generous, then choosing take nets you only $100: the $200 in cash minus what you might call the $100 shame penalty. That is less than the $150 you receive for being generous in return, so be generous is now the best reply to be generous. The two players’ simultaneous choice of be generous thus becomes a new equilibrium, one in which both parties cooperate rather than act selfishly.
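The shift in equilibrium can be verified with the same best-reply logic. The sketch below is illustrative only; it attaches the $100 dollar equivalents assumed above for shame and insult to the monetary payoffs of the take/be generous game:

```python
actions = ("take", "be generous")

# Monetary payoff to you, indexed by (your choice, other's choice).
money = {
    ("take", "take"): 50,
    ("take", "be generous"): 200,
    ("be generous", "take"): 0,
    ("be generous", "be generous"): 150,
}
# Emotional cost: $100 of shame for taking from a generous partner,
# $100 of insult for being generous toward a taker.
emotion = {
    ("take", "be generous"): -100,
    ("be generous", "take"): -100,
}

def payoff(mine, other):
    return money[(mine, other)] + emotion.get((mine, other), 0)

def best_reply(other):
    return max(actions, key=lambda a: payoff(a, other))

print(best_reply("be generous"))  # generosity now beats taking (150 > 100)
print(best_reply("take"))         # but take is still the best reply to take
```

Generosity is now the best reply to generosity, so mutual generosity is an equilibrium. Note that mutual taking remains an equilibrium as well, a fact that matters when one of the players feels no emotions at all.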

Put very simply, this means that the presence of emotions in the equation, even negative ones like anger and shame, can lead to better outcomes for both players. This explanation, however, is still incomplete. I want to show that the emotions described in the above example were not arbitrarily chosen and that in fact they serve the narrowest material interests of those who feel those emotions.

Suppose that having emotional responses also gives a player a fairly good ability to anticipate the emotional reactions of others. Next, imagine what would happen if one of the players in the Prisoner’s Dilemma game is an emotionless individual who employs only the coldest of considerations aimed at maximizing the amount of money he gets, while the other player has the reasonable emotional responses (along with the emotional anticipation capabilities) described above. Let’s suggestively call the cold and calculating player Mr. Brain while the other player is Mr. Emotions.

Mr. Brain will certainly choose take, since he has no shame. But Mr. Emotions is likely to recognize that he is facing off against Mr. Brain and therefore to anticipate that Mr. Brain will choose take. In such a situation, if Mr. Emotions chooses be generous, he will lose twice: once when he loses the $100 given to him at the start of the game, and once when he feels insulted, which is equivalent to a further penalty of $100, leading to a total loss of $200. If, on the other hand, he chooses take, he loses only $50. Mr. Emotions thus concludes that he should also choose take, and both he and Mr. Brain walk away with $50. This contrasts with the situation that holds if both players are emotional players, which as we showed above enables an equilibrium that grants them $150. The conclusion is that emotional behavior is advantageous: in this simple example there is a positive monetary advantage to emotional responses.
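Mr. Emotions’ reasoning can be spelled out numerically. A minimal sketch (illustrative, measuring each outcome as the net change from the $100 each player starts with):

```python
# Net change from the $100 stake, indexed by (my choice, other's choice).
def net_brain(mine, other):
    """Mr. Brain feels nothing; only the money counts."""
    money = {("take", "take"): -50, ("take", "generous"): 100,
             ("generous", "take"): -100, ("generous", "generous"): 50}
    return money[(mine, other)]

def net_emotions(mine, other):
    """Mr. Emotions also pays $100 of insult or shame on mismatched choices."""
    insult = -100 if (mine, other) == ("generous", "take") else 0
    shame = -100 if (mine, other) == ("take", "generous") else 0
    return net_brain(mine, other) + insult + shame

# Mr. Brain, shameless, simply takes; anticipating this,
# Mr. Emotions compares his two possible replies to "take":
print(net_emotions("generous", "take"))  # -200: lose the stake plus the insult
print(net_emotions("take", "take"))      # -50: the lesser loss, so he takes too
```

Facing a partner certain to take, generosity costs Mr. Emotions $200 while taking costs him only $50, so he takes as well, and both walk away with $50.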

This example is taken from a mathematical model I have developed that generalizes the concept of Nash equilibrium. The model shows that the main motivation for cooperation in many games similar to the Prisoner’s Dilemma is an emotional need for reciprocity, such as a feeling of shame at an expression of greediness when others are being generous or a sense of anger and insult in the face of greediness on the part of others. This pair of emotions comes together to form the golden rule of ethics, which is also called the ethic of reciprocity.

The golden rule is much touted in our religious texts and taught to every schoolchild as a means to protect the emotions of others—something you have to do even though it runs against your personal desires. But as these examples show, it is just as importantly a means of looking out for our own narrow interests.