ON DECENCY, INSULT, AND REVENGE
Why Don’t Suckers Suffer from Disgust?
The 1994 Nobel Prize in Economics was awarded to Reinhard Selten, along with John Nash and John Harsanyi, for his contributions to game theory. Selten developed a dynamic equilibrium concept in which players think forward, much as chess and checkers players try to think several moves ahead.
Selten’s student Werner Güth conducted a simple experiment in 1982 called the ultimatum game.1 In this game two players divide a sum of money, say $100, between them according to the following rule: the first player offers the second player a portion of the $100 (he can offer anything from $0 to the entire $100). If the second player accepts the offer, the $100 is divided between the players according to the terms of the offer. If the offer is rejected, the experimenter takes away the $100 and both players walk away with nothing. The first player’s offer is in effect a “take it or leave it” ultimatum, which explains the game’s name.
Two selfish and rational players playing this game will agree to a division granting the proposer $99 and the responder only $1. Since the game is played only once, the responding player should accept whatever nonzero sum of money is offered to him since even one dollar is preferable to getting nothing at all. The proposer, knowing this, should put forward the lowest possible offer, one dollar.
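The “selfish and rational” prediction is just backward induction, and it can be sketched in a few lines of Python. This is a toy illustration with function names of my own invention, assuming whole-dollar offers:

```python
# A minimal sketch of the rational-players prediction for a one-shot
# ultimatum game over whole dollars. Function names are hypothetical.

def responder_accepts(offer):
    # A purely selfish, rational responder accepts any positive amount,
    # since even $1 beats the $0 she gets from rejecting.
    return offer > 0

def best_proposal(total=100):
    # The proposer keeps the most he can while still being accepted:
    # scan his possible payoffs from highest to lowest.
    for keep in range(total, -1, -1):
        if responder_accepts(total - keep):
            return keep, total - keep
    return 0, total

print(best_proposal())  # -> (99, 1): the proposer keeps $99 and offers $1
```

As the chapter goes on to show, real people almost never play this way; the sketch only formalizes what the equilibrium logic demands.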
That is what Selten’s model of equilibrium predicts would happen. But Selten, with whom I had the privilege of working for two years, is not only a great scientist; he is also a man of outstanding intellectual integrity. He felt dissatisfied with the equilibrium concept that had earned him an international reputation and eventually won him the Nobel Prize, and he predicted that actual plays of the ultimatum game would usually result in a division of the money completely different from his equilibrium.
Güth’s experiment, conducted in Germany with a large number of participants, revealed that in most cases the money was divided 50–50 between the two players. In addition, most offers by the first player that amounted to 35 percent or less of the money were rejected by the responding player. In other words, the responding player was usually willing to give up the opportunity to receive $35 as long as this prevented the offering player from receiving the $65 he had greedily wanted to keep for himself.
Hundreds of articles have been written about the ultimatum game since Güth’s famous results were published. Researchers in economics, business administration, political science, psychology, anthropology, and philosophy have written on the subject. Many research studies have compared how players in different cultures behave in the ultimatum game, including African tribes and isolated tribes in the Amazon River basin. A group of researchers at the Max Planck Institute in Germany even published an article in 2007 on the subject of how chimpanzees play the ultimatum game.2 (In case this sounds improbable, here’s how it worked: The chimpanzees were sitting in separate cages facing a device with two pairs of trays: one with five bananas to each chimpanzee and the other with nine bananas to chimpanzee A and only one to chimpanzee B. The device allowed chimpanzee A to pull whichever pair of trays it chose, but it could only pull these trays halfway. For the bananas to be claimed, chimpanzee B had to agree to this choice and do its own pulling.)
The ultimatum game has attracted attention in fields far removed from pure game theory because it deals with a question that is elementary and very important for all the social sciences: How applicable is the assumption that individuals are selfish and rational, keeping in mind that this assumption underpins most theoretical models in economics and many of the social sciences?
Variations of the ultimatum game have been studied to gain insights into the differences between the reasoning used by the proposer and that of the responding player. An offer of a 50–50 split of the money on the part of the proposer might be motivated either by a desire for equality and decency or by fear that the responder will spurn an offer that is too low. To ascertain the true motivation of the proposer, researchers have suggested studying behavior in the dictator game, as mentioned in the previous chapter, instead of the ultimatum game. In the dictator game the second player must accept the first player’s offer; she has no recourse to take revenge against an insultingly low offer by denying both herself and the first player any monetary reward.
If players who offer a 50–50 division in the ultimatum game also make the same offer in the dictator game, then we can deduce that their primary motivation is a desire for equality, because in the dictator game the second player has no recourse to punish the first player. If, on the other hand, they switch to giving low offers in the dictator game, then that would be a strong indication that their primary motivation for offering a 50–50 split in the ultimatum game is a fear of losing all the money due to the possible revenge of the second player in response to a lower offer, rather than any desire for equality. The results of experiments in which players play both the ultimatum game and the dictator game show that player behavior in the ultimatum game is quite rational: players learn how to predict the reactions of the other players and seek the lowest offers they can get away with, without triggering rejection on the part of the other players, in order to maximize their profits.
Many important insights have been revealed by comparing the behaviors of people from different cultures when they play the ultimatum game. One published research paper on the subject compared ultimatum game players in the United States, Japan, Slovenia, and Israel.3 The research study found significant differences between different cultures, whether players were in the role of proposers or responders. Players in Israel tended to propose the lowest offers for dividing the money. Japan was not far behind Israel, in second place in terms of the selfishness of the offers made by proposing players. Players in Slovenia and the United States were much more generous in their offers.
The most astonishing result of this intercultural comparative research study, however, was the close correlation between the offers made and the responses to them. In both Israel and Japan responders tended to accept relatively low offers. But when similar offers—and even more generous offers—were made by proposers in the United States, they were often summarily rejected by responders.
The conclusion we take from this experiment is that norms of what constitutes fairness are relative and culturally determined. An offer considered fair in Japan or Israel may be construed as outrageously low in the United States. Conversely, a normal offer in the United States may be seen as generous (or even a “sucker’s offer”) in Israel. An offer deemed unfair by both cultures will almost always be rejected. Even Israelis, the least prideful group in accepting money in this game, tend to reject offers of 20 percent or less, but their threshold of acceptance is lower than that of Americans.
Proposing players “magically” know what constitutes fairness in their own cultures and try to make the lowest offers that will likely be accepted by responders. Their behavior is very consistent with assumptions of selfishness and rationality. This ability to read signals of fairness, as we saw in Chapter 5, is one of the important virtues of rational emotions. It eliminates lots of unnecessary disagreement and wasted time.
Several years ago my colleague Shmuel Zamir and I published a paper describing the results of an experiment we conducted on the ultimatum game in changing environments.4 In a stable and homogeneous society, norms of fairness will also be stable and unchanging. But in dynamic societies in which immigrants and people of differing cultural backgrounds mix, standards of fairness are created in a process of learning and constant adaptation; in such situations norms can change much more rapidly than we tend to imagine. To understand these dynamics, we brought together many players in our laboratory. Each player played the ultimatum game repeatedly, each time facing off against a different player. After playing about ten games against human players, some of the players in this experiment were paired off against virtual opponents—computer programs that we had created.
There were two types of virtual players. The type A virtual player was programmed to make particularly low offers, between 13 and 16 percent, when playing as the proposer, and to accept any offer above 16 percent when playing as the responder. The type B virtual player was programmed to make generous offers, between 45 and 50 percent, as the proposer and to accept only offers above 45 percent when playing as the responder.
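The two virtual player types amount to simple threshold rules. A minimal sketch in Python, using the percentages described above (the function names are mine, purely for illustration):

```python
import random

def virtual_offer(player_type):
    # Type A makes stingy offers (13-16 percent of the pot);
    # type B makes generous ones (45-50 percent).
    lo, hi = (13, 16) if player_type == "A" else (45, 50)
    return random.randint(lo, hi)

def virtual_accepts(player_type, offer):
    # Type A accepts any offer above 16 percent;
    # type B accepts only offers above 45 percent.
    threshold = 16 if player_type == "A" else 45
    return offer > threshold
```

For example, a 20 percent offer is accepted by a type A responder but rejected by a type B responder, which is exactly the wedge that pushes the two groups of human players toward different norms.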
One group of human players in this experiment was paired off against virtual players of type A after playing ten games against a human opponent, while a second group of players was similarly paired off against virtual players of type B. The human players were not informed that at a certain point they would be playing against a computer program.
The experiment was conducted in Israel. In the first part of the experiment, in which human players faced human players in a series of ten games, the players played consistently with the norm of fairness typical of Israelis—the most common offer was slightly below 40 percent. But after a further ten to fifteen games against virtual players, the two groups adopted different norms of fairness. Human players facing type A virtual players made offers that ranged between 20 and 40 percent, while those facing type B virtual players shifted to making offers that never fell below 50 percent.
These new norms were rapidly adopted under the pressure of two different forces. Human players paired as responders against a proposing type A virtual player had to contend with very low offers, which they initially rejected. With time, however, they were forced to accept those offers, because continued rejection would have meant walking away from the experiment with very little in their pockets. Human players in the proposer role who tried outrageously low offers, down to 17 percent, against a type A virtual responder were surprised to discover that those offers were consistently accepted. That encouraged them to keep experimenting with low offers, and eventually most of their offers dropped to very low levels. Similar dynamics, in reverse, were observed among the human players facing type B virtual players: offers even slightly less generous than an even division of the money were rejected, and players were “educated” to offer only equal divisions.
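The drift in proposer behavior can be illustrated with a toy adaptation rule: lower the offer slightly after each acceptance, raise it after each rejection. This is only a caricature of the learning we observed, with a made-up starting point and step size, but it shows how offers settle near the responder’s threshold:

```python
def adaptive_proposer(responder_threshold, rounds=30, start=40, step=2):
    # A proposer who starts near the Israeli norm (~40 percent) and
    # nudges the offer down after an acceptance, up after a rejection.
    # All quantities are percentages of the pot; parameters are illustrative.
    offer = start
    for _ in range(rounds):
        if offer > responder_threshold:   # the responder accepts
            offer = max(1, offer - step)
        else:                             # the responder rejects
            offer = min(50, offer + step)
    return offer

# Against a type A responder (accepts anything above 16 percent),
# offers drift down from 40 percent to hover around that threshold.
print(adaptive_proposer(responder_threshold=16))
```

The same rule run against a type B threshold of 45 percent drifts upward instead, mirroring the two groups in the experiment.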
We concluded from this experiment that norms of fairness can be quite fragile. A principled stand to reject any offer I regard as insultingly low can easily disappear if I see that almost all the offers I get are insultingly low. In fact, such offers will then cease to be insulting almost by definition.
The behavior of the proposers in the ultimatum game is consistent with assumptions of selfishness and rationality. The behavior of the responders, however, remains a bit of a mystery. Why would anyone in the role of accepting or rejecting an offer leave money on the table just to punish the other player for making an insultingly low offer, when the game is played only once and the two players will never see each other again? Robert Aumann suggested an interesting answer in a distinction he draws between “act rationality” and “rule rationality.” According to this theory, limits to the cognitive resources available to us cause us to adopt simple behavioral rules that work well in most of the social interactions we encounter, but not necessarily in all of them. In other words, rather than planning out every little detail of our interactions, we settle for a pretty good plan and stick to it.
The rule of thumb that the responders to offers in the ultimatum game are using can be summarized as “never look like a sucker.” Since most of the important social interactions we have in our lives are repeated interactions, sticking to this rule is efficient. In repeated interactions an expression of willingness to accept low offers will likely lead others to try to exploit us the next time we interact with them again. Rule rationality is often driven by emotions, especially what we have called rational emotions. The desire for revenge and punishment, a sense of insult versus a sense of honor, are all elementary mechanisms for creating optimal rules to be used in daily interactions that are similar to the ultimatum game.
This theory has recently been supported by the results of an important neuroeconomic study. Neuroeconomics is a new field of inquiry that studies the brain activity taking place when people make economic decisions.5 In recent years researchers in economics and psychology have increasingly used functional magnetic resonance imaging (fMRI) to map brain activity while decisions are being made. The specific regions of the brain in use at any given moment are identified through measurements of blood oxygenation.
In one study, fMRI devices measured the relative activity in different parts of the brains of subjects while they were playing the role of responders in the ultimatum game. Researchers discovered that extremely low offers triggered activity in parts of the brain associated with disgust and the vomit reflex. The sense of disgust that accompanies our reactions to insulting offers is possibly part of a mechanism that evolved to protect us from being exploited in repeated interactions.
In short, it seems, people are literally disgusted by unfair behavior. Do we really want to use reason to talk ourselves into accepting it?