FIVE

Economics on the Move

In the twentieth century, economics, which had previously been completely integrated into the social and human sciences, created a new identity for itself – but at the price of becoming disconnected from the other disciplines.

The science of economics developed the fiction of homo economicus, that is, the simplifying hypothesis that decision makers are rational, meaning that they act in their own best interests given the information at their disposal (although economics emphasizes that this information may be partial or manipulated). Economic policy recommendations are consequently based on the idea of externalities, or market failures, which result in a difference between individual rationality and collective rationality such that what is good for an individual economic agent is not necessarily good for society as a whole.

Recently, through research on behavioral patterns and neuroeconomics, economists have turned back to psychology. The motive for this revival is the need to gain a better understanding of behavior. In fact, the construct of homo economicus (and its counterpart, homo politicus) has been controversial, as it is evident that we do not always behave as rationally as this hypothesis predicts. We all suffer from flawed thinking and decision making. More generally, over the past twenty years, economics has moved closer to the other social sciences, taking on board many of their insights. To be mildly provocative, I would even argue that anthropology, law, economics, history, philosophy, psychology, political science, and sociology are really one discipline, because their subjects of study are the same: the same people, groups, and organizations.

These forays by economists into the other human and social sciences are not evidence of voracious imperialism. Other disciplines have their own characteristics. They are often (though not always) less quantitative and less inclined toward formal theoretical analysis and the statistical processing of data. Perhaps a more significant difference is that researchers in other areas of the human and social sciences do not all adhere to the principle of methodological individualism cherished by economists,1 according to which the incentives and behavior of individuals must be the starting point for understanding the behavior of the groups to which they belong. In my view, it is essential that all the disciplines in the human and social sciences are open to, and nourish, one another. Economists have much to learn from other disciplines, and in turn their work can open new lines of research into individual behavior and social phenomena.2

Whole books could be written about how the discipline of economics now operates far beyond its traditional boundaries. This chapter simply provides a few examples, drawn mainly from themes close to my own research interests. I hope the reader will pardon this self-indulgent choice. My research covers only a small part of this expanded domain, but I hope to give the reader an idea of how much research economists currently do outside their classical territory.

AN AGENT WHO IS NOT ALWAYS RATIONAL: HOMO PSYCHOLOGICUS

For a long time, homo economicus has been represented as a decision maker who is aware of his own interests and pursues them in a rational way. He might lack information, in which case his decisions might not be as good as those he would make if he had full knowledge of the facts. He could also choose not to be completely informed, or not to think things through in detail, because doing so costs time and, potentially, money.3 But he pursues his own interests perfectly, whatever they are.

CONTRARY TO OUR PERSONAL SELF-INTEREST

Now let us give, by way of contrast, a few examples that do not correspond to the homo economicus model, possibly leading to dysfunctional behavior.4

We Procrastinate

The first example results from a simple lack of will. Too strong a preference for the present leads to procrastination: to putting off disagreeable tasks, to not committing enough to the future, to behaving impulsively. Many studies have been devoted to this short-termism; the early Greek philosophers discussed it, and Adam Smith addressed it in his book The Theory of Moral Sentiments (1759). But for almost the entire twentieth century, the subject disappeared from economists’ field of research. This has now changed.

Economists are interested in the phenomenon of procrastination because it has important consequences for economic policy. We often act against our own interests: left to ourselves, we tend not to save enough for our retirement, to abuse alcohol and drugs, to become addicted to gambling, to buy too quickly from door-to-door salesmen just to get rid of them, to eat too much fat and sugar, to continue to smoke when we would like to stop, to watch television when we really want to work or spend time with other people. In short, what we do today is not always consistent with what we would have wished to do.

We can think about our short-term behavior in terms of a conflict of goals between our different, successive “selves” (or “temporal incarnations”). We would like to stop smoking, but our present self wants to smoke one last pack of cigarettes, and leaves the disagreeable task of stopping to tomorrow’s self. Of course, tomorrow’s self also won’t have the self-discipline to stop. We always put too much weight on immediate pleasures and costs, and thereby sacrifice our long-term interests.

Policy makers face the dilemma of whether to respect the choices made by individuals (the present selves who make the decisions) or to act paternalistically (which can be interpreted as defending the individual’s longer-term interests). There are good reasons to be wary of paternalism generally, because it can be used to justify all kinds of state intrusion into personal choices. But it is easy to see why a government might want to correct the bias of procrastination. That is what governments do when they heavily subsidize retirement saving through a funded pension scheme, or guarantee a minimum retirement pension through a pay-as-you-go system, as in France and some other European countries. The government is also acting paternalistically when it levies high taxes on tobacco; or prohibits or regulates the market for drugs or gambling; or insists on a “cooling-off period” to allow consumers time to change their minds about certain purchases made by their present selves (for example, in the case of door-to-door selling).

Neuroscientists are also very interested in this phenomenon. Researchers have studied, for example, what happens in the brain when individuals are faced with intertemporal choices (decisions made today about the future). Volunteers are asked whether they would prefer to receive ten dollars immediately or fifteen dollars in six months – an extremely high interest rate, well in excess of normal interest rates on our savings. When they choose the immediate ten dollars, their limbic system is activated. The limbic system, which plays an important role in emotion, is an ancient part of the brain, well developed in all animals. When the option of fifteen dollars in six months is chosen, the prefrontal cortex, much more developed in humans, is activated.5 There may be a tension between the drive for instant gratification and our long-term interests, aspects handled by different parts of the brain.
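The “extremely high interest rate” implicit in this choice can be made explicit. A minimal sketch (compounding the six-month return into an annual figure is a standard convention, assumed here):

```python
# Implied interest rate in the choice "ten dollars now vs. fifteen in six months".
# The amounts come from the experiment described above; the annualization by
# compounding is my own convention for illustration.
now_amount, later_amount, years = 10.0, 15.0, 0.5

periodic_return = later_amount / now_amount - 1               # 50% over six months
annual_rate = (later_amount / now_amount) ** (1 / years) - 1  # compounded annually

print(f"six-month return: {periodic_return:.0%}")   # prints: six-month return: 50%
print(f"implied annual rate: {annual_rate:.0%}")    # prints: implied annual rate: 125%
```

Choosing the immediate ten dollars thus means turning down the equivalent of a 125 percent annual return, far above any savings rate.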

We Make Mistakes in Forming Our Beliefs

Most of our decisions have uncertain effects. This makes it important not to have too distorted a view of the respective probabilities of possible consequences of our actions. We are sometimes very poor statisticians, though. For example, a classic mistake is to think that nature will make sure that the actual outcomes will quickly match the theoretical probability of those outcomes. (Those who have learned statistics know that this in fact requires a very large number of draws, in order to be able to apply the “law of large numbers.”) We all know that flipping coins will give an equal chance of heads or tails; if we flip a coin a great many times, the proportion of tails will be close to 50 percent.6 Yet many of us make the mistake of believing that, when heads comes up three times in a row, the probability that next time it will be tails is greater than the probability that it will be heads.7 However, the coin has no memory; it will fall either way with a probability of 50 percent. This bias is also found when professionals carry out repetitive tasks: judges ruling on requests for asylum, loan officers in a bank granting credit, or baseball umpires calling strikes, all tend to make decisions that “compensate” for their recent decisions. In other words, a decision one way is more likely if the preceding decision went in the opposite direction.8
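A quick simulation illustrates both halves of the argument: the proportion of tails approaches 50 percent only over many flips, and three heads in a row tell us nothing about the next flip. The sample sizes below are illustrative choices:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Law of large numbers: the share of tails converges to 0.5 only slowly.
for n in (10, 1_000, 100_000):
    tails = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>6} flips: {tails / n:.3f} tails")

# Gambler's fallacy check: condition on three heads in a row and look at
# the next flip -- the coin has no memory, so heads still comes up half
# the time.
flips = [random.random() < 0.5 for _ in range(200_000)]  # True = heads
next_after_3_heads = [flips[i] for i in range(3, len(flips))
                      if flips[i - 3] and flips[i - 2] and flips[i - 1]]
share_heads = sum(next_after_3_heads) / len(next_after_3_heads)
print(f"P(heads | three heads just occurred) = {share_heads:.3f}")  # close to 0.5
```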

Another widespread flaw is the difficulty we have in correctly adjusting our beliefs to take account of new information. High school and university statistics lessons teach Bayes’s theorem, a formula describing the correct way to update probabilities in the light of new information. In standard micro- and macroeconomic models, agents are assumed to review their beliefs rationally (that is, in line with Bayes’s theorem) as soon as they have new information. But in the real world this is often not the case. This is true even for the best-educated. As I noted in chapter 1, Kahneman and Tversky showed that medical students at Harvard, an elite group, made elementary statistical errors in calculating the probability of an illness on the basis of symptoms alone, demonstrating that statistical computation is not intuitive.9 Another famous experiment by the same authors involved asking, “Is it more probable that more than 60 percent of the births on a given day would be boys in a small or in a large hospital?”10 Most people reply that the probability must be the same, no matter what the size of the hospital. However, the probability that more than 60 percent of newborns are male is higher when the hospital is smaller. Intuitively, in a hospital that had one birth per day, the probability of this being a boy would be (about) 50 percent; with two births per day, the probability that more than 60 percent of the births would be boys is the probability that both births would be boys, or 25 percent. With a large number of births, the probability that more than 60 percent of newborns are boys becomes almost zero: the proportion of boys born in a large hospital will be close to 50 percent, and thus lower than 60 percent.
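The hospital example can be checked directly with the binomial distribution (the hospital sizes below are illustrative):

```python
from math import comb

def prob_more_than_60pct_boys(n_births, p_boy=0.5):
    """P(share of boys strictly exceeds 60%) among n_births independent births."""
    threshold = 0.6 * n_births
    return sum(comb(n_births, k) * p_boy**k * (1 - p_boy)**(n_births - k)
               for k in range(n_births + 1) if k > threshold)

# n = 1 gives 0.5 and n = 2 gives 0.25, matching the reasoning in the text;
# the probability then collapses toward zero as the hospital grows.
for n in (1, 2, 15, 100, 1000):
    print(f"{n:>4} births/day: {prob_more_than_60pct_boys(n):.4f}")
```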

We Feel Empathy

We don’t always act in our own material interest, for example the self-interest that would maximize the money in our bank account or more generally our command over goods or amenities. We give to charities. We help strangers when we know we will never see them again. In both cases, we expect nothing in return.

Adding empathy to the description of an economic agent’s goals poses no problem for classical economic theory, as it simply requires redefining self-interest: if I internalize part of your well-being, it becomes, de facto, mine. However, pro-social behavior – that is, behavior in which the individual does not put his own interests above those of everyone else – is much subtler than that, as we will see. Simply adding a dose of empathy to homo economicus only slightly improves this paradigm’s power to explain how individuals really behave.

And Then Also …

There are other deviations from pure rationality studied in experimental economics: excessive optimism, a strong aversion to losses, the sometimes useful but often counterproductive role of our emotions in decision making, selective memory, and our own manipulation of our beliefs.

PRO-SOCIAL BEHAVIOR

Let us turn to pro-social behavior, that is, behavior in which individuals do not give priority to their own material interests, but internalize the well-being of others in a disinterested way. This behavior contributes greatly to the quality of social life. Of course, some of our cooperative behavior only appears to be pro-social. In a repeated relationship we have an interest in behaving well, even from the narrow perspective of our self-interest. The person with whom we are interacting, or the social group to which we belong, will behave differently toward us depending on whether we cooperate or pursue our own short-term interest.

But as we have noted, in a narrow economic model, no one gives to charities, invests in socially responsible mutual funds, buys fair trade products, or works for NGOs at salaries far below the average. Nor do we find any economic agent who votes, because voting cannot be explained by self-interest: the probability that your vote is pivotal, and could change the election result, is almost zero, except in very small groups. Even in the famously close American presidential election in 2000, when the Florida outcome determined the winner, the difference was a few hundred votes. One single vote would have changed nothing. Voting solely to increase the chances that one’s preferred candidate is elected would never be worth the quarter of an hour it takes to do it, in the narrow rational choice approach. This means we are either deluding ourselves by thinking that our vote will in reality advance our preferred cause, or we are not voting to satisfy our economic or ideological interest, but rather because we think it is a duty to do so; we want to look good to others and to ourselves.11
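The “almost zero” probability of being pivotal can be quantified. Here is a sketch under a deliberately generous assumption: a two-candidate race in which each of the other voters is equally likely to vote either way. Any predictable lean in the electorate makes the figure astronomically smaller still:

```python
from math import comb, pi, sqrt

def prob_pivotal(n_other_voters):
    """Probability that one extra vote breaks an exact tie among the others."""
    n = n_other_voters
    if n % 2:                              # an odd number of others cannot tie exactly
        return 0.0
    if n <= 10_000:
        return comb(n, n // 2) / 2**n      # exact binomial computation
    return sqrt(2 / (pi * n))              # Stirling approximation for large n

for n in (10, 1_000, 1_000_000):
    print(f"{n:>9} other voters: P(pivotal) = {prob_pivotal(n):.2e}")
```

Even in this knife-edge scenario, with a million voters the chance of swinging the result is below one in a thousand; weighed against a quarter of an hour in the voting booth, narrow self-interest alone cannot explain turnout.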

More generally, individuals sometimes make decisions that do not correspond to their strict material interests, and altruism is one of the reasons we might use to explain why they do so. But altruism alone is a much too simplistic explanation, as we are about to see.

ALTRUISM AND SELF-IMAGE

The internalization of others’ well-being allows us to explain the existence of charitable donations, but it doesn’t explain everything. To understand why, it is useful to refer to a well-known game in the social sciences, the “Dictator Game” (see Figure 5.1).

In conditions of anonymity,12 an individual (an active player, called the Dictator) is asked to choose on the computer between action A, which guarantees the Dictator gets six dollars and gives one dollar to the other participant in the experiment (a passive player unknown to the Dictator); and action B, which gives five dollars to each of them. We can describe action A as selfish and action B as generous. Rational behavior, in the classic sense, implies the active player will choose A, which maximizes the Dictator’s revenue. In practice, however, about three-quarters of players choose B.13 The sacrifice associated with generosity is small enough that most players make it. But can we say that this is because they have simply internalized the other player’s well-being?


Figure 5.1. The Dictator Game.

In fact, generosity is a very complex phenomenon with three motivating factors: intrinsic motivation (we are spontaneously and naturally generous), extrinsic motivation (we are moved by external incentives such as tax deductions to be generous), and the desire to look good (to project a good image to others and to ourselves).

It turns out that our self-image plays an important role in the Dictator Game, where the Dictator is dealing only with him- or herself. (Anonymity is total. Even the experimenter doesn’t know who the player is. Hence, concerns about social image play no role in most laboratory experiments.) More broadly, social image and social prestige are also essential motivations, indicated by the fact that only 1 percent of the donations made to museums or universities are anonymous. The same point is illustrated in Figure 5.2, which shows that when there are categories of donations (for example, a “Silver Donor” gives between $500 and $999, and a “Gold Donor” gives more than $1,000) we see more donations in amounts that allow the donor to just squeak into the next category up, rather than a uniform distribution of amounts donated.


Figure 5.2. Grouping phenomena (donations by category).

An interesting study of the same phenomenon focused on the introduction of voting by mail in several Swiss cantons.14 According to traditional economics, a priori the introduction of voting by mail would be expected to increase participation in elections, because the cost of voting (at least for those who prefer to vote by mail rather than going to the polling station) decreases. However, the study showed that participation did not increase – and in some cantons, especially rural ones, it even decreased after voting by mail was introduced. The reason is that in villages where the electors know each other, and therefore social pressure is intense, people go to the polling station partly to show they are good citizens. As soon as there is a reasonable excuse for not going to the polling station, the loss of social prestige connected with not voting is no longer obvious, as no one can be sure you did not vote. This study demonstrates, once again, the complexity of social behavior and its motivations.

Reciprocal Altruism

Humans have an important characteristic distinguishing them from other species: cooperation among large groups of individuals who are not closely genetically linked. (Bee hives and ant colonies have strong genetic links among themselves, while cooperation within other species such as other primates occurs in small groups.) As I noted earlier, we need to distinguish between cooperation motivated by self-interest, based on a repeated relationship with another individual or group, and cooperation based on social preferences, as in the Dictator Game.

Another famous game involving social preferences is the Ultimatum Game. Player 1 is given the task of dividing a total of ten dollars between him- or herself and Player 2. So far, it resembles the Dictator Game. As in the Dictator Game, the Ultimatum Game guarantees the players’ anonymity: they do not know with whom they are playing, to rule out cooperation inspired by material self-interest. The Ultimatum Game differs from the Dictator Game because the outcome depends on Player 2’s goodwill: if Player 2 rejects the allocation proposed by Player 1, neither receives anything. In practice, an offer to split the ten dollars equally is always accepted, whereas when Player 1 offers Player 2 nothing, or just one or two dollars (leaving ten, nine, or eight dollars for Player 1) this is often rejected by Player 2. This happens even though Player 2 would be better off accepting one or two dollars rather than getting nothing. Anticipating this situation, Player 1 often rationally proposes distributions that are less extreme, or even equal.15 We are frequently moved by reciprocal altruism: we tend to be nice to people who treat us well, and conversely take revenge on people whose behavior to us, or people close to us, we find objectionable – even if this vengeance is costly for us.

Reciprocity seems to be universal. Research undertaken in fifteen microsocieties (such as the Hadza in Tanzania or the Tsimanes in Bolivia) found behaviors in the Ultimatum Game that are similar to those reported above. Interestingly, societies that involve a high level of exchange (and thus do not have a way of life centered on the family) seem to be more cooperative in these experiments.16

THE FRAGILITY OF ALTRUISM AND HONESTY

The Power of Excuses and Moral Wriggle Room

To understand the difficulties in trying to produce a coherent picture of altruism, let’s return to the Dictator Game, modifying it in the way illustrated in Figure 5.3.

There are two “states of nature.” In the first state, the rewards are the same as before, choice A being the selfish action and choice B the generous action. If the Dictator chooses A, the Dictator gets six dollars and the other player gets one dollar, whereas if the Dictator chooses B, both players receive five dollars. In the second state of nature, A is better than B for both players. In this second state of nature, it is therefore optimal for the Dictator to choose action A, from both his own individual and the collective points of view.


Figure 5.3. The moral wriggle room game.

So far so simple, except that at the beginning of the experiment the Dictator does not know which state of nature prevails. Both states are equally likely. The experimenter asks if the Dictator would like to know which it is (it will cost nothing to find out). A rational player should say yes. In particular, an altruist will want to know whether to choose B (in the first state of nature) or A (in the second state of nature, in which both players will do best with A). But experiments reveal that most Dictators do not want to make an informed choice; most prefer not to know the state of nature and choose A, the selfish act, hiding behind the “excuse” that there is a state of nature in which this choice would not penalize the other player. In other words, they prefer not to know that they may be in the first state, which would force them to choose between selfishness and altruism. This is the behavior of the pedestrian who crosses the road to avoid meeting a beggar to whom he or she will feel “obliged” to give.17

A laboratory experiment conducted by Armin Falk (of the University of Bonn) and Nora Szech (of the University of Karlsruhe) and published in Science shows that sharing responsibility may erode moral values.18 This erosion applies to markets, but it is just as powerful as soon as a decision involves a single other person, enabling a semblance of shared responsibility. In all organizations, the existence of excuses (“I was asked to do it,” “Someone else would do it if I didn’t,” “I didn’t know,” “Everybody does it”) makes individuals less resistant to unethical behavior. An important goal of research is to understand better how institutions, from markets to administered organizations, affect our values and behavior.

Contextual Effects

Let’s consider another variant of the Dictator Game (Figure 5.4) in which the experimenter adds a third option, C, which is even more selfish than option A. Normally, one would expect a subject who altruistically chooses B when the choice is between only A and B (as in Figure 5.1) to choose B again when the more selfish option C is on the table. In other words, the introduction of option C should not affect the frequency of the generous choice B;19 and in particular it should not affect the choice between A and B for those who would not choose C whatever happened. In practice, however, the addition of C significantly diminishes the frequency with which B is chosen, and makes the choice of A proportionately much more probable than B.20 Alternatives may be relevant, even if they are not chosen!

There are several possible interpretations of the importance of context here. For instance, it may be that option C provides the Dictator with a narrative (“I wasn’t really being selfish”) by making option A seem less selfish than when the choice was only between A and B. Option A becomes a compromise. Or perhaps the player interprets the introduction of option C as a signal of a norm, indicating that the experimenter does not necessarily expect him or her to be generous. Either way, this experiment and others show the importance of the context in which decisions are made – an irrelevant alternative (to the extent that we would not choose it in any case) can affect our choice.


Figure 5.4. The importance of context.

More generally, context can influence choices when individuals interpret the way in which choices are presented (and not solely the options themselves) as relevant. This idea has been applied in many ways. For example, a company or a state that offers its employees or citizens a default option for a retirement savings strategy is implicitly asserting that this choice is suitable for most people, even if other choices might be better for some people in some situations. There is an extensive literature on this guidance of decisions, described as “libertarian paternalism”21 (or “nudging”). The oxymoron “libertarian paternalism” expresses the idea well: the individual has complete freedom to make the best choice, if he knows what it is; but his choice is guided when he lacks important information or remains undecided.

The Role of Memory

Many other experiments show that our pro-social behavior is fragile and complex, and that memory plays an important role. Consider a game created by psychologists in which players can cheat without being unmasked. For example, a volunteer participant in the experiment receives a random allotment of between one dollar and ten dollars (a figure shown on his computer screen), with a probability of 1/10 for each amount. The experimenter does not know this figure. The volunteer declares the figure, and receives the amount he declares. He can therefore cheat and declare seven dollars (and receive seven dollars) even though he is only entitled to five. How can cheating be spotted under such conditions? By the frequency of the declarations.22 If the subjects are honest and the sample is sufficiently large, approximately 10 percent of the sample should declare one dollar, 10 percent two dollars, and so on. So, if high figures are declared more frequently than they should be, that indicates cheating (probably not in a uniform way though: we know from other experiments that some people never cheat, whereas others do cheat, but to varying degrees). But the experiment is not over.
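The detection logic can be sketched as a simulation. The 30 percent share of cheaters, and the assumption that a cheater always declares the maximum, are illustrative choices of mine, not figures from the experiment; the point is that cheating is invisible subject by subject but shows up in the aggregate frequencies:

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

# Each subject privately draws an amount between $1 and $10 and declares a
# figure. Honest subjects declare their draw; cheaters declare the maximum.
N = 100_000
CHEATER_SHARE = 0.30  # illustrative assumption

declared = []
for _ in range(N):
    draw = random.randint(1, 10)          # the true, privately observed amount
    if random.random() < CHEATER_SHARE:
        declared.append(10)               # a cheater declares the top amount
    else:
        declared.append(draw)             # an honest subject tells the truth

for amount in (1, 5, 10):
    freq = declared.count(amount) / N
    print(f"declared ${amount:>2}: {freq:.3f} (honest benchmark: 0.100)")
```

Declarations of ten dollars come in at roughly 37 percent rather than the 10 percent an honest population would produce; the excess over the benchmark estimates the share of cheaters in the aggregate, even though no individual can be unmasked.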

In a second phase, the game is played again, but only after the experimenters have read out the Ten Commandments or the university’s honor code to the participants.23 The participants cheat much less in this second experiment than they did in the first. This is another experiment that undermines the traditional concept of a wholly rational homo economicus, just as it disproves any other equally simplistic theory of behavior. Reading the Ten Commandments or the university’s honor code makes one’s cheating harder to ignore, and thus more difficult to repress in one’s memory.

When We Are Punished for Our Good Deeds …

To illustrate the full complexity of generosity, we should mention the experiments on ostracism conducted by Benoît Monin and his coauthors.24 These experiments confirm that we like generous people … unless they are too generous. We do not much care for people who give us lessons in morality, even indirectly. Individuals perceived as too generous end up being ostracized by others. The problem is that people who are too virtuous provide a comparison point25 that does no favors to our own image. Rather than endure this permanent reminder of our selfishness, we prefer to cold-shoulder those who make it too obvious.

MANIPULATING OUR OWN BELIEFS

Game theory and information theory have found an unexpected but natural home in psychology. For centuries (even millennia), psychologists and philosophers have emphasized the way people manipulate their beliefs: individuals usually seek to repress, forget, or reinterpret information that is unfavorable to them.26 Economists have recently been exploring these themes regarding individuals’ “self-manipulation” of their beliefs. For example, Roland Bénabou (Princeton University) and I have described the self-manipulation of beliefs as the equilibrium of a game between the different selves of the same individual – a game in which the individual may try to “forget” (repress) information that might damage his self-confidence.27 The individual manipulates his beliefs, and at the same time may be aware that he has a selective memory.

To understand self-manipulation, we must first understand the “demand” for self-manipulation: Why would an individual want to lie to himself? After all, classical decision theory shows that having better information allows us to make better decisions, in full knowledge of the facts. To repress information is to lie to oneself and thus degrade the quality of information and, consequently, one’s decision making as well. We can identify three reasons why individuals may try to lie to themselves:

1.  The fear of a lack of willpower and of the concomitant procrastination that might occur in the future (more self-confidence enables one to counteract, at least in part, this lack of willpower, by giving oneself the energy to act).

2.  The fact that we feel pain and pleasure before the actual experience – our projecting into the future gives rise to “anticipatory utility or disutility.” We enjoy vacations and other pleasant events well before they occur. Conversely, the very prospect of a surgical intervention makes us unhappy. The existence of anticipatory (dis)utility explains why we often forget possible negative outcomes such as accident or death, with both functional and dysfunctional consequences: this lack of concern makes for a happier life while at the same time leading to inefficiencies in decision making – for example, not having a medical test or not wearing a seatbelt in one’s car.

3.  The “consumption” of beliefs people have about themselves (we care about our self-image; we want to believe that we are intelligent, attractive, generous, and so on).

On the “supply” side of self-manipulation, self-deception may operate through:

1.  The manipulation of memory (through strategies of encoding, repression, or rehearsal)

2.  The refusal to hear, process, or pay attention to certain kinds of information

3.  The choice of actions that signal particular character traits.

Plato insisted that manipulating one’s own beliefs is bad for individuals. On the other hand, many psychologists (William James, Martin Seligman, and others) have emphasized that people need to see themselves in a positive light in order to motivate themselves both to engage in activities and to further their own well-being. When motivated by the fear of a future lack of willpower, this self-deception can be shown to be beneficial for individuals with a serious self-control problem, but not otherwise. Building on that work, we and other researchers have studied other themes connected with the manipulation of beliefs, ranging from the analysis of personal resolutions, rules of life, identity, and religious precepts to the impact of collective beliefs on political choices.28

HOMO SOCIALIS

TRUST

Trust is at the heart of economic and social life. True, it is not always necessary. The invention of money, for instance, has simplified the mechanics of exchange. So long as we can verify the quality of a good, we can buy it from a stranger in exchange for money. If we cannot verify the quality of the good before purchase, we can often count on the mechanism of reputation: we return to a merchant with whom we were satisfied, or go to a merchant a friend has found satisfactory; the merchant understands this mechanism and will make an effort to build up and retain a loyal clientele.

In analyzing behavior, researchers are interested in the trust we place in others. In economic terms, this concept is simple to formalize: it is treated as a problem of imperfect information about the reliability and preferences of others. Over time, all agents revise their beliefs about the people with whom they interact. By being around others and interacting with them, we learn about them and can better evaluate their reliability and the trust we can place in them.

We can thus learn whether people are trustworthy if we repeatedly interact with them, but we have less information about how to behave in a one-off interaction with a stranger; for instance, when we buy a souvenir whose quality we cannot evaluate at a tourist attraction, or when we decide to trust a neighbor or a babysitter we do not know well to take care of our children, or when we begin a personal relationship. We can form an opinion very rapidly about someone based on certain signals, but these opinions are very imperfect, a fact that has even served as the basis for TV game shows based on trust.29

We now know that our hormones influence us in this situation. The economists Ernst Fehr (Zurich), Michael Kosfeld (Frankfurt), and their coauthors30 injected volunteers with the hormone oxytocin31 as part of an experimental “trust game.” This game for two players, Player 1 and Player 2, can be described as follows:

•  Player 1 receives money from the experimenter, perhaps ten dollars, and chooses a sum between zero and ten dollars to give to Player 2. Player 1 keeps the rest.

•  Player 2 then receives, also from the experimenter, three times the amount given by Player 1; for example, Player 2 receives fifteen dollars if Player 1 has given five dollars, half the initial ten.

•  Player 2 freely decides to give a sum back to Player 1. There is no obligation as to the amount. Player 2 is then in the position of a Dictator and can decide not to give anything, hence the importance of Player 1’s confidence in the reciprocity of Player 2.

Once again, the players are anonymous. Each player is behind a computer and does not know (and will never know) the identity of the player with whom he or she has been paired.

The ideal for both players (if they could agree in advance) is for Player 1 to give all ten dollars to Player 2. This would leave Player 1 with nothing, but would maximize the size of the pie to be shared (3 x $10 = $30). But the structure of the game means they cannot agree to a strategy in advance. The way the thirty dollars is shared is therefore totally at the discretion of Player 2. Giving all ten dollars to Player 2 would thus require Player 1 to have an enormous amount of confidence in Player 2’s reciprocity.

Player 2’s “rational” behavior (that is, the choice that maximizes revenue) obviously consists in keeping everything. For Player 1, anticipating that Player 2 will give nothing back, it consists in not giving anything. These “rational” choices minimize the size of the pie (which remains equal to the initial ten dollars kept by Player 1). In practice, things go differently in experiments. A nonnegligible number of individuals in the position of Player 2 feel obliged to reciprocate when Player 1 has trusted them. Rationally anticipating this behavior, Player 1 gives some money, hoping that Player 2 will behave reciprocally.
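The payoffs of the trust game are easy to compute directly. Here is a minimal sketch of the rules just described (the function name and the way the parameters are packaged are mine):

```python
def trust_game(sent, returned, endowment=10, multiplier=3):
    """Final payoffs (Player 1, Player 2) in the trust game.

    Player 1 sends `sent` out of `endowment`; the experimenter triples it;
    Player 2 then freely returns `returned` dollars of the tripled amount.
    """
    assert 0 <= sent <= endowment
    received = multiplier * sent
    assert 0 <= returned <= received
    payoff_1 = endowment - sent + returned
    payoff_2 = received - returned
    return payoff_1, payoff_2

# "Rational" play: Player 2 returns nothing, so Player 1 sends nothing,
# and the pie stays at the initial ten dollars.
print(trust_game(0, 0))    # (10, 0)
# Full trust with an equal split: the pie grows to thirty dollars.
print(trust_game(10, 15))  # (15, 15)
```

Any positive amount sent makes the total pie larger, which is exactly why Player 1’s confidence in Player 2’s reciprocity matters so much.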

The interesting point noted by Ernst Fehr, Michael Kosfeld, and their coauthors is that the injection of oxytocin makes it possible to increase the feeling of trust in the other player, and thus increase, on average, how much Player 1 gives. This is not very reassuring, because it is easy to imagine commercial applications for altering behavior in this way.

With or without oxytocin, the trust game just described reproduces in the laboratory the mechanism of reciprocity, one of the most powerful social mechanisms. As I have said, we feel an obligation to those who have shown generosity toward us, and we may seek to take revenge on people who are rude to us – even at a personal cost. This principle is commonly used in marketing, where free samples and gifts play on the principle that “who gives, receives.”

Reciprocity in economics inspired the hypothesis that an employer can increase profits by offering a higher salary to prospective employees than is needed to attract them (i.e., the market rate for the job) because they will be grateful for the generosity and will work harder. This seems to be true.32 However, the effect may be temporary, as is shown by an experiment conducted on a tea plantation in India.33 The base salary of the pickers was increased by 30 percent, while their variable remuneration (which depended on the quantity of tea they picked) was reduced. Overall, the pickers would earn more no matter how much tea they harvested (but the least productive pickers received the largest relative pay raise).34 Contrary to the predictions of the classic economic model, the productivity of the pickers (who now had weaker incentives to work because their remuneration depended less on the quantity picked) significantly increased in comparison to the control group. At the end of four months, however, homo economicus was back: the conventional economic prediction that weaker incentives would reduce effort was more or less verified.

STEREOTYPES

Sociologists rightly emphasize the importance of not observing individuals out of context, that is, without taking into account their social environment. Individuals are part of social groups, and these groups affect how they will behave. The group defines the individual’s identity and the image that he or she wants to project to others and to him- or herself. Individuals serve as models or examples: seeing people close to us, people we trust and with whom we identify, behaving in a certain way affects our behavior. Here I would like to discuss another kind of influence exercised by the group: the influence that operates through the way others view the group.

Countries and ethnic or religious groups are perceived as “honest,” “industrious,” “corrupt,” “aggressive,” or “concerned about the environment,” in the same way that firms get a good or a bad reputation depending on the quality of their products.35 Such stereotypes and collective reputations affect the trust placed in members of a group when they interact with people outside the group.

In a way, the group’s reputation (whatever it is at a particular moment in time) is no more than the result of the past behavior of the individuals who compose it. Suppose that individual behavior is observed only imperfectly; in fact, if individual behavior were observed perfectly, individuals would be judged entirely on this behavior, and collective reputation would play no role at all. Conversely, if they were never observed, individuals would make little effort to behave responsibly, because the collective reputation is a public good for the group. Defending the collective reputation entails an entirely private cost but a benefit that is shared by the whole community. This is why there is a tendency toward free-riding. Taxi drivers who charge undue supplements or winemakers who adulterate their wine do great harm to other members of their profession. It is thus possible to reconcile methodological individualism (the taxi driver will pursue his own interest, which may not coincide with that of the group) and holism (according to which the behavior of individuals cannot be understood without considering the properties of the whole of which they are parts).

Individual behavior and collective behavior are, to some extent, complementary. An individual has weak incentives to behave well if his community has a bad reputation, because he will not be trusted by others anyway and therefore will have fewer opportunities to interact with – and less incentive to develop a good reputation among – those outside his community. In turn, this rational behavior by the individual reinforces others’ prejudices regarding the group and contributes to its negative stereotype. Two groups that are, a priori, identical may have very different stereotypes. It is possible to show, moreover, that collective reputations are subject to hysteresis;36 in particular, a country, a profession, or an enterprise can suffer from prejudices for a very long time before being able to correct its reputation. Thus it is better to avoid a bad collective reputation at all costs, because it can become self-fulfilling and may very well persist.
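This complementarity can be illustrated with a deliberately crude simulation of my own (not the model in the paper cited): a member with a private cost of behaving well, uniformly distributed on [0, 1], does so when the trust earned from outsiders, proportional to the group’s current reputation, exceeds that cost; tomorrow’s reputation is today’s fraction of well-behaved members.

```python
def reputation_path(r0, benefit=1.5, steps=20):
    """Iterate R(t+1) = min(benefit * R(t), 1).

    R(t) is the group's reputation, i.e. the fraction of members behaving
    well. A member with private cost c (uniform on [0, 1]) behaves well
    when the trust he earns, benefit * R(t), exceeds c, so the fraction
    behaving well next period is min(benefit * R(t), 1).
    """
    r = r0
    for _ in range(steps):
        r = min(benefit * r, 1.0)
    return r

# A group that starts out trusted converges to the good equilibrium...
print(reputation_path(0.5))  # 1.0
# ...while an a priori identical group that starts out distrusted is stuck.
print(reputation_path(0.0))  # 0.0
```

Both steady states are self-fulfilling, which is the hysteresis described above: the dynamics alone never carry a group from the bad equilibrium to the good one.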

HOMO INCITATUS: THE COUNTERPRODUCTIVE EFFECTS OF REWARDS

THE LIMITS OF INCENTIVES IN MAINSTREAM ECONOMICS

In chapter 2, we saw that economists are often criticized for emphasizing the role of incentives, i.e., for envisioning a world in which an agent’s behavior is guided only by carrot and stick. There is some truth in this view, insofar as understanding the role of incentives is the bread and butter of the discipline of economics, but it also ignores the evolution of economics over the past thirty years.

First, economists have argued that incentives “work better” – that is, create behavior more in accord with the objectives of the organization or the society – in some circumstances than in others, when the effects of incentives can be limited or even counterproductive. The corresponding theories and empirical results are entirely in line with our personal experiences. Here are a few examples.

Suppose that the economic agent has several tasks to complete. For example, a teacher (in a school or university) has, on the one hand, to pass on to students the knowledge necessary to move on to the next class, pass an exam, or get a job. On the other hand, taking a longer-term perspective, students must be trained to think for themselves. If the teacher is paid based on the success of pupils in their exams, the teacher will focus on exam technique, to the detriment of the students’ long-term development, which is much more difficult to measure and thus more difficult to reward. That does not mean that we have to give up on the idea of providing incentives for teachers; in certain environments, they may be beneficial. Esther Duflo, Rema Hanna, and Stephen Ryan have shown, through an experiment conducted in India, that teachers react positively to financial incentives and supervision, with the result that students are absent less often and perform better.37 But we must be very careful not to distort the educational process by introducing incentives that are not well thought out and tested.

The problem of “multitasking” is found in many domains.38 This book provides some examples: some agents in finance, facing incentives based on short-term performance, behaved in ways that were harmful over the long term, leading to the crisis of 2008 (chapter 12). A regulated company that is generously rewarded for reducing costs will tend to sacrifice maintenance, increasing the risk of accidents. Therefore, strong incentives to reduce costs must be accompanied by increased supervision of maintenance by the regulator (chapter 17).

Numerous other drawbacks of strong incentives have been pointed out in economic research. These kinds of incentives are not appropriate when the individual contribution of an agent within a team is difficult to identify, or more generally when the agent’s performance depends on factors that cannot be measured, and over which the agent has no control. In such circumstances the agent would end up being rewarded simply for being lucky enough to have good teammates. Conversely, the agent might be unjustly punished for the bad luck of having poor teammates. Another limitation of strong incentives arises in a hierarchy. Such incentives increase the benefits of manipulating information and thereby encourage collusion in internal cliques. For example, a foreman may collude with workers to misreport performance or the difficulty of their task, or top executives may capture the board of directors to the detriment of shareholder interests. Finally, strong incentives may not be needed: if the relation between principal and agent is repeated, a relation of trust may helpfully replace formal incentives and actually improve on them to the extent that it is flexible, i.e., contingent on fine information.

THE CROWDING OUT OF INTRINSIC MOTIVATION

Another kind of criticism is that extrinsic incentives can kill intrinsic motivation. Consequently, an increase in extrinsic incentives may be counterproductive and result in less participation or less effort. This question is crucial for public policy. For example, should people be paid for giving blood, as is the case in some countries? Should we count on the goodwill of individuals or put a policeman on duty? To protect the environment, is it better to subsidize the purchase of hybrid cars or the purchase of green boilers in the home instead?

To study pro-social behavior, Roland Bénabou and I started with the assumption that individuals differ both in their intrinsic motivation to participate in providing a public good and in their desire for individual gain. Individuals are moved by three factors: an intrinsic motivation to contribute to the public good; an extrinsic motivation in the form of a financial reward (r in Figure 5.5) for behaving well, or, equivalently, in the form of a penalty equal to r for bad behavior; and the attention they give to the image of themselves that they project. Starting from a statistical distribution of agents’ intrinsic and financial motivations, we determined the way in which behavior changes (on average) according to the extrinsic incentive given to individuals (see Figure 5.5, which shows on the vertical axis the total quantity of the public good provided by the agents, and on the horizontal axis the payment given to those who contribute).

[image]

Figure 5.5. Intrinsic and extrinsic motivation: supply of the public good in relation to a monetary incentive r. The different curves correspond to different levels of the individual’s attachment to the image of himself that he projects (a higher curve corresponds to a greater importance accorded by the individual to his image). When the image becomes sufficiently important to the individual, an interval appears in which an increase in the reward has a counterproductive effect.

Using this model, we can look into questions such as: Should we pay someone for giving blood? For homo economicus, it is clear that a reward will encourage him to increase his donations; this is shown in Figure 5.5 by the lower curve, for which a higher reward increases “donations” of blood. But if the individual is preoccupied with his image, the results are what an economist would have to describe as curious.

In a famous book published in 1970, Richard Titmuss explained that we should not pay blood donors because it would destroy their motivation to behave in a pro-social way.39 Considering the contribution of different types of motivation helps us understand this argument. On the graph in Figure 5.5, we can see that if the donor puts enough weight on his image, there comes a point at which the total quantity of blood donated diminishes as the financial reward increases. This is because, if they are paid, then donors who give their blood in part to project an image of themselves as generous fear that others will suspect that they are in it only for the money. Thus, the presence of image concerns can break the positive relationship (generally assumed in microeconomics) between remuneration and effort or results.
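The signaling logic behind this argument can be made concrete with a toy simulation (the parameterization is my own, not the published model): agents differ in intrinsic motivation v and greed g, and an observer who sees someone contribute infers the average v among contributors; that inferred image feeds back into the decision to contribute and is solved by fixed-point iteration.

```python
import itertools

def participation(r, mu, cost=1.0, grid=20, iters=200):
    """Fraction of agents contributing in a toy image-signaling model.

    Agents have intrinsic motivation v and greed g, each on a uniform grid
    over [0, 1]. An agent contributes when v + g*r + mu*image > cost, where
    `image` is the average intrinsic motivation of contributors (what an
    observer infers about someone seen contributing), found by iteration.
    """
    types = [(v / grid, g / grid)
             for v, g in itertools.product(range(grid + 1), repeat=2)]
    image = 0.5  # initial guess: observers expect average motivation
    for _ in range(iters):
        motives = [v for v, g in types if v + g * r + mu * image > cost]
        image = sum(motives) / len(motives) if motives else 0.0
    contributors = [v for v, g in types if v + g * r + mu * image > cost]
    return len(contributors) / len(types)

# Without image concerns (mu = 0), a higher reward can only raise
# participation, as for homo economicus on the lower curve of Figure 5.5.
assert participation(0.5, 0.0) >= participation(0.2, 0.0)
```

In the published model the inference is rich enough that, when image concerns are strong, total supply falls as the reward rises over an interval; this sketch only assembles the ingredients (heterogeneous motives and the inference drawn from contributing) under assumptions of my own.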

Thus, extrinsic incentives can crowd out intrinsic motivation. Beyond the possible crowding-out effects of incentives, which probably feature less frequently in the contracting and trading situations most often studied by economists than in social contexts, the theory predicts that financial incentives will promote pro-social behavior less often when the people being paid are observed by peers who might doubt their motivations. When that happens, the image the beneficiaries project if they respond to financial incentives may suffer. These considerations are very useful for public policy. If we return to the question asked earlier – Is it better to subsidize the purchase of a hybrid car or the purchase of a green boiler? – the answer is clearly that it is better to subsidize the heating system, because it is not usually seen by other people, so financial incentives will be more powerful in this case. A car is visible to everyone, and thus the weight of social approval will also be taken into account by the buyer.

This theory has been tested in the laboratory and in the field. In particular, a team of two economists and the psychologist Dan Ariely40 has shown that individuals contribute more to a good cause if others can see them doing it (confirming the hypothesis that people are motivated by the image of themselves that they project), and that financial incentives are very powerful when the contribution is not observed. But the team also showed that monetary incentives have little effect when the contribution is observed. This confirms the theory that people worry that if they are paid, their contribution will be interpreted as a sign of greed rather than generosity, and thus send the “wrong” signal to others.41

The idea that norms emerge from the social signals conveyed by behavior can also be tested in the field. Recent studies have measured the impact of extrinsic incentives on social norms and individuals’ behavior in areas as varied as tax evasion in Britain,42 the ethnic group that Chinese parents of mixed origin (one parent being Han, the other belonging to a minority) declare for their children,43 and the desertion of British soldiers during the First World War.44

In the example of blood donations, we saw that the contribution to the public good may decrease if donors are paid, because their generosity may then be interpreted as greed. A different channel for the crowding out of intrinsic motivation by extrinsic incentives arises when the payment might reveal information held by the “principal” (the individual who sets the reward) about the task, or about his trust in the person doing it. This idea once again overlaps with the work of psychologists, for whom a reward can have two effects: the classic incentive effect (it encourages us to make a greater effort) and the effect connected with what this reward reveals about the agent’s competence or the difficulty of the task. For example, paying your child to get good grades in school can, over the long run, have a perverse effect, because children risk losing their intrinsic motivation to study and may become motivated solely by the money. The theoretical explanation is different from that for blood donations:45 children may interpret the promise of a reward as a signal of the task’s lack of intrinsic interest, or of a lack of confidence in their ability or their motivation to do the work, all of which reduce their intrinsic motivation. This theory predicts that the reward will have a short-term positive effect, but will also have an addictive effect over a longer term. If the reward is subsequently removed, the child’s motivation will be less than it would have been if no reward had ever been offered.

More generally, we need to pay attention to what others infer from our choices. In a company, we know that supervising the effort made by subordinate employees too closely can send a signal that they are not trusted, and can destroy their self-confidence and personal motivation. Supervision can also undermine the norm of reciprocal altruism analyzed earlier in this chapter. A classic experiment based on a variant of the trust game shows that the desire of Player 1 (who must decide whether to trust the reciprocity of the other player or not) to ensure that the other player returns at least part of the sum given him can be counterproductive: you can’t simultaneously trust another person and show that you don’t trust him!46

HOMO JURIDICUS: LAW AND SOCIAL NORMS

Economists often see the law chiefly as a set of incentives: the prospect of a fine or prison term dissuades us from driving too fast, stealing, or committing other crimes. Psychologists and sociologists do not entirely share this view. They think it may often be more effective to use persuasion and social sanctions to induce pro-social behavior.

For one thing, the state cannot establish formal incentives everywhere. For many minor misdemeanors, such as dropping litter or disturbing the peace, the police and the courts cannot be used because the cost would be too high. Furthermore, it is impossible to define precisely what is expected of us: It is natural to give directions to a stranger, but where do our obligations to help stop? At some point, society has to decide for itself, and so social norms have an important role to play both in defining what is expected of us and in creating social incentives to behave better than we otherwise would.

A second point of divergence is the emphasis on “expressive law.” Legal scholars, while recognizing the importance of the law in shaping incentives, stress that a law or regulation also expresses social values. Thus, according to legal scholars, in matters of public policy one cannot count exclusively on sanctions and financial incentives to obtain pro-social behavior from economic agents.

The psychologist Robert Cialdini of Arizona State University has distinguished two types of norms.47 The descriptive norm reveals to individuals how their peers or their community behave: for example, they can be shown their peers’ average electricity consumption, or how much other people recycle or give to charity. The prescriptive norm is the one endorsed by their peers or their community. Clearly some of our choices are dictated by the judgment of our peers and by their behavior. In an experiment focusing on prescriptive norms, conducted at Princeton in reaction to the overconsumption of alcohol on campus, the experimenters showed that, in reality, most students did not necessarily drink too much because they wanted to; rather, they did it because they thought (wrongly) that other students found it “cool.” This type of intervention in the creation of social norms thus seeks to provide information to economic agents about what others do (their consumption of alcohol or electricity, for example) or what others find acceptable.

But according to Cialdini, as in economic theory,48 we need to take care over which messages we choose. A government that asked its citizens to pay their taxes by arguing that “many of your fellow citizens avoid paying taxes, so because we are not collecting enough revenue, your payment is particularly valuable for society” would probably not be very successful in boosting tax collection. Messages more likely to encourage pro-social behavior must be chosen, and information should be provided when it is likely to produce that behavior: for example, “x percent of your fellow citizens recycle,” where x is a surprisingly high percentage, if true. It is better to highlight the positive virtues of our fellow citizens.

Laws, too, are expressions of social values, and thus send messages concerning the cost of individual behavior, general morality, or social values. Clearly some public policies are dictated by these considerations. Consider, for example, the punishments imposed for crimes: the classic economic analysis might recommend alternative punishments (financial penalties or public service) that are more effective socially and that cost less than sending someone to prison. Yet some citizens would consider this too economic an approach, one that normalizes behavior they consider unacceptable.

Similarly, the debate about capital punishment is essentially based on the idea that capital punishment reflects on the society that imposes it. For most legislators in the majority of developed countries, capital punishment projects an image that is violent and disrespectful of human dignity; for the majority of legislators in the United States, by contrast, it is a clear signal that society does not tolerate certain types of behavior. The cost-benefit approach (which would frame capital punishment in the following terms: Does capital punishment deter people from committing crimes, and at what cost to society?) plays a minor role in the debate. In short, the debate on capital punishment generally does not take place in the context of classic cost-benefit analysis, but rather in the context of the values of the society. This is an area that lies outside traditional economics. This example also allows us to understand why modern societies, seeking to signal their values, outlaw cruel and unusual punishment, even in cases where the person involved consents to the punishment in full knowledge of what he is doing. Thus, a majority of citizens might say that substituting a flogging for a prison term, even with the full consent of the criminal, would be wrong, despite the fact that it would be cheaper for society.

Finally, the use of incentives can signal our fellow citizens’ lack of enthusiasm for the public good, and so damage the norms of civil behavior and be counterproductive. To the extent that we all want to retain the illusion that the society in which we live is virtuous, this also sheds light on the widespread resistance to what economists have to say, because economists are often the bearers of bad empirical news concerning how virtuous people are.

MORE UNEXPECTED LINES OF INQUIRY

Finally, I would like to say a few words about two fields that one does not normally associate with economics, but which are rapidly expanding: evolutionary economics and the economics of religion.

HOMO DARWINUS

One of the most significant advances in the last two decades of economic research is that we can begin to reconcile the economic view of humans with Darwin’s vision of us as the result of natural selection. There are many examples of cross-pollination between economics and evolutionary biology. For example, social preferences, which are crucial for an economist (as this chapter has shown) can also be examined from the point of view of evolution.49

Biologists have also contributed to game theory. For instance, we owe the first model of the “war of attrition” (which describes the collective irrationality of situations such as a war or a strike in which each party suffers but clings to the hope that the adversary will surrender first) to the biologist John Maynard Smith (1974). This idea was subsequently refined by economists.

The theory of signaling is a third area of interest shared by biologists and economists. The general idea of this theory is that wasting resources can be beneficial for an individual, an animal, a plant, or even a state, if doing so can convince others to adopt conciliatory behavior. Animals use a whole series of signals that are expensive or even dysfunctional (such as the peacock’s plumes) to seduce partners or to deter predators. Similarly, humans sometimes take risks to impress their rivals or a person they want to attract, or a company might sell at a loss in the attempt to convince its rivals that its costs are low or that its financial basis is solid, thereby encouraging them to quit the market. Shortly after the appearance of a famous article on signaling by the economist Michael Spence,50 the biologist Amotz Zahavi published research on the same theme.51 These articles take up and formalize the works of sociologist Thorstein Veblen (The Theory of the Leisure Class, 1899) and French sociological approaches to social differentiation (Jean Baudrillard, The Consumer Society: Myths and Structures, 1970; Pierre Bourdieu, Distinction: A Social Critique of the Judgment of Taste, 1979). Ideas about signaling have their origin in Darwin’s The Descent of Man (1871), published long before economists or sociologists took an interest in them.

Overall, the boundaries between economics and the natural sciences are no more watertight than those between economics and other human and social sciences.

HOMO RELIGIOSUS

In view of the importance of religion in the organization of political and economic life in most countries, the economist cannot, as a scientist, ignore it. To avoid any misunderstanding, it must be emphasized that the economist’s role is not to evaluate religious belief itself, but to focus on those aspects of religion upon which economics may usefully shed light. The “economics of religions” reappeared in the discipline about twenty or thirty years ago, but it is an old area of investigation.52 Adam Smith was interested in the financing of the clergy.53 His theory, demonstrating an awareness of the problem of moral hazard, was that if they were financed directly by believers (rather than by the state or the religious hierarchy), the clergy would serve believers and religion better.

Max Weber’s The Protestant Ethic and the Spirit of Capitalism defined the theme of the socioeconomic impact of religion. Weber’s thesis, that the Protestant Reformation had a major impact on the rise of capitalism, generated a vast amount of debate in the human and social sciences. Today, econometric studies allow us to examine the facts in greater detail. Weber noted that Protestants earned more than Catholics in regions where they lived side by side, and that wealthy families and localities embraced Protestantism more rapidly. The research also sheds light on causality. For instance, Maristella Botticini (Bocconi University in Milan) and Zvi Eckstein (Tel Aviv University) have challenged the traditional explanation of Jewish economic success. The traditional view was that, having been driven out of certain professions, Jews took refuge in banking, craftsmanship, and commerce, which transformed them into an urban, educated community.54 According to Botticini and Eckstein, this transformation happened before Jews were excluded from other professions. They argue that because Judaism required the reading of the Torah and promoted literacy in Talmudic academies, the Jewish community’s human capital increased, preparing it for financial and legal occupations in which these skills later proved more useful than traditional ones, such as knowing how to plant wheat.

In the same spirit, Mohamed Saleh has studied the Islamization of Egypt in the centuries following the Muslim conquest in AD 640.55 He documents conversions to Islam and the development of the relative incomes of Copts and Muslims. The Coptic community, which initially constituted almost the entire population of Egypt, became much smaller than the Muslim community, but was educated and wealthy. Mohamed Saleh has an economic explanation. As in many similar countries, non-Muslims had to pay a poll tax but Muslims did not. Less wealthy and less religious Copts converted to Islam, making the remaining Coptic community on average more pious and wealthy. This selection effect lasted for many centuries.

Of course, economists have also studied competition between religions – here again, not over religious ideas, about which they have no specific expertise, but in economic dimensions. It is well known that religions offer benefits to attract believers. Sometimes they even fulfill the function of the “welfare state” (which may be one of the factors explaining the alliance between religious groups and the fiscally conservative right wing).56 For example, some Muslim organizations provide insurance, education, and local public goods. Religious groups even sometimes act as “two-sided markets”57 by helping their members select a potential marriage partner.58 Finally, economists investigate the connections between religion and science.59

Taken together, these examples constitute, of course, no more than a brief and selective introduction to a vast disciplinary field that is steadily evolving. We are witnessing a gradual reunification of the social sciences. This reunification will be slow, but it is inevitable – in fact, as I said in the introduction to this chapter, anthropologists, economists, historians, legal scholars, philosophers, political scientists, psychologists, and sociologists are interested in the same individuals, the same groups, and the same societies. The convergence that existed until the end of the nineteenth century must be reestablished. This will require these scientific communities to be open to the techniques and ideas of the other disciplines.