CHAPTER 4
Beliefs, Heuristics and Biases
Why would anyone pay $959,500 for a used guitar?
Because it was owned by Eric Clapton is the simple answer. Similar reasoning explains why someone would pay $48,875 for a tape measure that had belonged to Jackie Kennedy or $3300 for Bernie Madoff’s footstool. However, behavioral economists are curious as to why celebrity ownership should inflate values so highly, a phenomenon known as ‘positive contagion’.
4.1 The standard model
Assumptions
In the previous chapter we examined how people form attitudes, values, preferences, and finally make choices. In terms of the standard model described in (1.1), these aspects are largely related to component (4), involving utilities, and, when it comes to choices, component (1), involving maximization. However, various assumptions were made at that stage regarding the options and the outcomes of these options in the decision-making process. The factor we want to focus on in this chapter is the certainty of these outcomes. Therefore the component we now need to examine is (3), related to probabilities or beliefs. As far as beliefs are concerned, the main assumptions in the standard model are that decision makers have perfect as opposed to bounded rationality, and that they are Bayesian probability estimators. Let us explain each in turn:
1 Perfect rationality
This means that people not only have all the relevant information pertaining to a decision but also have the cognitive resources to process it instantly and costlessly. If this is not the case, and it is obviously unlikely in most real-life situations, then we can say that there is bounded rationality. This term was introduced by Simon (1955), who was the first researcher to emphasize its implications for decision-making. The most general implication is that we tend to use heuristics in many decision-making situations; these are ‘methods for arriving at satisfactory solutions with modest amounts of computation’ (Simon, 1990). The term heuristic was originally introduced in psychology to refer to simple processes that replace complex algorithms (Newell and Simon, 1972), and has now been extended to include any decision rules that we implement as short-cuts to simplify and/or accelerate the decision-making process. A good example is the rule never to order the lowest-priced or highest-priced items on a restaurant menu. This might imply that the decision-maker believes that neither of these items represents good value. As we shall see, there are a large number of heuristics described in the behavioral literature, and indeed some have proposed that there are too many. Shah and Oppenheimer (2008) propose that there is much redundancy in the field of heuristics, with different names for similar and overlapping concepts, too much domain-specificity, and insufficient attention paid to the overriding principle that heuristics are effort-reducing mechanisms. We shall return to these issues at various points throughout the following chapters.
The most significant implication of using heuristics is that they often result in biases, meaning systematic errors. In terms of beliefs these errors are factual; biases can also occur in terms of preferences, where the errors may result in non-optimal choice.
2 Bayesian probability estimation
This means that people are able to estimate probabilities correctly, given the relevant information, and in particular are able to update them correctly given a sequence of prior outcomes. The interpretation and significance of this will be examined in the section on the law of small numbers, along with deviations in estimation or biases, but a simple example will suffice at this stage. When a coin is tossed several times and comes up heads each time, a correct Bayesian updater will still estimate the probability of heads on the next coin toss as being 0.5, since the prior outcomes have no effect on the next outcome in this situation. However, many people tend to incorrectly assume that the prior outcomes do affect the probability of the next outcome here (as it would in other situations), and estimate the probability of the next outcome being a head as less than 0.5. This is an example of a ‘mean-reverting’ regime resulting in the ‘gambler’s fallacy’. Both of these terms are explained in the third section related to the law of small numbers.
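This independence claim is easy to check empirically. The following is a minimal simulation sketch in Python (our illustration; only the coin-toss setup comes from the text): even immediately after three heads in a row, the frequency of heads on the next toss remains about 0.5.

import random

random.seed(1)
tosses = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

# Collect every toss that immediately follows three heads in a row.
following = [tosses[i + 3] for i in range(len(tosses) - 3)
             if tosses[i] and tosses[i + 1] and tosses[i + 2]]

# A correct Bayesian updater expects ~0.5; the gambler's fallacy
# predicts a value below 0.5.
print(sum(following) / len(following))  # ~0.500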
The Bayes formula in general terms is as follows:

P(H|E) = P(E|H) P(H) / P(E)    (4.1)

where H is a hypothesis and E is the evidence bearing on it.
This formula can be used to estimate probabilities of the truth or falsehood of events that are not random, but where the truth is unknown, such as when a die has been thrown within a cup – there is a definite outcome, but until the cup is removed we do not know what it is.
Bayes’ theorem updates or modifies the probabilities of a set of hypotheses Hi, given new pieces of evidence E, in the following way:

P(Hi|E) = P(E|Hi) P(Hi) / P(E)

where

P(E) = Σ P(E|Hi) P(Hi), with the sum running over all hypotheses Hi.
The factor P(E|H) / P(E) represents the impact that the evidence has on the belief in the hypothesis. The interpretation of this factor, and an example of an application of all of the above concepts, is given in the discussion of base rate bias in the next section.
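As a sketch of how (4.1) and the updating rule work in practice, the following Python function (our illustration; the hypothesis names and numbers are hypothetical) normalizes the likelihood-weighted priors over a set of hypotheses:

def bayes_update(priors, likelihoods):
    """Return P(Hi|E) given priors P(Hi) and likelihoods P(E|Hi), as in (4.1)."""
    # P(E) = sum over i of P(E|Hi) * P(Hi)
    p_e = sum(likelihoods[h] * priors[h] for h in priors)
    # P(Hi|E) = P(E|Hi) * P(Hi) / P(E)
    return {h: likelihoods[h] * priors[h] / p_e for h in priors}

# Two equally likely hypotheses; the evidence is more likely under H1.
print(bayes_update({'H1': 0.5, 'H2': 0.5}, {'H1': 0.8, 'H2': 0.3}))
# {'H1': 0.727..., 'H2': 0.272...}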
4.2 Probability estimation
The types of deviation described in this section are departures from rational Bayesian updating. There are various aspects of bias here.
The availability heuristic
People are often poor at estimating the probabilities of events, especially rare ones. They overestimate the probability of dying in a plane crash or in pregnancy, or of suffering violent crime. An often-quoted example of overestimating low probabilities concerns playing the lottery. The California lottery, one of the biggest in the world, requires matching six numbers between 1 and 51 in order to win the main prize. The odds against doing this are over 18 million to one. In other words, if one played this lottery twice a week, one could expect to win about once every 175,000 years. It was found by Kahneman, Slovic and Tversky (1982) that people overestimated the odds of winning by over 1000%.
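The arithmetic behind these figures can be checked directly; a short Python sketch (ours, using the numbers given in the text):

from math import comb

odds = comb(51, 6)  # ways of choosing 6 numbers from 51
print(odds)  # 18009460 -> over 18 million to one

plays_per_year = 2 * 52  # playing twice a week
print(odds / plays_per_year)  # ~173,000 years, in line with the
                              # text's figure of about 175,000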
In many of their papers Kahneman and Tversky have suggested that people use an availability heuristic when estimating probabilities. This means that people believe that events are more frequent or more probable if examples of them are easier to remember. In general, this heuristic works reasonably well because it is easier to recall examples of events that happen more frequently. However, the main source of error here is salience; this factor features in other types of bias also, but the main effect here is that events that have been well publicized or are prominent in people’s memories tend to be estimated as having exaggerated probabilities. Thus it has been found that there is an increased purchase of earthquake insurance following a recent quake, in spite of the fact that such events will be less likely to recur in the short-term future. This error may be compounded by the effect proposed by Vosgerau (2010), related to misattribution due to arousal.
The representativeness heuristic
In general the representativeness heuristic refers to the phenomenon that global judgment of a category is determined primarily by the relevant properties of a prototype (Tversky and Kahneman, 1971, 1983; Kahneman and Tversky, 1972, 1973). This means that people have the tendency to evaluate the likelihood that a subject belongs to a certain category based on the degree to which the subject resembles a typical item in the category. Although this strategy may be effective in certain circumstances, the basic principles of probability and set theory are often ignored by people in making judgments involving representativeness. An illustration of this phenomenon is where respondents are given a description of a personality of a woman, Linda, who has the characteristics of a typical feminist. The majority of respondents rank the statement ‘Linda is a bank teller’ as less likely than the conjunctive statement ‘Linda is a bank teller and an active member of the feminist movement’ (Tversky and Kahneman, 1983). In this case the strong representativeness of feminism overcomes the basic probability rule that P(A and B) can never be higher than P(A). The difficulties that people have in reasoning related to connectives and conditionals have been observed in a number of studies (Johnson-Laird, Byrne and Schaeken, 1992; Johnson-Laird et al., 2000).
Base rate bias
A more complex example involving conditional probabilities is given by Casscells, Schoenberger and Grayboys (1978), and relates to the problem of ‘false positives’. This involves a situation where a person takes a medical test, maybe for a disease like HIV, where there is a very low probability (in most circumstances) of having the disease, say one in a thousand. However, there is a chance of a false prediction; the test may only be 95% accurate. Under these circumstances people tend to ignore the rarity of the phenomenon (disease) in the population, referred to as the base rate, and wildly overestimate the probability of actually being sick. Even the majority of Harvard Medical School doctors failed to get the right answer. For every thousand patients tested, one will be actually sick while there will be fifty false positives. Thus there is only a one in fifty-one chance of a positive result meaning that the patient is actually sick.
This example can be explained in more detail using Bayes’ theorem. For simplicity, it is assumed initially that if the patient has the disease the test returns a positive result 100% of the time, meaning that there are no false negatives.
Let A represent the condition in which the patient has the disease, and B represent the evidence of a positive test result. Then the probability that the patient actually has the disease given the positive test result is

P(A|B) = P(B|A) P(A) / [P(B|A) P(A) + P(B|not A) P(not A)]
= (1 × 0.001) / (1 × 0.001 + 0.05 × 0.999) ≈ 0.0196
This means that the probability that a positive result is a false positive is about 1 − 0.0196 = 0.98, or 98%. If, more realistically, there is also a chance of the test returning a false negative, this would mean that P(B|A) < 1, and this would modify the result slightly. The difference would be small, assuming that the chance of a false negative is low; for example, if the probability of a positive result given that the person has the disease is 0.99 (i.e. a 1% chance of a false negative), then P(A|B) = 0.0194.
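The whole calculation can be wrapped in a few lines of Python (our sketch; the function and parameter names are ours):

def posterior_sick(base_rate, sensitivity, false_positive_rate):
    """P(A|B): probability of disease given a positive test, via Bayes' theorem."""
    p_positive = (sensitivity * base_rate
                  + false_positive_rate * (1 - base_rate))  # P(B)
    return sensitivity * base_rate / p_positive

# No false negatives, as assumed initially in the text:
print(posterior_sick(0.001, 1.00, 0.05))  # ~0.0196
# With a 1% chance of a false negative:
print(posterior_sick(0.001, 0.99, 0.05))  # ~0.0194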
In general terms, if it is likely that the evidence E (a positive test) would be observed when the hypothesis under consideration (the person is sick) is true, but, when no hypothesis is assumed, it is inherently unlikely that E would have been the outcome of the observation, then the factor P(E|H) / P(E) will be large. Multiplying the prior probability of the hypothesis, P(H), by this factor would result in a larger posterior probability of the hypothesis given the evidence. However, if P(H), the base rate, is very low, the posterior probability will still tend to be low. Thus the consequence of base rate bias, meaning ignoring the base rate, is that we tend to overestimate the probability of being sick, given a positive test.
Conversely, if it is unlikely that the evidence E would be observed if the hypothesis under consideration is true, but a priori likely that E would be observed, then the factor would reduce the posterior probability for H. Under Bayesian inference, Bayes’ theorem therefore measures how much new evidence should modify a belief in a hypothesis.
The ‘law of small numbers’
The main error here occurs when people apply principles that hold for infinite populations to small samples. We will examine the model described by Rabin (2002a). This model examines the situation where people observe a sequence of signals from a process that involves independent and identically distributed (iid) random variables. This means that each random variable has the same probability distribution as the others and all are mutually independent. A simple example is a sequence of coin tosses, where the probability distribution is 0.5 for a head and 0.5 for a tail for each toss, and the outcome of each toss has no effect on the outcome of any other toss. The model assumes that people believe, incorrectly, that the signals are drawn from an urn of finite size without replacement, whereas the correct assumption in this case is that there is replacement after each draw from the urn. We now need to examine the consequences of this incorrect assumption.
1 The ‘gambler’s fallacy’ effect
This effect derives its name from the observation that gamblers frequently expect a certain slot machine or a number that has not won in a while to be ‘due’ to win. We find that this effect occurs when the distribution of signals is known, as it is with the coin toss situation. If an urn contains 10 balls, 5 representing Up and 5 representing Down, and one ball is drawn at a time with replacement, this experiment is identical to tossing a coin. Thus if 3 successive draws all result in an Up outcome (equivalent to 3 heads in a row), then the rational person will estimate the probability of an Up on the next draw as 0.5. However, if the person believes that the balls are not being replaced, this means that there are only 2 Up balls left in the urn out of 7 balls in total, so they will estimate the probability of the next draw being Up as only 2/7 or about 0.286, with the probability of Down being 0.714. This is an example of the representativeness heuristic, in that the sequence Up, Up, Up, Down is judged as being more representative of the population than the sequence Up, Up, Up, Up. We shall encounter other examples of this heuristic in the following chapters. The gambler’s fallacy is sometimes referred to as the ‘law of averages’, in this case meaning that the number of Ups should on average be the same as the number of Downs, given there is a 50% chance of each event occurring.
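A small sketch of the two beliefs (our illustration), using exact fractions:

from fractions import Fraction

up, total, drawn = 5, 10, 3  # urn contents and the observed Up streak

# Rational belief: draws are made with replacement, so the streak
# is irrelevant and P(Up) remains 1/2.
p_rational = Fraction(up, total)

# Gambler's-fallacy belief: no replacement, so only 2 Up balls
# remain out of 7, and P(Up) drops to 2/7.
p_fallacy = Fraction(up - drawn, total - drawn)

print(p_rational, p_fallacy, float(p_fallacy))  # 1/2 2/7 0.2857...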
There is a variety of empirical evidence supporting the existence of the gambler’s fallacy effect. For example, New Jersey’s pick-three-numbers game is a pari-mutuel betting system; this means that the fewer people bet on a number, the higher is the expected payout. It has been found that the amount of money bet on a particular number falls sharply after the number is drawn, and only gradually returns to normal after several months (Clotfelter and Cook, 1993; Terrell, 1994).
There is an interesting explanation for this apparently irrational phenomenon in terms of evolutionary psychology (Pinker, 1997). It is proposed that in our past evolutionary environment there was often good reason to believe that a series of common outcomes would be likely to be broken at some point. This was particularly true for meteorological events, like rain or sunshine. Of course, the expected length of the series would depend on the circumstances, but, just as a cloud eventually blows past the sun, at some point the probability becomes higher that in the next time period the sun will come out again. We shall see that many of the biases that people have are based on evolutionary adaptations or factors in our past.
In the examples above, people incorrectly infer from a sequence of identical signals, such as 3 Ups, that the next signal or outcome will be of a different type. However, there are many situations where people make exactly the opposite inference, that the sequence will continue. This contradictory finding is now described and explained.
2 The ‘hot hand’ effect
This effect derives its name from the mistaken belief among basketball players and fans that a player’s chance of hitting a shot is greater following a hit than following a miss on the previous shot (Gilovich, Vallone and Tversky, 1985). Although it appears that this ‘overinference’ is the opposite of the gambler’s fallacy, it is actually a complementary effect, again involving a misapplication of the assumption of non-replacement.
The effect arises when there is uncertainty regarding the distribution of signals, for example, whether a stock price will go up or down in any particular time period. It is instructive here to follow the example given by DellaVigna (2009), involving a mutual fund with a manager of uncertain ability. This time the situation involves two urns, each with 10 balls; the well-managed fund has 7 Up balls and 3 Down balls, meaning the fund goes up in value 7 times out of 10, while the poorly managed fund has 3 Up balls and 7 Down balls, meaning it only goes up 3 times out of 10. There is a prior probability of 0.5 that the fund is well managed and a probability of 0.5 that the fund is poorly managed, so that before we observe any draw of a ball from an urn it is equally probable that the fund is well or poorly managed. Balls are then drawn in sequence from an urn, but the investor does not know which urn they are drawn from. After observing a sequence of 3 Up balls the investor has to compute the probability that the urn drawn from was the one with 7 Up and 3 Down balls, which is equivalent to estimating the probability that the fund is well managed after it has gone up 3 times in succession.
The rational investor will implement Bayes’ theorem to solve the problem, on the assumption that the balls are replaced after each draw. Repeating the Bayes formula in (4.1):

P(H|E) = P(E|H) P(H) / P(E)

Thus, with H being the hypothesis that the fund is well managed and E the evidence of three Up draws (UUU), the rational investor computes the probability that the mutual fund is well managed as:

P(Well|UUU) = (0.7³ × 0.5) / (0.7³ × 0.5 + 0.3³ × 0.5)

This equals 0.7³/(0.7³ + 0.3³) = 0.343/0.370 ≈ 0.927.
However, if the investor behaves according to the law of small numbers and assumes that there is no replacement after each draw, the Bayesian expression becomes:
P(Well|UUU) = (7/10×6/9×5/8)/[(7/10×6/9×5/8) + (3/10×2/9×1/8)] = 0.972.
Thus we can see that this type of investor will over-infer about the ability of the mutual fund manager after three good performances. When the rational investor forecasts the performance of the fund in the next period, they will calculate the probability of an Up performance as 0.927 × 0.7 + (1 – 0.927) × 0.3 = 0.671. On the other hand, the law-of-small-numbers investor, assuming they believe that the urn is replenished after three periods, estimates the probability of an Up performance as 0.972 × 0.7 + (1 – 0.972) × 0.3 = 0.689, a higher estimate than the rational investor’s.
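Both computations can be reproduced in a few lines; a sketch (ours) of DellaVigna’s example:

prior = 0.5  # prior probability that the fund is well managed

# Rational investor: draws with replacement, so likelihoods are 0.7^3 and 0.3^3.
lik_well, lik_poor = 0.7**3, 0.3**3
p_well = lik_well * prior / (lik_well * prior + lik_poor * prior)

# Law-of-small-numbers investor: believes there is no replacement.
lik_well_lsn = (7/10) * (6/9) * (5/8)
lik_poor_lsn = (3/10) * (2/9) * (1/8)
p_well_lsn = (lik_well_lsn * prior
              / (lik_well_lsn * prior + lik_poor_lsn * prior))

print(round(p_well, 3), round(p_well_lsn, 3))  # 0.927 0.972

# Forecast of an Up performance next period under each belief (the LSN
# investor believes the urn is replenished after three draws).
for p in (p_well, p_well_lsn):
    print(round(p * 0.7 + (1 - p) * 0.3, 3))  # 0.671, then 0.689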
There are various studies relating to financial markets which provide evidence for the ‘hot hand’ effect. Benartzi (2001) found that the degree to which employees invest in their own firm’s stock depends strongly on the past performance of the stock. In companies in the lowest 20% of performance in the past ten years, 10.4% of employee savings were allocated to the same firm’s stock, compared to 39.7% for firms in the top 20%. Overinference in stock holdings can cause predictability in returns, since investors will tend to overinvest in stocks with high past returns, making them overpriced and reducing their later returns, as demonstrated by De Bondt and Thaler (1985).
3 Synthesis
So far it might appear that the contradictory effects of the ‘gambler’s fallacy’ and the ‘hot hand’ are difficult to reconcile with each other. However, a study by Barberis, Shleifer and Vishny (1998) demonstrated that the law of small numbers could lead to both effects, causing both underreaction and overreaction to market signals. In the short term investors follow the ‘gambler’s fallacy’, believing that a series of identical signals, like the stock price rising, will be followed by a fall (a ‘mean-reverting’ regime). Thus they do not invest in the stock (underreact), causing it to be underpriced, and returns will continue to be high over a short period of time, demonstrating positive correlation or momentum. However, after a longer sequence, the investors overinfer, and expect a ‘trending’ regime, whereby the stock is now expected to continue to rise. This ‘hot hand’ effect causes overreaction, as investors now overinvest, making the stock overpriced, and reducing returns, this time demonstrating negative correlation of returns in the long term.
There are other applications of the law of small numbers that help to solve the apparent contradictions between the ‘gambler’s fallacy’ effect and the ‘hot hand’ effect. One of these relates to the purchase of lottery tickets. As we have seen, people often avoid betting on numbers in a lottery if they have recently won, demonstrating a ‘gambler’s fallacy’ effect. Yet, there is evidence that people also have an increased probability of buying their tickets from stores that sold winning tickets the previous week; winning stores experience a 12% to 38% relative sales increase in the week following the sale of a large-prize winning ticket. Guryan and Kearney (2008) have investigated this ‘hot hand’ effect and propose an explanation for the paradoxical combination of the two effects in lottery betting. They suggest that:
The perception of heterogeneity (e.g. among lottery retailers, basketball players, or mutual fund managers) necessary for a belief in the hot hand comes not from the signals produced by the data-generating process – as the representativeness explanation would require – but rather from the characteristics of the data-generating process itself, namely whether the data-generating process is perceived as having an animate or an intentional element (p. 459).
Research in psychology provides some support for this hypothesis (Ayton and Fischer, 2004; Caruso and Epley, 2007). In terms of a lottery situation, Guryan and Kearney suggest that the selection of the balls does seem to involve a random process, without any intentional element, and therefore the law of small numbers would cause people to exhibit a ‘gambler’s fallacy’ effect as they expect a small sample to resemble the underlying population; thus winning numbers are not expected to occur again in the near future. However, with stores there could be a human element in how winning stores are selected, leading to a ‘hot hand’. How this human element operates in this case is open to speculation: it might be that the store is chosen deliberately by the person buying the winning ticket; or the location of the winning ticket could be attributable to a corrupt lottery commissioner, bearing in mind that the winning store owner receives 1% of the prize and thus has an incentive for bribery.
Guryan and Kearney also note that the ‘lucky store’ effect is larger in areas with more high-school drop-outs, more people living in poverty, and more elderly. They suggest that this may be caused by cognitive biases.
4.3 Self-evaluation bias
This factor is sometimes described in terms of overconfidence, but we shall see that, while overconfidence is an important aspect, there are other aspects, including its opposite, underconfidence. In addition there is self-serving bias, which, while it often involves overconfidence, can relate to other aspects of belief. We shall see that visceral factors are relevant here also. Thus we use the term self-evaluation bias as a general, all-embracing term that includes all aspects of beliefs where some kind of evaluation of the role of the self relative to a situation is involved.
Overconfidence
It has been claimed that ‘No problem in judgment and decision-making is more prevalent and more potentially catastrophic than overconfidence’ (Plous, 1993, p. 217). It is useful to distinguish between three different kinds of overconfidence, since this helps to explain apparent inconsistencies in empirical findings, and we will follow the classification proposed by Moore and Healy (2008). This involves the concepts of overestimation, overplacement and overprecision.
1 Overestimation
This relates to overestimation of one’s actual ability, performance, level of control or chance of success. Empirical evidence suggests that this is a widespread phenomenon extending to many situations. People overestimate their abilities to perform various tasks, overestimate how quickly they can finish a project (which seems to be happening with this book!), overestimate their faculty for future self-control (examined in Chapter 8), and overestimate their abilities to manage companies in the case of CEOs (Malmendier and Tate, 2005; 2008). People can also be unrealistically optimistic about their future prospects (Buehler, Griffin and Ross, 1994; MacDonald and Ross, 1999).
However, we shall also see that in some cases people underestimate their abilities and are overly pessimistic. This will be explained in the section relating to underconfidence, where a comprehensive theory relating to confidence will be described.
2 Overplacement
This aspect of overconfidence is sometimes referred to as the ‘better-than-average’ (BTA) effect: well over half of survey respondents typically rate themselves in the top 50% for driving skill (Svenson, 1981), ethics (Baumhart, 1968), managerial prowess (Larwood and Whittaker, 1977), productivity (Cross, 1997), health (Weinstein, 1980), and skill in solving puzzles (Camerer and Lovallo, 1999). Again this effect is reversed under certain conditions considered later.
3 Overprecision
This refers to excessive certainty regarding the accuracy of one’s beliefs. Studies frequently ask their participants questions with numerical answers (e.g. ‘How long is the Nile River?’) and then have participants estimate confidence intervals for their answers. Results show that these confidence intervals are too narrow, suggesting that people are too sure they know the correct answer. For example, Alpert and Raiffa (1982) found that a group of MBA students asked for 98% confidence intervals stated intervals that only contained the correct answer 57% of the time instead of the expected 98%. Similar results have been found in experimental studies by Klayman et al. (1999) and Soll and Klayman (2004), and have been duplicated in the field in the case of trading by individual investors (Odean, 1999; Barber and Odean, 2001). In this last case investors overestimated the precision of their information about individual companies, with the result that they traded too much. Barber and Odean further found that men were more overconfident in this respect than women.
Underconfidence
Empirical studies have sometimes found conflicting results, in that sometimes people underestimate their abilities and control, and also underplace their performance relative to others (Kirchler and Maciejovsky, 2002; Fu et al., 2005; Burson, Larrick and Klayman, 2006). Some studies have reported overconfidence when the tasks were easy (like driving), or success likely, and underconfidence when tasks were difficult (like playing the piano), or success unlikely. This phenomenon is referred to as the ‘hard-easy’ effect (Lichtenstein and Fischhoff, 1977).
There has also been conflict in that other studies have reported underconfidence with easy tasks and where success is likely. Moore and Healy (2008) suggest that this conflict is caused by the confound between overconfidence and overplacement. They have proposed a theory that can explain these empirical anomalies and resolve the apparent conflicts. This is described as follows:
People often have imperfect information about their own performances, abilities, or chance of success. However, they often have even worse information about others. As a result, people’s estimates of themselves are regressive, and their estimates of others are even more regressive. Consequently, when performance is high, people will underestimate their own performances, underestimate others even more so, and thus believe that they are better than others. When performance is low, people will overestimate themselves, overestimate others even more so, and thus believe that they are worse than others (p. 503).
Thus according to this theory it is possible, and indeed likely, that people will combine overestimation with underplacement and vice versa. Moore and Healy conducted an experiment involving students performing trivia quizzes, and the results supported their theory.
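The logic can be made concrete with a small numeric sketch (ours, assuming simple Gaussian-style shrinkage toward a prior mean; Moore and Healy’s formal model differs in detail). The key point is that estimates of others regress more, because our information about others is noisier:

def shrink(signal, prior_mean, prior_var, noise_var):
    # Bayesian shrinkage: the noisier the signal, the more the
    # estimate regresses toward the prior mean.
    weight = prior_var / (prior_var + noise_var)
    return prior_mean + weight * (signal - prior_mean)

prior_mean, prior_var = 50, 100
own_noise, other_noise = 25, 400  # we know less about others

# Easy task: everyone in fact scores 80.
print(shrink(80, prior_mean, prior_var, own_noise))    # 74: underestimate self
print(shrink(80, prior_mean, prior_var, other_noise))  # 56: others even more,
                                                       # so better-than-average
# Hard task: everyone in fact scores 20.
print(shrink(20, prior_mean, prior_var, own_noise))    # 26: overestimate self
print(shrink(20, prior_mean, prior_var, other_noise))  # 44: others even more,
                                                       # so worse-than-average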
There is one other factor that has been suggested as playing a role as far as excessive optimism and pessimism is concerned. This is the role of arousal, sometimes referred to as visceral influences. Vosgerau (2010) has proposed that people judge the likelihood of desirable and undesirable events to be higher than similar neutral events because they misattribute the arousal caused by those events to their greater perceived likelihood. Thus we may overestimate the likelihood of a terrorist attack or getting cancer; similarly we may overestimate the likelihood of our country winning the World Cup in soccer. Vosgerau finds evidence of this misattribution phenomenon in four studies.
The misattribution effect above may also explain another curious aspect of behavior: people’s reluctance to exchange lottery tickets (Risen and Gilovich, 2007). Miller and Taylor (1995) have pointed out that precisely because undesirable outcomes that result from actions taken are more painful than identical outcomes that result from actions foregone, instances in which one has been punished for acting are likely to be overrepresented in the memory. The aversion caused by this anticipated regret from switching would then be mistaken for the increased probability of the event occurring. Thus we may be reluctant to exchange lottery tickets; similarly, we may be disinclined to switch lines at the supermarket checkout when our line appears to be going slowly and the line next to us is speeding along. There may be another aspect to this behavior that we will discuss later, in connection with ‘tempting fate’.
Self-serving bias
This term has been used to describe a number of belief biases that are different in nature. For example, it has been used to refer to the asymmetry whereby people ascribe their successes to their own ability or skill, but ascribe failures to situational factors, the actions of other people, or bad luck (Zuckerman, 1979). People also tend to overestimate their contribution to joint or team projects (Ross and Sicoly, 1979). These are aspects of overconfidence, and conform to the findings of much social cognitive research, which suggests that people shape their beliefs and judgments of the social world to maintain sacrosanct beliefs of the self as a capable, lovable and moral individual (for a recent survey see Dunning, 2007). This phenomenon has implications for consumer preferences, as we shall see in the next chapter.
A further aspect of this general phenomenon is that self-serving bias relates not just to individuals’ evaluations of themselves, but also to groups with which they are affiliated. Observe any team game with partisan spectators; the different fans will interpret the play, and in particular aspects involving foul play or penalties, quite differently. This kind of self-serving bias is consistent with the observation by Glaeser (2003):
Mistaken beliefs will be more common when errors increase the current flow of utility. Thus, if people enjoy anticipating a rosy future, they should believe stories that make them overly optimistic and in particular, they should happily accept stories about a life after death (p. 4).
There is some evidence for both psychological and neurological mechanisms related to this overoptimism. It has been reported that depressed subjects make more accurate assessments, and so are more realistic than normal subjects; this phenomenon has been labeled depressive realism (Abramson, Metalsky and Alloy, 1979). It has also been suggested that the phenomenon of Pavlovian withdrawal associated with predictions of negative outcomes is an important route to the over-optimism of normal subjects, and that one of the underlying neural malfunctions associated with depression is associated with a weakening of this withdrawal, thereby leading to more accurate, but more pessimistic, evaluations (Huys and Dayan, 2009). This means that when normal people contemplate the future, any thought leading towards a negative outcome will engender a Pavlovian withdrawal response, which may lead to the thought being terminated. There are similarities here with Damasio’s somatic marker hypothesis. It has been suggested that this withdrawal is mediated by the neurotransmitter 5-HT, which opposes dopamine (Daw, Kakade and Dayan, 2002), and that depressives have low effective 5-HT levels (Graeff et al., 1996), resulting in their withdrawal mechanism being impaired.
Two related phenomena have been termed confirmatory bias and self-attribution bias, which are really flip sides of the same coin. Confirmatory bias refers to the tendency to interpret new, ambiguous information as being consistent with one’s prior beliefs, while self-attribution bias refers to the tendency to discount information that is inconsistent with one’s prior beliefs (DellaVigna, 2009). For example, in financial markets, as traders receive additional private information, in the short term they interpret the information that confirms their existing beliefs as being more informative, rejecting non-confirming information, and this causes them to become more overconfident and trade excessively. The main implication of this is that it leads to momentum, meaning that there is positive correlation of returns in the short term, so that a stock that goes up in value in one day or over a few days may well continue to go up in the next short time period. In the long term prior beliefs are adjusted in line with the additional information and valuation returns to fundamentals. These effects may operate in the opposite direction to the effects of the law of small numbers, but if they are strong enough they may contribute to bubbles in asset markets. Confirmatory and self-attribution bias may also explain why people like to invest in their own company’s stock if they are overconfident about their company’s performance. Furthermore, it may help to explain why they prefer to invest in national companies rather than foreign companies for a similar reason.
An additional psychological factor related to self-serving bias refers to an aspect of self-deception; people may ‘confabulate’ their intentions, meaning that they invent them after they have taken some action, because of cognitive dissonance (Festinger, 1957). This psychological theory often relates to people changing their beliefs in order to reconcile them with their past actions and behavior. The situation is demonstrated by Aesop’s fable of the fox and the sour grapes. The fox wanted the grapes, but when she found she couldn’t reach them she decided that they were probably sour, so she revised her original intention and believed that she never really wanted the grapes in the first place.
One might at this point question why we have this faculty for self-deception. Surely it would be better if we could see things as they really are, so what evolutionary purpose could it possibly have? Evolutionary biologist Robert Trivers (1991, 2011) and evolutionary psychologist Steven Pinker (1997) have speculated that self-deception has evolved as a form of commitment. The nature and purpose of commitment is discussed in more detail in Chapter 8 in relation to intertemporal decision-making, but at this stage we can simply say that this theory involves the concept of an evolutionary arms race in psychological terms. Our emotions are a form of commitment so, for example, people may be less inclined to harm us if they know it will make us angry and retaliate. However, anger can be faked to have the same effect. To be credible, commitments like the facility for anger have to be hard to fake. Smiling is notoriously hard to fake, since voluntary or deliberate smiling involves different muscles and parts of the brain (the cerebral cortex) compared to involuntary or genuine smiling, controlled by the limbic system. Taking the arms race one step further, in order to ‘fake’ emotions that are hard to fake, Trivers and Pinker propose that the best solution is to genuinely feel emotions like anger, fear, shame, guilt, sympathy and gratitude, i.e. believe in false feelings and intentions that one does not really have. Obviously, there is an important role for neuroeconomics here, in order to compare brain processes with self-reported feelings and behavior.
Visceral fit
The phenomenon of cognitive dissonance involves visceral factors, since we tend to be emotionally attached to our beliefs. Recent research has also pointed to a similar phenomenon, in that when there is a fit or match between our current visceral state and the visceral state associated with an outcome we are judging, we tend to increase our estimate of the likelihood of this outcome occurring (Risen and Critcher, 2011). For example, if we are in a visceral state of being warm, this tends to increase our belief in the reality of global warming. Of course, if this experiment is performed naturally, as when we ask people about their beliefs in the probability of global warming on a hot day, then the resulting bias could be explained by the law of small numbers. In this case people would be using the current warm temperature as a diagnostic device for estimating the probability of warm temperature in the future. However, Risen and Critcher found that their subjects expressed a stronger belief in global warming even when the experiment was performed in a warm room. They, therefore, eliminated the explanation that temperature was being used as a diagnostic, and instead propose a simulational fluency explanation. This means that people construct mental images of hot outdoor scenes more clearly when they are in a hot room than when they are in a normal room, suggesting that, when warm, participants had a more fluent or clear representation of heat-relevant stimuli.
The above research concentrates mainly on the effect of heat as a visceral factor. However, there may be wider implications of the concept of visceral fit to other visceral states. For example, a possible change in government policy, like higher taxes, may make us angry; if we are currently in an angry state, would this make us believe such a change in policy is more likely? Further research is needed in this area to clarify the effects of visceral fit in different situations.
4.4 Projection bias
Another kind of bias where people have systematically incorrect beliefs is that they expect their future preferences to be too close to the present ones. For example, we may have learned from experience not to go to the supermarket when we are hungry – we tend to buy all kinds of junk that we don’t normally eat or want to eat, and not only is our bill higher than normal but we also end up with stuff we don’t consume or don’t want to consume. This happens because at the time of shopping we incorrectly anticipate that our future hunger will be as great as it is now. The term ‘projection bias’ was introduced by Loewenstein, O’Donoghue and Rabin (2003) to describe this phenomenon. They proposed a simple model as follows: assume that utility u is a function of consumption c and of state variable s (which incorporates tastes or preferences), so that
u = u(c, s)
The current state is s′ and the (unknown) future state is s. Then, when predicting the future utility û(c, s), a person with projection bias expects utility
û(c, s) = (1 – α)u(c, s) + αu(c, s′)    (4.2)
whereas the person without projection bias (who has complete knowledge about the future state s) has expected utility û(c, s) = u(c, s). The parameter α (which must be between 0 and 1) measures the extent of projection bias, so that if α = 0 there is no projection bias, and if α = 1 there is full projection bias.
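Equation (4.2) can be written directly as a function; a minimal sketch (ours, with a deliberately toy utility function that is our assumption, not the authors’):

def predicted_utility(u, c, s_future, s_now, alpha):
    # Equation (4.2): alpha = 0 gives the correct prediction u(c, s_future);
    # alpha = 1 projects the current state s_now fully onto the future.
    return (1 - alpha) * u(c, s_future) + alpha * u(c, s_now)

# Toy utility: value of buying quantity c of food rises with hunger s.
u = lambda c, s: c * (1 + s)

# Shopping while hungry (s = 0.9) for a satiated future self (s = 0.2),
# with alpha = 0.5, roughly the value estimated by Conlin, O'Donoghue
# and Vogelsang (2007):
print(predicted_utility(u, c=10, s_future=0.2, s_now=0.9, alpha=0.5))  # 15.5
print(u(10, 0.2))  # true future utility 12.0 -> prediction too high: overbuying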
Read and van Leeuwen (1998) confirmed this effect in a study of office workers. These workers were asked to select a healthy snack or an unhealthy snack to be delivered a week later (in the late afternoon). One group of workers was asked the question at a time when they may have been hungry, in the late afternoon, and 78% chose an unhealthy snack. The other group was asked the same question after lunch, when they were probably satiated, and only 42% chose the unhealthy snack.
Evidence of projection bias has also been provided in the field; an example is a study by Conlin, O’Donoghue and Vogelsang (2007). They examined the effect of weather at the time of purchase on the return of cold-weather apparel items that had been ordered. The standard economic model predicts that there should be no relationship here, or a negative one if colder weather at time of purchase is correlated with colder weather later, making people less likely to return the item. The projection bias hypothesis predicts the opposite effect, with people overestimating their later use and being more likely to return the item. The authors of the study did indeed find the opposite effect, estimating that a reduction in the order-date temperature of 30˚F (17˚C) increases the average return rate of a cold-weather item by nearly 4%. In this case the model (4.2) above estimates the value of α to be about 0.5, indicating that consumers predict future tastes roughly half-way between present tastes and actual future tastes.
An associated kind of bias is hindsight bias, which could be considered to be a retrospective projection bias. This means that events seem more predictable in retrospect than in prospect, as in ‘we knew it all along’. There is again evidence for this phenomenon both from experiments and in the field. For example, a study by Biais and Weber (2009) conducted an experiment with 85 investment bankers in London and Frankfurt and found not only evidence of hindsight bias among some subjects but also that the biased agents have lower performance.
4.5 Magical beliefs
This title is a general term for certain irrational beliefs that violate the assumptions of the standard model, but do not fit into any of the three categories above. They are often termed ‘superstitions’ in folk psychology. Two main categories are important to discuss here.
Tempting fate
This phenomenon has been touched on earlier in connection with arousal and misattribution of probability. We have seen, for example, that people can be reluctant to switch lines at supermarket checkouts or exchange lottery tickets. There are widespread instances or applications of this: if you don’t take your umbrella to work, it’s bound to rain; if you don’t do your homework reading, the teacher is bound to pick on you in class to answer questions on it. There are various factors involved here: we have seen that the misattribution effect states that undesirable outcomes resulting from actions taken are more painful than identical outcomes that result from actions foregone, and the aversion caused by this anticipated regret from switching would then be mistaken for the increased probability of the event occurring. We shall also see in the next chapter that loss-aversion is an important factor governing decision-making in risky or uncertain situations.
It is interesting to note that this superstition is a cultural universal. In some cultures people explicitly believe in fate or some supernatural being or force which can act with discretion in the relevant circumstances, but even in cultures where there is no explicit belief in the intervention of some supernatural agent, the superstition exists at an intuitive level that we should not tempt fate.
Contagion
Disgust is a strong aversive emotion or visceral factor. It has been described as ‘a revulsion at the prospect of (oral) incorporation of an offensive substance’ (Rozin and Fallon, 1987, p. 23). However, we can note that disgust can also be prompted by touch or even proximity, not just ingestion. Disgust causes certain unique responses as an emotion: a distinct facial expression with closed nostrils, an attempt to get away from the disgusting object, and a physiological response of nausea, as well as an emotional state of revulsion.
It might be initially thought that disgust is not that important in terms of having a frequent and significant effect on behavior. Evidence suggests otherwise, for two reasons:
1 A large number of everyday objects can cause disgust – a survey by Morales and Fitzsimons (2007) found that six of the top-ten-selling nonfood supermarket items elicit feelings of disgust, including trash bags, cat litter and diapers. Many food or ingested items also elicit disgust, such as cigarettes, mayonnaise, oils and lard. Therefore consumers are likely to experience some degree of disgust routinely on shopping trips.
2 The property of contagion means that other products coming into contact with a disgusting object are contaminated – this process is described by a phenomenon referred to by anthropologists as ‘sympathetic magic’. This is not just a belief system found in primitive cultures; it exists in the same general forms in all cultures, although in developed countries people are often reluctant to admit such beliefs for fear of appearing foolish. One of the fundamental laws of sympathetic magic is the ‘law of contagion’. This law states that objects or people can affect each other by merely touching, that some or all of the properties of the disgusting object or person are transferred, and that this transfer is permanent. Thus the law is sometimes referred to as ‘once in contact, always in contact’.
The ramifications of the two factors described above are widespread as far as consumer behavior is concerned, but before discussing these it is appropriate to give some explanation as to why the ‘law of contagion’ is a universal phenomenon, given its sometimes strange effects. For example, Rozin, Millman and Nemeroff (1986) found that a drink touched briefly by a sterilized cockroach became undesirable, as did a laundered shirt previously worn by a disliked person, although subjects were often not able to verbalize or acknowledge their contagion belief. Sometimes people are not even conscious of their emotional disgust, but it is reflected in lower evaluations of products. We really need to consider the evolutionary psychology involved here: a product contagion heuristic really follows on from a general ‘contact causing’ inference. For example, if people eat a lot of fat, they tend to get fat; eating a lot of garlic leads to a garlic odor on the breath and the body. Furthermore, biologically speaking, it would have been a useful adaptation in human history to avoid situations where contamination was possible. Cockroaches can cause contamination of food through microbial infection, as can contact with raw meat, dirt or faeces. Thus contamination has historically speaking been a significant and dangerous problem in terms of human survival; we seem as a species to err on the cautious side and misapply the concept in situations where science now informs us it is not relevant.
What about the consumer behavior ramifications? Morales and Fitzsimons (2007) found that direct physical contact itself was not necessary for a contagion effect to occur; merely a perception of contact was sufficient. Thus raw meat or drinks in transparent containers were more likely to cause a perception of contamination than if they were packaged in opaque containers. In the supermarket situation contamination can occur either in terms of proximity of items on the shelves, or proximity in the shopping cart. Thus, when lard is positioned on shelves near other baking products, pans and utensils, as is commonly found, these other products are likely to receive a lower evaluation from consumers.
There are obviously implications here in terms of managerial policy. Managers need to take care in determining shelf location for products to minimize the effects of lower evaluations. Even though they cannot control proximity in the shopping cart, they can allow consumers to take avoidance measures. Opaque and substantial packaging may be important for some products. Some supermarkets are now providing facilities for double-wrapping meat.
There is one final point that is worth raising here in terms of disgust. We have concentrated so far on situations where the disgust is purely physical; people can also feel moral disgust. An outstanding example is the 2010 BP oil spill in the Gulf of Mexico. In this case the discussion may combine elements of physical and moral disgust; people do not like to see birds coated in oil, but they have also been revolted by the seeming negligence and reactions of BP management, and the ramifications of this have been huge. Not only has BP been treated as a pariah by press and public internationally, but other oil companies have suffered also, as has the US president. Evidence also exists of increased anti-British sentiment in the US as a result of the disaster. The plunge in the BP share price, arguably not justified by economic fundamentals, has important implications for UK pension funds (and ultimately pension investors) that have invested heavily in BP stock.
So far we have examined only the negative aspects of contagion. As was indicated in the introduction to the chapter, contagion can have positive aspects also. It can vastly inflate the values of objects, such as ‘Blackie’, a guitar once owned and played by Eric Clapton, which sold for $959,500 in 2004. According to Paul Bloom (New York Times, 2011, 8 March):
Our results suggest that physical contact with a celebrity boosts the value of an object, so people will pay extra for a guitar that Eric Clapton played, or even held in his hands.
This is the same kind of thinking that makes people reluctant to wear the sweater of a murderer. Newman, Diesendruck and Bloom (2011) find that people value highly the possessions of celebrities even if they despise them, since they expect the possessions of notorious celebrities, like Saddam Hussein, to be valued by others. Furthermore, the values of these possessions are significantly reduced if they are washed or in some way sterilized.
A similar psychology applies even with replicas of objects owned by celebrities. In this case the phenomenon is known as ‘imitative magic’, meaning that things that look alike are alike. Thus a replica of ‘Blackie’, perfect down to the cigarette burns and belt-buckle scratches, fetched $30,500 at auction in March 2011. Less perfect replicas sell for lower prices, but are still valued highly. The replica fetish is important in the music business, extending not only to guitars and strings but also to amplifiers, microphones and other instruments.
It is important to realize the foundations for these magical beliefs in evolutionary biology. As Fernandez and Lastovicka (2011) state:
Beliefs about contagion, and especially biological contagion, by our ancestors are one of the reasons why we are here today. Those who did not stay away from those who died from the plague in the Dark Ages also died of the plague; those who died of the plague in the Dark Ages likely have few, if any, descendants today. So in our modern and scientific world, these manners of magical thinking still persist.
The subject of magical beliefs and contagion is examined in more detail in Case 4.3.
4.6 Causes of irrationality
We have now surveyed a variety of situations where people exhibit a formation or holding of beliefs that violates the kind of rationality proposed in standard economics. It is, therefore, worthwhile at this stage to discuss the causes of the underlying phenomena.
Baumeister (2001) has identified five different causes of irrational, or what he terms self-defeating, behavior. We can really equate self-defeating behavior with behavior that is not in a person’s long-run self-interest. One can, of course, question whether it is legitimate to equate irrational behavior with self-defeating behavior, and this aspect will be discussed further in the final subsection. However, what is important here is the usefulness of the categories that he proposes in terms of analysis. These categories are: emotional distress, threatened egotism, self-regulation failure, rejection and belongingness, and decision fatigue. The first of these categories involves a variety of factors, and it is helpful to discuss memory and cognitive dissonance as separate categories.
Emotional distress
The general impact of emotions on preferences and choices has been discussed in the previous section. We now need to explain how and why these effects occur, recognizing that this remains a highly controversial area in psychology.
There has been a lot of research into the effects of emotions on decision-making. The conventional attitude taken by economists, and also by philosophers in the Kantian tradition, is that emotions tend to cloud good judgment, resulting in ‘irrational’ decisions or self-defeating behavior. However, this raises the issue mentioned in the previous section in relation to evolutionary psychology: How can emotions serve as an adaptive evolved psychological mechanism? If they were a maladaptation people with genes for emotional behavior would not have passed them on to succeeding generations, and we would now be living in a world full of unemotional people, like Mr Spock from Star Trek; this is clearly not the case. In the late 1980s the economist Robert Frank (1988) proposed a theory that emotions served as a commitment mechanism, and thus were a useful adaptation. Frank’s theory was supported by independent research by Jack Hirshleifer (1987). The neuroscientist Damasio (1994) also researched the role of the emotions in decision-making, by examining patients with brain damage, again concluding that emotions could be an aid as well as a hindrance. These theories are discussed in more detail later in this section. At this stage we can summarize the situation by saying that emotions can lead to either better or worse decisions, depending on the circumstances.
While the effects of emotions and drives may be unpleasant and destructive, the evolutionary advantages of such mechanisms are obvious. The need to satisfy or reduce basic drives is fundamental to survival and reproduction. This also applies to the sensation of pain; its unpleasantness is a signal that something is wrong with the biological system, and that we should be doing something to remedy the situation (get out of the heat or cold, rest an injured limb, defend ourselves against the person attacking us).
One aspect of emotional distress where research has indicated a bad effect on decisions is the role of anger in risk-taking. Leith and Baumeister (1996) found that people who were upset were more inclined to take foolish risks, like betting on long shots in a lottery. There were various possible explanations for this; for example, people who were already upset had less to lose by taking a long shot and more to gain, while people who were in a good or neutral mood had more to lose by taking a long shot. However, Leith and Baumeister were able to eliminate this explanation by further experimentation, requiring respondents to reflect on their decisions for about a minute before choosing. Although the respondents were still angry when they made their decision they now became more risk-averse. Thus it seems that emotional upset does indeed cloud judgment of risk, and that when upset people are forced to think about things they make better decisions.
In the above situation emotional distress can lead to irrational decisions or self-defeating behavior. At the same time we have indicated that our emotions may be an aid to good decision-making. This now needs to be discussed further.
The theory of emotions as an evolved psychological mechanism or mental adaptation was described by Frank (1988) in the seminal work Passions within Reason. According to Frank our emotions serve as commitment devices, meaning that they commit us to perform certain actions at a later time if other people behave in certain ways. The nature of commitment in general is considered in Chapter 8 in relation to inter-temporal decision-making, and it is also relevant in game theory. Frank’s insight was to see the role of emotions in prompting us to perform actions that we would not carry out if we were acting on purely ‘rational’ grounds. A simple example can illustrate the situation. Imagine that we make an agreement with another person such that we perform some work for them now in exchange for being paid afterward. Such ‘delayed exchange’ contracts have been extremely common in human history, on both a formal and informal basis. The person doing the work first is always subject to a ‘holdup’ problem (unless the details are formalized in a written contract), in that the other party can renege on the deal. Without any formal contract the cheated party has no comeback, and a ‘rational’ person may simply write off the loss, and put it down to experience. An ‘emotional’ person on the other hand would be angry with the cheat and take steps to gain revenge, at risk and cost to himself, which the ‘rational’ person would be unwilling to take. However, the knowledge that an emotional person may react in this way might well be enough to prevent the other party from cheating in the first place. This is an example of what is called a ‘reputation effect’; emotional people may gain a reputation for not standing for any nonsense or backsliding in their dealings, thus encouraging others to be straight with them.
This example illustrates how our emotions can serve our long-run self-interest, but Pinker (1997) has gone a step further, showing how our emotions can backfire on us, referring to them as ‘doomsday devices’, after the movie Dr. Strangelove. The problem with doomsday devices is that they cannot be disarmed, even if they are activated by mistake, and will explode regardless of the consequences. Thus they may lead to futile and self-destructive reactions; a well-known example is the successive rounds of retaliation that occur with feuds between gangs or clans. It is possible that the reaction to social rejection, discussed in more detail later, is of this type. Emotions are indeed a two-edged sword.
Memory
As far as our emotional states over time are concerned there are two important factors that need to be discussed, in terms of both their causes and their implications:
1 People tend to revert to a ‘normal emotional state’ after any kind of emotional experience, whether it be pleasant or unpleasant.
2 People tend to overestimate the length of time that it will take to revert to this normal state.
The first aspect of human nature has long been known. According to Adam Smith in The Theory of Moral Sentiments of 1759:
The mind of every man, in a longer or shorter time, returns to its natural and usual state of tranquillity. In prosperity, after a certain time, it falls back to that state; in adversity, after a certain time, it rises up to it.
This ‘certain time’ turns out to be a shorter rather than a longer time in general. There is now a substantial body of research showing that emotional reactions to life-changing events are surprisingly short-lived (Suh, Diener and Fujita, 1996; Frederick and Loewenstein, 1999). When people win large amounts of money in a lottery, they do not remain happy for very long (Brickman, Coates and Janoff-Bulman, 1978; Kaplan, 1978). In the opposite direction, the majority of bereaved spouses reported themselves to be doing well two years after the death (Lund, Caserta and Diamond, 1989; Wortman, Silver and Kessler, 1993). Similarly, people who have suffered serious injury confining them to a wheelchair have recovered equanimity within a period of a year.
The second aspect of human nature is less well known. However, experiments have been performed that compare people’s forecasts of the duration of their emotional reactions with the actual duration, and there is evidence of a consistent durability bias in both directions: people tend to overestimate the duration of their reactions to both positive and negative emotional events (Gilbert et al., 1998; Wilson, Lindsay and Schooler, 2000).
A number of theories have been proposed to explain both of the above factors. These are discussed further in the final chapter in relation to happiness.
Cognitive dissonance
Self-deception is an important category of irrational behavior, as we have seen. It can be the result of a type of emotional distress discussed earlier. The most important psychological theory that is relevant here is that of cognitive dissonance, originated by Festinger (1957). This theory states that people are motivated to avoid having their attitudes and beliefs in a dissonant or conflicting relationship, and they feel uncomfortable when dissonance occurs. This discomfort can cause people to do many things that could be classed as irrational. Thus cognitive dissonance generally involves people justifying their actions by changing their beliefs. This is because it is often easier to change one’s beliefs than to change actions that have already been taken. However, cognitive dissonance may also involve situations where beliefs are held steadfastly in spite of contrary evidence. This is the kind of situation where the fourth criterion for rationality is relevant. These violations of rationality occur when it is harder to change an ingrained belief system than to change one’s interpretation of empirical evidence. There may be a variety of ways to explain away uncomfortable empirical findings, as many smokers can attest.
We can, of course, ask why such behavior, or the mental processes leading to it, could have evolved as an adaptive response. It might initially seem that such processes would be maladaptive, obscuring the realities of situations and leading to bad decisions. While self-deception may certainly lead to bad decisions, as we will see in more detail in some case studies, it may also be advantageous in other respects. In particular it may bolster confidence and self-esteem, increasing one’s sense of wellbeing, and, if one can also deceive others, it may have the effect of increasing one’s status in society. The evolutionary psychologist Pinker (1997) has gone so far as to claim that self-deception can be adaptive because it makes it easier to deceive others. If we really believe that we are the best person to do a certain job, in spite of our lack of ability, then we are more likely to convince others and be offered the job.
Threat to self-esteem
There is considerable evidence that concern with self-esteem can affect the quality of decision-making. In particular there appears to be a relationship between low self-esteem and self-defeating behavior such as self-handicapping, binge eating and alcohol abuse. These consequences are discussed later in the section. However, research also indicates that the relationship is not a straightforward one. People with high, but misplaced, self-esteem may also indulge in alcohol and drug abuse, believing that they are strong enough to withstand the harmful physical effects and the tendency to addiction. This can be referred to as the ‘peacock’s tail’ syndrome, after the handicap theory of the evolutionary biologist Zahavi (1975): the seemingly useless and wasteful peacock’s tail evolved as a signal of its owner’s health, since only a strong bird could afford such a waste of resources.
Baumeister, Heatherton and Tice (1993) also found evidence of a more complicated relationship between self-esteem and quality of decision-making. In general they found that people with high self-esteem made better decisions in risk-taking experiments, judging their own performance more accurately than people with low self-esteem and gambling in an appropriate manner. However, when people with high self-esteem received a blow to their pride they started to make bad decisions, worse even than those with low self-esteem, by making large bets that were not justified by their own performance. They seemed anxious to wipe out the loss of face involved.
Failure of self-regulation
Self-regulation in the current context refers to the need for individuals to reflect on advantages and disadvantages before making decisions rather than acting impulsively. One aspect of this has already been described, in connection with emotional distress. Another aspect of self-regulation involves the weighing of long-run costs against short-run benefits of decisions. This aspect is often referred to as intertemporal decision-making, and will be discussed in Chapters 7 and 8. Self-regulation in this situation involves the delay of gratification. The capacity for self-regulation is obviously a useful adaptation, enabling our ancestors to withstand temptations that would have resulted in early death, and encouraging them to make long-run investments in their own health and that of their families.
There may be different reasons why self-regulation breaks down, as we have seen in various contexts. One factor that can be repeated at this stage is that the capacity for self-regulation is an exhaustible resource, much like physical strength (Muraven, Tice and Baumeister, 1998; Muraven and Baumeister, 2000). This phenomenon will be discussed in Chapter 8, in connection with the study by Shiv and Fedorikhin (1999), which showed that cognitive load reduced self-control: people who had to remember longer numbers were more likely to eat chocolate cake. The similarity between the capacity for self-regulation and physical strength is two-fold. First, it is easily depleted in the short run, so that people cannot continue to resist temptation indefinitely; moreover, as they have to deal with more stress in one situation, they tend to lose control in other situations, for example by smoking, drinking or eating more. Second, the capacity for self-regulation appears to be something that can be increased in the long run, just as a muscle adapts to physical exercise by becoming stronger. For example, Muraven, Baumeister and Tice (1999) found that repeated exercises in self-control over a period of two weeks, such as trying to improve posture, led to improvements in self-control in laboratory tasks relative to people who did not exercise.
Decision fatigue
It seems that people not only tire when it comes to self-control, they also tire of making decisions in general. This may well be the main reason that people are creatures of habit; having a routine avoids the need to expend scarce resources on making choices. A good illustration of this phenomenon is provided by the research of Twenge et al. (2000). They found that a group of respondents who had to make a series of product choices had a reduced capacity for self-regulation compared with a control group; the capacity for self-regulation was measured by asking the respondents to drink as much as they could of an unpleasant, bitter-tasting beverage. This finding suggests that people tire of making decisions, and that any further decisions forced on them before they have had time to recover may suffer in quality. Military psychologists have found a similar tendency with commanders in battle (Dixon, 1976).
Interpersonal rejection
Humans have a strong and virtually universal innate desire to belong to a social group. The evolutionary advantages of this are obvious, which is why this desire tends to be even greater and more fundamental than the desire for self-esteem. However, if people feel rejected socially, this appears to be such a psychological blow that they cease to function effectively in a number of ways. Experimental research indicates that they make poorer decisions, making more unhealthy choices, gambling foolishly, and becoming more aggressive and less cooperative. Even performance on intelligence tests is adversely affected. The reasons for this general loss of effective function are not clear at present; further research, probably of a neuroscientific nature, needs to be performed in this area. It is likely that rejection causes a change in the body’s output of hormones and neurotransmitters. Research has already established, for example, that winning teams and their supporters both enjoy an increase in testosterone output following victory, while losers and their supporters suffer from a drop in testosterone.
Foundations in evolutionary neurobiology
We now need to say something in general about all the above causes of irrationality. Our starting point is to take a reductionist approach: we must examine how the human organism, and in particular the brain, evolved if we are to gain a real understanding of behavior. The human organism did not evolve in order to be a rational decision-making system, or to maximize utility, wellbeing or hedonic pleasure. The forces of natural selection have shaped us as a system that maximizes biological fitness. Biological fitness relates not only to our own individual survival and reproduction, but also in broader terms to the survival of our relatives who share the same genes. Those of our ancestors who were most successful in achieving biological fitness were most able to spread their genes, ensuring the survival of more people with the same genetic abilities. In order to achieve this end, however, the body must have a signaling system to guide the brain to make the correct decisions. This is where pain and pleasure enter the picture. Pain generally tells us we have made a bad decision as far as biological fitness is concerned, whereas pleasure tells us we have made a good decision. Thus we can say that the individual is prompted to maximize hedonic pleasure as a means to the ultimate end of maximizing biological fitness. It is largely the indirect nature of this mechanism that leads to the objection described earlier that inappropriate norms are used to judge rationality.
Furthermore, it should be recognized that this hedonic pleasure relates not only to conventional goods but also to what we call moral sentiments. This point has been made earlier, in Chapter 3, but it is important to realize that talk about ‘life being about more than happiness’ misunderstands this crucial insight. Although morality has been heavily influenced by cultural factors over the last few thousand years, it originally evolved for the same reason as our physical organs, to maximize biological fitness. Ultimately this involves the same signaling system in terms of pain and pleasure. If we feel the pain of guilt this may be a signal that we have made a bad decision; others may punish us if they discover our actions. Likewise the pleasure, in terms of ‘warm glow’ or pride, in performing a virtuous action or ‘doing our duty’ may signal a good decision; others may reward us.
The essential problem with this mechanism is that our hedonic system can be easily hijacked. One main reason for this is that there is always a time lag between the optimal design and the demands of the current environment. Just as the military are often accused of preparing to fight the last war, our brains and physiological systems are geared to dealing with the demands of a past environment. Thus we have cravings for salt and sugar, which in the past were vital nutrients necessary for survival, but which now cause all kinds of health problems when consumed in excess. Our endorphin receptors in the brain can be fooled into craving opiates as a source of pleasure, getting us addicted to hard drugs. It can also be argued that our hedonic system may be hijacked in the case of our moral sentiments as well; for example, a bad or brutal environment may eliminate feelings of guilt for performing antisocial actions.
Another reason why our hedonic systems can be hijacked is that they use heuristic devices to achieve their ends. Natural selection is a ‘blind watchmaker’, as Dawkins (1986) has elegantly described it, and is a mechanistic rather than a teleological process. This means that it has no ‘purpose’; it builds on what has developed in the past, rather than looking ahead to the future and setting a goal. The implications of this mechanistic process are often misunderstood even by educated and intelligent commentators, so it is worthwhile expanding on this aspect. The maximization of biological fitness, or ‘selfish gene’ theory as it is described in reductionist terms (Dawkins, 1976), is sometimes rejected out of hand on the basis that it cannot explain why we use contraceptive devices and engage in other non-reproductive sexual practices. The response to this is that our brains are not designed to further reproduction directly. This would involve fantastically complicated neural machinery, which might not adapt well to changes in the environment, and which would use great resources of precious energy. Instead, our brains operate using basic heuristic processes, so that sexual activity in general gives pleasure, regardless of whether it results in reproduction. The association of sexual activity with pleasure is generally sufficient to promote reproduction, and certainly has been throughout evolutionary history until very recently.
On this foundation of neurobiology, Zak has argued that people are neither rational nor irrational; rather, people are ‘rationally rational’ (Zak, 2011):
The rational rationality model predicts that people will invest scarce cognitive resources in solving a decision problem only when the expected payoff is sufficiently large. Otherwise, human beings will expend the minimum resources needed to achieve a ‘good enough’ outcome. ‘Good enough’ means that there is a wide range of acceptable choices. Rational rationality is similar to Herbert Simon’s notion of satisficing (Simon, 1956), but clearly identifies when people will satisfice and when they will not. Rational rationality occurs because cognitive resources are constrained and the brain evolved to conserve energy and deploy these resources only as needed. The neuroscience behind rational rationality requires that any economic model identify why individuals would expend scarce brain resources when making a decision rather than rely on previously learned heuristics (p. 55).
Thus, it may make sense to formulate dual-process models of reasoning and judgment, which involve the operation of different decision-making systems in different situations (Epstein, 1994; Osherson, 1995; Evans and Over, 1996; Sloman, 1996; Stanovich, 1999). The essence of such models is that in certain situations people use analytical, logical, rule-based systems with a relatively high computational burden, while in other situations people use various types of heuristic procedures. The use of heuristics can be viewed as a short-cut; frequently it results in an efficient use of personal resources, leading if not to optimization at least to satisficing. However, like many short-cuts, the use of heuristics can also lead to many bad decisions in situations where a more cognitive, analytical approach is desirable. Thus heuristics are both a good and a bad method of decision-making, depending on the circumstances.
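Zak’s model lends itself to a simple threshold rule: deploy the costly analytical system only when the stakes justify the cognitive cost, and otherwise satisfice with a cached heuristic. The short Python sketch below is our own illustration of this idea, not code from Zak (2011) or from any dual-process study; the function names, payoffs and cost figures are all hypothetical.

```python
def decide(expected_stakes, cognitive_cost, heuristic_choice, deliberate):
    """Toy 'rational rationality' rule (hypothetical illustration):
    invest scarce cognitive resources only when the expected payoff
    from deliberating exceeds its cost; otherwise return a cached
    heuristic answer that is 'good enough' (satisficing)."""
    if expected_stakes > cognitive_cost:
        return deliberate()          # analytical, rule-based system
    return heuristic_choice          # fast heuristic short-cut

# Hypothetical usage: habit suffices for a trivial purchase,
# but a large purchase triggers deliberate comparison of options.
print(decide(2, 50, "buy the usual brand", lambda: "compare all options"))
print(decide(20_000, 50, "buy the usual brand", lambda: "compare all options"))
```

On such a rule, satisficing is not a failure of rationality but an economical allocation of scarce cognitive resources, which is precisely Zak’s point.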
If we accept the abundant evidence of the mechanistic nature of evolution and the way it builds structures without purpose, the implication of the use of heuristic devices is that they provide simple rules for appropriate action in a given situation, but tend to be highly fallible. Many of the anomalies that we have now observed with the standard model are a result of this factor. Therefore, because of the way in which our brains and minds have evolved, we may be bad at performing what seem to be simple abstract tasks, applying inappropriate heuristic devices; these are tasks that were never required in our ancestral past. On the other hand, human beings are extremely good at performing complex tasks that we take for granted, like visually following an object, changing focus and perceptions of color, speed and distance as it moves, and making the necessary biomechanical adjustments involved in catching a ball. Even the most advanced artificial intelligence systems cannot rival this performance. The moral appears to be that we are good at what we need to be good at, or, more correctly, we are good at what we needed to be good at in our evolutionary past. This kind of behavior may then not be so ‘irrational’ after all; this raises the issue of appropriate norms, which is discussed further in the final chapter.
At the psychological level, the kind of ‘irrationality’ that we observe in human belief systems may not really be irrational in evolutionary terms either, in spite of initial appearances. Hood (2006) has claimed that the human mind has adapted to reason intuitively, in order to develop theories about how the world works even when mechanisms cannot be seen or easily deduced. This adaptation has had huge benefits in terms of the development of scientific theories related to invisible forces like gravity and electromagnetism. However, according to Hood, it also results in people being prone to making irrational errors, in particular relating to superstition and religion. This is because in our evolutionary past it was more advantageous from a survival viewpoint to believe in a cause-and-effect relationship that does not exist (for example, God punishing people with bad weather) than to fail to believe in a cause-and-effect relationship that does exist (for example, the growl behind the nearby bush being caused by a lurking predator). Thus people tend to be overly fond of positing cause-and-effect relationships, even when none exists. Hood claims that it is therefore unlikely that we will evolve a rational mind, and that religion and superstition are here to stay.
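The asymmetry Hood describes can be made concrete with a small expected-cost calculation. The Python sketch below is our own illustration with hypothetical numbers, not an analysis from Hood (2006): when a missed threat (ignoring a real predator) is far costlier than a false alarm (fleeing a harmless rustle), a habitual believer in doubtful cause-and-effect relationships incurs a lower expected cost than a sceptic.

```python
def expected_cost(p_real, cost_false_alarm, cost_missed_threat, believes):
    """Expected cost of a fixed policy toward an ambiguous cue.
    A believer always treats the cue as real, paying the false-alarm
    cost whenever the threat is absent; a sceptic always ignores it,
    paying the missed-threat cost whenever the threat is real."""
    if believes:
        return (1 - p_real) * cost_false_alarm   # flees needlessly
    return p_real * cost_missed_threat           # ignores a real predator

# Hypothetical numbers: even if only 5% of growls signal a predator,
# believing is far cheaper when a missed threat costs 1000 times
# more than a false alarm.
print(expected_cost(0.05, 1, 1000, believes=True))    # 0.95
print(expected_cost(0.05, 1, 1000, believes=False))   # 50.0
```

Under these assumed costs, natural selection would favor the over-detection of causes, which is Hood’s argument for why superstition persists.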
4.7 Summary
4.8 Review questions
1 Explain the difference between overestimation and overplacement, and why this distinction is important.
2 Explain the ‘hard–easy’ phenomenon and why it occurs.
3 Explain what is meant by the representativeness heuristic, giving an example.
4 Explain what is meant by base rate bias, giving an example.
5 Explain what is meant by ‘the law of small numbers’.
6 Explain why people tend to avoid betting on numbers that have recently won in a lottery, but also may want to buy their tickets from a store where a big-winning ticket has recently been sold.
7 Give an example of a situation where projection bias can result in a bad decision.
8 Explain why people may be reluctant to ‘tempt fate’.
9 Explain why contagion is relevant in supermarket shopping; give two examples of how the concept of contagion can affect people’s shopping habits.
10 Give two examples of how managers can reduce the effects of contagion.
4.9 Review problems
1 ‘Gambler’s fallacy’ effect
A coin is tossed 12 times; on the first three tosses it comes up heads.
a) What would be the rational gambler’s estimate of the probability of the fourth toss resulting in a tail, assuming the coin is unbiased?
b) What would be the gambler’s estimate of the probability of the fourth toss resulting in a tail, if he believes in the ‘gambler’s fallacy’?
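Hint: the key point in part (a) is independence; the first three outcomes carry no information about the fourth, so the rational estimate remains 0.5, whereas a believer in the gambler’s fallacy would report a probability above 0.5 for a tail. The following Python sketch is our own illustrative check, not part of the original problem; it estimates the conditional probability by simulation.

```python
import random

def tail_prob_after_three_heads(trials=1_000_000):
    """Estimate P(toss 4 is a tail | tosses 1-3 were heads) for a fair coin."""
    matches = tails = 0
    for _ in range(trials):
        tosses = [random.random() < 0.5 for _ in range(4)]  # True = heads
        if all(tosses[:3]):          # condition: first three tosses were heads
            matches += 1
            tails += not tosses[3]   # count a tail on the fourth toss
    return tails / matches

print(tail_prob_after_three_heads())  # ~0.5: prior heads have no effect
```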
2 ‘Hot hand’ effect
A manager of unknown ability is managing an investment fund. Well-managed funds rise in value in 60% of time periods, while badly managed funds only rise in value 40% of the time. It is observed that a particular fund rises in value on four consecutive occasions. Assuming that there is a prior probability of 0.5 that the fund is well-managed:
a) What would be the rational investor’s estimate of the probability of the fund being well managed, after observing four consecutive rises?
b) What would be the investor’s estimate of the probability of the fund being well managed, if she believes in the ‘hot hand’ effect?
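Hint: Bayes’ rule gives the posterior for part (a) directly. With a prior of 0.5, four consecutive rises yield P(well managed | data) = 0.6^4 × 0.5 / (0.6^4 × 0.5 + 0.4^4 × 0.5) ≈ 0.835. The Python sketch below is our own worked illustration of this update, not part of the original problem; a believer in the ‘hot hand’ effect would report an even higher figure for part (b).

```python
def posterior_well_managed(rises=4, p_rise_good=0.6, p_rise_bad=0.4, prior=0.5):
    """Bayes' rule: P(well managed | `rises` consecutive rises in value)."""
    likelihood_good = p_rise_good ** rises   # P(data | well managed)
    likelihood_bad = p_rise_bad ** rises     # P(data | badly managed)
    evidence = likelihood_good * prior + likelihood_bad * (1 - prior)
    return likelihood_good * prior / evidence

print(round(posterior_well_managed(), 3))   # 0.835
```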
4.10 Applications
Case 4.1 Fakes and honesty
Those who buy counterfeit designer goods project a fashionable image at a fraction of the price of the real thing. You might think that would make them feel rather smug about themselves. But an intriguing piece of research published in Psychological Science by Francesca Gino of the University of North Carolina, Chapel Hill, suggests the opposite: wearing fake goods makes you feel a fake yourself, and causes you to be more dishonest in other matters than you would otherwise be.
Dr Gino and her colleagues provided a group of female volunteers with Chloé sunglasses that cost about $300 a pair, supposedly as part of a marketing study. They told some of the volunteers that the sunglasses were real, and others that they were counterfeit. They then asked the volunteers to perform pencil-and-paper mathematical quizzes for which they could earn up to $10, depending on how many questions they got right. The participants were spun a yarn about how doing these quizzes would allow them to judge the comfort and quality of the glasses.
Crucially, the quizzes were presented as ‘honour tests’ that participants would mark themselves, reporting their own scores to the study’s organisers. The quiz papers were unnumbered and thus appeared to be untraceable, and were thrown away at the end of the study. In fact, though, each had one unique question on it, meaning that it could be identified—and the papers were recovered and marked again by the researchers after they had been discarded.
Of participants told that they were wearing authentic designer sunglasses, 30% were found to have cheated, reporting that they had solved more problems than was actually the case. Of those who thought they were wearing fake sunglasses, by contrast, about 70% cheated.
The results were similar when the women completed a computer-based task that involved counting dots on a screen. In this case, the location of the dots determined the financial reward. The women who thought they were wearing counterfeits lied about those locations more often than those who did not.
In a third part of the study, the participants were asked questions about the honesty and ethics of people they knew and people in general. Those who thought they had knock-offs were more likely to say that people were dishonest and unethical.
It looked, then, as if believing they were wearing fakes made people feel like fakes. To test that hypothesis, Dr Gino and her colleagues ran the experiment again, this time including a test meant to detect self-alienation. They asked the participants if they agreed with statements like, “right now, I feel as if I don’t know myself very well”. Those who thought they were wearing fakes did indeed feel more alienated from themselves than those who knew they were wearing the real things.
It remained possible, however, that it was the sense of self-alienation which was normal, and that believing you were wearing designer glasses made you more at ease with yourself. To test that, the team ran one final set of experiments, in which some people were given sunglasses to wear without any indication of their provenance. Volunteers in this control group behaved like those who believed the glasses were authentic.
The moral, then, is that people’s sense of right and wrong influences the way they feel and behave. Even when it is someone else who has made them behave badly, it can affect their subsequent behaviour. Next time you are offered fake accessories, beware: wearing them can make you feel like a bad person, not a better one.
Source: The Economist, June 24, 2010
Questions
1 What does this case study tell us about self-evaluation bias?
2 Is the concept of self-alienation consistent with Glaeser’s claim that mistaken beliefs will be more common when errors increase the current flow of utility?
3 Explain how the situations described in the case study may involve the concept of contagion.
Case 4.2 Trading on testosterone
Financial traders take a lot of risk, but often reap high rewards. Until recently the role of the endocrine system in a trader’s success or failure had not been investigated. We now know that a high level of testosterone is a good predictor of daily trading profitability, while a high level of cortisol is associated with a high variance in a trader’s profit and with market volatility. This was established in a 2008 study by Coates and Herbert, who examined 17 male traders in the City of London over a period of eight days, recording testosterone and cortisol levels at 11am and 4pm, coinciding broadly with the start and end of the main trading day. These findings are important in understanding not just the underlying psychology behind bubbles and crashes in financial markets, but also the physiology. A high level of testosterone tends to lead to high confidence and increased risk-taking. In a bull or rising market this will tend to lead to greater profits, which may engender more confidence and risk-taking in the future. Cortisol is a stress hormone. When markets are falling or highly volatile this increases stress, which in turn tends to cause not only caution but also a reluctance to transact altogether. In this light it is easy to see how a crash or credit crunch can occur: the psychology of the market is driven by an underlying physiology.
Questions
1 Explain how overconfidence and underconfidence are related to hormones.
2 Explain one main implication of this study as far as trends in financial markets are concerned.
3 How is the above case related to the concept of reductionism described in the previous chapter?
Case 4.3 Celebrity contagion and imitative magic
When Eric Clapton’s Fender Stratocaster guitar, ‘Blackie’, sold for $959,500 in 2004, it set a record price for a guitar sold at auction. A replica of Blackie, complete with every single nick and scratch, including the wear pattern from Mr Clapton’s belt buckle and the burn mark from his cigarettes, fetched $30,500 at auction in March 2011. Some psychologists believe they have developed a theory that explains these enormous values. After conducting experiments and interviewing guitar players and collectors, they have published papers analyzing ‘celebrity contagion’ and ‘imitative magic’. One of their conclusions is that the seemingly irrational desire for a Clapton relic, even an imitation of a relic, stems from an instinct crucial to surviving disasters like the Black Death: the belief that certain properties are contagious, either in a good or a bad way. Another conclusion is that the magical thinking chronicled in primitive tribes affects bids for memorabilia in auctions.
Some bidders might rationalize their purchases as good investments, or as objects that are worth having just because they provide pleasant memories and mental associations of someone they admire. But those do not seem to be the chief reasons for buying celebrity memorabilia, according to Newman, Diesendruck and Bloom (2011). The researchers asked people how much they would like to buy objects that had been owned by different celebrities, including popular ones like George Clooney and outcasts like Saddam Hussein. People’s affection for the celebrity did not predict how much value they assigned to the memorabilia – apparently they were not buying it primarily for the pleasant associations.
Nor were they chiefly motivated by the prospect of a profit, as the researchers discovered when they tested people’s eagerness to acquire a celebrity possession that could not be resold. That restriction made people less interested in items owned by villains, but it did not seriously dampen their enthusiasm for relics from their idols.
The most important factor seemed to be the degree of ‘celebrity contagion’. The study found that a sweater owned by a popular celebrity became more valuable to people if they learned it had actually been worn by their idol. But if the sweater had subsequently been cleaned and sterilized, it seemed less valuable to the fans, apparently because the celebrity’s essence had somehow been removed.
‘Our results suggest that physical contact with a celebrity boosts the value of an object, so people will pay extra for a guitar that Eric Clapton played, or even held in his hands,’ says Paul Bloom. This sort of direct physical contact helps explain why the original Blackie guitar sold for nearly $1 million – Mr Clapton had played it extensively for more than a decade. But why build an exact replica of the guitar and all its nicks and scratches?
The replica’s appeal is related to another form of thinking called the law of similarity, according to Newman, Diesendruck and Bloom (2011). That is a belief in what is called ‘imitative magic’; this proposes that things that resemble each other have similar powers:
Cultural practices such as burning voodoo dolls to harm one’s enemies are consistent with a belief in the law of similarity. An identical Clapton guitar replica with all of the dents and scratches may serve as such a close proxy to Clapton’s original guitar that it is in some way confused for the real thing. Of course, the replica is worth far less than the actual guitar that he played, but it still appears to be getting a significant amount of value for its similarity.
Even a mass-produced replica guitar without the nicks and scratches can become magical, according to another study by Fernandez and Lastovicka (2011). The researchers conducted in-depth interviews with 16 men who owned more than one guitar and resided either in New Zealand or the United States, including one who had spent hundreds of thousands of dollars on replicas of Beatles gear. They found that many participants believed in the idea of ‘contagious magic’ (the idea that two entities that touch can influence each other). For example, many fans want to have rock stars sign their instruments, and one established performer explained how he used another rock star’s discarded guitar strings.
The research also revealed that replica guitars appeal to participants’ belief in imitative magic:
They often bought the best possible copy they could attain, and then if needed, made further changes to it so that it resembled the desired object even more closely.
The authors go on to explain that, for example, some consumers switch out knobs on their guitars to more closely resemble the instruments of the artists they admired.
These consumers did not conform to the theory described in existing academic literature that a mass-produced object could never acquire the aura of a fetish, the anthropological term for an object believed to have supernatural powers. The guitar collectors insisted that even a factory-made replica of a famous musician’s guitar had a certain something that enabled them to play better music. According to Fernandez and Lastovicka (2011):
Consumers use contagious and imitative magic to imbue replica instruments with power. Semiotically signified magical thinking causes replicas to radiate aura and thus transforms them into fetishes.
Of course, the collectors still preferred a beat-up guitar used by a star to a brand-new replica of it. One of them told the researchers how he had improved his own guitar-playing by using old guitar strings that had been discarded by Duane Allman. This belief in contagious magic may sound irrational, but it makes a certain evolutionary sense.
Questions
1 Explain why magical beliefs are related to probabilities as well as utilities.
2 Explain why a belief in contagious magic makes a certain evolutionary sense.
3 Why would sterilizing an object have an effect on its value?
4 What is the connection in the case with voodoo dolls?
5 Explain whether the theory described in the case applies to objects like Jackie Kennedy’s tape measure and Bernie Madoff’s footstool.