3
The Definition of Value

Value measures how much one is willing to pay, trade, or work for a reward, or how hard one will work to avoid a punishment. Value is not intrinsic to an object, but must be calculated anew each time. This produces inconsistencies, in which different ways of measuring value can produce different orderings of one’s preferences.

What is “value”? What does it mean to value something? In psychological theories, value is thought of as that which reduces needs or which alleviates negative prospects1—when you are hungry, food has value; when you are thirsty, water has value. Pain signals a negative situation that must be alleviated for survival. In economics, the concept of revealed preferences says that we value the things we choose, and make decisions to maximize value.2 Although this is a circular definition, we will use it later to help us understand how the decision-making process operates. In the robotics and computer simulation worlds, where many of the mechanisms that we will look at later have been worked out, value is simply a number r for reward or –r for punishment.3 These models then try to maximize r (or minimize –r).
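
To make this concrete, here is a minimal sketch of how such models treat value, assuming a purely hypothetical set of actions, rewards, and probabilities (none of these numbers come from any experiment discussed in this book):

    # Value as a single number r: the agent simply picks the action with the
    # largest expected r.  All actions, rewards, and probabilities here are
    # made-up illustrations.
    rewards = {
        "press_lever": [(+1.0, 0.8), (0.0, 0.2)],   # (r, probability of getting it)
        "do_nothing":  [(0.0, 1.0)],
        "cross_shock": [(+2.0, 0.5), (-3.0, 0.5)],  # a reward behind a punishment
    }

    def expected_value(outcomes):
        return sum(r * p for r, p in outcomes)

    best_action = max(rewards, key=lambda a: expected_value(rewards[a]))
    print(best_action, expected_value(rewards[best_action]))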

In many experiments that study decision-making, value is measured in terms of money. But of course, money has true value only in terms of what it can buy.4 Economists will often tell us that money is an agreed-upon fiction—I am willing to sell you this apple for a green piece of paper that we both agree is worth $1 because I am confident that I can then use the paper later to buy something that I want. The statement that money is valuable only in terms of what it can buy is not entirely true either: money can also have value as a symbol of what it implies for our place in the social network.5 This is one reason why extremely well-paid professional athletes fight for an extra dollar on their multimillion dollar salaries—they want to be known as the “best-paid wide receiver” because that identification carries social value. Money and value are complex concepts that interact in difficult ways.

Part of the problem is that you cannot just ask people, “How much do you value this thing?” We don’t have units to measure it in. Instead, value is usually measured in what people will trade for something and in the decisions they make.6 This is the concept called revealed preferences—by examining your decisions, we can decide what you value. Theoretically, this is very simple: one asks, “How much will you pay for this coffee mug?” The price you’ll pay is the value of the coffee mug. If we measure things at the same time, in the same experiment, we shouldn’t have to worry about inflation or changes in the value of the money itself.

The problem is that people are inconsistent in their decisions. In a famous experiment, Daniel Kahneman and his colleagues Jack Knetsch and Richard Thaler divided their subjects into two groups: one group was asked how much money they would pay for a $6 (in 1990) Cornell University coffee mug and the other group was first given the coffee mug and then asked how much money they would accept for the mug. The first group was willing to pay much less on average ($3) than the second group was willing to accept for it ($7). This is called the endowment effect and is a preference for things you already own.7

Simply phrasing decisions in terms of wins or losses changes what people choose.8 The interesting thing about this is that even once one is shown the irrationality, it still feels right. This effect was first found by Daniel Kahneman and Amos Tversky, who made this discovery sitting in a coffee shop in Israel, asking themselves what they would do in certain situations. When they found themselves making irrational decisions, they took their questions to students in their college classes and measured, quantitatively, what proportion made each decision in each condition.

In the classic case from their 1981 paper published in the journal Science:9

Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimate of the consequences of the programs are as follows:

Problem 1: If Program A is adopted, 200 people will be saved. If Program B is adopted, there is 1/3 probability that 600 people will be saved, and 2/3 probability that no people will be saved. Which of the two programs would you favor?

Problem 2: If Program C is adopted, 400 people will die. If Program D is adopted, there is 1/3 probability that nobody will die, and 2/3 probability that 600 people will die. Which of the two programs would you favor?

A careful reading of the two conditions shows that Program C is identical to Program A, while Program D is identical to Program B. Yet 72% of the first group chose Program A and 28% chose Program B, while only 22% of the second group chose Program C and 78% chose Program D! Kahneman and Tversky interpreted this as implying that we are more sensitive to losses than to gains—we would rather risk not gaining something than risk losing something. This is clearly a part of the story, but I will argue that there is a deeper issue here. I will argue that “value” is something that we calculate each time, and that the calculation process doesn’t always come up with the same “rational” answer.
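
To make the equivalence explicit, the expected number of lives saved is the same under all four programs; this is just the arithmetic implied by the numbers in the problem:

\[
\begin{aligned}
E[\text{saved} \mid A] &= 200, &\qquad E[\text{saved} \mid B] &= \tfrac{1}{3}(600) + \tfrac{2}{3}(0) = 200,\\
E[\text{saved} \mid C] &= 600 - 400 = 200, &\qquad E[\text{saved} \mid D] &= \tfrac{1}{3}(600) + \tfrac{2}{3}(0) = 200.
\end{aligned}
\]

The only difference between the two problems is whether the outcomes are described in terms of lives saved or lives lost.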

Illogical value calculations

The most interesting thing about these results (of which there are many) is that even when they are illogical (like the disease example above), they often still sound right. There are other examples of decisions that are irrational, that don’t sound right when they are pointed out to us, but that humans reliably show when tested. Many of the best examples come from the advertising and marketing worlds.

A person goes into an electronics store to buy a television and finds three televisions on sale, one for $100, one for $200, and one for $300. Imagine that the three televisions have different features, such that these are reasonable prices for these three TVs. People are much more likely to buy the $200 TV than either of the other two. If, in contrast, the person goes into the store and finds three televisions on sale, the same $200 TV, the same $300 TV, and now a fancier $400 TV, the person is more likely to buy the $300 TV. In the first case, the person is saying that the $200 TV is a better deal than the $300 one, but in the second case, the person is saying that the $300 TV is the better deal. Even though they’ve been offered the same televisions for the same prices, the decision changed depending on whether there is a $100 TV or a $400 TV in the mix. This is called extremeness aversion and is a component of a more general process called framing. In short, the set of available options changes your valuation of the options. This is completely irrational and (unlike the other examples above) seems unreasonable (at least to me). Yet it is one of the most reliable results in marketing and has probably been used since the first markets in ancient times through to the modern digital age.10

Products in advertisements used to compare themselves to each other. Tylenol would say it was better than aspirin and Coke would say it was better than Pepsi. I remember an RC Cola commercial with two opposing teams fighting about which drink they preferred, Coke or Pepsi, while a third person sat on the sidelines drinking an RC Cola, out of the fray, smiling. Advertisements today rarely mention the competition. This is because one of the heuristics we use is simply whether we recognize the name or not.11 So although RC Cola was trying to communicate that Coke and Pepsi were the same, while RC Cola was different, what people took from the advertisement was a reminder that everyone drank Coke and Pepsi. Just mentioning the name reminds people that it exists and reinforces the decision to choose it. Familiarity with a brand name increases the likelihood that one will select it.

It is election season as I write this. All around my neighborhood, people have put out signs for their candidates. The signs don’t say anything about the candidates. Often they don’t even name the party the candidate belongs to. Usually, it’s just the name of the candidate, and sometimes, an appeal to “vote for” the candidate. What information do I get from seeing a sign that says nothing other than “Vote for X”? I suppose that one may need to be reminded to vote at all, but then why does the sign include “for X” on it? (One of my neighbors does have a large handmade sign she puts out every election season that just says “VOTE!” on it.) It is true that one can get a sense of the grouping of candidates from people who put out multiple signs: given that I like person X for state representative, seeing that all the people with person X also have person Y for county prosecutor, while all the people who have person X’s opponent have person Z for county prosecutor, might suggest to me that I would like person Y over person Z for county prosecutor. But lots of people have just one sign out. All that tells me is that lots of people like person X. Why would knowing that lots of people like something suggest that I would too?

There are actually three things that these single-sign houses are doing. First, they are increasing my familiarity with that name. As with the products, just knowing the name increases one’s likelihood of voting for someone. Second, if lots of people like a movie, it’s more likely to be a good movie than one everybody hated. Using the same heuristic, if everyone likes a candidate, isn’t that candidate more likely to be a good choice? And third, we like to be on the winning team. If everyone else is going to vote for someone, he or she is likely to win. Of course, we’re supposed to be voting based on who we think is a better choice to govern, not who is most popular. But it’s pretty clear that a lot of people don’t vote that way.

These effects occur because we don’t actually calculate the true value of things. Instead, we use rules of thumb, called heuristics, that allow us to make pretty good guesses at how we value things.12 If you like all the choices, picking the middle one is a pretty good guess at good value for your money. If you’re making a decision, familiarity is a pretty good guess. (Something you’re familiar with is likely to be something you’ve seen before. If you remember it, but don’t remember it being bad, how bad could it have been?)

Some economists have argued that, evolutionarily, heuristics are better than actually trying to calculate the true value of things, because calculating value takes time and heuristics are good enough to get us by.13 But many of these nonoptimal decisions persist even when we take our time. Knowing that Programs A and C are the same and that Programs B and D are the same in the Kahneman and Tversky disease example above doesn’t change our minds. This makes me suspect that something else is going on. It’s not that heuristics are faster and we are making do with “good enough.” I suspect that these effects have to do with how we calculate value. We cannot determine how humans calculate value unless we can measure it. So, again, we come back to the question of how we measure value.

Measuring value

With animals, we can’t ask them how much they value something; we can only offer them something and determine how much they’ll pay for it. Usually, this is measured in terms of the amount of effort an animal will expend to get the reward.14 How many lever presses is the animal willing to make for each reward?

Alternatively, we can give the animal a direct choice between two options:15 Would the animal rather have two apple-flavored food pellets or one orange-flavored food pellet? Would it rather have juice or water? We can also titrate how much of one choice needs to be offered to make the animal switch—if the animal likes apple flavor better than orange flavor, would it still prefer half as much apple to the same amount of orange?
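
One common way to run such a titration is a simple staircase: after every choice, adjust the amount of one option up or down until the animal is indifferent. The sketch below simulates that logic; the “animal” and its indifference point are hypothetical stand-ins, since in a real experiment each choice comes from the animal itself.

    # Staircase titration: estimate how much apple-flavored pellet is "worth"
    # one orange-flavored pellet.  The simulated preference is hypothetical;
    # real experiments get each choice from the animal.
    import random

    true_indifference = 0.6   # hypothetical: 0.6 apple pellets ~ 1 orange pellet
    apple_offer = 2.0         # start with a clearly better apple option
    step = 0.05

    for trial in range(400):
        # noisy simulated choice for whichever side is currently worth more
        chooses_apple = (apple_offer - true_indifference + random.gauss(0, 0.2)) > 0
        if chooses_apple:
            apple_offer -= step   # apple was preferred: offer less of it
        else:
            apple_offer += step   # orange was preferred: sweeten the apple offer

    print("estimated indifference point:", round(apple_offer, 2))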

Finally, we can also measure the negative consequences an animal will put up with to get to a reward.16 Usually, this is measured by putting something aversive (a small shock or a hot plate) between the animal and the reward, so that the animal has to cross it to reach the reward. (It is important to note that crossing the shock or hot plate is entirely up to the animal. It can choose not to cross if it feels the reward is not valuable enough.) A common experiment is to balance a less-appealing reward against a more-appealing reward that is given only after a delay.17 Because animals are usually hungry (or thirsty) during these experiments, they don’t want to wait for reward. Thus delay becomes a very simple and measurable punishment to use—how long will the animal wait for the reward?

Of course, humans are animals as well and all of these experiments work well in humans: How much effort would a smoker spend to get a puff of a cigarette?18 How much money will you trade for that coffee mug?19 What negative consequences will you put up with to achieve your reward?20 How long will you wait for that larger reward?21 Will you wait for a lesser punishment, knowing that it’s coming?22

These four ways of measuring value all come down to quantifiable observations, which makes them experimentally (and scientifically) viable. Economists, who study humans, typically use hypothetical choices (“What would you do if …?”) rather than real circumstances, which, as we will see later, may not access the same decision-making systems. It has been hard to test animals with hypothetical choices (because hypothetical choices are difficult to construct without language), but there is some evidence that chimpanzees with linguistic training will wait longer for rewards described by symbols than for immediately presented real rewards.23 (Remember the chimpanzees and the jellybeans?) This, again, suggests that language changes the decision-making machinery. More general tests of hypothetical rewards in animals have been limited by our ability to train animals to work for tokens. Recent experiments by Daeyeol Lee and his colleagues getting monkeys to work for tokens may open up possibilities, but the critical experiments have not yet been done.24

One of the most common ways to measure willingness to pay for something is a procedure called the progressive ratio—the subject (whether it be human or not) has to press a lever for a reward, and each time it receives the reward, the number of times it has to press the lever for reward increases, often exponentially. The first reward costs one press, the second two, the third four, the fourth eight, and so on. Pretty soon, the subject has to press the lever a thousand times for one more reward. Eventually, the animal decides that the reward isn’t worth that much effort, and the animal stops pressing the lever. This is called the break point. Things that seem like they would be more valuable to an animal have higher break points than things that seem like they would be less valuable. Hungry animals will work harder for food than satiated animals.25 Cocaine-addicted animals will work harder for cocaine than for food.26 A colleague of mine (Warren Bickel, now at Virginia Tech) told me of an experimental (human) subject in one of his experiments who pulled a little lever back and forth tens of thousands of times over the course of two hours for one puff of nicotine!
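
The arithmetic of a progressive-ratio schedule is simple enough to sketch directly, assuming the doubling progression described above; the threshold for how many presses the subject will make for one more reward is a hypothetical stand-in for the value of that reward.

    # Progressive ratio: each reward doubles the presses required for the next.
    # The break point is reached when the next requirement exceeds what the
    # subject is willing to do for a single reward (a hypothetical threshold).
    def run_progressive_ratio(max_presses_per_reward):
        requirement = 1          # first reward costs one press
        total_presses = 0
        rewards_earned = 0
        while requirement <= max_presses_per_reward:
            total_presses += requirement
            rewards_earned += 1
            requirement *= 2     # 1, 2, 4, 8, 16, ...
        return rewards_earned, total_presses

    print(run_progressive_ratio(1000))   # higher-value reward: 10 rewards, 1023 presses
    print(run_progressive_ratio(50))     # lower-value reward: 6 rewards, 63 presses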

It’s been known for a long time that drugs are not infinitely valuable. When given the choice between drugs and other options, both human addicts and animals self-administering drugsA will decrease the amount of drug taken as it gets more expensive relative to the other options.27 Drugs do show what economists call elasticity: the more expensive they get, the less people take. This is why one way to reduce smoking in a population is to increase the tax on cigarettes.

Elasticity measures how much the decision to do something decreases in response to increases in cost. Luxuries are highly elastic; nonluxuries are highly inelastic. As the costs of going to a movie or a ballgame increase, the likelihood that people will go decreases quickly. On the other hand, even if the cost of food goes up, people aren’t about to stop buying food. Some economists have argued that a good definition of addiction is that things we are addicted to are inelastic. This allows them to say that the United States is “addicted to oil” because we are so highly dependent on driving that our automobile use is generally inelastic to the price of oil. However, again, we face an interesting irrationality. The elasticity of automobile use is not linear28—raising the price of gasoline from $1 per gallon to $2 had little to no effect on the number of miles driven, but when a gallon of gasoline crossed the $3 mark, there was a sudden and dramatic drop in the number of miles driven in 2005. People suddenly said, “That’s not worth it anymore.” Of course, people then got used to seeing $3 per gallon gasoline and the number of miles driven has begun to increase again.
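
For reference, the standard textbook definition of price elasticity (which is what is meant here, though the cited economists may formalize it differently) is the percentage change in consumption per percentage change in price,

\[
\varepsilon = \frac{\%\,\Delta Q}{\%\,\Delta P} = \frac{\Delta Q / Q}{\Delta P / P},
\]

with demand called elastic when the magnitude of ε is greater than 1 and inelastic when it is less than 1. The gasoline example shows that ε itself can change abruptly as the price crosses certain thresholds, which is exactly what makes the response nonlinear.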

This irrationality is where the attempt to measure value gets interesting. In a recent set of experiments, Serge Ahmed and his colleagues found that even though the break point for cocaine was much higher than for sweetened water,B when given the choice between cocaine and sweetened water, almost all of the animals chose the sweetened water.29 (Lest readers think this is something special about cocaine, the same effects occur when examining animals self-administering heroin.30) Measuring value by how much the animal was willing to pay said that cocaine was more valuable than sweetened water. (Cocaine had a higher break point than the sweetened water did.) But measuring value by giving the animals a choice said that the sweetened water was more valuable. (They consistently chose the sweetened water over the cocaine.) The two measures gave completely different results as to which was more valuable.

One of my favorite examples of this irrationality is a treatment for addiction called Contingency Management, where addicts are offered vouchers to stay off drugs.31 If addicts come into the clinic and provide a clean urine or blood sample, showing that they have remained clean for the last few days, they get a voucher for something small—a movie rental or a gift certificate. Even very small vouchers can have dramatic effects. What’s interesting about this is that raising the cost of a drug by $3 would have very little effect on the amount of drug an addict takes (because drugs are inelastic to addicts), but providing a $3 voucher option can be enough to keep an addict clean, straight, and sober. Some people have argued that this is one of the reasons that Alcoholics Anonymous and its 12-step cousins work.C It provides an option (going to meetings and getting praised for staying sober) that works like a voucher; it’s a social reward that can provide an alternative to drug-taking.32

So why is it so hard to measure value? Why are we so irrational about value? I’m going to argue that value is hard to measure because value is not something intrinsic to an object.

First, I’m going to argue in this book that there are multiple decision-making systems, each of which has its own method of action-selection,33 which means there are multiple values that can be assigned to any given choice. Second, in the deliberative experiments we’ve been looking at, we have to calculate the value of the available options each time.34 (Even recognizing that options are available can be a complex process involving memory and calculation.) The value of a thing depends on your needs, desires, and expectations, as well as the situation you are in. By framing the question in different ways, we can guide people’s attention to different aspects of a question and change their valuation of it. In part, this dependence on attention is due to the fact that we don’t know what’s important. A minivan can carry the whole family, but the hybrid Prius gets better gas mileage, and that convertible BMW looks cooler. It’s hard to compare them.

This means that determining the true value of something depends on a lot of factors, only some of which relate to the thing itself. We’ll see later that value can depend on how tired you are, on your emotional state, and on how willing you are to deliberate over the available options.35 It can even be manipulated by unrelated cues, as in the anchor effect, in which an unrelated number (like your address!) can pull your estimate of a value toward that number.36

Economists argue that we should measure value by the reward we expect to get, taking into account the probability that we will actually get the reward, and the expected investment opportunities and risks.37 If we tried to explore all possibilities and integrate all of this, we could sit forever mulling over possibilities. This leads us to the concept of bounded rationality, introduced by Herb Simon in the 1950s, which suggests that the calculation takes time, that sometimes it’s better to get to the answer quickly by being less complete about the full calculation, and that sometimes a pretty good job is good enough.38 Instead of the full calculation, we use heuristics, little algorithms that work most of the time.
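
In its simplest textbook form (setting aside the investment opportunities and risks), that prescription is just the expected, probability-weighted value of each option,

\[
V(\text{option}) = \sum_i p_i \, r_i,
\]

where r_i is the reward of outcome i and p_i is the probability of actually receiving it. Bounded rationality’s point is that enumerating the outcomes and estimating those probabilities for every available option takes time we often do not have.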

The problem with this theory is that even when we are given enough time, we continue to use these heuristics. Economists and psychologists who favor bounded rationality (such as Gerd Gigerenzer39) argue that evolution never gives us time. But if the issue were one of time, one would expect that the longer one was given, the more rational one would be. This isn’t what is seen. People don’t change their minds about taking the middle-value television if they are given more time; in fact, with more time they become more likely to pick the middle television, not less.40

We pick the middle television in the three-television example because one of the algorithms we use when we want to compare the choices immediately available to us is to find the middle one. Another example is that we tend to round numbers off by recognizing the digits.41 $3.99 looks like a lot less than $4.00. Think about your “willingness to pay” $3.75 per gallon for gasoline relative to $3.50 per gallon. If both options are available, obviously, you’ll go to the station selling it for $3.50. But if it goes up from $3.50 to $3.75 from one day to the next, do you really stop buying gasoline? Now, imagine that the price goes up from $3.75 to $4.00. Suddenly, it feels like that gasoline just got too expensive. We saw a similar thing at the soda machine by my office. When the cost went up from 75¢ to $1, people kept buying sodas. But then one day it went up from $1 to $1.25, and people said, “It’s not worth it” and stopped. (It’s now $1.50 and no one ever buys sodas there anymore.) Sometimes the cost crosses a line that we simply won’t put up with anymore.
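
The “find the middle one” rule is simple enough to write down directly. The sketch below applies it to the television prices from the earlier example; the rule is just the heuristic described in the text, not a model taken from the cited studies.

    # "Pick the middle option" heuristic from the three-television example.
    def pick_middle(prices):
        ordered = sorted(prices)
        return ordered[len(ordered) // 2]   # middle element of the sorted list

    print(pick_middle([100, 200, 300]))   # -> 200
    print(pick_middle([200, 300, 400]))   # -> 300: same TVs, different choice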

Part of this is due to the mechanisms by which we categorize things. Whole dollar amounts draw our attention and we are more likely to pick them. In a recent experiment, Kacey Ballard, Sam McClure, and their colleagues asked people which they would prefer, $7 today or $20 in a week.42 (This is a question called delay-discounting, which we will examine in detail later in our chapter on impulsivity [Chapter 5].) But then they asked the subjects to decide between $7.03 today and $20 in a week. People were more likely to pick the $20 over the $7.03 than they were to pick the $20 over an even $7. In a wonderful control, they then asked the subjects to decide between $7 today and $20.03 in a week. This time, people were more likely to pick the $7. There is no way that these decisions are rational, but they do make sense when we realize (1) that making the decision requires determining the value of the two options anew each time and comparing them, and (2) that the heuristics we use to compare them prefer even dollar amounts.D
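
For comparison, here is what a pure value calculation of the $7-versus-$20 question might look like, using a hyperbolic discount of delayed rewards; the hyperbolic form and the rate k are common textbook assumptions, not parameters from the Ballard and McClure study. Notice that such a calculation treats $7 and $7.03 as essentially identical, which is exactly what the subjects did not do.

    # Illustrative delay-discounting comparison (see Chapter 5).  The
    # hyperbolic form and the rate k are hypothetical textbook choices,
    # not values from the cited experiment.
    def discounted_value(amount, delay_days, k=0.05):
        return amount / (1.0 + k * delay_days)   # value falls off with delay

    now = discounted_value(7.00, 0)      # $7 today keeps its full value
    later = discounted_value(20.00, 7)   # $20 in a week is discounted
    print(now, later)
    print("take the delayed $20" if later > now else "take the immediate $7")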

Value is an important concept to understanding decision-making, but our brains have to calculate how much we value a choice. That calculation is based on heuristics and simple algorithms that work pretty well most of the time but can be irrational under certain conditions. How do we actually, physically determine value? What are the neurophysiological mechanisms? That brings us to the differences between pleasure, pain, and the do-it-again signal.

Books and papers for further reading

• Daniel Ariely (2008). Predictably Irrational: The Hidden Forces that Shape Our Decisions. New York: HarperCollins.

• Daniel Kahneman (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.

• Scott Plous (1993). The Psychology of Judgment and Decision-Making. New York: McGraw-Hill.