13
Motivation

The specific goals that the decision-making systems are trying to satisfy arise from intrinsic goal functions evolutionarily prewired into our brains. However, we can also learn to identify new goals that we have learned lead to those intrinsic rewards or avoid those intrinsic punishments. The vigor with which we attempt to attain those goals depends on the level of intrinsic need and the opportunity cost of chasing those goals at the expense of others.

So far, we’ve talked about decisions as arising from evolutionarily learned reflexes, and from Pavlovian, Deliberative, and Procedural systems that learn from rewards and punishments. We’ve talked about how animals integrate information about the world and categorize situations to make their decisions. Throughout our discussion, however, we have sidestepped a fundamental question: What defines the goal? We identified opioid signaling as measuring the positive and negative components of reward and punishment, and dopamine signaling as measuring the difference between what was received and what was predicted (see Chapters 3 and 4). But neither of these defines what those positive rewards or negative punishments are.

The first explicit models of decision-making assumed that the goal was to maximize value or “utility.”1 Adam Smith (1723–1790) argued that everyone is trying to maximize his or her own utility.2;A From the very beginning of the economic literature, it was clear that utility could not be defined in a general way that would be the same for all people, and so the mathematicians Nicolaus Bernoulli (1695–1726) and his younger brother Daniel Bernoulli (1700–1782) proposed the idea of subjective utility, the suggestion that each of us has an internal measure of that utility.B But subjective utility still doesn’t define how our interaction with the world is translated into rewards and punishments.

Similarly, the original computer models of decision-making defined the goal as maximizing a reward function of time or situation, but never defined what the reward actually was.7 These models explicitly sidestepped the question that we are trying to address here. The authors of those computer models were trying to understand the decision-making algorithms in an imaginary (disembodied) computer world. When these algorithms were translated into robotics, the authors had to hard-code the goals into the robots.8

Intrinsic reward functions

Theorists studying how animals forage for food in the wild have argued that animals (including humans) are maximizing their energy gains and minimizing their energy costs.9 For example, imagine a hummingbird gathering nectar from a flower. To get the nectar, the hummingbird hovers in front of the flower, pulling nectar out. As the bird drinks the nectar, it becomes harder and harder to pull more out. At some point, the energy being spent flapping its wings fast enough to hover exceeds the energy it is pulling out of the flower, and it should go to another flower. Going to another flower is risky because the bird doesn’t know how much nectar is available at that other flower. Maybe it has already been tapped out. Maybe the bird will search for a long time. It is possible to calculate the optimal time that the bird should spend at a patch of food and when it should leave. This optimal time depends on the size of the patch, the expectation of how much food is left in the patch, the expected travel time to the next patch, and the amount of food the bird would expect to find at the next patch. These equations correctly predict stay and leave times for foraging and hunting birds, mammals, and insects; they predict transitions in monkey decision-making experiments; and they even predict when and where human hunters and gatherers look for food, and how people forage for information on the Web and in the world.10
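The patch-leaving calculation described above is the marginal value theorem from foraging theory: leave when the rate of gain in the current patch has fallen to the long-term average rate across patches. Here is a minimal numerical sketch in Python. The gain function, its parameters, and the units are illustrative assumptions, not data from any real hummingbird; the point is only the shape of the trade-off.

```python
import math

def gain(t, g_max=10.0, decay=0.5):
    # Cumulative energy gained after t seconds in a patch, with
    # diminishing returns (the nectar gets harder and harder to extract).
    return g_max * (1.0 - math.exp(-decay * t))

def best_leaving_time(travel_time, dt=0.01, t_max=60.0):
    # Numerically find the stay time that maximizes the long-term
    # rate of gain: gain(t) / (t + travel_time).
    best_t, best_rate = dt, 0.0
    steps = int(t_max / dt)
    for i in range(1, steps):
        t = i * dt
        rate = gain(t) / (t + travel_time)
        if rate > best_rate:
            best_t, best_rate = t, rate
    return best_t

# Longer travel between flowers means the bird should stay longer
# at each one before leaving.
near = best_leaving_time(travel_time=2.0)
far = best_leaving_time(travel_time=10.0)
```

This captures one of the stay/leave predictions the foraging experiments test: the farther apart the patches, the longer the optimal stay at each patch.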

But these foraging theories are based on the assumption that the only purpose of these animals is survival. Darwin’s natural selection theory tells us that what really drives success is not survival but the procreation of genes into future generations, in particular successful procreation of future successful procreators.11 There are numerous examples where animals sacrifice themselves for their genetic future. Even something as simple as a parent feeding its children is proof that there is more to the picture than individual survival.C We all know examples of parents sacrificing for their children, whether it be penguins starving themselves to keep eggs warm, starlings bringing food back to a nest, a polar bear protecting her cubs, or human immigrants sacrificing their dreams so their children can succeed in a new land.

Of course, there are also the classic studies of the social insects (bees, ants), which live in large colonies where sterile workers sacrifice for a queen and her children.12 The reason for this is now well understood: because of the way insect genetics works, a worker is more genetically similar to her sisters than she would be to her own children, so she evolutionarily prefers to help make more sisters rather than to make children of her own.

These descriptions assume that animals are actually calculating how to maximize their genetic success. I don’t know about you, but I don’t generally spend each morning thinking, “How can I increase my genetic success today?” I’m pretty sure that no other animal does either. In large part, this is because what is going to lead to genetic success is very hard to predict. Such a calculation would take a tremendous amount of brain power—an individual spending too much time worrying about it is unlikely to actually succeed at it, and would quickly become genetically outcompeted.

So what does define the goals we want? One suggestion is that we have intrinsic goal functions evolutionarily prewired into our brains,13 but that doesn’t say what those goals are. We are back to the hard-coded goals that the roboticists put into their robots. These authors argue that our evolutionary history has hard-coded goals into our brains that we then seek out irrespective of their actual evolutionary use. For example, dogs like to chase things, presumably because in the evolutionary past, the primary things they got to chase were big game like deer, moose, and elk. For many dogs, the primary things they get to chase now are cars. These concepts were pushed forward in the 1970s under the guise of sociobiology, which usually stated them too simply and was too unwilling to address the complexity of primate interactions.14;D

The origins of sociobiology were in animal ethology studies, particularly insect studies.18 While insects are often interpretable as little robots,19 with simple, recognizable routines (moths fly in a spiral toward lights) and simple failure modes (as anyone watching a moth fly toward a flame can see), birds and mammals are more complex, and humans particularly so. Human failure modes are also particularly complex, but that doesn’t mean they’re not identifiable. (We will address many of the ways that the decision-making systems break down [their failure modes] in the next section of the book.)

Human social interactions are incredibly complex. This does not, however, imply that they are random, unobservable, or unidentifiable. As discussed in Chapter 12, novelists and playwrights have been observing humans for millennia. As an audience, we recognize that individual humans have a consistent character. In a novel or a play or a movie, if someone does something that is “out of character,” it sticks out like a sore thumb. We have a very large portion of our brains devoted to recognizing social interactions. Does this mean we know which way an individual will jump? Not perfectly, but we can predict what an individual is likely to do, and we make our plans based on those predictions.

So what are those intrinsic goal functions? Obviously, when we are hungry, we want food; when we are thirsty, we want water. And obviously, we desire sex. Notice that a desire for sex is different from a desire for children. That evolution drove procreation through a desire for sex rather than through a desire for children is a classic example of an intrinsic reward function. Of course, some couples do have sex because they are trying to have a baby. But, often, sex is for its own pleasures, and couples are surprised by an unexpected pregnancy.

What’s interesting, of course, is that humans (like all animals) are actually very picky about sex—partners who are more likely to produce strong, healthy, and intelligent children are more attractive sexually.20 In addition, because humans require long commitments to raise successful children, humans, particularly women, often demand some sort of commitment before sex.21 A tremendous number of studies have examined the complex interactions that arise in the human mating game, showing, for example, that beauty is not simply in the eye of the beholder: there are standards of beauty, consistent across cultures and individuals, that signal information about health, fecundity, and intelligence.22

Certainly some components of beauty do seem to vary from culture to culture, but these components tend to entail elaborate preparations that show that we have friends and the time to maintain our appearances. Because humans live in communities (it really does take a village to raise a child in normal human societies23), many of the variable components of definitions of beauty require a complex interaction with others of the community to maintain. (Think how difficult it is to cut one’s own hair.) The specifics of these preparations may change culturally, but they consistently indicate time and effort put into appearance.

But food, water, and sex are not enough for us: we also want to be respected, and we seek out music, excitement, family, and friends. Babies seek out their mothers; mothers and fathers seek out their children. As a specific case, let’s take maternal interactions. In one of the classic stories in the history of science, early studies suggested that animals did things only for food reward.24 This suggested to researchers that children were “attached” to their mothers because they were fed by them.E In particular, the behaviorist John Watson (whom we have met before as one of the major players in the development of the concept of stimulus–response “habits”) argued that touching infants could only transmit disease and thus that kids shouldn’t be touched any more than necessary.25 A third of a century later, a young psychologist named Harry Harlow, setting up a new laboratory at the University of Wisconsin, came across a description of how children separated from their mothers on entering a hospital would scream and cry. A few weeks later, his rat laboratory still not ready for experiments, he watched monkeys playing at the Madison zoo and realized he could directly test whether this anxiety was due to the separation from their mothers or whether it was some generalized stress, like going to the hospital. Harlow began a very famous series of experiments about motherhood and found that infants needed the warmth of a mother (or a mother-analog) as much as they needed food.26

Although these experiments galvanized the animal rights movement27 and are still held up as terrible, nightmarish experiments (and would be very difficult to get approved under today’s animal-care regulationsF), they were key to understanding maternal interactions. As a consequence of these experiments, hospital procedures were changed. For example, infants in the neonatal intensive care unit (NICU) are now touched regularly by a parent or nurse, which has a dramatic effect on infant survival. These changes came about in large part through John Bowlby and his work on attachment theory.28 Although Bowlby arrived at his initial hypotheses simultaneously with Harlow’s first experiments (1958/1959), throughout the subsequent decades Harlow’s careful experimental work formed the basis for Bowlby’s successful campaign to change hospital procedures, as evidenced by the prominent place Harlow’s experiments had in Bowlby’s influential book on attachment theory. Every child who has survived the NICU owes his or her life in part to Harry Harlow and those terrible experiments.G

We now know that maternal interactions between children and their parents (in humans both mothers and fathers) are driven in part by the neurotransmitter oxytocin, a chemical released in the hypothalamus in both mothers and infants shortly after birth. Oxytocin seems to be a key component of the social glue that holds couples and families together.32 Externally delivered oxytocin makes people more likely to trust a partner in economic decision games. In mothers and infants, oxytocin is released in response to holding children. (This was one of the points of Harlow’s work—part of the maternal bond is due to the social contact.33 In fact, oxytocin both induces mutual grooming and is released during mutual grooming in a number of mammalian species, including humans.) Merely holding and playing with a child releases oxytocin, even in men. I still remember vividly the first time I held each of my children and fed them and the absolutely overwhelming emotion associated with it. Even today, I really like cooking for my kids and seeing them like the food I’ve made.

Like eating, drinking, and sex, parental care is an intrinsic goal function. We don’t eat or drink because it’s necessary for survival; we eat and drink because we enjoy it, because it satisfies a need we feel in our bodies. Similarly, we don’t have sex because we are trying to procreate (usually). Proof that parental care is an intrinsic goal function can be seen in the way people treat their pets (and, perhaps, even in the fact that people keep pets in the first place). At one time, it might have been true that dogs were hunting companions, and that cats kept granaries clean of rodents, but most households in urban and suburban settings aren’t hunting for survival, nor do they have granaries to be kept rodent free.H Nevertheless, more than half of U.S. households have pets of one sort or another. People with pets live longer and are happier than people without.34 Pets, particularly dogs and cats, capture some of that intrinsic parental-care goal function. Just as dogs chase cars because they love to chase things, we have pets because we need that social interaction. Dogs have been shown to be therapeutic stress relievers both for traumatized children and for college students studying for finals. In college, during finals week, local groups would come around with their therapy dogs. I can say from personal experience that there is something remarkably relaxing about taking a break from studying to play with a big St. Bernard.

So what are the intrinsic goal functions of humans? The list would probably be very long and may be difficult to pin down directly, in part because some of these intrinsic goal functions develop at different times. It is well known that interest in the opposite sex develops in stages, from not even noticing differences in the very young (say, younger than seven years old), to not wanting to talk to the other sex (the famous “cootie” stage, say seven to eleven), to intrigue (twelve, thirteen), and eventually to full-blown obsession (say, seventeen). Maternal and paternal interests also famously switch on at certain ages. But all of these intrinsic goal functions contain some learning as well. Although teenage girls are often obsessed with babies, teenage boys are often disdainful. However, teenagers, particularly teenage boys, with much younger siblings are often more parental (in the sense of having positive reactions to very young kids) than teenagers without younger siblings.

Many of the intrinsic goal functions in humans are cognitive. One of the most interesting, I find, is the satisfaction of a job well done. There is an inherent feeling of success not just in completing the job, but in knowing that it was done well.35 The act of discovery is itself a remarkably strong driving force. Although some have argued that curiosity really reflects the information gathering needed for proper behavior within an environment, several recent computational studies have found that an inherent drive to explore produces better learning than simple reward-maximization algorithms, and that humans and other animals seem to have an over-optimism about unexplored options that produces exploration.36 Even other primates will work harder for curiosity than for food rewards.37 Many scientists (including myself!) feel a strong emotion of accomplishment when they make a new discovery: we now know something that was not known before, and that emotion of accomplishment is a strongly motivating force.38

Similarly, many of the clear intrinsic goal functions of humans are social, such as the respect of one’s peers (think, for example, of sports stars who want to be “the highest paid wide receiver” or celebrities who can’t give up the limelight). This respect is often incorrectly called social “rank,” but primate societies are very complex and differ from species to species.39 For example, baboon societies include parallel hierarchies of females and males,40 and a remarkable reverse hierarchy in the males, where the most successful males (the ones who make decisions that everyone follows and the ones who have the most matings and the most children) are the ones who lose the fights (because they walk away).41;I

Human societies are particularly complex and leadership roles vary with situation and context.42 In small bands and tribes, social rank can be very dynamic. Individual expertise provides transient leadership roles and corresponding rank increases. In one of my favorite examples of this, for her Master’s thesis, my wife watched children in the Carnegie Museum in Pittsburgh.43 Some children are dinosaur experts, while other children are not. In the dinosaur section of the museum, dinosaur-expert children were treated with more respect and deference than their peers. As a control, she also watched those same children in the geology section, where their expertise no longer gave them that rank boost. Of course, rank can bleed over from one’s expertise to other aspects of life. The dinosaur-expert children were more likely to be asked to make unrelated decisions (such as where to go for lunch) than the nonexpert children. We see this, for example, in celebrities (such as actors or TV pundits) asked to comment on decisions as if they were scientific experts.

Neurophysiologically, many studies have found that these intrinsic goal functions increase activity in the dopamine systems (remember, dopamine signals value-prediction error and serves as a training signal) and increase activity in a specific set of ventral structures known to be involved in pleasure, displeasure, and drive. Generally, these studies have asked what turns on these ventral decision-making structures, including both the ventral striatum, which sits underneath the rest of the striatum, and the insula (which also sits at the ventral side of the brain, but more anterior, that is, farther forward, than the ventral striatum).44 Although these studies can’t answer whether these functions are intrinsic or learned, because they have to be done in adults, they find an intriguing list of signals that drive this activity, including food, money, drugs, attractive faces, successfully playing videogames, music, social emotions such as love and grief, even altruism (e.g., giving to charity) and justice (e.g., punishing unfairness).45

In particular, humans seem to have hard-coded goals that encourage positive social interactions, particularly within defined groups.46 Humans seem to have an intrinsic goal of a sense of fairness, or at least of punishing unfair players. In one of the more interesting studies, people interacted with two opponents, one of whom played fairly and one of whom didn’t.47 The people and the two opponents were then given small (but unpleasantJ) electric shocks. The key question in the experiment was the activation of the ventral reward-related structures of the subject watching the two opponents get electric shocks. In both men and women, there was more activation in the insula (signaling a negative affect) when the fair opponent got shocked than when the unfair opponent got shocked. In men, there was even activity in the ventral striatum (signaling a positive affect) when the unfair opponent got shocked. Part of the intrinsic goal function in humans is a sense of fairness in our interactions with others. We will see this appear again in our discussion of morality and human behavior (Chapter 23).

In the earlier chapters on decision-making, we discussed how we make decisions based on our goals, but didn’t identify what those goals were. We have mentioned some of the goals here, but the goals vary in large part from species to species and from individual to individual within a single species. Identifying the goals that generalize across human behavior and the distributions of goals that vary between humans are both areas of active research today.49 Some of those goals are learned over evolutionary timescales and are hardwired into an individual; others are sociological in nature, depend on inherent processes in our cultural interactions, and must be learned within the lifespan; and some are due to an interaction between genetics and culture.50 The key point here, however, is that although goals are evolved over generations and learned over time in such a way that they have helped procreate our genetics, the goals that are learned are only indirectly aimed at successful procreation. Whether it be dogs chasing cars or college boys surfing the Internet for porn, it is the intrinsic goals that we are trying to satisfy. The decision-making systems that we have explored have evolved to satisfy these goals. In later sections, we will see how these indirect intrinsic goals can lead us astray.

Learning what’s important

Hunger informs us when we need sustenance, thirst when we are dehydrated, but people also eat when they are not hungry and drink when they are not thirsty. Actually, that’s not the full story—a more careful statement would be that we can become thirsty even when we are not dehydrated, and hungry even when we don’t need food. This occurs because certain cues (visual cues, odor cues) can drive us to become hungry and to desire food or to become thirsty and to desire water.

Cues that remind us of food, like the smell of baking bread or an image of pizza flashed on a screen, can make us hungry. Animals that encounter a cue predictive of food are often driven to eat.51 Similarly, cues that appear before something bad make us afraid.52 We’ve already seen how these cues can lead to Pavlovian action-selection, which we saw in Chapter 8 primarily involves emotions and somatic actions. In addition, these cues can create motivation that drives the other action-selection systems.53;K

These effects are known as Pavlovian-to-instrumental transfer (PIT) because they involve training similar to the learning we saw in Pavlovian action-selection systems (Chapter 8) and an influence on the other two action-selection systems (Deliberative and Procedural, which were historically known as “instrumental” learning; see Footnote 6C).55 In PIT, as in Pavlovian action-selection, two cues are presented such that one is informative about whether the other is coming (Pavlov ringing a bell means food will be delivered soon). However, here, instead of driving action-selection (salivate), the learned contingency changes your motivation, your desire for that outcome. In a sense, the bell makes the dog think of food, which makes it hungry.

Scientists studying the effects of marketing have suggested that restaurant icons (such as the “golden arches” of McDonald’s or the instantly recognizable Coke or Pepsi symbols) have a similar effect on people, making them hungry (or thirsty) even when they are not. Presumably, this only works on people with positive associations with these symbols. It might be very interesting to measure the emotional associations with certain fast-food restaurants (which tend to produce strongly positive emotions in some people and strongly negative emotions in others) and then to measure the effects of their symbols on choices, but, to my knowledge, this has not yet been done. However, Sam McClure, Read Montague, and their colleagues found that individual preferences for Coke or Pepsi were reflected in preferences for identical cups of soda bearing different labels.56 They found that the label changed what people liked: Pepsi-liking people preferred the Pepsi-labeled cup, even though the two cups had been poured from the same bottle. Similarly, Hilke Plassmann and her colleagues studied the effects of pricing on the perception of wine and found that people thought higher-priced wines tasted better, even when they were given identical wines in expensive and inexpensive bottles.57

The influence of this Pavlovian learning of contingencies (cue implies outcome) on instrumental action-selection (Deliberative, Procedural) can be either general, leading to increased arousal and an increase in speed and likelihood of taking actions overall, or specific, leading to increased likelihood of taking a specific action. In generalized PIT, a cue with positive associations produces a general increase in all actions, while in specific PIT, the cue produces an increase in actions leading to the reminded outcome but not to other outcomes.58 We will address generalized PIT in the next section, when we talk about vigor and opportunity costs; in this section, I want to look at the effects of specific PIT.

Imagine, for example, training an animal to push one lever for grapes and another lever for raspberries. (Simply, we make the animal hungry, provide it levers, and reward it appropriately for pushing each lever. Of course, in real experiments, these are done with flavored food pellets, which are matched for nutrition, size, etc. so that the only difference is the flavor.) Then, imagine training the animal (in another environment) that when it hears a tone, it gets a grape. (Simply, we play the tone and then quickly thereafter give the animal the grape. Do this enough, and the animal will expect that getting a grape will follow the tone.) Putting the animal back in the lever environment, when we play the tone, the animal will be more likely to push the lever for grapes. We can test for generalized versus specific PIT by examining whether the animal presses both levers (a generalized increase in activity) or only the grape lever (a specific increase in the action leading to getting grapes). What’s happening here is that the animal is reminded of the possibility of getting grapes by the tone, which leads to an increase in the likelihood of taking an action leading to getting grapes. In a sense, the tone made the animal hungry for grapes.
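The grape-and-raspberry test above can be caricatured in a few lines of code. This is a toy model, not a fit to any real experiment: the lever names, the values, and the exponential rate rule are all illustrative assumptions. The point is only the logic of the test: a specific-PIT cue boosts just the action whose outcome matches the cue, while a general-PIT cue invigorates everything.

```python
import math

# Toy model of Pavlovian-to-instrumental transfer (illustrative only).
LEVERS = {"grape_lever": "grape", "raspberry_lever": "raspberry"}
BASE_VALUE = 1.0   # instrumental value learned for each lever during training
CUE_BOOST = 1.0    # extra motivational value contributed by the tone

def press_rates(cued_outcome=None, kind="specific"):
    # Response rate for each lever grows with its motivational value.
    # The cue adds value to the lever whose outcome matches it (specific
    # PIT) or to every lever (general PIT, a blanket invigoration).
    rates = {}
    for lever, outcome in LEVERS.items():
        value = BASE_VALUE
        if cued_outcome is not None and (kind == "general" or outcome == cued_outcome):
            value += CUE_BOOST
        rates[lever] = math.exp(value)  # presses per minute, arbitrary units
    return rates

baseline = press_rates()                          # no cue: both levers equal
specific = press_rates("grape", kind="specific")  # only the grape lever speeds up
general = press_rates("grape", kind="general")    # both levers speed up equally
```

Comparing the two cued conditions against baseline is exactly the discrimination the experiment makes: does the tone raise pressing on both levers, or only on the lever that leads to grapes?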

Of course, this also happens in humans. For an addict, seeing the paraphernalia associated with the drug leads to craving for the drug.59 Some cigarette smokers prefer to self-administer a nicotine-free cigarette rather than an intravenous injection of nicotine,60 suggesting that the conditioned motivational cue can be even stronger than the addictive substance. And, in the examples we keep coming back to in this section, marketing symbols increase our internal perceptions of hunger and thirst.61 I took my kids to a movie theater last week, and, while sitting waiting for the movie, my son complained that all of the food adsL were making him hungry, even though he had just eaten a large lunch. He said, “I wish they wouldn’t show those ads, so I wouldn’t get hungry, because I’m already so full.”

Vigor—Need and opportunity costs

Most of the choices in our lives entail doing one thing at the expense of another: if we are working at our desk, we are not exercising; if we are sleeping, we are not working… There are dozens if not hundreds of self-help books aimed at time management. The losses that come from not doing something can be defined as an opportunity cost62—what opportunities are you losing by choosing Option A over Option B?

Yael Niv and her colleagues Nathaniel Daw and Peter Dayan recently suggested that the average (tonic) level of dopamine is a good measure of the total reward available in the environment.63;M This suggests that tonic dopamine is related to the opportunity cost in the environment. An animal being smart about its energy use should change the vigor with which it works as a function of this opportunity cost. (If there are lots of other rewards in the environment, one should work faster, so one can get to those other rewards quicker. On the other hand, if there are not a lot of other rewards, then don’t waste the energy doing this job quickly.)
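The core trade-off here can be written down simply. Suppose that responding with latency tau incurs an effort cost that grows as you act faster (roughly effort_cost / tau), while the delay itself forfeits tau times the average reward rate of the environment (the opportunity cost). Minimizing the sum gives an optimal latency of sqrt(effort_cost / avg_reward_rate). This simplified form follows the spirit of Niv and colleagues' model rather than its exact equations; the cost terms and parameter names are my own shorthand.

```python
import math

def optimal_latency(effort_cost, avg_reward_rate):
    # Acting at latency tau costs roughly effort_cost / tau (faster
    # responses take more effort), while the delay forfeits
    # tau * avg_reward_rate of other rewards (the opportunity cost).
    # Minimizing effort_cost / tau + tau * avg_reward_rate over tau
    # gives tau* = sqrt(effort_cost / avg_reward_rate).
    return math.sqrt(effort_cost / avg_reward_rate)

# A richer environment (a higher average reward rate, which on this
# hypothesis corresponds to higher tonic dopamine) shortens the
# optimal latency: work faster when opportunity costs are high.
lean = optimal_latency(effort_cost=1.0, avg_reward_rate=0.1)
rich = optimal_latency(effort_cost=1.0, avg_reward_rate=1.0)
```

Note how the prediction falls out of the arithmetic: when there is little else worth getting, dawdling is cheap; when the world is rich, every second of slowness costs forgone reward.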

Interestingly, this tonic-dopamine-as-vigor hypothesis unifies the role of dopamine in learning (phasic bursts drive learning), in incentive salience (dopamine marks cues that lead to reward), and in the motoric effects seen in Parkinson’s disease, Huntington’s disease, and other dysfunctions of the basal ganglia.66

Parkinson’s disease occurs when the dopamine cells die, leaving the person with a severe deficiency in dopamine available to the basal ganglia.67 (Remember from Chapter 10 that the basal ganglia use dopamine to learn and drive Procedural learning.) For years, dopamine was thought to drive movement because excesses in dopamine to the basal ganglia produced inappropriate movements (called dyskinesias, from the Greek δυσ- [dys-, bad] + κίνησις [kinesis, movement]), while deficiencies in dopamine to the basal ganglia produced a difficulty initiating movements and a general slowing of reaction time (called akinesia or bradykinesia, from the Greek ἀ- [a-, without] and βραδύ- [brady-, slow]).68

The other thing that drives motivation and the willingness to work is, of course, need. As we discussed earlier in this chapter (Intrinsic goal functions, above), a major role of motivation is to identify your needs—a state of dehydration means you need water. This is almost too obvious to say, but it is important to remember that these are not binary choices—it is not “Am I hungry?” but “How hungry am I?” The hungrier you are, the more willing you are to try new foods, the more willing you are to work for food, and the better that food tastes. Vigor also depends on how badly you need that outcome.69

Several authors have suggested that dopamine levels drive invigoration, from both the opportunity cost and need perspectives.70 Basically, these authors suggest that tonic dopamine levels enable an animal to overcome effort—the more dopamine available, the more the animal is willing to work for a result. Intriguingly, this has evolutionary similarities to the role of dopamine in very simple vertebrates, where dopamine invigorates movement.71

Increasing dopamine can drive an animal to work harder for a specific reward cued by visual or auditory signals.72 The specific interactions between dopamine, learning, invigoration, and reward are an area of active research in many laboratories around the world, and a source of much controversy in the scientific literature.73 Most likely, dopamine plays multiple roles, depending on its source, timing, and neural targets.

In any case, how willing an animal is to work for a reward or to avoid a punishment is an important motivational issue.74 Vigor is very closely related to the issue of general PIT discussed earlier in this chapter. The presence of cues that signal the presence of rewards likely implies that the world around you is richer than you thought. If there are more rewards around to get, working slowly means paying more of that opportunity cost, and a smart agent should get the job done quicker so that it can get back to those other rewards.

Summary

In previous chapters we talked about action-selection, but not about what motivated those actions. Motivation can be seen as the force that drives actions. Motivation for specific rewards depends on the specific needs of the animal (intrinsic goal functions), which can vary from species to species. Intriguingly, humans seem to have intrinsic goal functions for social rewards. In addition to the intrinsic goal functions, cues can also be learned that can drive motivation, either to specific outcomes (specific Pavlovian-to-Instrumental Transfer) or to invigorate the animal in general (general Pavlovian-to-Instrumental Transfer). These motivational components are an important part of the decision-making system, because they define the goals and how hard you are willing to work for those goals, even if they don’t directly select the actions to take to achieve those goals.

Books and papers for further reading

• David W. Stephens and John R. Krebs (1987). Foraging Theory. Princeton, NJ: Princeton University Press.

• Robert Wright (1995). The Moral Animal: Why We Are the Way We Are: The New Science of Evolutionary Psychology. New York: Vintage.

• Kent C. Berridge (2012). From prediction error to incentive salience: Mesolimbic computation of reward motivation. European Journal of Neuroscience, 35, 1124–1143.

• Yael Niv, Nathaniel D. Daw, Daphna Joel, and Peter Dayan (2006). Tonic dopamine: Opportunity costs and the control of response vigor. Psychopharmacology, 191, 507–520.