Paul Rozin and Peter M. Todd
The authors of this chapter and, presumably, all its readers are classified as vertebrates, in the class Mammalia. This class is a major group of vertebrates and includes those of greatest concern to all of psychology. The class is named and defined in terms of its mode of feeding infants. Among the mammals, two major orders, the Carnivora and the Insectivora, are named for their feeding habits. Many important taxa of nonmammals can also be distinguished by their feeding habits. Evolutionary biologists generally hold that the three most important pieces of information about an unknown animal are its taxonomic position, its feeding habits, and the ecological setting in which it lives. The three are closely related, especially ecology and food. Food choice and ecology are almost certainly the major forces directing the evolution of animals. And the defining feature of animals is that they have to obtain food from living organisms in the external world.
Food (including water) holds a special place for biology and for psychology. Obtaining food is one of the six basic biological functions that engage behavior: breathing, excretion, sleeping, protection (avoiding harm), mating, and feeding. Breathing and excretion do not play a major role in psychology (pace Freud) because the behaviors, though frequent, are very basic and not very different between humans and nonhumans, and they do not vary across individuals in interesting ways. Sleeping, which is surely a boring behavior, has fascinated psychologists and often gets its own chapter in introductory psychology texts. Protection (avoiding predators, building nests, finding safe places to sleep, parasite avoidance) receives little attention in psychology though these behaviors are often quite elaborate (although see Barrett, Chapter 9, this volume; Duntley, Chapter 10, this volume; Schaller, Chapter 7, this volume; Silverman & Choi, Chapter 8, this volume). Parasite avoidance in particular has recently come to the attention of psychologists as a result of a renewed interest in disgust, driven in substantial part by the work of evolutionary psychologists (e.g., Curtis, 2013; Tybur, Lieberman, Kurzban, & DeScioli, 2013). A major, if not the major, source of parasites is ingestion, that is, food and eating.
Eating occupies more time and thought than most of these other activities and plays a crucial role in biological and cultural evolution. And yet eating and food choice, other than abnormalities therein (obesity, eating disorders) and regulation of food intake (hunger, thirst), get little or no attention in introductory psychology texts.
Food selection is performed very frequently in animals, including humans, and varies widely across species and within our special favorite species, Homo sapiens. In terms of both survival and elaboration, it is probably the most important psychologically relevant biological function. Although psychologists show almost no interest in food choice, it is food and water that were the core of the behaviorist enterprise that dominated American psychology in the mid-20th century. This was simply because they were convenient: An animal can work for hundreds of small food and water reinforcements a day.
Food choice and accompanying changes in the environment, probably co-occurring, are generally cited as the most important forces in human biological and cultural evolution. The invasion of the savannah niche by previously forest-dwelling hominids is a major step in the evolution of humans and their big brains. Among the most profound changes and advances that have caused the great elaborations of modern human life, through biological and cultural evolution, are meat consumption and hunting, the discovery and harnessing of fire for cooking (Wrangham, 2009), and the domestication of plants and animals (Diamond, 1997). All these are about food.
We claim that what has made humans so special is a combination of changes in food choice (P. Rozin, 1976) and accompanying changes in sociality (Humphrey, 1976). Two great problems humans have faced in their evolution and brain expansion are how to figure out, from an enormous number of options, what is toxic and what is nutritive—the omnivore's dilemma—and how to coordinate the activities of multiple individuals to satisfy the needs of an omnivorous food-selection pattern.
The enterprises of both evolutionary biology and psychology have two explanatory components (P. Rozin & Schull, 1988). One is to determine the adaptive value of a current behavior, in terms of the ancestral environment in which it evolved. The second is actually tracing out over time how something came to appear in our species. This is particularly challenging for evolutionary psychology, because unlike the skeleton, behavior leaves few fossilized traces.
The concept of preadaptation (also called co-optation and exaptation; Buss, Haselton, Shackelford, Bleske, & Wakefield, 1998; Gould, 1991) figures centrally in trying to account for the food world of contemporary human beings in terms of its evolved biological and cultural history. Mayr (1960) proposed that the major source of evolutionary “novelties” is the co-opting of an existing system for a new function, or preadaptation. Preadaptation can either replace an original function or add new functions to an existing system. A food-relevant example is the human mouth. The teeth and tongue evolved for food handling. However, by a process of preadaptation, they are now shared by the language expression system. Teeth and tongue are critical in pronunciation, but they did not evolve for that purpose. A process like preadaptation can be seen in both individual development and cultural evolution; in development it can be described as the accessing of previously inaccessible systems for a wider range of activities, functions, or elicitors (P. Rozin, 1976), whereas in cultural evolution, a new discovery can be leveraged quickly to new applications without having to wait for a genetic change.
Food itself has come to serve many functions—aesthetic, social, and moral—in addition to its original nutritive function (Kass, 1994; P. Rozin, 2007). The food vocabulary has expanded to encompass metaphorical functions, again by a process of preadaptation. The words taste and distaste indicate general aesthetic judgments. In Hindu India, food and eating are deeply social and moral activities (Appadurai, 1981).
The food cycle is a description of a sequence of activities that usually terminates in the consumption of food. For most animals, it begins with arousal, a motive for searching for an appropriate food item. This is the part of the cycle that has been of most interest to psychologists, in the form of two motivational systems—hunger and thirst. Hunger is principally activated by a shortage (or anticipated shortage) of energy. Thirst is activated by a shortage of water, the most fundamental nutrient. A possible third system, sodium appetite, clearly exists in rats and some other animals (Schulkin, 1991) and may exist in humans.
The second phase of the food cycle is search. Search has two very important psychological components. One has to do with what to search for, that is, the identification of good candidates for ingestion. This is particularly important for a generalist (omnivorous) animal, such as humans. The second has to do with the pattern of search—where to look and when to shift from one foraging area to another to maximize energy input with respect to energy expenditure. This aspect of behavior has been well studied in animal behavior in the framework of optimal foraging theory.
The third stage of the food cycle involves the capture of food, once it is identified. This may be trivially easy for some grazing animals, but for others, plant foods may be difficult to access and require specific adaptations, witness the giraffe. For animals that consume some animal life (carnivores and omnivores), the capture phase may be by far the most challenging in the food cycle. A wolf pack capturing an antelope involves many highly honed skills including social skills. The speed of a cheetah is clearly an adaptation to capturing swift prey, as is the sonar system of some types of bats, used to both detect and capture insects. Some of the best and most exciting examples of “arms races” occur in this area, such as the coevolution of bat predatory abilities (e.g., sonar) and moth protective predator detection abilities (Roeder, 1998).
The fourth stage is preparation of food for consumption, subsequent to capture. Again, for grazing animals or insect eaters, this phase is absent or minimal, but for some animals, access is a critical challenge. Oyster drills have to make holes in the oyster shell to obtain the meat inside, and many mammals and some birds have to crack nuts or shells. Food storage is also a part of this stage. Preparation becomes a major aspect of the relation of modern humans to food.
The fifth stage, consumption, is vitally important but usually less interesting. Of course, for humans, the meal is quite an elaborate and social consumption activity.
Depending on its type of food, an animal's adaptations to its food world will elaborate different phases of the food cycle. For the specialist, an animal that eats one relatively small class of foods, the identification of food (e.g., bamboo leaves for a panda) can be innate, and preparation and consumption are straightforward. Carnivores eat a wider class of foods, but their foods can often be encoded in a rather simple way, so that the detection of food is not problematic. For some species of frogs, if it is small and moving, it is food. There are adaptations needed for capturing prey (such as the speed of the cheetah), identifying vulnerable prey, and deciding where to forage. The same considerations hold in general for other animals with limited classes of food, such as exclusive leaf or exclusive fruit eaters.
The generalist eats a wide range of different foods. One subset of generalists, omnivores, consumes both animal and plant foods. For the generalist, there is no way to prespecify what is edible and what is not, or what is toxic, from the enormous set of food possibilities. This requires learning, but with some biological predispositions. Unlike plant generalists, omnivores have two more problems. First, it is usually more challenging for them to find and capture their animal foods. Second, animal foods are more likely to harbor dangerous pathogens, so the omnivore must manage to avoid both plant poisons and harmful micro-organisms contained in animal flesh. The latter is a special problem for modern humans because, for the past 8,000 years or so, they have lived close to domestic animals. But there is one major advantage that a carnivore or omnivore has: The animals they eat are much more like them, biochemically, than are plants. As a result, diets high in meat are nutritionally complete. The plant generalist faces the additional problem of establishing a repertoire of consumable species that together, but not separately, meet all nutritional needs. Mayr (1974) has identified mate choice as a closed system, meaning the target can be substantially prespecified in the genetic program, and food selection for generalists as an open system, meaning that the category of food is not well specified in advance, and much acquisition is involved.
Thus, it is the omnivores that face the greatest psychological food challenges. The return for this is an ability to live almost anywhere. It is no accident that three of the great omnivores, humans, rats, and cockroaches, are found all over the world. The demands of omnivore (and generalist) food identification establish a selection pressure for bigger and more computationally sophisticated brains (Milton, 1981).
Specialists have one system indicating a need for food and another for identifying the specific food. Humans and other generalists have at least two motivation systems, hunger and thirst, but for both, there is a major problem in identifying and innately specifying what entities in the world satisfy each motive. Thus, they also have a bit of the prewired specialist in them, that is, subsystems with specific arousal and motivation mechanisms (e.g., thirst) and relatively well-defined target entities (e.g., water) that can satisfy the motive.
For humans and other mobile species, finding food involves two primary steps: exploring for a new food source or returning to a previously encountered one, and exploiting (consuming or harvesting) that resource until the decision to leave that resource (and possibly return to exploring for another). The ways that other animals solve the problem of sufficient energy and nutrient intake have been widely investigated in behavioral ecology, including the framework of optimal foraging theory (Stephens & Krebs, 1986). For example, shore crabs preying on mussels will preferentially work to open mussels from which they can get the highest net energy gain: Larger mussels have more meat but it takes more time to open them (Elner & Hughes, 1978). The same theoretical approach has been combined with the tools of ethnography and archaeology to study human foraging behavior in the field of human behavioral ecology (Hawkes, O'Connell, & Rogers, 1997; Winterhalder & Smith, 2000). Two important threads of research on optimal foraging are the efficacy of foraging behaviors and the mechanisms driving those behaviors. The data on how precisely foraging is tuned to energy and nutrient needs are impressive, encompassing many species including humans. Less is known about the mechanisms that account for the finely tuned behavior, which involve biologically evolved tuning systems using acquired input from the environment for purposes of calibration.
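The crab example reflects the classic prey-choice logic of optimal foraging theory: each prey type can be ranked by its profitability, the net energy gained per unit of handling time. A minimal sketch of that calculation (the energy and handling-time figures below are hypothetical illustrations, not Elner and Hughes's measurements):

```python
# Illustrative prey-choice profitability ranking.
# Profitability = net energy gained / handling time.
# All numbers are hypothetical, chosen only to show the trade-off.

def profitability(energy_joules, handling_secs):
    """Net energy gain per second of handling."""
    return energy_joules / handling_secs

# Hypothetical mussels of increasing size: bigger shells hold more
# meat but take disproportionately longer to crack open.
mussels = {
    "small":  {"energy": 100.0, "handling": 30.0},
    "medium": {"energy": 250.0, "handling": 60.0},
    "large":  {"energy": 400.0, "handling": 160.0},
}

ranked = sorted(
    mussels,
    key=lambda m: profitability(mussels[m]["energy"], mussels[m]["handling"]),
    reverse=True,
)
# Medium mussels yield the highest rate here (250/60 ~ 4.2 J/s), so a
# rate-maximizing forager should prefer intermediate sizes.
print(ranked)  # → ['medium', 'small', 'large']
```

The point of the sketch is that neither the largest nor the most easily handled item need be the best choice; the rate of return is what matters.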
When seeking food, an organism can search until a possible food resource is found, then assess the resource and decide whether it is edible and worth eating or collecting, and repeat this process until the decision is positive. Such exploration for new resources can follow trajectories that are shaped to cover the region to be searched without revisiting previously seen locations (Bell, 1991). It is also typically guided by the use of distal cues indicating the presence of those resources. For humans, the cues used are primarily visual, including the presence of other species seeking the same resource or the number of people around a resource as an indication of how rich it might be (Goldstone & Ashpole, 2004), as well as the communication signals of other conspecifics when foraging socially in groups. Silverman and Eals (1992) proposed that men and women would have different evolved exploration abilities stemming from sex differences in Pleistocene foraging roles, with men using more orientation-based navigation appropriate for pursuing and bringing home wide-ranging mobile prey, and women using more landmark-based searching appropriate for finding and returning to local plant-based foods. A variety of sources of evidence supports this hunter-gatherer theory of sex differences in spatial abilities (Silverman, Choi, & Peters, 2007), though much of it has been from laboratory studies with low ecological validity (but see Pacheco-Cobos, Rosetti, Cuatianquiz, & Hudson, 2010, for a field study). Men are also predicted to be better at finding an efficient path home after a foraging trip (e.g., to bring captured prey back quickly), in terms of minimizing distance traveled, as Silverman et al. (2000) found in an exploration task set in the woods.
If individuals have previously encountered a renewable resource (or it was not fully depleted when they left), then it could be profitable to return to it, depending on the other options available. In this case, memory for the location of the resource will be useful. Assuming the same evolved sex differences in foraging just mentioned, New, Krasnow, Truxaw, and Gaulin (2007) predicted that women would be better than men at recalling the spatial locations of plant-based foods. To test this, they took people around a farmers market to sample the foods at different stalls, and then they gave them a surprise memory task, asking them to point in the direction of the stall for each food they had tried. Women were indeed more accurate than men, by 7 degrees on average. But men and women were both better at remembering the locations of calorie-dense foods (such as almonds and olive oil) than calorie-sparse ones (lettuce, cucumber), indicating foraging behavior designed to promote return to the most profitable food resources.
Once a food resource has been found, the forager can decide how long to exploit that resource (e.g., consuming or harvesting it). If the resource is a single item, the decision can be about how much to consume (e.g., of a meal) or how to share it with others (e.g., for a hunted animal). The former is controlled in part by well-studied satiety mechanisms that can be strongly affected by top-down visually driven influences, as shown by people trying to eat until all the soup is gone in “bottomless bowls” (Wansink, Painter, & North, 2005) and by reduced regulation of amount eaten in “dark restaurants” (Scheibehenne, Todd, & Wansink, 2010). Memory of amount recently consumed can also determine satiety: Amnesic patients will consume two or three consecutive lunches, because they do not remember having just consumed a culturally appropriate amount for a meal (P. Rozin, Dow, Moscovitch, & Rajaram, 1998). How much to eat is also influenced by social norms for consumption (including eating more when in groups—de Castro & de Castro, 1989), copying the amount eaten by other positively valued models (but not by negative, e.g., obese, models—McFerran, Dahl, Fitzsimons, & Morales, 2010), and environmental cues that are predominantly cultural (e.g., bowl or utensil size—see Wansink, 2006). Food sharing may be the outcome of multiple possible selective forces (Winterhalder & Smith, 2000), including exchange of uncertain large prey resources to minimize risk of insufficient calories for one's family or to show off hunting skills that may garner status or mating opportunities, and provisioning kin with steadier food sources such as tubers that take skill to process (the grandmother hypothesis for extended postmenopausal female life span—Hawkes, O'Connell, Blurton Jones, Alvarez, & Charnov, 1998).
Food resources can also take the form of patches of individual items concentrated in a local area, such as berries on a bush or fish in a pond. In this case, the forager must decide how long to exploit the patch—when to stop looking for more items in this patch and leave to either explore for another patch or engage in another activity. Typically, as a patch is exploited, there will be fewer items to find, so that the rate of return will fall over time, and at some point it will be better to leave the patch than to stay and try to find more in it. Such patch-leaving decisions have been studied in terms of optimal foraging theory, with the optimal strategy (under some assumptions) being to leave a patch when its current rate of return falls below the mean rate of return from optimally exploring and exploiting the whole distribution of patches in the environment (the marginal value theorem—Charnov, 1976). A forager should leave the current patch when it could do better by going elsewhere, including the costs of traveling to the next patch. Many species have been found to come close to the predictions of the marginal value theorem while foraging (Stephens & Krebs, 1986), typically by employing simple patch-leaving heuristics that approximate the optimal strategy (Bell, 1991).
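Charnov's condition can be illustrated with a toy numerical model (the gain curve, parameter values, and travel time below are hypothetical, chosen only for illustration). Suppose cumulative gain within a patch follows a diminishing-returns curve g(t) = G(1 − e^(−rt)) and moving between patches costs a fixed travel time; the rate-maximizing leaving time is the point where the marginal (instantaneous) gain rate has fallen to the overall long-run rate g(t)/(t + travel):

```python
import math

# Toy marginal value theorem demonstration. All parameters hypothetical.
G, r = 100.0, 0.1   # asymptotic gain per patch and within-patch depletion rate
travel = 10.0       # travel time between patches (seconds)

def gain(t):
    """Cumulative energy gained after t seconds in a patch (diminishing returns)."""
    return G * (1.0 - math.exp(-r * t))

def long_run_rate(t):
    """Average gain rate over one full cycle: time in patch plus travel time."""
    return gain(t) / (t + travel)

# Numerically find the patch-leaving time that maximizes the long-run rate.
best_t = max((t / 100.0 for t in range(1, 10000)), key=long_run_rate)

# At the optimum, the marginal gain rate g'(t) = G * r * e^(-r t)
# approximately equals the overall long-run rate: Charnov's condition.
marginal = G * r * math.exp(-r * best_t)
print(best_t, long_run_rate(best_t), marginal)
```

With these particular numbers the forager should leave after roughly 11.5 seconds; longer travel times shift the optimum later, which is exactly the qualitative pattern reported in the patch-foraging experiments discussed below.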
Humans also appear to have a general expectation that resources will appear in patches or clumps (Wilke & Barrett, 2009), which in turn calls for psychological mechanisms to decide when to leave them. These mechanisms have been studied in lab tasks such as simulated fishing in a sequence of ponds (the patches), where participants foraged in each pond before deciding when to leave it and travel to the next one (Hutchinson, Wilke, & Todd, 2008), and a visual search task where participants had to find ripe berries in patches (Wolfe, 2013). People stayed longer in patches as the travel time between them increased and overall used patch-leaving mechanisms (including time elapsed between successive items) that produced near-optimal rates of finding resources in typical environments.
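One family of such simple mechanisms is the give-up-time rule from the behavioral-ecology literature: leave the patch once a fixed interval has elapsed since the last item was found. A minimal sketch of the rule (the encounter times and threshold are hypothetical):

```python
# Give-up-time patch-leaving heuristic: quit once `give_up` seconds pass
# without finding a new item. The find times below are hypothetical.

def leave_time(find_times, give_up):
    """Return the moment the forager quits the patch."""
    last_find = 0.0
    for t in find_times:           # times at which successive items are found
        if t - last_find > give_up:
            break                  # an earlier gap already exceeded the threshold
        last_find = t
    return last_find + give_up

# Items found at these seconds; gaps widen as the patch depletes, so the
# rule triggers after the find at t = 15 when no item appears by t = 25.
finds = [2.0, 5.0, 9.0, 15.0, 30.0]
print(leave_time(finds, give_up=10.0))  # → 25.0
```

Because within-patch returns typically decline as the patch is depleted, this one-threshold rule approximates the marginal value theorem's prescription without requiring the forager to know the whole distribution of patches.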
When an item that may potentially be food is encountered, the individual must decide whether to consume it. A variety of evolved mechanisms contribute to the ability of humans to assess edibility and learn about appropriate foods, both individually and socially.
The identification of foods is facilitated by some built-in sensory biases. Humans (and rats) at birth have an immediate acceptance of sweet-tasting fluids (Steiner, 1979). This is highly adaptive, since sweetness is most associated, in the environment, with fruits, a major source of nutrition. Human infants also show an innate aversion to bitter and sour tastes (Steiner, 1979). The bitter avoidance is clearly adaptive, since many common plant poisons are bitter. There is probably a genetically programmed preference for moderate levels of salt (sodium), a vital nutrient that is in short supply in many environments, but it emerges well after birth (Beauchamp, Cowart, & Moran, 1986). Finally, although there are no definitive data from newborn infants, there is probably a genetically programmed positive response to fatty textures (corresponding to the presence of fat, one of the three major macronutrients), and a genetically programmed aversion to oral irritants, such as chili pepper. It is not clear why humans have an irritant aversion. Many irritants in the environment are nutritive and have become major parts of human cuisines. All these evolved taste biases have been demonstrated in rats (Rattus norvegicus) and in a range of primates. More recent research on a number of mammals, including humans, has established the existence of a fifth basic taste, umami. This taste is elicited by many amino acids and thus constitutes a form of protein detector. At modest concentrations, it is attractive to many mammalian species, including humans (Galindo, Schneider, Stähler, Töle, & Meyerhof, 2012). Through a mixture of genetically programmed taste preferences and aversions, either present at birth or maturing during infancy, by the time of weaning human infants have a suite of taste preferences and aversions that help them negotiate the complex world of foods that they will experience.
A new food has the potential to be an addition to the diet, but it also has the potential to be toxic. Thus, as studied primarily in the rat, there is a clear conflict about sampling a new food. When facing a new food, rats consume small amounts, in isolation, allowing for the assessment of the effects of the food (P. Rozin, 1969). In humans, there is a great deal of ambivalence about new foods, and a very wide range of tendencies related to sampling or avoiding them. The individual difference in avoidance tendency is called neophobia and is measured with a standard neophobia scale (Pliner & Hobden, 1992). Neophilia is the opposite tendency to approach new foods. The cause of the wide variation in individual levels of neophobia (and neophilia) is unknown, but it has a significant heritability (as does the related behavior of food variety seeking; Scheibehenne et al., 2014).
There is a big problem in learning about the effects of foods. The ingestion event often precedes the consequences by hours, but basic Pavlovian conditioning was believed not to support conditioning with such long time intervals. The discovery in the 1960s of the adaptive specializations of learning about foods and their effects was a major event in the psychology of learning, which brought the evolutionary-adaptive approach to the fore and certainly constituted one of the major steps that set the stage for a flourishing evolutionary psychology. Work on specific hungers was pointing to special learning mechanisms (P. Rozin & Kalat, 1971), but the needed adaptive learning mechanisms were dramatically demonstrated by two classic experimental studies by John Garcia and his colleagues, in 1966 (summarized in Garcia, Hankins, & Rusiniak, 1974). This work, and its sequelae, broke the hegemony of the belief in one set of learning laws operative in many different situations. Long-delay learning, and the filtering of “relevant” stimuli (tastes) for ready association with negative gastrointestinal effects, explained effective poison avoidance in rats, and subsequently, specific hungers. Later work (e.g., Sclafani, 1999; Yeomans, 2010) documented long-delay learning in animals for positive events, like increases in the availability of energy some time after consumption. This work showed how the delay between the experience of food and its effects was bridged by special food-related learning mechanisms, including filtering out biologically irrelevant stimuli such as sights and sounds. For both rats and humans, upper gastrointestinal events, particularly nausea, seem to be the critical consequences that produce learned taste aversions (Pelchat & Rozin, 1982). 
Nausea following food ingestion, even after a delay, results in a dislike for the relevant food, whereas other negative visceral events (such as pain or allergy symptoms) following ingestion typically do not lead to a dislike but may lead to an instrumental, rationally discovered food avoidance.
As dietary generalists able to adapt to changing food environments, humans must also learn what is appropriate to eat in their particular locale. Some of this learning is the result of individual exploration, which may in some cases be guided by broadly useful innate biases. Adults and children (and macaques) learn about new foods and generalize that knowledge based on intrinsic features including color, texture, taste, and odor, but they use shape cues when learning and generalizing about useful artifacts. There is, however, evidence that this distinction may be absent in infants, leading some to question the status of food as a core domain of knowledge (Shutts, Condry, Santos, & Spelke, 2009), though it may be one that comes online only when it is needed after weaning.
Much of food learning follows what others are already successfully eating (Todd & Minard, 2014). This appears to begin in infancy, with 12-month-olds preferring to eat what adults from their own culture model eating with positive affect (Shutts, Kinzler, McKee, & Spelke, 2009). Wertz and Wynn (2014) found that 18-month-olds bias their learning about what others are eating toward plant sources over artifact sources, suggesting specialized responses to plants as potential foods. Consistent with an adaptive advantage of copying older individuals with greater knowledge of the local environment, Birch (1980) found that younger children (around ages 3–5 years) copied the food choices of (on average older) peers significantly more than the reverse, and Addessi, Galloway, Visalberghi, and Birch (2005) showed that young children would copy the specific novel food choices of familiar adults (but cf. P. Rozin, 1991, on limits of parental influence).
Like rats, humans show individual learning in the form of one-trial food avoidance, often being repelled for a lifetime from a food that they have been sickened by once (P. Rozin & Kalat, 1971). But whereas rats socially learn only food preferences (Galef, 2012), humans also learn what to avoid based on socially transmitted cues of disgust: Seeing another person make a disgust face in response to a food may lead to an unwillingness to try that food oneself (Baeyens, Kaes, Eelen, & Silverans, 1996).
Given all these evolutionarily relevant cues that could possibly go into deciding what to eat—including sensory aspects, disgust, learned aversions, familiarity, handling time, cultural norms, family background, what others are eating—how is a decision ultimately made? Some factors (culture, disgust, and aversions) serve to narrow the range of items that would even be considered for consumption. To select from the remaining set of edible items, there is evidence that choices are not made by weighing and combining all the available information about each current option but rather are based on just a small set of cues processed in a quick heuristic manner (Scheibehenne, Miesler, & Todd, 2007; Schulte-Mecklenbeck, Sohn, De Bellis, Martin, & Hertwig, 2013; Todd, Hertwig, & Hoffrage, Chapter 37, this Handbook, Volume 2). The most-used cues found in such studies of Western food choices are palatability and healthfulness (reflecting energy and nutrition content of the food) and price and convenience (reflecting opportunity costs and handling time).
There are several universals of the way humans deal with food, such as meals, social gatherings around food, processing of food in some ways, and the development of culture-specific cuisines. Cuisines can be described in terms of staple foods, flavorings, and preparation methods (E. Rozin, 1982), supplemented by a variety of rules about who eats with whom and how to consume food. There is a mapping of food onto other domains of life, including the social world and social status, the sharing of food as a bonding activity, and the emergence of food as a moral substance. All this can be described in terms of preadaptations. Food and eating have been transformed into a distinctively human activity, as a sign and expression of our civilization and our distance from animals: We eat in a mannered way, with implements, bringing food to our mouth (as opposed to the animal way of bringing the mouth to food). Leon Kass (1994) has described this elegantly in The Hungry Soul, pointing out that “We eat as if we don't have to, we exploit an animal necessity, as a ballerina exploits gravity” (p. 158).
Humans have developed adaptations with respect to food, many consequent on the crowded living and work specialization afforded by domestication (Wolfe, Dunavan, & Diamond, 2007). Some, like bitter avoidance or sweet preference, have clear evolutionary roots. Others, like cooking and other forms of food sterilization, are clearly cultural acquisitions, but their acceptance is driven by biologically evolved motives such as parasite avoidance. Though it is important to understand the adaptive value of culinary practices, such as the corn and bean staples of Mesoamerica that together provide an adequate mixture of amino acids, or the possible antimicrobial properties of garlic and some other spices (Billing & Sherman, 1998), the existence of such links does not itself tell us whether they have a genetic component. A challenge for evolutionary psychology is to define the interplay of evolved and cultural forces in these and many other food areas, as we outline in the following examples.
There are two very general ways in which evolutionary forces have affected food eaten by contemporary humans. The first route is only indirectly psychological. The human gut and dentition are adapted to a mixed animal and plant diet. The human inability to digest cellulose sharply limits the types of plant foods that can be nutritive. Second, the large human brain, itself partly a function of the challenges of an omnivorous diet and the sociality encouraged by that, becomes deeply involved in the elaboration of the human food world, including such culinary leaps as the invention of milk chocolate, or food preservatives, or the various combinations of flavors that characterize most of the world's cuisines. We next focus on more specific links between biological and cultural evolution in the food and food habits of humans.
The presence of sweet and fat preferences in rats and nonhuman primates, the presence of sweet (and probably fat) preferences at birth in humans, and the fact that mother's milk is both fatty and sweet make a very strong case for sweet/fat preference as biologically evolved in humans. It is clear that these two indicators of energy content have evolved to become pleasant sought-after tastes. By themselves these innate preferences would account for the well-documented human preferences for ripe foods (sweet) and for meat (fat). However, their impact on the contemporary human food world has been much more massive than that.
Consider just sweet preference (P. Rozin, 1982). With domestication of plants, this led to the cultivation of sweet foods, including fruits, sugar beets, and sugar cane. The search for sweeter and sweeter tastes motivated the extraction, through a series of technical advances, of the source of the sweetness, sugars. Here, cultural innovations were motivated by evolved urges. With sugars available and plentiful (Mintz, 1985), added sweetness became affordable and common: Cane sugar is much cheaper than honey. This availability led to the widespread adoption of two of the favorite foods of humans, chocolate and coffee, which, without sugar, are often perceived as unpleasantly bitter. (Chocolate quintessentially combines the human desires for sweet and fat; its sweetness and melt-in-the-mouth fatty texture emerge through an elaborate set of processing techniques.) The colonization of the tropical Americas by Western Europeans was partly motivated by the availability of land there for the cultivation of sugar cane. And within the past half-century or so, with a surfeit of calories in the developed world and a growing obesity problem, we face a battle between our desire for sweet and fat and the excess calories that this motive causes us to ingest. This is a commonly cited case of evolutionary mismatch between our evolved psychology (and physiology) and our current environments (Nesse & Williams, 1995; Cordain et al., 2005). But the big brain that gave us domestication and sugar extraction, motivated by our evolved urge for sweet tastes, has now brought us artificial sweeteners, which seem to uncouple sweet tastes and calorie consumption (with possible behavioral consequences—see Wang & Dvorak, 2010).
Humans innately avoid bitter and irritant oral experiences. A casual examination of the contemporary human diet shows that we often overcome these innate aversions. Chili pepper, black pepper, and ginger, all producing innately aversive oral irritation, are among the most popular spices in the world. Chili pepper alone is consumed daily by over 2 billion humans (Rozin, 1990). The irritant property is probably an adaptation by plants to deter consumption by mammals. Birds, which effectively spread the seeds of these plants, do not show an irritant aversion.
The human culinary landscape contains many very popular bitter foods, including alcohol (ethanol), tobacco, chocolate, coffee, and a variety of vegetables. Generally, the bitter and irritant substances are consumed because they are liked: They illustrate a major inversion of innate preferences, or hedonic reversals. This looks like an antievolutionary turn in the modern culinary world, dating back thousands of years in culinary history. There are ancestral-adaptive accounts for some of these reversals. For example, there is evidence that some spices have antibacterial properties, and that in tropical cultures like India, meat dishes, which are more likely to harbor pathogens, are more highly spiced (Billing & Sherman, 1998), though this and other possible adaptive reasons for using spices, such as masking spoilage, are debated (McGee, 1998; Rozin, 1990).
There are many correlations between culinary practices and enhanced nutrition, but these only hint at evolutionary origins. Chili pepper, in particular, has many nutritive and culinary advantages (Rozin, 1990). Some of its effects, such as relieving vitamin A deficiency, may be subtle and slow to manifest themselves; others, like the facilitation of chewing of mealy diets through salivation, are readily apparent and easily learned. But none of these adaptive/selective effects of chili ingestion would explain why people come to like the burn of chili pepper or other irritants, as opposed to simply consuming more because it functions as a “medicine.”
We do not yet have an adequate theory about how this pervasive feature of human eating occurs. Exposure is necessary, of course, and the social context of consumption, particularly the presence of those who already enjoy the food in question, is probably critical. There are two theories that interestingly invoke some feature of human evolution (Rozin, 1990). One involves a normally adaptive, biologically programmed opponent process that produces a compensatory reaction to a negative stimulus (Solomon & Corbit, 1974). If this process is pushed further than it normally would be by cultural forces (e.g., when children copy others and consume chili pepper or tobacco that they normally find aversive), the compensation could grow to be greater than the sensation it is designed to neutralize, which could be a way of turning a pain into a pleasure.
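The opponent-process account can be sketched numerically. In the toy model below, the primary aversive response stays constant across exposures while the compensatory opponent process strengthens with each one; once the opponent term outgrows the primary one, net affect flips from negative to positive. The parameter values and the linear growth rule are illustrative assumptions for exposition, not part of Solomon and Corbit's (1974) formal model.

```python
# Toy sketch of an opponent-process hedonic reversal (after Solomon &
# Corbit, 1974). All numbers here are illustrative assumptions.

def net_affect(exposure: int, a: float = -1.0,
               b0: float = 0.3, growth: float = 0.2) -> float:
    """Net hedonic response on the nth exposure to an aversive stimulus.

    a      : primary (aversive) process, constant across exposures
    b0     : initial strength of the compensatory opponent process
    growth : per-exposure strengthening of the opponent process
    """
    b = b0 + growth * exposure   # opponent process grows with repeated use
    return a + b                 # net affect = primary + opponent

# Early exposures feel aversive; later ones feel pleasant.
responses = [net_affect(n) for n in range(10)]
first_pleasant = next(n for n, r in enumerate(responses) if r > 0)
```

Under these assumed parameters the sign of net affect flips on the fifth exposure; the qualitative point is only that a compensatory process pushed beyond its normal range can overshoot the sensation it evolved to neutralize.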
A second theory of hedonic reversal attributes it to a characteristic of the evolved big brain. The theory, which can be described as “benign masochism” (Rozin, 1990; Rozin, Guillot, Fincher, Rozin, & Tsukayama, 2013), argues that humans get pleasure when they discover that their body is signaling danger, but they cognitively realize that they are not really in danger. It is the pleasure of mind over body and is illustrated not only by hedonic reversals in foods, but also by enjoyment of fear from roller coasters or scary movies and enjoyment of induced sadness from fictional portrayals, such as movies and plays, sad paintings, or music. Hedonic reversals seem to be uniquely human (contrary to a prediction of opponent-process theory, which invokes a mechanism widespread at least in mammals). Notably, Mexican pigs and dogs, which eat garbage including leftover Mexican food with chili sauce daily, do not develop a preference for chili pepper, a preference present in every person over five years old living in the same Mexican context (reviewed in Rozin, 1990).
Clearly, hedonic reversals involve strong cultural forces that can reverse innate tendencies, but accounts of these reversals engage the operation of biologically evolved systems. We do not, at this time, understand how the first adoptions of chili pepper into the human diet actually occurred, but we do know that tens of millions of humans in the contemporary world become chili likers every year.
As a result of practices by pre-Columbian Mesoamericans, the small and unpromising teosinte plant was selectively bred into corn (maize), a highly nutritive carbohydrate staple, which became the cornerstone of much of the pre-Columbian American diet. The seed stalk of corn is much larger than that of teosinte, the seeds on the cob are much larger and much more numerous, and unlike teosinte, they remain on the cob, convenient for harvesting. Much later in time, as a consequence of the green revolution and genetic engineering, corn emerged as the most efficient crop in the world in terms of calorie yield per acre, a staple for humans in parts of Africa as well as the Americas, and a major source of animal feed.
The traditional Mexican recipe for preparing corn is the tortilla. Corn is ground into a powder and mixed with “cal” (calcium hydroxide, traditionally made from burnt limestone or ground seashells) and water to form a dough that is cooked into tortillas. Tortillas were consumed by Cortez and his company. This tortilla technology has many nutritional adaptive values (Katz, Hediger, & Valleroy, 1974). The cal adds calcium to the mix, a mineral often in short supply in the Mesoamerican diet, and it frees the critical vitamin niacin and some essential amino acids from their bound, unusable state in corn. This is a classic case of a cuisine being adapted to optimize nutritional quality, in what Katz (1982) calls biocultural evolution.
But how were the nutritional advantages of tortillas discovered? The nutritional consequences are generally not salient upon ingestion and exert their effects primarily over days or weeks. What would cause people to experiment with adding things like seashells to corn, and what outcomes would encourage this enterprise, once it had been introduced? The tortilla technology makes it much easier to roll out a tortilla (P. Rozin, 1982), which may have been a palpable factor supporting the initial development of the technology. The failure of the Europeans to adopt corn may result from a simple fact: Although Cortez and later explorers brought corn back to Europe, they did not bring the tortilla technology, perhaps because only Mexican women make tortillas and the explorer parties included no Spanish women (P. Rozin, 1982).
The process of adoption of a particular culinary technology is clearer for manioc (cassava), another starch staple imported from the Americas (Brazil), primarily to Africa. Manioc is resistant to pests and easy to grow, but some varieties contain a deadly toxin, cyanide (reviewed in P. Rozin, 1982). Brazilian tradition treats manioc by grinding it and repeatedly rinsing it to remove the water-soluble cyanide. Although we do not know exactly how this procedure was invented, it is easy to imagine that the effectiveness of the procedure was highly salient given that the effects of cyanide are rapid and often deadly, and that the practice of rinsing food with water was previously established. Unlike the case of corn, the Brazilian culinary (detoxification) procedure was imported to Africa along with the manioc.
Milk is the first food of all newborn mammals, who are biologically adapted to nurse, as are their mothers (Simoons, 1982; reviewed in Rozin & Pelchat, 1988; Durham, 1991). The infant gut contains the enzyme lactase that splits the unique milk sugar, lactose, into its two nutritive and digestible components, glucose and galactose. Milk is only available to mammal infants: Mothers cease to produce it as the process of weaning occurs. The weaning period is especially critical as the transition away from the milk superfood and toward exposure to the abundance of potential food alternatives in the world. There are three adaptations that may facilitate weaning from milk, a never-again-to-be-available food. One is induced familiarity with a range of new foods from (a) the presence of food residues (e.g., odorants, as for garlic) in mother's milk, which facilitate acceptance of these foods in the weaning transition (see Mennella & Trabulsi, 2012, for humans; Galef, 2012, for rats) and (b) exposure to maternally consumed foods through odorants on her surface in conjunction with exhaled carbon disulfide in her breath (based on rat research, Galef, 2012). Second, movement away from milk may be encouraged by developing lactose intolerance, which would cause gastric discomfort from consuming large amounts of milk in the later nursing period (Rozin & Pelchat, 1988). Third, among the sugars, lactose is relatively low in sweetness. Given that milk has to be abandoned and ideally not strongly desired postweaning, it may be easier to wean away from a less sweet fluid (Rozin & Pelchat, 1988).
Milk could be a highly nutritive food for adults, if they could procure it, except for the fact that lactase is biologically programmed to gradually disappear from the gut at around the time of weaning. The normal adult mammal is therefore lactose intolerant: It cannot digest lactose, and undigested lactose in the hind gut produces diarrhea and bloating, including gas pains (Simoons, 1970, 1982; Rozin & Pelchat, 1988). How is it then that a majority of the adults in the world consume dairy products, and in some cultures as different as India, Denmark, and Canada, they form a major part of the diet? There were two main events leading to this consumption once animal domestication occurred, one cultural and the other biological (Durham, 1991; P. Rozin, 1982; P. Rozin & Pelchat, 1988; Simoons, 1970, 1982).
On the cultural side, humans discovered that they could “culture” milk, that is, let it ferment. Under commonly occurring situations, one of the main effects of this is that bacteria break down the lactose in milk to its digestible component sugars, leading to familiar products such as yogurt and cheese. These products contain lactose, but at substantially lower levels than raw milk, and they also can be stored much more conveniently and successfully than raw milk. These cultured milk products are the principal way that dairy products are consumed today in the Mediterranean and South Asia.
But a remarkable biological adaptation also occurred in some cultures (Simoons, 1970, 1982). These were pastoral groups, primarily in Northern Europe but some in Africa, as well. There is reason to believe that the single gene mutation that blocks the deprogramming of the lactase gene at weaning time was not uncommon. Individuals who possessed it would have had the additional advantage of being able to consume milk as adults. This advantage was almost certainly the selective force for an increasing presence of this gene, which by now is prevalent in Northern Europeans and some African pastoralists. So a human domestication activity, that is, a cultural event, set up a biological selection pressure, leading to gene change for these groups, whereas the great majority of humans on Earth remain in the original state of lactose intolerance, like the rest of mammals. In our present state of knowledge, we cannot describe the actual process through which either cultured milk or raw milk actually became part of the human diet.
Meat is a natural food for the human omnivore (Fiddes, 1991; Rozin, 2004). It is a nutritionally complete and calorically rich food and ranks among the most appealing foods for most contemporary human beings. But the story of meat as an ideal human food has two blots against it. First, it is hard to obtain meat: Animal food sources move, so the capture phase is challenging and often involves considerable energy expenditure and social cooperation. Second, partly because animal food is so similar to the human biochemical profile, parasitic organisms that can live in animals can often live and reproduce in humans (Curtis, 2013).
The two major meat problems were dealt with rather effectively in human history. At a historical time still in dispute (Wrangham, 2009), humans were able to harness fire and use it to cook animal foods. Cooking kills virtually all parasites, though of course they can reinfect food that is left uneaten after cooking. Animal domestication reduced the skill and energy expenditure needed to obtain animal food, but it also increased parasite risks, because, postdomestication, humans lived in much closer proximity to large animals than they ever had previously. Many parasites find friendly homes in both domesticated animals and humans. In contemporary cultures, we see the expression of the high benefits but significant risks of meat consumption. Meat is at once the favored food of humans and the most tabooed (Fessler & Navarrete, 2003). Although some animal taboos are complete, applying to all members of a group (e.g., Hebrew dietary laws), many are conditional, saying that some meats cannot be consumed by some categories of individuals or at some times. The favored status of meat is clearly illustrated by conditional taboos, because they usually restrict access to favored animal parts (usually muscle) to adult males, the most powerful individuals in traditional cultures.
The complexities of response to meat, and, in particular, the negative side, have been elaborated over the past thousands of years of human cultural evolution, through the development of religions and ideas about human nature, origins, and fates. Ideas about souls, considered as the spiritual links between humans and animals, modulate human reactions to meat. Furthermore, after domestication, with the decline in hunting and the specialization of individuals into different roles, including pasturing and butchering (Diamond, 1997), the distance between many individuals and the origins of their animal foods increased. As a result, concerns about killing animals became less salient for many meat eaters in the modern world. And with increased accessibility to foods, humans became more selective about what animals they would eat (only the muscle of three mammal species, out of several thousand, for most American adults). Religious and perhaps empathic concerns led to the rejection of all animal foods by many humans (e.g., the “ahimsa” no-killing-of-animals principle followed by many Hindus). Deep sensibilities about ancestors and descendants, including beliefs about reincarnation, may have imbued animals with symbolic values (e.g., Fiddes, 1991; Twigg, 1983). Modern concerns cover not only killing or maltreating animals but also the high environmental cost of rearing and consuming them, as opposed to plants. For many adults in the developed world now, both liberal sentiment and long-term health concerns have replaced parasite avoidance as the major deterrent to meat consumption. However, in spite of the biological risks and symbolic and empathic concerns, the strong biological appeal of meat and other animal products remains.
It is a short step from meat as a human food to the emotion of disgust. In the last decade, disgust has become the focus of a great deal of attention from evolutionary psychologists. The striking thing about disgust is that almost all foods that some people find disgusting are of animal origin (Angyal, 1941; P. Rozin & Fallon, 1987). So the strongest negative reactions that humans have to food are focused on the favorite food category! The ambivalence toward meat appears again.
There is little doubt that, in its origin, disgust is a food-rejection system. In English, dis-gust means bad taste, and the semantics are similar in French. One of the two most frequent facial expressions associated with disgust involves a gape and tongue extension. Both of these serve to expel substances from the mouth. And perhaps most critically, the physiological signature of disgust is nausea, a sensation that inhibits ingestion and often precedes vomiting, the ultimate form of food rejection. A fundamental question is what triggers food-rejection disgust (called pathogen avoidance disgust by many evolutionary psychologists).
Following on classic work on disgust by Darwin (1872/1965), Angyal (1941) defined disgust as “revulsion at the prospect of oral incorporation of an offensive object.” Angyal considered body waste products as a focus of disgust. P. Rozin and Fallon (1987, p. 23) added to Angyal's definition: “The offensive objects are contaminants; that is, if they even briefly contact an acceptable food, they tend to render that food unacceptable.” Disgust that is related to body substances and some foods, considered the original elicitors, is sometimes called “core” disgust. Given that the core disgust elicitors are animal products, including body wastes, and that contamination sensitivity characterizes the response to these elicitors, it is very reasonable to presume that parasite avoidance is the basic motivation for core disgust. Originally expressed as the disease avoidance model of disgust (Matchett & Davey, 1991), this view developed substantially from work identifying many characteristics of disgust that fit a parasite avoidance interpretation (Curtis, Aunger, & Rabie, 2004; Oaten, Stevenson, & Case, 2009; Tybur et al., 2013). The two strongest arguments for this view are (1) that humans have contamination sensitivity, which is a part of food rejection that only makes sense for microbes, for which tiny doses can multiply in the body (unlike toxins), and (2) that core disgust elicitors center on potential foods that are vehicles for harmful microbes: meat and body products.
It is important to separate this question of whether disgust is a parasite avoidance system (for which the evidence is strong) from a second question, assuming the parasite avoidance function: Is parasite avoidance disgust biologically evolved or learned? Other behaviors such as cooking, administration of antibiotics, and water purification are also very effective uniquely human means of protection against parasites but clearly are explained through cultural learning rather than biological evolution. There is abundant evidence (e.g., Curtis, 2013; Hart, 2011) that parasite avoidance is a fundamental challenge for mammals and many other species, engendering a widespread suite of behaviors (e.g., grooming) to reduce parasite risk (see Schaller, Chapter 7, this volume). So is disgust an inherited system (“emotion”) that fits in with these other evolved parasite avoidance behaviors? This has been widely assumed, but since disgust is not found in other animals and not present at birth in humans, two of the most convincing arguments for a genetic origin are not present. On the other hand, disgust (and contamination sensitivity) may be culturally universal in humans from about age 4 or 5 years onward (Hejmadi, Rozin, & Siegal, 2004). At this time, the most reasonable account of parasite avoidance disgust is that it is biologically evolved.
Another fundamental question regards the sequence of historical events that produced the widespread domain of disgust: How did a food-related emotion come to be applied to a very wide range of entities and situations, including contact with death or strangers, a variety of sexual acts (e.g., incest), and some moral violations? The process almost certainly involved preadaptation. The gape typically found in facial expressions of core disgust is part of an inherited response to bitter tastes (Grill & Norgren, 1978; Steiner, 1979); it is present at birth in humans and present in rats, primates, and other mammals. It is almost certainly true that the bitter face, which functionally rejects foods and signals this rejection, was preadapted for a new rejection function for spoiled and otherwise parasite-affected items, but it is not clear how or when this new function arose. In his analysis of disgust, Kelly (2011) highlighted this problem and postulated disgust as a combination (by preadaptation) of the innate poison (bitter) rejection and the innate parasite avoidance system (his “entanglement hypothesis”).
Disgust may be the quintessential example of how the food system serves as the foundation for other systems that share its properties. The first theory of the expansion of disgust (P. Rozin, Haidt, & McCauley, 2008) proposed four historical stages, beginning with core (food-related) disgust, expanding to reminders of humans' animal nature (e.g., sex, viscera, and most critically, death), then to a subset of interpersonal contacts, and finally to a subset of moral violations characterized as divinity violations (P. Rozin, Lowery, Imada, & Haidt, 1999, using the taxonomy of Shweder, Much, Mahapatra, & Park, 1997). Preadaptation is explicitly invoked as the mechanism for expansion, and the emphasis is on cultural evolution, with the possibility open that core disgust is biologically evolved. The animal reminder phase is postulated to center on avoidance of reminders of mortality, a major problem and threat faced uniquely by humans (Becker, 1973; Goldenberg et al., 2001). Tybur et al. (2013) proposed that parasite avoidance, by itself, can encompass animal-reminder disgust, since death, deformity, and visceral exposure are all signs of infection, as well as interpersonal disgust, since strangers are more likely sources of dangerous pathogens; then they postulated two other domains of disgust: sexual and moral. Crucially, both views (and that of Kelly, 2011) ground moral disgust in terms of its origin in a food-related system.
We must clearly distinguish between a genetic basis for a species-wide tendency and a genetic basis for individual differences on the trait in question. For example, it is clear that reading/writing is a cultural invention, but differences in reading ability have a substantial heritability. On the other hand, for the food domain, the basic preference for sweet tastes is clearly based on genetics, but individual differences in the manifestation of that preference, so far as we currently understand them, have a strong acquired component, via either cultural or individually experienced causes.
One potential biological component of individual differences in food preferences is genetically based sensory differences. There are many different bitter receptors, and genetic analysis has identified specific genetic bases for many of these. One of these loci is measured by the rated bitterness of phenylthiocarbamide (PTC) or the related chemical propylthiouracil (PROP; Bartoshuk, Duffy, & Miller, 1994; Tepper, 1998). There is some modest evidence for lower preferences for foods with a bitter component (e.g., coffee, beer, and many vegetables) among people with greater PROP sensitivity (Tepper, 1998). As taste genetics develops, there will be more opportunities to examine mappings between taste and preferences.
The few existing twin studies on the genetics of food preferences provide mixed results, including very modest heritability for specific food preferences but perhaps higher heritability for some categories of foods, such as high-fat foods or fruits (Reed, Bachmanov, Beauchamp, Tordoff, & Price, 1997). A major role for genes as contributors to the very substantial within-culture variation in food preferences is, however, challenged by data on family resemblance in food preferences. Family resemblance, usually measured as similarity in preferences or preference patterns between adult (college student) children and their parents, confounds genetics and parental influence. Therefore, family resemblance correlations can only establish upper limits for genetic contributions. The literature on family resemblance for food (and music) preferences reports surprisingly low correlations, averaging about r = 0.15 (P. Rozin, 1991).
A problem of particular interest in the context of evolution concerns reliable changes in food preferences across the life span. Some short-term changes in food choice of women in the first trimester of pregnancy may be related to the vulnerability of the fetus and immune suppression in the female (Fessler, Eng, & Navarrete, 2005). In bees, workers may shift during their lifetime from foraging for pollen to foraging for nectar, or the reverse. This depends on whether a particular bee is engaged in brood care (pollen preference) or supplying the hive with energy (nectar preference) and appears to be regulated epigenetically (Amdam, Norberg, Fondrk, & Page, 2004). The roles that epigenetics and the gut microbiome may play in lifetime and evolutionary changes in food preferences remain to be explored (Alcock, Maley, & Aktipis, 2014).
Food has been mostly absent for too long from the table of evolutionary psychology. This major part of human life, with its crucial connection to and influence on animal evolution, needs much more attention. Together with evolutionary selective forces, we must acknowledge the powerful role of culture in determining food choice, food habits, and the meaning of food. Just as the most informative piece of information about an animal's behavior may be what it eats, probably the most informative cue to a person's culture is what he or she eats. As we have shown throughout this chapter, the two factors, biology and culture, are inseparably intertwined: Cultural traditions are influenced by general human metabolic and behavioral/cognitive capacities and predispositions, and population differences in taste genetics and metabolic capacities have coevolved with cultural changes. The present is a particularly important and exciting time to study the interactions of evolution and culture in food behavior. For the first time in human history on Earth, billions of humans can sample the staple foods and cuisines from cultures across the globe. The homogenization of the world's diet may actually amplify the percentage of individual differences in food choice that can be attributed to genetics, as environmental variation is reduced. Furthermore, especially in the enlarging developed world, global perspectives about reducing pollution, food waste, and water use, saving the remaining unspoiled land on the planet, protecting animal rights, and “meddling” with nature via genetic engineering of foods add new dimensions to human food choice. All this is part of the immense future of the evolutionarily informed study of human food choice and eating behavior.
Another pressing research challenge for this field is to address increasing and changing food-related health concerns. It is ironic that two major genetically determined traits that were adaptive in the human ancestral environment, liking for sweet tastes and fatty textures, have become major suspects for a maladaptive outcome in modern cultures, namely obesity (Nesse & Williams, 1995; Speakman, 2013). The civilized environment may have inverted some of the basic selection pressures that were important in early human evolution: from food scarcity to food abundance, from low to very high caloric density foods, and from appreciable short-term consequences of food toxins and microbes to much more subtle long-term negative consequences of diet on degenerative diseases. Moreover, in the modern developed world, selection pressures operating on poor diet choice and obesity are often low (e.g., predation risk is not important in most modern settings) or occur at a range of ages that were rarely achieved in the ancestral environment (e.g., degenerative diseases). The increase in obesity and other food-related health challenges calls for study from a variety of perspectives, including two new approaches with strong evolutionary connections: epigenetics and analysis of the human microbiome.
Given that food acquisition behaviors have been fundamental to survival throughout the history of animal life, it is reasonable to expect that some of the mechanisms underlying these behaviors may have been appropriated and repurposed (as preadaptations, or exaptations) for other functions over the course of evolution. Hills (2006) argued that dopamine-driven food-search mechanisms formed the evolutionary basis of mechanisms controlling the search for other resources, including attentional control of search for information in the external environment (including visual search; Wolfe, 2013) and executive control of internal goal-driven cognition (see Todd, Hills, & Robbins, 2012 for an overview). For example, humans recalling concepts in memory (e.g., “name all the types of animals you can think of”) switch between patches of related concepts (e.g., from farm animals, to pets, to insects) in a way that maximizes their success as predicted by the marginal value theorem in optimal foraging theory (Hills, Jones, & Todd, 2012). Similarly, people search the web using “information-foraging” strategies akin to those appropriate for patchy food sources (Pirolli, 2007). As already noted, preadaptation has resulted in expansion of food systems into the aesthetic domain (e.g., haute cuisine), the moral world (e.g., food in Hindu India as a biomoral substance; Appadurai, 1981), and the domain of language and metaphor, as when we say Linda is sweet, or let's get to the meat of the argument (Chan, Tong, Tan, & Koh, 2013).
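The patch-leaving logic of the marginal value theorem can be illustrated with a short numerical sketch: given a diminishing-returns gain curve within a patch and a fixed travel time between patches, the long-run intake rate is maximized at the residence time where the marginal (instantaneous) gain rate falls to the overall average rate. The exponential gain function and all parameter values below are illustrative assumptions, not taken from the studies cited above.

```python
# Minimal numerical illustration of the marginal value theorem (MVT):
# leave a patch when its instantaneous gain rate drops to the average
# rate over the whole environment (patch time + travel time).
# Gain function and parameters are illustrative assumptions.
import math

A, TAU = 10.0, 2.0   # asymptotic patch yield and depletion time constant
TRAVEL = 3.0         # time needed to reach the next patch

def gain(t: float) -> float:
    """Cumulative (diminishing-returns) yield after t time units in a patch."""
    return A * (1.0 - math.exp(-t / TAU))

def long_run_rate(t: float) -> float:
    """Average intake rate if the forager stays t in every patch."""
    return gain(t) / (t + TRAVEL)

# Scan candidate residence times; the best one balances staying vs. moving on.
times = [i / 100 for i in range(1, 2001)]
t_star = max(times, key=long_run_rate)

# At the optimum, the marginal gain rate approximately equals the
# long-run average rate, as the MVT predicts.
marginal = (gain(t_star + 1e-4) - gain(t_star)) / 1e-4
```

Leaving too early wastes travel time; staying too long wastes time in a depleted patch. The same rate-balancing rule is what Hills, Jones, and Todd (2012) found mirrored in switches between patches of related concepts during memory search.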
We close with an example from Leon Kass's The Hungry Soul (1994), edited here to make a particular point. In his discussion of eating in the modern world as a statement of being civilized and not animals (the theme of his book), Kass asks us to imagine a dining scene in the Western developed world. Two adults are eating dinner, sitting opposite each other. Each spears food with a fork and conveys it into the mouth, where it is chewed and swallowed. The act of eating is done with delicacy and with great skill. Food does not fall off the fork or out of the mouth. The mass of food in the mouth is disgusting: moist, mixed with saliva, and a potential vector for germs. Each eater manages to chew the food without displaying any of the product of mastication. This is especially remarkable since the conversation at dinner is produced by sounds emanating from the same hole (mouth) that is incorporating food. So a deeply biological act, acquiring nutrients, is carried out with skill acquired through practice from early in life. We are still doing the evolved biological thing, eating, based on a biological motive, hunger, intertwined with an emotion with biological roots, disgust, in order to satisfy a basic biological necessity, but with such learned cultural skill that an innocent observer might not realize that the situation is basically about acquiring nutrients. And this whole intricate experience goes on tens of billions of times every day for contemporary Homo sapiens. Whether we like it or not, we are animals, and though we have largely managed, through civilized eating, to hide the evolved biological forces just below the surface, we still love sweet and fatty food.