CHAPTER THREE

FIXING DINNER

NUTRITION

Up until the age of 5—until, specifically, the day my family attended a potluck at a lake near our house—I’d had unswerving faith in my parents. I needed it, because without faith it would have been hard to remain cheerful on my family’s diet. I knew, from the moment I was old enough to begin exercising some judgment over what I put in my mouth, that we took food more seriously than most families take religion. We believed wholeheartedly in the old aphorism “you are what you eat,” which—if you think about it—puts an awful lot of pressure on the eater. It encumbers every forkload with the possibility of salvation or damnation. By choosing the right things to put in your body, you could transform your life, become smarter, happier, and more energetic—or by making the wrong choices you could grow ugly and sick. Though this idea seems self-evident (if you eat fat you get fat, right?), I’ve come to feel that it is fundamentally misleading. In some ways, dietary health is more akin to Calvinist predestination. At the age of 5, however, I’m sure I must have been thinking about all this on a far more basic level. I was only dimly conscious that the question of what to eat was fraught with terrible consequence. Doubt had yet to stir.

For most Americans there was nothing particularly traumatic about the summer of 1983, but for me it was The Time of Unceasing Zucchini. Though Mom had refused the beasty-yeasties, she was forever discovering new dietary theories, each of which would inevitably lead to a purging of the pantry. During these food-culture revolutions, undesirable elements disappeared from our already meager menu until we were surviving like Chinese peasants. My little brother, Tim, had grown—from a jolly baby built like the Michelin Man—into a sort of nutritional censor, unwittingly redacting foods from the table. When he’d first gotten a rash, which wrapped around his torso, my parents had taken him to a naturopath, who suggested that they take the family back in time to a Paleolithic diet. This meant no foods of the agricultural revolution—no wheat and no dairy (which in the United States eliminates pretty much everything). Meat and tubers were permitted, along with vegetables (which meant cartloads of zucchini), and brown rice, though I doubt cavemen often snacked on grasses, let alone made them a staple. There was always brown rice, typically with a few overcooked kernels of tooth-chipping toughness. Tim and I scooped at it dutifully: It was this or the snails. And even the snails were hard to come by in our new home in the California foothills.

Dad said that he’d known it was time to leave Berkeley when my little brother and I started falling on the sidewalks and scuffing our knees. It was yet more proof that civilization could damage young bodies—children weren’t meant to learn to walk on concrete, he said. That was when we moved to Nevada City, the little town tucked in a hollow between the knees of the Sierra Nevada Mountains. The sun-fired rocky soil of our new home was hardly more forgiving than concrete, and in the next few years I’d mix my blood with that red earth—with pine sap, river water, and blackberry juice. But these were injuries my parents believed in.

The problem with moving back to the land is that, paradoxically, you leave behind many of the people who like the idea of moving back to the land. Romantic liberals, despite their passion for nature, tend to stay in coastal cities, relinquishing the country to conservatives and acolytes of technological progress. My parents had picked our little town because other San Francisco Bay Area hippies had already colonized it. There were yoga classes, communes out among the hills, and a spiritual community that ran a health-food store in town. Still, the majority of our new neighbors were Republican businessmen who went to church regularly, red-necked loggers who converted dead cars into chicken coops, or grizzled marijuana-growing libertarians who panned for gold in the rivers. The children of these types would be my classmates when I started school, and would show me alternatives to my sackcloth-and-squash existence.

At the age of 5, however, I still assumed my diet was perfectly normal. I lacked any means of comparison. We had no TV and, though we read, families tend to dissolve into the background in children’s literature. Kids cannot become heroes if they have parents standing in their way, so authors kill off the adults or separate them from their children or simply dress them in dark floral prints and force them to sit in matching armchairs. It wasn’t that I had no contact with the outside world; Mom had moved her day-care center up to Nevada City, and every day adults delivered a throng of companions to the door. But the parents who felt comfortable leaving their children at my house had to have been at least somewhat like-minded. And while there, the other kids ate like me. I was unaware of the Oreo cookie. I had not deduced the theoretical possibility of Cadbury Creme Eggs. So when we went to that potluck at the lake just outside of town, I was innocent of any possibility that others might see my way of life as strange.

It was a warm summer evening. Casseroles and salads were spread out along the tables by the water. Adults batted at mosquitoes. Children ran up and down the lakeshore in screaming waves. On a platform floating offshore kids were playing king of the hill, forming alliances, betraying them, and splashing into the water in glorious defeat. Lake Vera is a miserable little algae farm, but at that moment it looked marvelous. I stripped down and crashed through the shallows, my pudgy little brother hard at my heels. When I reached the platform, I flopped aboard, then pivoted, bracing myself to take all comers. But instead of rushing me, everyone stopped. It was only then that I noticed that—in stark contrast to me—every other child was wearing a swimsuit.

We’ve all experienced what I experienced next. You show up at school or work, and realize two things in quick succession—first, that you are naked, and second, that this is highly, excruciatingly, inappropriate. The difference is that most people undergo this trial while dreaming. I got to live the dream. Nakedness had always been a part of our household. My father believed it was better to sleep naked, for one thing.

“Gives your balls a chance to breathe,” he’d say.

Since I slept naked, I certainly wasn’t going to bother with clothes each time I went to the bathroom or got a glass of water, or even, for that matter, when I wandered into the dining room for breakfast.

My brother was struggling onto the floating platform when a treble voice shrieked, “They’re naked! They’re naked!” It was like a scene from a 1950s monster movie in which a horrific crustacean-thing emerges from the depths and wanders into a beach party. Kids swarmed off the other side of the raft and porpoised for shore. I tried to savor having this little kingdom to myself. I pushed my brother off the platform a few times, but there was no satisfaction in it. We swam in and put on our clothes. I felt an odd sort of shyness the rest of the evening and stayed close behind my mother’s legs. It was the first time I understood that the things I had assumed to be self-evident might be wrong.

My parents believed that they were raising a new generation of Tarzan-like power children by liberating us from the nudity taboo and enforcing strict food rules. But I began to suspect that their techniques were actually doing us harm. Within a few years I had been over to the houses of some of my classmates and had seen the refrigerators full of Capri Suns (those silver space-pods of juice), wondrous Creamsicles, and spreadable cheese food. Instead of being dens of disease, these houses had luminous white carpets, pastel furniture, and glass coffee tables where magazines spread in perfect fans. These families went to church on Sundays, the girls took ballet, the boys were star pitchers, and nobody was ever naked. It seemed unlikely, on my diet of nudity and zucchini, that I would ever become an athlete, or a cheerleader’s boyfriend, or any sort of clean, crew-cut hero, like the boys in the books.

At school, my family’s strangeness became inescapable, and nowhere were these quirks more starkly pronounced than in the lunchroom. The trip to the cafeteria was like my own personal version of Marco Polo’s first expedition to China: It was an encounter with a more advanced civilization, which each day reasserted its dominance through the display of the Lunchable triumphs it had crafted. My friends would lift the tamper-evident plastic veils to reveal their meals, pristine at the moment of consumption. Surely there could be no greater contrast to these wonders of food technology than my bruised and oozing pear, my sweaty Lebanon baloney, my translucent lettuce, my grubby stub of carrot.

Some basic inclination had reversed itself since my toddlerhood. Instead of yearning for crawly things to put in my mouth, I’d developed a fear of foods that were too close to their living, organic state. I was especially apprehensive of the pilafs and stir-fries my parents sometimes made, with their multitudinous unidentifiable components. When I bit into something anonymous, large, and crunchy, I would stiffen as a vivid image of the giant rhinoceros beetle I had seen in National Geographic magazine lit up in a high-traffic sector of my brain. Naturally, I was envious of my friends’ lunches. The refined perfection of prepackaged food spoke to me. “Observe, my healthful symmetry,” it said. “These perfect disks of meat, these Doric breadsticks. I was shaped not by fungus, nor mealworm, nor inner-backpack compression, but by the principles of Euclid. Three thousand years of culture have transubstantiated me. Take this cracker, the body of Pythagoras. Take this juicebox, the bathwater of Archimedes.”

There’s a fundamental divide here, not just between my parents and my 6-year-old self, but also between two ways that we think about food: the conviction that foods closest to the earth are the healthiest, versus the conviction that foods most firmly under technological control are the healthiest. For some the fish lifted moments ago from a lake is more wholesome than anything you could buy in a grocery store. For others it is suspect, unknown, unsanctioned by food-safety authorities. Better to reduce this fish down to pure nutrient parts—to protein, and vitamins, and omega-3 fatty acids—and count out precisely the right number of each into a rationally formulated, fortified, health-enhanced fish stick. If you subscribe to the view that you are what you eat, isn’t it then crucial to know exactly what you are eating, down to the molecule?

The implications of dessert, in particular, shook my faith in eating naturally. My parents forbade the consumption of anything containing refined sugar, which only made me crave sweetness more. I pondered it: What if my parents were fundamentally wrong? What if I could be more like my cooler, more popular, Vans-and-Jams-clad friends if only I were allowed to eat Oreos? What evidence existed to back up my parents’ conviction that sugar was a dangerous substance?

DIETARY BEFUDDLEMENT

I’d thought my project would be a simple one: I’d spend a few days in the library, talk to a few scientists, and determine who was right. What I found was much more complex, and much more interesting. Americans never seem to tire of books and articles telling them how to eat scientifically, yet our knowledge of these topics is shallow. Once my research took me beyond the bromides regularly printed in health magazines, I was dumbstruck by how little science there was to support even the basic nutritional advice I had assumed was certain. We generally think we know which types of fats and cholesterol are good and which are bad, but the jury is still very much out. The evidence for the health benefits of monounsaturated fat and the ill effects of saturated fat is contradictory and startlingly incomplete. Even the distinction between “good” and “bad” cholesterol turns out to be a crude oversimplification. People tend to think that sugar (which I’ll deal with later) is harmful for a variety of reasons—most of which are false: Studies fail to show that sugar makes kids hyper, or that it weakens the immune system. The only firmly established reason I could find to avoid sugar was its association with cavities, but there wasn’t enough evidence for scientists to come right out and say that high-sugar diets cause tooth decay. It’s revealing that the diet-related diseases (obesity, type 2 diabetes, and most forms of heart disease) are only prevalent in those cultures advanced enough to have—or to think they have—an understanding of food science.

When it comes to nutrition, our knowledge is so limited that, if someone claims they know how to eat scientifically, it’s a fairly reliable indicator that they are actually selling pseudoscience. America’s grand tradition of nutritional pseudoscience goes back to John Harvey Kellogg, who, around 1900, began alerting people to the dangers of protein, masturbation, and toxic bacteria, while promoting enemas and cornflakes. His influence may be measured both by the degree to which his family name is now synonymous with cereal, and by the degree to which later health gurus have copied his techniques.

Nutrition is an oddly public science. It shows off its first drafts. It’s almost irresistible for the various factions—raw foodists, high-tech pill poppers, animal-protein enthusiasts, low-fat hardliners—to take up the provisional science and, like Kellogg, extrapolate to form a complete (that is, marketable) dietary theory. In most other fields, basic research is of interest only to those scientists following the various competing hypotheses. But in nutrition, each contradictory finding makes headlines and produces radical changes in the way people live. One day we are carbo-loading, the next we are asking for our hamburgers without the bun. The humble omelet shifts from a heart attack on a plate to a health food. I read a handful of the most respectable diet books I could find: One advised shunning protein, another warned against fat, and a third pointed the finger at carbohydrates. That left nothing free from suspicion, which shouldn’t have surprised me, given that we generally think of calories (simply a measure of how much energy—or life force—is contained in a food) as something to be avoided.

After a few false starts in parsing the science from the pseudoscience, I called Bruce German, the food chemist at the University of California, Davis. I remembered that his fascination with bacteria was the natural outgrowth of his larger project on nutrition. It wasn’t until I called that I learned how large his project really is: He has devoted himself to linking an understanding of food metabolism on the microscopic level to a holistic understanding of how foods (the whole fish) affect health (the whole person). It’s uncharted territory, and when we spoke, he didn’t hold back. There was a good reason I was confused, he said.

“What people are beginning to realize to their horror,” German told me, “is that we actually don’t know much about diet and health.”

For nearly two centuries nutrition scientists have been saying just the opposite: that we’ve figured out everything we need to know about nutrition. Michael Pollan has acidly etched this history of scientific arrogance in his book In Defense of Food, starting with Justus von Liebig who, in 1842, proposed that humans only needed potassium, phosphorus, and nitrogen to thrive—along with fat, protein, and carbohydrates. Liebig was obviously missing a few things (vitamins, for example), but no one knew that at the time and many doctors suggested that babies should be exclusively fed on Liebig’s formula—which, they presumed, was superior to breast milk.* It was assumed that Liebig had closed the book on nutrition.

In the 1950s, German said, conventional wisdom yet again held that scientists had solved the problem of diet. This overconfidence sprang from a decision to approach nutrition not as the study of foods, but as the study of their molecules. Going molecular was wonderfully enabling: Scientists were able to see what happened when they removed one chemical at a time from the kibble they were feeding to rats. If that nutrient was important, the animals would get sick. Methodically, researchers worked their way down the list of chemicals.

“In about 40 years, scientists were able to identify every single nutrient molecule that animals need to survive and reproduce,” German said. “There was the feeling that all the work was done. We assumed that all we needed to do was fortify the heck out of the food supply with essential nutrients. It was a great idea, but we will probably end up looking back at it as one of the greatest scientific goofs of all time.”

The view at the molecular level fostered confidence by exposing the details of a small vista with perfect clarity—but it also left vast areas shrouded in darkness. The rapid accumulation of this sort of microscopic knowledge, German said, left three crippling assumptions embedded deep within nutrition science. First, the pinhole focus on minutiae left the impression that the big picture, the food itself, was unimportant. Under this assumption, it shouldn’t matter whether sugars were latticed through a slice of bread or molded into a lollipop. Second, the cataloguing of essential nutrients didn’t account for human diversity, and nutrition science treated everyone—Inuits, Bushmen, distance runners, and couch potatoes—as if they required the same diet. Third, given the focus on deficiencies, the best solution seemed to be the supplementation of nutrients to the national food supply. This final assumption had the consequence of putting large institutions, rather than individuals, in control of nutrition, because it was easy for governments and corporations to simply fortify salt, bread, and milk with more than enough essential nutrients for everyone.

“We didn’t teach people what iodine is,” German said (it’s an element, commonly found in seawater and soil, necessary to make hormones in the thyroid). “We just iodized salt.”

As a result we are profoundly ignorant about diet and health. What little information did filter down into schools—like the food pyramid—was simplified to the point of nonsensicality.

These three assumptions—that molecules matter while the food itself is irrelevant, that everyone is the same, and that institutions rather than individuals should be trusted to control nutrition—are to a large extent responsible for the epidemics in heart disease, obesity, type 2 diabetes, and osteoporosis, German said. More than a third of U.S. citizens are clinically obese. Demographers estimate that one of every three children born in the year 2000 will develop type 2 diabetes during their lives. Today’s children are expected to be the first generation in 200 years to die younger than their parents. And the epidemic reaches far beyond the United States. Rapidly modernizing countries are bearing the heaviest brunt of diet-related illnesses. Walk into clinics in China and you will find doctors overwhelmed by diabetes and heart disease. The results of our experiment in eating scientifically haven’t been good.

THE GREAT WHITE HOPE

The problem with doing science that attempts to look, not just through the microscope, but also at the big picture, is that the work becomes exponentially more complex each time you zoom out. In a lab you can control variables so that only one element changes at a time: When you take all the salt out of the kibble and the rats sicken, it’s fairly safe to infer that they needed salt. If you’ve done the experiment carefully there shouldn’t be any other causes—no confounding factors, as they are called—to muddle the results. But if you want to ask how diet affects people’s health in a larger context, then you must wade through truckloads of confounding factors: Was it the fat that caused my (hypothetical) heart disease, or the sugar? Or could it be the fact that my grandmother passed me a susceptible gene, or that I didn’t exercise, or that my sleep was interrupted several times each night by police sirens? Nutrition research had been, all too often, focused on a field either so broad as to render the results nearly meaningless, or so specific that it was hard to find applicability outside the lab. The challenge for researchers was to find some new angle from which to study their subject, some new way of seeing that could break open the scientific logjam by providing microscopic accuracy and macroscopic applicability. That’s exactly what German did.

He began by asking what a food would look like if it were precisely designed to make us healthy. Such a food would be a Rosetta stone with which to crack the code for dietary health. No plant would do as a model because evolutionary pressure tends to favor plants that can avoid being eaten, and plants have honed their expertise in defending themselves by building poisons. The model food would have to be just the opposite: something that wanted to be a meal, that gained evolutionary success by being eaten, something shaped by constant Darwinian pressure to satisfy all the needs of mammals. That ur-food, of course, was milk.

“In milk you have the Darwinian engine of your dreams,” German said. “You’ve got a mother who is literally dissolving her tissues to make milk—whatever is going into it is costing her—so if it’s not helping the infant, evolution should weed it out. But if she creates something that enhances the infant’s chance of survival, that provides a tremendous boost to the chances of it spreading. You let this engine run for a few million years and you end up with this complex, almost magical substance. It’s a spectacular gold mine for science.”

When I asked German to show me what the research looked like he took me to a room that one of his colleagues had set up. This room looks a lot like your high school chemistry lab might have, only more so: more pipes and cables swooping up past hanging fluorescent lights, more battered machines cluttering the faux-wood lab benches, more substances that could blaze or boom, along with stern prohibitions against blazing and booming taped to the walls. One, in a red font, read: DANGER INVISIBLE LASER RADIATION AVOID EYE CONTACT OR SKIN EXPOSURE TO DIRECT OR SCATTERED RADIATION. Acronyms and arrows crowded a blackboard, stuffing leaked from an old chair, and a dozen lab coats burdened a coat tree. Grad students drifted in and out. A mass-spectrometry machine, which looked as if it might have come from the engine room of a steamship, dominated a quarter of the lab, shrouded in hissing ice clouds. A wooden sign hung above, carved with the words SPECTROMETRY FOR THE MASSES. The machine was essentially a scale, but a scale so precise that it could determine the type and number of atoms in a milk molecule by weighing it. “It’s like weighing a battleship to see if there is a fly on the deck,” German said, shaking his head in admiration for his colleague who had built it. “Carlos Lebrilla: He’s an absolute wizard.”

On the other side of the room was a freezer containing hundreds of tubes, beakers, and vials of milk. Milk from humans, gorillas, mice, and seals scabbed like white lichen to the glass. The milk research is still in its infancy, but already these samples have shown German and the scientists working with him just how far astray our nutritional assumptions have taken us. The reason this excites German—“Sometimes this stuff gets me so excited I can’t sleep at night”—is that this failure of dietary theory presents the opportunity for a Copernican revolution in nutrition, an opportunity for a better theory that changes our conception of how the universe works.

ASSUMPTION 1: MOLECULES MATTER, FOOD IS IRRELEVANT

It’s relatively easy for scientists to measure the type and number of molecules of any nutrient (using mass spectrometry, for instance) but infuriatingly hard to see how they fit together to form actual food. This is a common problem for science—categorizing and counting the parts of a system is simple (or at least feasible) but understanding the relationships between the parts is difficult. So for a long time many scientists simply assumed that the structure of food was irrelevant. When the early nutritionists thought about food structure at all, it was to plot its destruction. The molecular nutritionists, remember, had won their fame in identifying the nutrients needed to prevent deficiencies, so they favored simple foods that digestive tracts could easily absorb. For years, therefore, scientists encouraged processing the complexity out of foods. The results were products like Wonder Bread—vehicles for vitamins and minerals that barely required chewing.

“They’re rocket fuel,” German said. “The nutrients have just been atomized—they go into the bloodstream like they’ve been injected.”

Milk suggests that perhaps we should be striving for the exact opposite: calories bound up in complex structures that break down bit by bit. Milk doesn’t start out in complex chunks; in order to pass through a narrow aperture—the nipple—it has to be fluid. But once through, enzymes in the baby’s stomach trigger a transformation of milk proteins and, like a ship unfolding in a bottle, they open and link together, forming large curds. Put another way, evolutionary trial and error has fixed it so that babies drink milk, but digest cheese.* Next, of course, the baby must break down this cheese to extract the nutrients. Evolution would not tolerate the expense of knitting together this complex structure then breaking it down again if it had no benefit. But according to the dominant dietary theory, which holds that food is simply independent molecules, there is no benefit: Chunky milk and fluid milk are nutritionally identical.

It’s unequivocally apparent to German that the structure of foods matters. A simple restacking of identical nutrients was so important, so advantageous in the sink-or-swim test of natural selection, that it was worth solving the devilish engineering problem of getting cheese through a nipple. The implications are enormous: It means that a nutrient that’s good for you in one food may be bad for you in another. And that makes the nutritional information boxes required on all food packaging almost completely irrelevant: The same type of fat may have different consequences if it arrives in a slice of coconut, a steak, or a scoop of gelato.

ASSUMPTION 2: EVERYONE IS THE SAME

There’s a page from Sports Illustrated magazine that German sometimes uses in lectures with photographs of Olympic athletes in their underwear: Some bulge like comic book characters; some are planed down to willowy smoothness; some are birdlike, gracile; some hulk as if they’d been wrapped for shipping. It’s ridiculous to suggest that each of these people would be better off eating the same diet. And yet for years the nutritionists have advised just that.

Mother’s milk, on the other hand, is personalized for each infant. It contains antibodies specialized to protect against local germs. Its balance of fats and sugars shifts depending on the baby’s size, hunger, and energy expenditure. When a baby is more active and burning more calories, its movements—butting and jiggling the breast—cause the fat content of the milk to increase. And—at least among rhesus macaques, which produce milk similar to human milk—the breast produces a different mix of nutrients depending on whether the baby is male or female (the boys get fattier milk, while the girls get a greater volume of thinner milk—the upshot is that males feed less frequently and explore more, while females feed more often and perhaps learn more from their mothers). Milk changes continuously, providing age-appropriate levels of nutrients for different stages of development.

The major components of milk, however, remain the same, even as their amounts shift relative to one another. The way that fats are assembled, in particular, is consistent. “The most well-conserved gene set across all mammals is the set that creates fats in milk,” German told me. “It’s one of the great treasures in the genome.” The bulk of the fats in breast milk are saturated, which are suspect under the nutritional orthodoxy because they are associated with cholesterol. If breast milk were sold in grocery stores, it would be considered a dangerously high-cholesterol food. Yet researchers found that cholesterol levels in breast milk couldn’t be budged by putting mothers on diets. Scientists also noticed that the more cholesterol neonates drank in their milk, the less they produced in their livers, which led to the hypothesis that mothers were programming their babies—tuning their bodies to produce no more and no less cholesterol than they uniquely needed. This discovery contributed to science showing that people can have high cholesterol for different reasons: Some are eating too much of it, some are producing too much, and some aren’t efficiently eliminating it from the bloodstream.* The study of milk, in other words, has suggested that the meaning of “high cholesterol” depends utterly on context.

“The thing that bothers me most about the industrial, authoritarian model of nutrition,” German said, “is that it is in diametric opposition with human evolution.”

Homo sapiens has evolved a remarkable elasticity when it comes to diet: We have both shearing teeth and grinding molars, and our digestive system is that of a generalist. If you look deeper, beyond the tissue and bone, it becomes clear that the human genome contains multitudes: Most people in the world—aside from those with ancestors from the Eurasian cow belt, and a few cattle-rich spots in Africa—lack the genetic mutation required to digest dairy after infancy. Similarly, descendants of grain-growing cultures carry genes to manufacture more salivary amylase—an enzyme that breaks down starches—than hunter-gatherers do. Furthermore, people routinely overcome these genetic predispositions, recruiting gut bacteria to help them digest lactose, for instance.

“We are at the platinum level of freedom,” German continued, “and yet nutritional dogma says we are all supposed to eat the same way?”

Humans have been molded to eat diets as diverse as humanity itself. To cater to the wondrous diversity of humankind, German thinks that nutrition science must follow the example of milk, and tailor recommendations for each individual. Which brings us to the third assumption.

ASSUMPTION 3: INSTITUTIONS, NOT INDIVIDUALS, SHOULD BE IN CHARGE OF DIET

It would be impossible for institutions to fabricate and furnish tailor-made diets for every individual on a national, or industrial, scale. Personalized nutrition would require people to devise guidelines for themselves, which seemed like a recipe for disaster to early nutritionists. The example of milk, however, shows that people are capable of learning and adapting to personalized dietary guidelines by the time they are six months old.

Breast milk, as it shapes itself to the needs of the baby, is also shaping the infant to its surroundings. Long before babies are capable of speech, mothers communicate with them through flavors and scents to provide a personalized education in nutrition. This education starts in the womb, where babies begin to imprint on volatile compounds they inhale with amniotic fluid, and continues through the breast as they drink milk. Scents are transmitted from foods into milk (a phenomenon the dairy industry has studied extensively, since cows that eat wild onions or garlic can dramatically alter the flavor of dairy products). Researchers sniffing breast milk have successfully detected the smell of garlic, alcohol, vanilla, and carrots after mothers had ingested the same. And babies are more likely to welcome foods the moms have regularly eaten during pregnancy and breastfeeding. It seems that a mother tunes her children’s tastes, using the knowledge she has accumulated over her lifetime about what foods best satisfy the needs of her genotype, along with the cultural knowledge built up over several lifetimes about what combinations of foods best meet the needs of someone living in the local climate, among the local plants and animals, and within the local economic system.

“You see this not just in humans, but in all mammals,” said Julie Mennella, a scientist at the Monell Chemical Senses Center who is responsible for many of the discoveries on the development of flavor preferences. “Information about what plants to avoid, what plants to eat occasionally, and when plants are at their peak nutritional content is not innate knowledge, it’s learned. And it’s learned through the amniotic fluid and milk. These are the biological mechanisms on which culture acts when it comes to food.”

Cultures place great importance on food traditions and these established food preferences tend to outlast language when people immigrate to new countries. “When a cuisine disappears, that’s when a culture is truly dead,” Mennella said. And not only is the cuisine of every culture different, each mother offers her own twist, eating only what works for her and imbuing the infancy of her children with powerful flavor memories—be they of Parisian madeleines or brown rice. “When we think of the emotional potency of these flavor-based memories, those that take us to our past, those that trigger the reward centers in our brains, they all originate early in life,” she said.

The early nutritionists eschewed this complexity. They understood that everyone needed slightly different amounts of nutrients, but figured it wouldn’t hurt to provide double and triple doses in some cases. So they made food companies the stewards of our health by asking them to fortify the food supply. When it came to curing deficiencies, this strategy was wonderfully successful. Salt companies added a few drops of iodine to their crystals, and within a decade, goiters disappeared from America. Bakers cut rates of neural tube defects by at least 25 percent by mixing folic acid into flour. Adjusting the nutrients at the national level led to the near eradication of pellagra, beriberi, and rickets. The great triumph of the uniform, top-down approach to nutrition was in providing an abundance of cheap nutrients. Rather than trusting individuals with the tools to solve our dietary problems, nutritionists simply drowned those problems in a flood of calories. By now, however, it has become clear that in attacking the nutrient-deficiency problem we created a super-sufficiency problem.

The education an infant receives through breast milk, of course, is only as good as the knowledge of the mother. And today, after years of misinformation have convinced people to mistrust the dietary evidence they observe in their own bodies, the lessons babies are learning from breast milk are not the result of years of optimization and experimentation by the mother, but instead the dictates of industry.


I’d gone to nutrition science hoping to find a prescription for how to eat, or at least hard-and-fast rules that would allow me to judge my parents’ dietary proclivities. But the problem with eating scientifically is that it’s not supported by science. The microscopic view of nutrition has provided enough information to tell people how to avoid getting goiters and scurvy, but not enough to tell a healthy person how to be healthier. Instead, it reveals just enough to enable hordes of well-meaning (or profit-driven) reformers, each selling a diet book. I’d been thinking that eating scientifically was the opposite of eating naturally, but the more I learned, the more they looked like two sides of the same coin. Both ideas extend beyond the reach of real evidence. Both produce pseudoscientific gurus who claim certainty where, in fact, there is complexity.

German has great ideas for tackling this complexity: He talks about biomarkers, metabolomics, and a moon-shot investment in dietary science. Most of all, he talks about education. Rather than teaching nutrition as a set of laws delivered from on high to be memorized and obeyed (no matter how wrong they are in context), German wants to candidly explain the limits of our knowledge to students, then set them loose on the mystery.

“We need a new generation to solve this,” he said. “I have a set of pre-assumptions that I’m not even aware of, and it’s dictating what I’m thinking. We need these young minds when they’re not stuck.”

The greatest contribution of the milk science seemed to be the excavation of nutritional nonsense—it defined the negative space in our understanding and explained our dietary confusion. It couldn’t provide the kind of evidence I would have wanted as a kid—the kind that would have convinced my parents that there was really nothing wrong with processed sugar. It did, however, offer some indication of how to proceed, because the revelations brought by milk suggest that nutritional truth is contingent upon its surroundings. It seemed appropriate, therefore, to broaden my perspective, so that I was looking not just at the chemistry, but also at the context that alters the fate of any given nutrient. If the relationship between nutrients is important, surely the way a food is constructed and consumed is as well, which then makes it important to understand the history and economics—in short, the culture—of a food. When I’d gone looking for precise science to tell me whether sugar was good or evil I’d come up empty-handed. But it deserved another look, I thought, with this broader lens.

FEAR OF SUGAR

Of all my family’s dietary restrictions, the sugar ban had the most staying power. Usually, my mother would relax her strictures over time, allowing tofu, perhaps, or whole-wheat bread back into the rotation. But the sugar ban remained stubbornly in place. This made sweetness a novelty, and even more exciting. For me, sugar was the taste of a better life. When I was lucky enough to get my fingers into something sweet, it provided a moment of pleasure, but not relief. Instead, the taste just made me drool for more. Looking back, I can see that while the discovery of the nudity taboo had sent subterranean shockwaves through my developing id, the discovery of sugar reordered my conscious world: Sweetness became the definitive proof that I was missing out on the party. It became a representation of all the hallmarks of the mainstream American life I yearned for, a life foreign enough to my own that I didn’t recognize the signs that it had never existed outside a television screen.

When people like my parents talk about the evils of sugar, they are usually not implying that all forms of sugar are bad. They’re not worried about sugar in milk (lactose), in fruit (fructose), or the complex sugars in whole grains. What bothers them is refined sugar (sucrose—which is 50 percent fructose and 50 percent glucose), and high fructose corn syrup (which is usually 55 percent fructose and 45 percent glucose). The liver can transform all of these sugars into glucose, which is necessary for survival.* The brain requires a steady stream of glucose, and if you’re running low it will send out panicked messages to drop everything and eat—preferably something sweet and therefore not too many metabolic steps away from glucose. The urgency of the resulting impulse is overwhelming. This helped to explain my rapturously feral feeding on sweet things when I had the chance, but it didn’t explain why I felt this impulse even when I was full and in no danger of glucose depletion.

To see if I could understand what had led so many young parents to deny sugar to their children (which of course continues in some circles to this day), I went looking for documents expressing the sugar angst of the time. At the zenith of this canon is William Dufty’s 1975 book Sugar Blues, a swashbuckling account of the author’s own battle with sucrose addiction. The book tickled the concerns of the day about new chemicals in foods, and Dufty, who knew his audience, grasped the trope of sugar as a dangerous technology and carried it to the point of absurdity. “After all, heroin is nothing but a chemical,” he wrote. “They take the juice of the poppy and they refine it into opium and then they refine it into morphine and finally heroin. Sugar is nothing but a chemical. They take the juice of a cane or a beet and they refine it to molasses and then they refine it to brown sugar and finally to strange white crystals.”

Dufty blamed sugar for the plague, depression, hallucinations, and traffic accidents. His critique must have strained the faith of even the most credulous readers when he claimed that people who don’t eat sugar won’t be sunburned or bitten by insects. These claims were easily falsifiable for a sunburned, itchy, and sugar-free kid like me. Still, what Dufty lacked in evidence he made up for with a turn of phrase. He was a master of the false analogy: “Just as spilled sugar in our kitchens attracts ants and insects,” he wrote, “so does sugar in our bloodstreams attract mosquitoes.”

Sugar Blues was obviously not written as a logical argument to convince the skeptical, but as a sermon to affirm a fear already present in the cultural imagination. Clearly there was, even at that time, alarm about sugar—but not the science to justify it. The details of Dufty’s book were pure malarkey, but the force behind it, the sense that there was something fundamentally wrong with the amount of sugar we were eating, made sense. It was people, not ants or mosquitoes, who were swarming to sugar. Throughout the era, consumption of sugars steadily rose. Americans went from eating 15 pounds of sugars a year in 1830, to 105 pounds in 2005. Now, the average American tips back more than half a cup a day. Dufty had exposed a deep vein of sugarphobia, and yet our fears have seemed only to stimulate our appetite for sweetness.
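A quick back-of-the-envelope check, for anyone who wants to see how 105 pounds a year squares with “more than half a cup a day” (the assumption that a cup of granulated sugar weighs roughly 200 grams is mine, not a figure from the sources above):

\[
105\ \tfrac{\text{lb}}{\text{yr}} \times 454\ \tfrac{\text{g}}{\text{lb}} \approx 47{,}700\ \tfrac{\text{g}}{\text{yr}},
\qquad
\frac{47{,}700\ \text{g/yr}}{365\ \text{days/yr}} \approx 131\ \tfrac{\text{g}}{\text{day}},
\qquad
\frac{131\ \text{g/day}}{200\ \text{g/cup}} \approx 0.65\ \tfrac{\text{cup}}{\text{day}}.
\]

Roughly two-thirds of a cup a day, which is indeed more than half.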

THE SUGAR HYPOTHESIS REBORN

In 2009, the proposition that there was something fundamentally noxious about sugar roared back to life with new scientific vigor. The University of California, San Francisco, pediatrician Robert Lustig stirred my hopes for a simple, scientific explanation of America’s dietary woes by pointing the finger at the nutritional scoundrel of my youth. The sugar we ate, he said, whether in crystalline or liquid form, was killing us.

“High fructose corn syrup and sucrose are exactly the same, they are both equally bad,” he said. “They are both dangerous. They are both poison, ’kay? I said it. Poison.”

It was an unusually blunt statement for a doctor of Lustig’s stature, and as someone starved for information about what to eat, I was transfixed. Apparently there were others like me. Soon after it was posted on the Internet, the video of his lecture, “Sugar: The Bitter Truth,” went viral. In a few months half a million of us had watched the full hour and a half. It’s probably safe to say that it is the most watched 90-minute biochemistry lecture ever.

Fructose is what’s really doing the damage, Lustig said (fructose is the principal sugar in fruit, but context is crucial—the fiber that comes with an apple mitigates the effect of the sugar, he said). When people are fed fructose, 30 percent of it is turned into fat, compared to almost none when they are fed glucose. Fructose does not suppress ghrelin (the hunger hormone from the stomach), nor does it stimulate leptin (the fullness hormone produced by fat cells). Furthermore, Lustig showed that the chemical reactions required to break down fructose in the liver triggered a cascade of other problems: the formation of new fat cells; inflammation and hypertension, which have implications for heart disease; insulin resistance, which has implications for diabetes; and leptin resistance, which has the effect of leaving the body’s starvation-distress beacon stuck in the on position.

For this lecture, Lustig wore a navy blazer, a light-blue shirt, and a shiny silver tie with a fat knot. He looked more like a politician than a medical-school professor. He sounded like a politician too: “We’ve had our food supply adulterated, contaminated, poisoned, tainted,” he said, adding: “on purpose.” That last shot was a sharp elbow in the side of soft-drink makers, which are responsible for most of the sugar added to our food since 1970. Many sodas are formulated to make you thirstier as you drink, he said, because they contain both caffeine—which makes people urinate—and salt: “Fifty-five milligrams per can. It’s like drinking a pizza.” The mixture produces dehydration, and the urge to keep drinking. “They know what they’re doing,” Lustig said darkly: “They know.”

This was stirring stuff—Lustig’s ability to command such a large audience came, no doubt, from his willingness to name a villain in no uncertain terms. And yet, given our record of demonizing single nutrients while ignoring the big picture, I couldn’t swallow this new sugar hypothesis without a spoonful of caution. The narrow focus on fructose smacked of the old nutrition myopia, uninformed by the revelations that people like Bruce German were bringing to the field with milk science. To be clear, I was completely convinced by Lustig’s biochemistry. The evidence did suggest, as he claimed, that fructose was a poison, but it’s the dose that makes the poison—even the healthiest of substances (water, for instance) becomes poisonous if consumed in sufficient quantities. The real question wasn’t “Is sugar toxic?” but “What impels people to eat so much sugar that it becomes toxic?”

Lustig, a pediatrician who works with obese children from around the Bay Area, allowed that small amounts of sugar are no more toxic than water. The problem is that massive consumption has become the norm. “The science is important,” he said when I called him, “but it’s not enough. There has to be a cultural element. I can’t do individual therapy without societal control.” To illustrate the point, he told me a story about an appointment he’d had a few months earlier. Lustig had explained to his patient that he was on track to become obese and diabetic. The solution was simple, he told the boy: Stop drinking sodas.

“Inevitably, he returns a month later and admits that he hasn’t stopped drinking sodas,” Lustig said. “So I ask why, and do you know what he says? ‘Water doesn’t taste good.’ And I look at the mom, who is also obese, and ask, ‘Surely you didn’t drink soda as a kid?’ and she says, ‘We drank Kool-Aid.’ And I go, okay, I get it. I get it.”

The problem, Lustig said, has nothing to do with the fact that all children prefer sweetness, but everything to do with our learned flavor preferences, with the food traditions passed from generation to generation, and with the fact that it’s cheap to quench the brain’s thirst for sugar from a carbonated two-liter bottle. It may seem strange to the wealthy that people who can barely afford food are drinking soda, but this isn’t the first time that the poor have found that it makes economic sense to buy unhealthy luxury foods. Ever since the 18th century the use of sugar as a primary source of calories has been more a symptom of desperation than indulgence. Back then, the English poor drank tea rather than soda, and the social historian David Davies observed, “Tea drinking is not the cause, but the consequence of the distresses of the poor.”

As the milk science suggested, the context of a nutrient—in any particular food, or in any particular person—was of the utmost importance. And sugar, in particular, is entangled with its economic context.

SUGARCANE CAPITALISM

The strange thing about the world’s sweet tooth is that it didn’t really exist before 1700. During the Renaissance, sugar was a rare spice, and it spread slowly in the following centuries. Then, between 1700 and 1900, British sugar consumption increased 2,250 percent, until it made up a fifth of all calories in the national diet.

Clearly people are attracted to the taste of sugar, but that isn’t enough to explain this sudden and titanic shift in food culture, argues the anthropologist Sidney Mintz, in Sweetness and Power, his book on the history of sugar. Furthermore, a liking for sweetness certainly doesn’t explain why the poor eat immense quantities of sugar while those with money eat less. Mintz’s explanation is that sugar served as midwife at the birth of modern consumerism, and as this particular form of economy grew, sugar thrived as a new calorie source for commoners. Historian Fernando Ortiz called plantations “the favored child of capitalism,” and Mintz shows how the sugarcane farms of the Caribbean in the 1700s provided a test case for global markets and financial speculation. They were so successful that they became “one of the massive demographic forces in world history,” moving millions of enslaved Africans to the New World.

This economic system enriched even the poorest Englishman in that it put luxury imports—like tea and sugar—within his reach, but it did not change his position within society. “Slave and proletarian together powered the imperial economic system that kept the one supplied with manacles and the other with sugar and rum; but neither had more than minimal influence over it,” Mintz observed. “The growing freedom of the consumer to choose was one kind of freedom, but not another.”

And even this freedom, the freedom to shop, proved illusory as consumer choice was increasingly constrained to sugar in various guises. Food historian John Burnett wrote that, in the 18th century, white bread and sugared tea were transformed from luxuries of the rich to “the irreducible minimum beyond which lay only starvation.” The newly urban poor could no longer supplement their meals with parsnips from the garden, or milk from the family cow, and in the newly commodified economy where time was money, there simply weren’t enough hours to make vegetable broth or porridge. English commoners found that their new factory jobs also demanded a new diet—food they could prepare quickly between shifts, food that kept them from nodding off at the loom. Sugar—usually mixed with tea—met both criteria. “Sugar was taken up just as work schedules were quickening,” Mintz writes, “as the movement from countryside to city was accelerating, and as the factory system was taking shape and spreading. Such changes more and more affected eating habits.”

Around this time, sugar began to spread by subterfuge, infiltrating other foods and driving down their costs. Mix enough sugar with fruit and you get preserves that will not rot, and therefore may be mass-produced and distributed cheaply. By 1905, preserves made with Jamaican sugar were less expensive than butter from British cows. The cheapest midday meal came to consist almost wholly of simple carbohydrates: bread slathered with jam or treacle (sugar syrup), and tea with the requisite two lumps. While some reformers were aghast to see sugared tea push its way into the center of the diet, teetotalers welcomed the beverage, which was replacing beer as the refreshment of choice among the poor. The arrangement also pleased industrialists, who found that employees worked more efficiently when they exchanged alcohol for a diet of stimulants.

Historical circumstance had led to the rise of sugar, and businessmen began campaigning to ensure that those circumstances remained favorable, pressing politicians and governments to promote sugar. This was a novel development: An inanimate commodity had gained a seat at the table of power, trumping allegiance to king and country.

Early attempts to influence nutrition science on behalf of sugar began around this time as well. Mintz identifies a Dr. Frederick Slare as the chief nutritional propagandist in the 1700s. Something of an anti-Dufty, Slare recommended sugar for curing ailments of the eye, as a hand lotion for healing cuts, as a snuff substitute for snorting, and even as a form of toothpaste. In response to evidence associating sugar with diabetes mellitus, Slare wrote an outraged rebuttal, bemoaning the damage that would be done if the world were denied the curative powers of this panacea because its reputation had been wounded by a mere hypothesis.

SUGAR’S POWER TODAY

These initial efforts to influence politics and nutrition science were refined over the centuries as sugar flexed gracefully into the postcolonial world, insinuating itself into more countries, more foods, and more adipose tissue. Slare and his kind were replaced by more formidable entities like the Washington, DC-based International Life Sciences Institute, ILSI. The organization’s stated mission is “to improve the well-being of the general public through the advancement of science,” and it produces enough legitimate science that many have assumed that the organization is focused purely on the public interest. But it’s been shown, through internal memos made public during litigation against tobacco companies, that a major part of ILSI’s true mission is to serve its members: food, agribusiness, and drug corporations.

In the 1990s, ILSI was run by Alex Malaspina, who also happened to be vice president of Coca-Cola. Geoffrey Cannon, who in 1992 was a British delegate to a World Health Organization (WHO) conclave on nutrition, said that Malaspina comported himself as if he were in charge of the WHO. “I can still see him striding at the head of a phalanx of rent-a-profs, dispatching two to talk to this national delegation, two to the next,” Cannon said. During that particular meeting, these scientists-for-hire succeeded in fomenting enough disagreement over details that the participants eventually agreed to refrain from mentioning sugar at all.

In 1997 ILSI was even more successful, sponsoring an “Expert Consultation on Carbohydrates” in which the WHO concluded there was no upper limit on sugar in a healthy diet. “Good news for kids: Experts see no harm in sugar,” read one press release.

These techniques of guiding the scientific conversation and spreading doubt were pioneered by the tobacco industry when working to deny the connection between smoking and lung cancer, and the use of the same tactics in the debate over sugar was not coincidental. In 2001 a WHO investigation found that “ILSI was used by certain tobacco companies to thwart tobacco control policies.” It was later banned from participation in WHO decisions on food or water standards.

Sugar has also employed more direct tactics. In 2004, after the WHO recommended that added sugars should account for no more than 10 percent of a diet (a widely accepted proposition already enshrined in the U.S. food pyramid) the emissaries of sugar appealed to the U.S. government. President George W. Bush’s administration responded to the call, denouncing the WHO’s recommendation and threatening to pull U.S. funding from the organization. This time, the WHO, to its credit, stood firm.

This access to presidential power hasn’t been restricted to Republicans. Monica Lewinsky’s testimony revealed that President Bill Clinton interrupted one of their Oval Office interludes to take a call from a member of the Fanjul family—one of the handful of Florida cane growers receiving government sanction to ship in Caribbean workers and pay them less than minimum wage. South Florida columnist Carl Hiaasen, who watched the Fanjul family’s power grow, told Vanity Fair, “The most telling thing about Alfy Fanjul is that he can get the president of the United States on the telephone in the middle of a blow job. That tells you all you need to know about their influence.”

Perhaps the most important factor in sugar’s success, however, has not been the aid of the rich and powerful, but the way it affects the minds of the people who eat it. It wasn’t just the convenience of sugar that endeared it to the Dickensian poor; it was also the way it made them feel. It provided a bit of a fix to keep them going, just a hint of the luxury they were generating. “Sugar seems to satisfy a particular desire (it also seems, in so doing, to awaken that desire anew),” Mintz wrote.

THE CULTURE OF DESIRE

It’s possible to get an idea of the collective experience of sweetness by looking at the meanings of the word we use to describe the sensation. The word sweet has been used in thousands of different contexts over the course of the last millennium, and this spectrum of connotations shifted with the spread of sugar—there was a semantic devaluation corresponding with the declining value of sweet foods. In the Middle Ages “sweet” was used in settings that ranged from approval to religious ecstasy; from the 16th to 19th centuries the synonyms downgraded slightly, ranging from “pleasant” to “delightful”; in the 20th century “sweet” was nice or pleasant, and was often used in faint praise that implied a lack of substance, as in, “sweet nothings,” or “she’s a sweet woman.” Despite these changes, however, one meaning has withstood the wear of centuries. John Lydgate’s usage of “swetness” in the 15th century, to describe how siren songs could “Brineth a man to confusioun,” would still be perfectly apt if used today (albeit with spellcheck). A confusion of siren-like temptation with the actual experience of pleasure seems to be the enduring hallmark of sweetness.

Joan Ann Teitz has recorded these meanings in A Thousand Years of Sweet, and as I leafed through this book I was struck by how infrequently sweet has implied fulfillment—there were pages of desire but hardly a note of satiation. Sweetness is associated with “satisfaction” only when used to describe victory or revenge—which is a peculiar species of satisfaction, unrelated to contentment. And, while spiritual comfort and contentment were sweet during the Middle Ages, this usage faded by the 15th century. Particularly interesting is the fact that the Indo-European word swad was the root of both “sweet” and “persuade.” These meanings rejoin in modern idioms like sweet-talking and honeyed speech. The insight captured in this confluence of meaning is that sweetness is advertisement: It implies pursuit of pleasure rather than pleasure itself.

This is a subtle distinction, but one that neuroscientists would make independently. For years, scientists studying pleasure confused wanting with liking. Experiments were rigged so that rats would spark an electrode implanted in their brains by pressing a lever. If the zap was unpleasant, the rats would stop. But if the rodents started hitting the lever like crazy, the scientists thought, “Aha! The electrode must be in a pleasure center.” When rats were willing to work for a reward, these areas, which are located in the mesolimbic brain, were flooded with the chemical dopamine. So the whole pleasure system was named the mesolimbic dopamine reward system.

This went on until 1989, when the psychologist Kent Berridge set out to confirm that dopamine was in fact related to pleasure. Berridge took rats and wiped out their ability to produce dopamine. Then he gave them sugar. He expected that they’d be totally uninterested because, without dopamine, the sugar wouldn’t give them pleasure. But the rats enthusiastically licked their lips, which is what rats do when they like a taste. Without dopamine, rats were not as driven to work for sugar, but they still enjoyed it.

“It was just a little study,” Berridge said. “We thought we just did something wrong.”

So they did it again. But they kept getting the same result. It was clear that the rats were experiencing pleasure. Then Berridge went back and read old experiments, in which electrodes had been put into human brains. Unlike the rats, these people could describe the sensations they experienced, and they didn’t sound all that great.

Berridge recounts the travails of “B-19,” a young man undergoing treatment for depression, epilepsy, and—this was the early ’60s—for being gay. Robert Heath, who led Tulane University’s psychiatric research at the time, implanted an electrode running through the ostensible pleasure centers in B-19’s brain. While showing the man heterosexual pornography, Heath would press the button to trigger the electric pulse. The idea was to kick-start B-19’s pleasure centers while he was looking at naked women, and turn him into a heterosexual.

“The stimulation evoked strong sexual arousal and interest,” Berridge wrote. “But it did not produce pleasurable sexual orgasm, not even after a thousand consecutive stimulations, unless B-19 was allowed to simultaneously masturbate (or to copulate with a prostitute who was persuaded to provide ‘therapy’ on one occasion, in what must be one of the most astounding accounts ever published in scientific literature).”

Heath thought he’d found not just a method for converting gay people to a more wholesome lifestyle of prostitutes and porn, but also the ultimate source of human pleasure. Berridge was unconvinced: “There were no exclamations of delight reported, not even an ‘Oh—that feels nice!’ Instead the stimulation seemed to fail to provide the particular sensory pleasure it made him most eager to pursue.” B-19 would mash the button over a thousand times, and when it was taken away he would, Heath wrote, plead “to self-stimulate just a few more times.”

When I first read the description of this joyless compulsion to click a button over and again, I happened to have my own finger on a computer mouse, and I felt a shiver of recognition. The sensations of another patient, a woman given a brain electrode in hope that it would ease her chronic pain, were even more hauntingly familiar: “At its most frequent, the patient self-stimulated throughout the day, neglecting personal hygiene and family commitments.… At times, she implored her family to limit her access to the stimulator, each time demanding its return after a short hiatus.” She reported a vaguely erotic urge, a goading undercurrent of anxiety, the desire to eat or drink without hunger or thirst, listless inactivity, and most of all, a driving compulsion chanting, “More.” For me, as a child, that had been the voice of sugar. I have an early memory of pilfering a forbidden cask of candied popcorn, and watching myself with growing horror as I methodically, joylessly, worked my way to the bottom.

Compulsion without pleasure, whether provoked by sugar consumption or B-19’s electrode, seems to be related to a flood of the neurotransmitter dopamine into the middle of the brain, where the mesolimbic dopamine reward system lies. The same neural trigger also fires in the brains of drug abusers and gambling addicts. This mechanism must have evolved as a crude neural-override switch, to lock attention on the pursuit. The power dopamine has over us reflects the evolutionary importance of its mission: It’s there to keep us from starving and ensure that we reproduce. It is activated as parents bond to their infants (it’s no coincidence, I think, that Beth and I exclaim “Oh, she’s so sweet!” with something like pained dismay at the feeling of rapacious attraction we feel for our baby daughter). This system, in other words, is normally vital, allowing the perseverance and focus required for all achievements. But when abused or damaged, it can produce manic, destructive behavior. Anyone who has been unable to tear themselves away from inane Web clicking, or compulsively mined a pint of ice cream for bits of cookie dough long after they have stopped enjoying it, or promised themselves they’d play “just one more level” (this is my particular demon), has felt the grip of dopamine. Interestingly, animal studies show that the brain reacts to sweetness, not by switching off this seeking mechanism, but by further heightening the reaction. In a world where sweetness is rare, it’s logical for sugar to trigger focused seeking. It makes some sense, for instance, for the taste of one ripe berry to elicit gorging before this ephemeral sweetness disappears. But when these sorts of binges are routine—as they can be in a sucrose-drenched world—they cause changes in the brain structure similar to those found in brains altered by heroin.*

Complicating all this is the fact that wanting and liking are tangled. Liking seems to be related to endorphins as opposed to dopamine, but almost everything that activates the endorphin-linked pleasure centers also activates the mesolimbic dopamine system. The key difference is that it’s harder to provoke liking than wanting. The liking centers in the brain are much smaller, each about a cubic centimeter, “an archipelago of interacting islands,” Berridge wrote. These islands must be triggered simultaneously to create a feeling of pleasure. Desire is robust. Pleasure is fragile and fleeting. The first taste of sugar provokes genuine pleasure, but the desire to eat more only grows as satisfaction fades. Of course we are talking about the simplest form of pleasure here, associated with sweetness. I would expect that the pleasure that comes from a symphony, or a skyline, or a smile, is even more ephemeral.

Food companies have learned to manipulate this neurochemistry, argues former FDA commissioner David Kessler, in his book, The End of Overeating. Of course, it’s not just sugar that jukes our wanting system into overdrive: Fat and salt can also cue dopamine, and it’s the combination of all three (fat, salt, and sweetness) that triggers the most intense yearning. An overeater, in Kessler’s vision, is like a Manchurian candidate who, instead of killing, has been conditioned to respond to a surge of dopamine with a strong hand-to-mouth reflex.

According to Kessler, food makers—oblivious to the extent that they are controlling their customers’ minds—are simply responding to market pressure to design foods that sell. Kessler describes speaking to a group of executives from some of the world’s largest food corporations. He laid out the science, explaining how their products exploited brain chemistry. When he finished, he wrote, “there was complete silence in the room. Then one executive spoke up. ‘Everything that has made us successful as a company is the problem,’ he said.”

It’s not just the food industry. Marketers and advertising agencies working in all sectors of the economy have cobbled together an empirical understanding of what makes our brains flinch with desire. They might not realize they are practicing neuroscience, but they have developed a Madison Avenue folk knowledge for juicing the mesolimbic reward system. Central to this knowledge is the deliberate confusion of desire and pleasure, so that the thrill of the purchase becomes an end unto itself. Plants invented this form of dopamine marketing in fruit—offering sugar as an advertisement to the tongue, a loss-leader given away in exchange for seed distribution. Capitalism perfected this innovation by stripping the satiating bulk and fiber that came with fruit, and offering pure swad—that alluring sweetness that when consumed only cues greater yearning. The success of this strategy has made it pervasive. We now live in an empty-calorie world, where sugar’s equivalents flash their for-sale phosphorescence from highway signs, smartphones, and television screens. The world for sale is a world reduced to lust and hunger—endless oceans of wanting interrupted by brief atolls of contentment.

While it’s too much to blame sugar for everything wrong with consumerism, it is a fair symbol for our excesses, given its role in the creation of the culture of desire. And because its consumption awakens further desire, sugar is especially well suited as a metaphor that demonstrates the connection between superficial rewards and manic striving—whether that striving is pushing a lever in a lab or pushing paper in an office. The neurologist Robert Sapolsky, speaking about the mesolimbic reward system, has said, “Dopamine is not about pleasure, it’s about the anticipation of pleasure. It’s about the pursuit of happiness rather than happiness itself. If you block that rise of dopamine from occurring you don’t get the work.” As Sidney Mintz wrote, sugar’s success in at once creating a market among workers, and keeping them attentively at their stations, “made visible, perhaps for the first time in history, a critical connection between the will to work and the will to consume.”

Work hard, and you’ll someday have not just a house with a car out front, but also a designer range in the kitchen, a pool, and a second car, a second house, a second spouse, a yacht, a modest Mediterranean island.… This is the American dream in cancerous metastasis, growth unchecked by reason, consumption divorced from pleasure. Of course the people of the culture of desire are fat. It would be astonishing if we were not.

I could see why people like my parents worried about sugar. Sugar easily represents all the despicably materialistic elements of our culture. But it would be a mistake, and a distraction from the larger problem, to get too hung up on the debate over whether it’s evil, or poisonous. Everyone, except perhaps for a few industry-funded hacks, agrees that Americans consume way too much. We could eat a lot of sugar; heck, we could all eat a quarter cup every day (that’s about the government-recommended maximum), and that would still be less than half as much as we are eating now. What really makes sugar bad are those corrosive elements of our culture that induce some people to guzzle truly heroic quantities of the stuff.
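To make the arithmetic behind that quarter cup concrete, here is a rough back-of-the-envelope sketch, assuming the 2,000-calorie reference diet behind the 10 percent guideline discussed above, roughly 4 calories per gram of sucrose, and about 4 grams to a teaspoon:

$$
0.10 \times 2000~\text{kcal} = 200~\text{kcal}, \qquad \frac{200~\text{kcal}}{4~\text{kcal/g}} = 50~\text{g} \approx 12~\text{teaspoons} = \tfrac{1}{4}~\text{cup per day}.
$$

On that reckoning, the 10 percent ceiling and the quarter cup are the same limit expressed two different ways.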

THE SWEET LIFE

What I’d thought was a small question—how does a rational person achieve the health my parents sought through diet?—had produced an impossible answer: You must end the culture of consumption and the inequity it breeds. I was looking for tools I could apply in the kitchen, not grand political theories. I didn’t want to start researching tax policy or tactics for revolution. There was, however, one glimmer of hope for a more domestic solution because, in uncovering this dementedly complex tangle, I’d also uncovered a simple, but perhaps powerful, tool with which to extricate myself.

The sales pitch that promises everything and delivers nothing—that tickles our wanting neurons while making sure we can’t get no satisfaction—relies wholly on confusing pleasure with desire. Perhaps it could be defeated by simply learning to distinguish the one from the other. For food, this would mean learning to truly taste—slowing down enough to take pleasure in the flavors and textures in each bite. Food corporations have learned to cater to what they call “the lazy American palate,” which basically means lots of sugar, fat, and salt. They don’t waste subtle flavors on us because, in the grip of a dopamine-driven desire, people stop tasting. When I ate this way I was trying to muffle craving, not produce pleasure. The problem of gluttony is not too much love of food, but too little.

Taking pleasure in food also meant taking pleasure in sweetness, for while I’d uncovered any number of reasons to be concerned about the spread of sweetness, at the end of it I was still an unabashed lover of sugar. This, I decided, was no more contradictory than drinking wine while abhorring alcoholism. The trick, at least for me, was cultivating my attention to sweetness, so that I noticed, and was able to stop, when pleasure subsided.

It seemed radical, and maybe foolhardy, to champion an individual’s own sensation over the calorie-counting abstractions of conventional wisdom. Surely telling people to “eat what you enjoy” would be catastrophic for those with a suite of metabolic conditions, and laughable for those without a penny to spare. I absolutely wanted the reductive science curing goiters and pushing innovation forward; I just didn’t want to place so much faith in our limited knowledge that I stopped trusting my own senses. As food research progresses it should produce solutions that work in concert with the senses, rather than insisting—as the old nutrition orthodoxy did—that our bodies are leading us astray.

In fact, the nutritional assumptions that Bruce German was doing his best to debunk had all served to help industrialized uniformity triumph over sensation and the taste preferences passed down from mother to child. The assumption that structure did not matter allowed technologists to rebuild foods—mostly by removing fiber while adding sugar, salt, and fat—to increase shelf life and withstand the rigors of long-distance transportation. The assumption that everyone should eat the same diet facilitated mass production. And the assumption that industry was better situated than individuals to control nutrition put a healthy halo over processed foods, while making traditional foods look bad. As early as the 1930s, the food writer M.F.K. Fisher was already able to observe that we were losing flavor in this victory of industry over tradition: “The foundation of all French cookery is butter,” she wrote, “as that of the Italian is olive oil, German lard, and Russian sour cream. In the same way, water or drippings may be designated, unfortunately, as the basis of the English cuisine, and perhaps the flavor of innumerable tin cans, of American!”

Despite the current hegemony of industrial nutrition, however, food culture persists. There are pockets of resistance—delicious traditions—in every family. The way to fix dinner and find our way back from obesity could be lurking in these recipes—and in the deep flavor memories from infancy. There is raw emotional power in those memories: The Chinese writer Lin Yutang gauged their potential energy when he asked, “What is patriotism but the love of food one ate as a child?” Perhaps we could reclaim our dietary homelands if we remembered how to distinguish between Kessler’s dopamine-driven craving and genuine satisfaction—if we found a way to take, not guilty pleasure, but a kind of holy joy in every mouthful.

It seemed to work for Beth and me. The more we learned about food the more we appreciated it. And this appreciation steadily changed the foods that took form in our kitchen. We began spending a little more money to receive a box of produce directly from a farm, first twice a month, then every week. Next we added six eggs—pastel shells and bright orange yolks—to our order. Then, a half-gallon of milk. I experimented in the kitchen, basing my creations on bits of overheard advice, inspiration from food writers, and snatches of half-remembered culinary science. Some attempts were disastrous (my high-density whole-wheat bread), some traumatic (my wrestling match with elastic fish guts), some simply disgusting (my sour, alcoholic kefir made from the cauliflowerlike microbial spores, which a stranger had handed me in an alley behind the TransAmerica building). But my failures were occasionally interrupted by modest successes, and our lives grew steadily more delicious. With much more grace, Beth also built up a repertoire of dishes, and a library of cookbooks. I have to admit that I frequently fell off the wagon: I wolfed down my food unthinkingly, and amid the deliciousness there was (and is) a fair share of bland brown rice with bitter greens—those are my flavor memories after all. But we were also eating marbled meats, triple-cream cheeses, and molten-chocolate pudding. We chose these foods not because we craved them, but because they truly made life richer. And though most of our food was arriving without a nutrition-facts panel, we managed to maintain our good health. More and more frequently our table was crowded with friends. Life grew sweeter, but not sweet in the diminutive or yearning senses commonly evoked by the word in English—this was sweetness as the Italians must experience it in order to produce a phrase like la dolce vita, the sweetness of fulfillment. Changing a culture, it seems, isn’t so hard when it begins at home. And if people were able to stop striving and fighting long enough to find contentment in good food, lovingly prepared, with a small group of friends, in the universal solidarity that can be found around the table—well, perhaps that really could change the world.


* Liebig, though a brilliant chemist, wasn’t so good when it came to food: He also claimed that searing a piece of meat seals its juices inside, but this actually does just the opposite, drying and caramelizing the outer layer, and to this day cooks sacrifice steaks to Liebig’s confusion. Harold McGee has pointed out that you can achieve the same effect without drying out the meat by searing it at the end of the cooking process rather than the beginning.

* If we are to give credit where credit is due, the honor for the invention of cheese belongs to babies. We still employ the enzyme rennet, from the stomach of a baby goat or sheep, to perform this magic trick. It’s likely the first man-made cheese was a happy accident that occurred when someone used a bag made from a calf’s stomach to carry milk.

* It’s possible to zero in on these issues by looking at different molecules in the blood: One (phytosterol) can show you are taking in too much cholesterol, another (mevalonate) can reveal if your liver is the problem, and a third (7-α-hydroxy-4-cholesten-3-one) can indicate that you aren’t eliminating enough cholesterol by converting it to bile. Each condition demands a different treatment. But these tests aren’t routinely performed before doctors write a prescription to control cholesterol.

* Though glucose is necessary for survival, you don’t have to eat sugar to get it: The liver can synthesize glucose from amino acids. But it is a more elaborate process, and the amino acids usually come from breaking down muscle.

* Dufty was partially right to compare sugar to heroin. But his larger point, that sugar is categorically evil because it is a pure chemical, is totally nuts. Salt (sodium chloride) and water (sometimes called by the playful and scientifically correct name dihydrogen monoxide) are examples of the hundreds of pure chemicals we need to live.