The sweet shop in Llandaff in the year 1923 was the very center of our lives. To us, it was what a bar is to a drunk, or a church is to a Bishop. Without it, there would have been little to live for….Sweets were our life-blood.
Imagine a moment when the sensation of honey or sugar on the tongue was an astonishment, a kind of intoxication. The closest I’ve ever come to recovering such a sense of sweetness was secondhand, though it left a powerful impression on me even so. I’m thinking of my son’s first experience of sugar: the icing on the cake at his first birthday. I have only the testimony of Isaac’s face to go by (that, and his fierceness to repeat the experience), but it was plain that his first encounter with sugar had intoxicated him—was in fact an ecstasy, in the literal sense of that word. That is, he was beside himself with the pleasure of it, no longer here with me in space and time in quite the same way he had been just a moment before. Between bites Isaac gazed up at me in amazement (he was on my lap, and I was delivering the ambrosial forkfuls to his gaping mouth) as if to exclaim, “Your world contains this? From this day forward I shall dedicate my life to it.”
What if Roald Dahl and Michael Pollan are right, that the taste of sugar on the tongue can be a kind of intoxication? Doesn’t it suggest the possibility that sugar itself is an intoxicant, a drug? Imagine a drug that can intoxicate us, can infuse us with energy, and can do so when taken by mouth. It doesn’t have to be injected, smoked, or snorted for us to experience its sublime and soothing effects. Imagine that it mixes well with virtually every food and particularly with liquids, and that when given to infants it provokes a feeling of pleasure so profound and intense that its pursuit becomes a driving force throughout their lives.
Overconsumption of this drug may have long-term side effects, but there are none in the short term—no staggering or dizziness, no slurring of speech, no passing out or drifting away, no heart palpitations or respiratory distress. When it is given to children, its effects may be only more extreme variations on the apparently natural emotional roller coaster of childhood, from the initial intoxication to the tantrums and whining of what may or may not be withdrawal a few hours later. More than anything, our imaginary drug makes children happy, at least for the period during which they’re consuming it. It calms their distress, eases their pain, focuses their attention, and then leaves them excited and full of joy until the dose wears off. The only downside is that children will come to expect another dose, perhaps to demand it, on a regular basis.
How long would it be before parents took to using our imaginary drug to calm their children when necessary, to alleviate pain, to prevent outbursts of unhappiness, or to distract attention? And once the drug became identified with pleasure, how long before it was used to celebrate birthdays, soccer games, good grades at school? How long before it became a way to communicate love and celebrate happiness? How long before no gathering of family and friends was complete without it, before major holidays and celebrations were defined in part by the use of this drug to assure pleasure? How long would it be before the underprivileged of the world would happily spend what little money they had on this drug rather than on nutritious meals for their families?
How long would it be before this drug, as the anthropologist Sidney W. Mintz said about sugar, demonstrated “a near invulnerability to moral attack,” before even writing a book such as this one was perceived as the nutritional equivalent of stealing Christmas?
What is it about the experience of consuming sugar and sweets, particularly during childhood, that invokes so readily the comparison to a drug? I have children, still relatively young, and I believe raising them would be a far easier job if sugar and sweets were not an option, if managing their sugar consumption did not seem to be a constant theme in our parental responsibilities. Even those who vigorously defend the place of sugar and sweets in modern diets—“an innocent moment of pleasure, a balm amid the stress of life,” as the British journalist Tim Richardson has written—acknowledge that this does not include allowing children “to eat as many sweets as they want, at any time,” and that “most parents will want to ration their children’s sweets.”
But why is it necessary? Children crave many things—Pokémon cards, Star Wars paraphernalia, Dora the Explorer backpacks—and many foods taste good to them. What is it about sweets that makes them so uniquely in need of rationing? That is another way of asking whether the comparison to drugs of abuse is a valid one.
This is of more than academic interest, because the response of entire populations to sugar has been effectively identical to that of children: once populations are exposed, they consume as much sugar as they can easily procure, although there may be natural limits set by culture and current attitudes about food. The primary barrier to more consumption—up to the point where populations become obese and diabetic and then, perhaps, beyond—has tended to be availability and price. (This includes, in one study, sugar-intolerant Canadian Inuit, who lacked the enzyme necessary to digest the fructose component of sugar and yet continued to consume sugary beverages and candy despite the “abdominal distress” it brought them.) As the price of a pound of sugar has dropped over the centuries—from the equivalent of 360 eggs in the thirteenth century to two in the early decades of the twentieth—the amount of sugar consumed has steadily, inexorably, climbed. In 1934, while sales of candy continued to increase during the Great Depression, The New York Times commented, “The depression proved that people wanted candy, and that as long as they had any money at all, they would buy it.” During those brief periods when sugar production has surpassed our ability to consume it, the sugar industry and purveyors of sugar-rich products have worked diligently to increase demand and, at least until recently, have succeeded.
The critical question that scientists debate, as the journalist and historian Charles C. Mann has elegantly put it, “is whether [sugar] is actually an addictive substance, or if people just act like it is.” This question is not easy to answer. Certainly, people and populations have acted as though sugar is addictive, but science provides no definitive evidence. Until recently, nutritionists studying sugar did so from the natural perspective of viewing sugar as a nutrient—a carbohydrate—and nothing more. They occasionally argued about whether it might play a role in diabetes or heart disease, but not about whether it triggered a response in the brain or body that made us want to consume it in excess. That was not their area of interest.
The few neurologists and psychologists interested in probing the sweet-tooth phenomenon, or why we might need to ration our sugar consumption so as not to eat it to excess, did so typically from the perspective of how these sugars compared with other drugs of abuse, for which the mechanism of addiction is now relatively well understood. Lately, this comparison has received more attention as the public-health community has looked to ration our sugar consumption as a population, and has thus considered the possibility that one way to regulate these sugars—as with cigarettes—is to establish that they are, indeed, addictive. These sugars are very likely unique in that they are both a nutrient and a psychoactive substance with some addictive characteristics.
Historians have often considered the sugar-as-a-drug metaphor to be an apt one. “That sugars, particularly highly refined sucrose, produce peculiar physiological effects is well known,” wrote the late Sidney Mintz, whose 1985 book Sweetness and Power is one of two seminal English-language histories of sugar on which other, more recent writers on the subject (including myself) heavily rely.* But these effects are neither as visible nor as long-lasting as those of alcohol or caffeinated beverages, “the first use of which can trigger rapid changes in respiration, heartbeat, skin color and so on.” Mintz argued that a primary reason sugar has escaped, through the centuries, religion-based criticism of the kind pronounced on tea, coffee, rum, and even chocolate is that, whatever conspicuous behavioral changes may occur when infants consume sugar, it did not cause the kind of “flushing, staggering, dizziness, euphoria, changes in the pitch of the voice, slurring of speech, visibly intensified physical activity, or any of the other cues associated with the ingestion” of these other drugs. As this book will argue, sugar appears to be a substance that causes pleasure at a price that is difficult to discern immediately and is paid in full only years or decades later. With no visible, directly noticeable consequences, as Mintz says, questions of “long-term nutritive or medical consequences went unasked and unanswered.” Most of us today will never know if we suffer even subtle withdrawal symptoms from sugar, because we’ll never go long enough without sugar to find out.
Mintz and other sugar historians consider the drug comparison to be so fitting in part because sugar is one of a handful of “drug foods,” to use Mintz’s term, that came out of the tropics, and on which European empires were built from the sixteenth century onward, the others being tea, coffee, chocolate, rum, and tobacco. Its history is intimately linked to that of these other drugs. Rum is distilled, of course, from sugarcane, whereas tea, coffee, and chocolate were not consumed with sweeteners in their regions of origin. In the seventeenth century, however, once sugar was added as a sweetener and prices allowed it, the consumption of these substances in Europe exploded. Sugar was used to sweeten liquors and wine in Europe as early as the fourteenth century; even cannabis preparations in India and opium-based wines and syrups included sugar as a major ingredient.
Coca leaves, containing cocaine, and kola nuts, containing both caffeine and traces of a milder stimulant called theobromine, became products of universal consumption in the late nineteenth century, first as a coca-infused wine in France (Vin Mariani) and then as the original mixture of cocaine and caffeine of Coca-Cola, with sugar added to mask the bitterness of the other two substances. The removal of the cocaine in the first years of the twentieth century seemed to have little influence on Coca-Cola’s ability to become, as one journalist described it in 1938, the “sublimated essence of all that America stands for,” the single most widely distributed product on the planet and the second-most-recognizable word on Earth, “okay” being the first. It’s no coincidence that John Pemberton, the inventor of Coca-Cola, had a morphine addiction that he’d acquired after being wounded in the Civil War. Coca-Cola was one of several patent medicines he invented to help wean himself off the harder drug. “Like Coca, Kola enables its partakers to undergo long fast and fatigue,” read one article in 1884. “Two drugs, so closely related in their physiological properties, cannot fail to command early universal attention.”
As for tobacco, sugar was, and still is, a critical ingredient in the American blended-tobacco cigarette, the first of which was Camel, introduced by R. J. Reynolds in 1913. It’s this “marriage of tobacco and sugar,” as a sugar-industry report described it in 1950, that makes for the “mild” experience of smoking cigarettes as compared with cigars and, perhaps more important, makes it possible for most of us to inhale cigarette smoke and draw it deep into our lungs. It’s the “inhalability” of American blended cigarettes that made them so powerfully addictive—as well as so potently carcinogenic—and that drove the explosion in cigarette smoking in the United States and Europe in the first half of the twentieth century, and the rest of the world shortly thereafter, and, of course, the lung-cancer epidemics that have accompanied it.
Alcohol was the only commonly available psychoactive substance in the Old World until sugar, nicotine, and caffeine arrived on the scene. Unlike alcohol, the latter three had at least some stimulating properties, and so offered a very different experience, one that was more conducive to the labor of everyday life. These were the “eighteenth-century equivalent of uppers,” writes the Scottish historian Niall Ferguson. “Taken together, the new drugs gave English society an almighty hit; the Empire, it might be said, was built on a huge sugar, caffeine and nicotine rush—a rush nearly everyone could experience.”
Sugar, more than anything, seems to have made life worth living (as it still does) for so many, particularly those whose lives lacked the kinds of pleasure that relative wealth and daily hours of leisure might otherwise provide. As early as the twelfth century, a contemporary chronicler of the Crusades, Albert of Aachen, described the mere opportunity to sample the sugar from the cane that the Crusaders found growing in the fields of what are now Israel and Lebanon as in itself “some compensation for the sufferings they had endured.” “The pilgrims,” he wrote, “could not get enough of its sweetness.”
As sugar, tea, and coffee instigated the transformation of daily life in Europe and the Americas in the seventeenth and eighteenth centuries, they became the indulgences that the laboring classes could afford; by the 1870s, they had come to be considered necessities of life. During periods of economic hardship, as the British physician and researcher Edward Smith observed at the time, the British poor would sacrifice the nutritious items of their diet before they’d cut back on the sugar they consumed. “In nutritional terms,” suggested three British researchers in 1970 in an analysis of the results of Smith’s survey, “it would have been better if some of the money spent on sugar had been diverted to buy bread and potatoes, since this would have given them very many more calories for the same money, as well as providing some protein, vitamins and minerals, which sugar lacks entirely. In fact however we find that a taste for the sweetness of sugar tends to become fixed. The choice to eat almost as much sugar as they used to do, while substantially reducing the amount of meat, reinforces our belief that people develop a liking for sugar that becomes difficult to resist or overcome.”
Sugar was “an ideal substance,” says Mintz. “It served to make a busy life seem less so; in the pause that refreshes, it eased, or seemed to ease, the changes back and forth from work to rest; it provided swifter sensations of fullness or satisfaction than complex carbohydrates did; it combined easily with many other foods, in some of which it was also used (tea and biscuit, coffee and bun, chocolate and jam-smeared bread)….No wonder the rich and powerful liked it so much, and no wonder the poor learned to love it.” What Oscar Wilde wrote about a cigarette in 1891, when that indulgence was about to explode in popularity and availability, might also be said about sugar: It is “the perfect type of a perfect pleasure. It is exquisite, and it leaves one unsatisfied. What more can one want?”
Sugar craving does seem to be hard-wired in our brains. Children certainly respond to it instantaneously, from birth (if not in the womb) onward. Give babies a choice of sugar water or plain, wrote the British physician Frederick Slare three hundred years ago, and “they will greedily suck down the one, and make Faces at the other: Nor will they be pleas’d with Cows Milk, unless that be bless’d with a little Sugar, to bring it up to the Sweetness of Breast-Milk.” Slare’s observation was confirmed experimentally in the early 1970s by Jacob Steiner, a professor of oral biology at the Hebrew University of Jerusalem. Steiner studied and photographed the expressions of newborn infants given a taste of sugar water even before they had received breast milk or any other nourishment. The result, he wrote, was “a marked relaxation of the face, resembling an expression of ‘satisfaction,’ often accompanied ‘by a slight smile,’ ” which was almost always followed “by an eager licking of the upper lip, and sucking movements.” When Steiner repeated the experiment with a bitter solution, the newborns spit it out.
This raises the question of why humans evolved a sweet tooth, complete with intricate receptors on the tongue and the roof of the mouth, and down into the esophagus, that will detect the presence of even minute amounts of sugar and then signal this taste via nerves extending up into the brain’s limbic system. Nutritionists usually answer by saying that in nature a sweet taste signaled either calorically rich fruits or mother’s milk (because of the lactose, a relatively sweet carbohydrate, which can constitute up to 40 percent of the calories in breast milk), so that a highly sensitive system for distinguishing such foods and differentiating them from the tastes of poisons, which we recognize as bitter, would be a distinct evolutionary advantage. But if caloric or nutrient density is the answer, the nutritionists and evolutionary biologists have to explain why fats do not also taste sweet to us. They have more than twice as many calories per gram as sugars do (and more than half the calories in mother’s milk come from fat).
One proposition commonly invoked to explain why the English would become the world’s greatest sugar consumers and remain so through the early twentieth century (alongside the fact that the English had the world’s most productive network of sugar-producing colonies) is that they lacked any succulent native fruit, and so had little previous opportunity to accustom themselves to sweets, as Mediterranean populations did. As such, the sweet taste was more of a novelty to the English, and their first exposure to sugar, as this thinking goes, occasioned more of a population-wide astonishment. According to this argument, Americans then followed the British so closely as sugar consumers because the original thirteen colonies were settled by the English, who brought their sweet cravings with them. The same explanation holds for Australians, who had caught up to the British as sugar consumers by the early decades of the twentieth century.
All of this is speculation, however, as is the notion that it was the psychoactive aspects of sugar consumption that provided the evolutionary advantage. The taste of sugar soothes distress in infants, and with it their “distress vocalizations”; consuming sugar allows adults to work through pain and exhaustion and to assuage hunger pangs. That sugar works as a painkiller, or at least a powerful distraction, to infants is evidenced by its use during circumcision ceremonies—even in hospitals on the day after birth—to soothe and quiet the newborn. If sugar, though, is only a distraction to the infant, as this view posits, and not actively a pain reliever or a psychoactive inducer of pleasure that overcomes any pain, we have to explain why in clinical trials it is more effective in soothing the distress of infants than the mother’s breast and breast milk itself.
Many animals do respond positively to sugar—they have a sweet tooth—but not all. Cats don’t, for instance, but they’re obligate carnivores (in nature, they eat only other animals). Chickens don’t, nor do armadillos, whales, sea lions, some fish, and cowbirds. Despite the ubiquitous use of rats in the research on sugar addiction, some strains of laboratory rats prefer maltose—the carbohydrate in beer—to sugar. Cattle, on the other hand, will happily fatten themselves on sugar, an observation that was made in the late nineteenth century, when the price of sugar fell sufficiently that farmers could afford to use it for feed. In one study published in 1952, agronomists reported that they could get cattle to eat plants they otherwise disdained by spraying the plants with sugar or molasses (the cattle preferred the latter)—in other words, by sugar-coating them. “In several instances,” the researchers reported, “the cattle quickly became aware of what was going on and followed the spraying can around expectantly.” The cattle had the same response to artificial sweeteners, suggesting that “the cattle liked anything sweet whether it had food value or not.” By sweetening with sugar, as an essay in The New York Times observed in 1884, “we can give a false palatableness to even the most indigestible rubbish.”
The actual research literature on the question of whether sugar is addictive and thus a nutritional variation on a drug of abuse is surprisingly sparse. Until the 1970s, and for the most part since then, mainstream authorities have not considered this question to be particularly relevant to human health. The very limited research allows us to describe what happens when rats and monkeys consume sugar, but we’re not them and they’re not us. The critical experiments are rarely if ever done in humans, and certainly not in children, for the obvious ethical reasons: we can’t compare how they respond to sugar, cocaine, and heroin, for instance, to determine which is more addictive.
Sugar does induce the same responses in the region of the brain known as the “reward center”—technically, the nucleus accumbens—as do nicotine, cocaine, heroin, and alcohol. Addiction researchers have come to believe that behaviors required for the survival of a species—specifically, eating and sex—are experienced as pleasurable in this part of the brain, and so we do them again and again. Sugar stimulates the release of the same neurotransmitters—dopamine in particular—through which the potent effects of these other drugs are mediated. Because the drugs work this way, humans have learned how to refine their essence into concentrated forms that heighten the rush. Coca leaves, for instance, are mildly stimulating when chewed, but powerfully addictive when refined into cocaine, and even more so when smoked as crack cocaine and drawn directly into the lungs. Sugar, too, has been refined from its original form to heighten its rush and concentrate its effects, albeit as a nutrient that provides energy as well as a chemical that stimulates pleasure in the brain.
The more we use these substances, the less dopamine we produce naturally in the brain, and the more habituated our brain cells become to the dopamine that is produced—the number of “dopamine receptors” declines. The result is a phenomenon known as dopamine down-regulation: we need more of the drug to get the same pleasurable response, while natural pleasures, such as sex and eating, please us less and less. The question, though, is what differentiates a substance that triggers an intense experience of pleasure in the reward center yet isn’t addictive from one that is both pleasurable and addictive. Does sugar cross that line? We can love sex, for instance, and find it intensely pleasurable without being sex addicts. Buying a new pair of shoes, for many of us, will also stimulate a dopamine response in the reward center of the brain and yet not be addictive.
Rats given sweetened water in experiments find it significantly more pleasurable than cocaine, even when they’re addicted to the latter, and more than heroin as well (although the rats find this choice more difficult to make). Addict a rat over the course of months to intravenous boluses of cocaine, as the French researcher Serge Ahmed has reported, and then offer it the choice of a sweet solution or its daily cocaine fix, and the rat will switch over to the sweets within two days. The choice of sweet taste over cocaine, Ahmed reports, may come about because neurons in the brain’s reward circuitry that respond specifically to sweet taste outnumber those that respond to cocaine fourteen to one; this general finding has been replicated in monkeys.
This animal research validates the anecdotal experience of drug addicts and alcoholics, and the observations of those who both study and treat addiction, that sweets and sugary beverages are valuable tools—“sober pleasures”—to wean addicts off the harder stuff, perhaps transferring from one addiction, or one dopamine-stimulating substance, to another, albeit a relatively more benign one. “There is little doubt that sugar can allay the physical craving for alcohol,” as the neurologist James Leonard Corning observed over a century ago. The twelve-step bible of Alcoholics Anonymous—called the Big Book—recommends the consumption of candy and sweets in lieu of alcohol when the cravings for alcohol arise. Indeed, the per capita consumption of candy in the United States doubled with the beginning of Prohibition in 1919, as Americans apparently turned en masse from alcohol to sweets. Ice-cream consumption showed a “tremendous increase” coincident with Prohibition. By 1920, sugar consumption in the United States hit record highs, while breweries were being converted into candy factories. “The wreckage of the liquor business,” The New York Times reported, “is being salvaged for the production of candy, ice cream and syrups.” Five years later, British authorities suggested that this tremendous increase in ice-cream consumption “due to prohibition was injurious to health,” but an American college president countered that the trade-off was apparently worth it, as he had “never heard of a man who ate excessive quantities of the confection going home to beat his wife.”
All of this is worth keeping in mind when we think about how inexorably sugar and sweets came to saturate our diets and dominate our lives, as the annual global production of sugar increased exponentially from the 1600s onward. The yearly amount of sugar consumed per capita more than quadrupled in England in the eighteenth century, from four pounds to eighteen, and then more than quadrupled again in the nineteenth. In the United States, yearly sugar consumption increased sixteen-fold over that same century.
By the early twentieth century, sugar had assimilated itself into all aspects of our eating experience—consumed at breakfast, lunch, and dinner, and as snacks in between. Nutritional authorities were already suggesting what appeared to be obvious: that this increased consumption was a product of at least a kind of addiction—“the development of the sugar appetite, which, like any other appetite—for instance, the liquor appetite—grows by gratification.”
A century later still, sugar has become an ingredient avoidable in prepared and packaged foods only by concerted and determined effort, effectively ubiquitous: not just in the obvious sweet foods—candy bars, cookies, ice creams, chocolates, sodas, juices, sports and energy drinks, sweetened iced tea, jams, jellies, and breakfast cereals (both cold and hot)—but also in peanut butter, salad dressing, ketchup, barbecue sauces, canned soups, cold cuts, luncheon meats, bacon, hot dogs, pretzels, chips, roasted peanuts, spaghetti sauces, canned tomatoes, and breads. From the 1980s onward, manufacturers of products advertised as uniquely healthy because they were low in fat or specifically in saturated fat (not to mention “gluten free, no MSG & 0g trans fat per serving”) took to replacing those fat calories with sugar to make them equally, if not more, palatable, and often disguising the sugar under one or more of the fifty-plus names by which the fructose-glucose combination of sugar and high-fructose corn syrup might be found. Fat was removed from candy bars, sugar added or at least kept, so that they became health-food bars. Fat was removed from yogurts and sugars added, and these became heart-healthy snacks, breakfasts, and lunches. It was as though the food industry had decided en masse, or its numerous focus groups had sent the message, that if a product wasn’t sweetened at least a little, our modern palates would reject it as inadequate and we would purchase instead a competitor’s version that was.
Along the way, sugar and sweets became synonymous with love and affection and the language with which we communicate them—“sweets,” “sweetie,” “sweetheart,” “sweetie pie,” “honey,” “honeybun,” “sugar,” and all manner of combinations and variations. Sugar and sweets became a primary contribution to our celebrations of holidays and accomplishments, both major and minor. For those of us who don’t reward our existence with a drink (and for many of us who do), it’s a candy bar, a dessert, an ice-cream cone, or a Coke (or Pepsi) that makes our day. For those of us who are parents, sugar and sweets have become the tools we wield to reward our children’s accomplishments, to demonstrate our love and our pride in them, to motivate them, to entice them. Sweets have become the currency of childhood and of parenting.
The common tendency is, again, to think of this transformation as driven by the mere fact that sugars and sweets taste good. We can call it the “pause that refreshes” hypothesis of sugar history. The alternative way to think about this is that sugar took over our diets because the first taste, whether for an infant today or for an adult centuries ago, is literally, as Michael Pollan put it, an astonishment, a kind of intoxication; it’s the kindling of a lifelong craving, not identical but analogous to that of other drugs of abuse. Because it is a nutrient, and because the conspicuous sequelae of its consumption are relatively benign compared with those of nicotine, caffeine, and alcohol—at least in the short term and in small doses—it remained, as Sidney Mintz says, nearly invulnerable to moral, ethical, or religious attacks. It remained invulnerable to health attacks as well.
Nutritionists have found it in themselves to blame our chronic ills on virtually any element of the diet or environment—on fats and cholesterol, on protein and meat, on gluten and glycoproteins, on growth hormones and estrogens and antibiotics, on the absence of fiber, vitamins, and minerals, and surely on the presence of salt, on processed foods in general, on overconsumption and sedentary behavior—before they’ll concede that it’s even possible that sugar has played a unique role in any way other than merely getting us all to eat (as Harvard’s Fred Stare put it forty years ago) too damn much. And so, when a few informed authorities over the years did, indeed, risk their credibility by suggesting sugar was to blame, their words had little effect on the beliefs of their colleagues or on the eating habits of a population that had come to rely on sugar and sweets as the rewards for the sufferings of daily life.
* The other is The History of Sugar, published in two encyclopedic volumes in 1949 and 1950, by Noël Deerr, a sugar-industry executive turned sugar historian.