Evolution fails to honor terms like “success” and “failure” for very long. In a petri dish, the early relative success of a bacterial colony only yields to a correspondingly early collapse as it approaches the edge of the dish. In the case of agriculture, expansion did indeed build on successful technology perfected in the Old World, but it was also driven by the failure of agriculture in the Old World. Expansion produced excess population, just as centuries of farming overworked and depleted soils. Europe needed a bigger petri dish.
It may be true that Europeans understood their system was flawed only in hindsight, but it was near hindsight. At the time of New World settlement, the life span of the average European was frighteningly short, about forty years—and far shorter in some urban centers (seventeen in Manchester, England, in the mid-nineteenth century, for instance). Transplanted to the New World, these same Europeans began living far longer. Spaniards in the earliest colonies around Buenos Aires reported survivors of up to a hundred years of age. In Massachusetts, the average age at death for the first settlers was 71.8 years. Settlers in New Zealand reported an annual death rate of 5.3 per thousand in the nineteenth century, compared to a contemporaneous death rate in the United Kingdom of 16.8 per thousand. In 1898, for every 10,000 male infants born in New Zealand, 9,033 survived. England’s survival rate for the same period was 8,414.
As we have seen, the disease and parasite load the Old World carried played a huge role in this disparity, but something more basic
was at work. Settlers in the New World ate better. There are strings of wide-eyed dispatches from European visitors to the New World reporting such tidbits as “feasts” at the boardinghouses of average factory workers, where meat was served twice daily. Factory workers in England during the same period were lucky to see meat twice a month, if at all. The Americans had both more food and better food. Even the temperance crusaders Catherine Beecher and Harriet Beecher Stowe focused more opprobrium on food than drink: “For every reeling drunkard that disgraces our country, it contains one hundred gluttons.” It was a novel problem. On one of his visits to France, Thomas Jefferson and a few of his countrymen were enduring a drawing-room assault from a group of Frenchmen touting the virtues of their civilization over that of the New World. Jefferson simply pointed out that the Americans were the tallest people in the room. The European society that produced these settlers was steeped in chronic hunger, and had been since the beginning of recorded history.
The best evidence that hunger was a way of life in agricultural societies is the persistence of famine. Indeed, accounts of famine, like history itself, date to the beginning of agriculture. The food writer Brian Murton goes so far as to suggest that famine is, in a way, history itself: “In preliterate societies, particular famines, along with other collective catastrophes, served as a common means of recording and recovering the experience of the past.” In a limited sense, this notion of history probably predates agriculture. Hunter-gatherers like the much-studied Eskimos of the extreme north of North America have a long oral tradition that remembers hunger—such as when a particular village failed in a certain year to catch a whale. They dealt with this as humans have throughout the ages, with a high death rate and infanticide. Yet these are hunter-gatherers on the extremes, living in an environment that most of the world’s humans could not tolerate for a single season, even a single week. The better test is in the temperate world, and there, famine is a creation of farming.
Historians have been able to assemble a reasonably complete account of famine stretching back six thousand years. At any given time, this plague was not spread equally across humanity but dominated
in a shifting series of famine centers. From about six thousand to twenty-five hundred years ago, the hot spots for starvation worldwide were in northeast Africa and western Asia. This is when Egypt recorded regular cycles of famine lasting seven years, during which, according to an ancient account, “every man ate his children.” From twenty-five hundred to fifteen hundred years ago, the Roman Empire recorded at least thirty-five major famines, making it simultaneously the center of hunger and civilization in the world. Then the locus shifted to Western Europe, where it remained until the beginning of the nineteenth century. Between A.D. 500 and 1500, there were ninety-five major famines in England, and seventy-five in France, which suffered its last major famine in 1795. Late in this period, the center shifted to Eastern Europe, which recorded 150 famines between 1500 and 1700. Russia was next, with a hundred major famines from 971 to 1974. In the years 1921–22, about nine million Russians starved to death; in 1933–34, between four and seven million.
Meanwhile, Asia, both north and south, was suffering famine throughout this period, but not of sufficient scale and frequency to challenge Europe’s status as famine capital. The incidence, however, rose rapidly in the nineteenth century. South Asia has seen at least ninety major famines in the last twenty-five hundred years, and two-thirds of those came after 1700. At the beginning of the twentieth century, Asia emerged as the world’s famine capital. However, China, that long-standing agricultural society, has never been without famine. From the beginning of its history, famine was a way of life. During the past two thousand years, it recorded about ninety famines per century, almost one per year somewhere in the country. Its most devastating famines, however, have been in the twentieth century.
Given this pattern, it is difficult to see agriculture as the antidote to hunger. In fact, the pattern suggests the opposite. Famine was the mark of a maturing agricultural society, the very badge of civilization.
The world’s worst famine is not buried in archaeology and the scratchings of forgotten languages. The worst we know of occurred
within the memory of middle-aged people alive today, yet its details—the sweep of a period one writer has called a time of “hungry ghosts”—are only beginning to trickle out in anecdote and demographers’ calculations. The details have been closely closeted by one of the world’s most organized and autocratic societies. Nonetheless, we are coming to understand that, conservatively estimated, eighty million people died of starvation during the period the Chinese call the “Great Leap Forward.”
Again, China is the great enigma in the expansion of catastrophic agriculture. Unlike the pre-Columbian agricultural societies in the Americas, China’s suite of crops and technologies was the equal of Europe’s; in fact, in many cases it was ahead of Europe’s, or was Europe’s. The northern half of Chinese society was based in wheat, and China had given Europe both chickens and swine. As Europe was gearing up for imperialism and exploration, China was doing the same, sending the eunuch navigator Cheng Ho on a voyage to the Indian Ocean. Chinese trade with Europe during the fifteenth century was widespread. Economists calculate that at this point China accounted for about a quarter of the world’s economy. Yet, integrated as it was into the European economy, China never became imperialistic, at least not globally. (Modern Tibetans would consider the distinction academic.) The Han Chinese did finally grab an enormous amount of territory, but it was contiguous.
China had its own internal tensions between north and south that kept it preoccupied. Even more demanding was the centuries-old conflict between this hierarchical agricultural society and the nomadic pastoralists (aka Mongol hordes) to China’s west. More fundamentally, China’s best agricultural trick was, and is, rice, and rice doesn’t travel well in temperate zones.
And yet there is also a case to be made that China tempered its drive toward imperialism by using famine to vent population pressure. Hunger drives imperialism to a degree; Middle Eastern, Roman, and later European imperialism all coincided not only with these regions’ development as agricultural societies but also with their accession to the title of global famine capital. Simply put, the
population explosion that agriculture allows creates the need for expansion, as it has since the first wheat-beef people hit the European plain. A society, however, also can settle the problem with famine.
The massive toll of starvation during the Great Leap Forward has been read by demographers, just as an asteroid’s striking the earth can be read in the crater left behind. During the periods of famine in Europe, local death rates often ran to 80 percent of the population, both from starvation and from the diseases that strike hunger-weakened people. Furthermore, those death rates reflect a higher toll on both pregnant and lactating women, because of their greater nutritional needs. The menstrual cycle is suspended in hungry women, so even those who do survive don’t reproduce. All of this combines to level population.
For the last several thousand years, as the famine center has shifted around the world, waxed and waned, China has maintained a fairly steady course of starvation. Researchers have compiled documentary evidence of 1,828 famines in China between 2019 B.C. and A.D. 1911. They were concentrated in a famine belt between the Yellow and the Yangtze rivers, which is to say that China’s hunger was concentrated in the same area as its agriculture. This history can be read in the numbers, but also in the language.
During China’s most recent famine, people again began using an expression that means “swapping children, making food.” Hungry peasants traded children to avoid killing and eating their own. The practice was widespread. The specific phrase for this is about 2,200 years old and meant exactly then what it does now. As the Han Dynasty was founded, in 200 B.C., a single famine killed about half of China’s population. The emperor Gao Zu issued an edict permitting people to eat or to sell their children as meat, thus lending legal sanction to a long-established practice. A written report from 2,600 years ago notes: “In the city, we are exchanging our children and eating them, and splitting up their bones for fuel.”
The Chinese are not alone in practicing cannibalism during famine. The phenomenon was, for instance, widespread in the Ukraine in the famine of the 1930s. Still, a people that acknowledges cannibalism
in tradition, law, and language is indeed a culture of hungry ghosts.
Amartya Sen, in his book Poverty and Famines, argues that the problem is not insufficient food but a lack of “entitlement,” which he defines as the means to command food. “Hunger and famine have to be seen as economic phenomena in the broadest sense … and not just as reflections of problems of food production.” Translated from academese, this means people are hungry because they can’t afford to buy food, not because there isn’t food to buy. It would seem hunger and famine are creations of poverty, not agriculture; but, of course, poverty is agriculture’s chief product.
It is an article of faith among modern proponents of the agriculture miracle that famines are a thing of our past. True enough, they acknowledge, famine still does happen in pockets on the globe, but they argue that we produce enough food to feed everybody. That is, famine is no longer a problem of technology but of distribution, not of science but of politics.
One thread in this line of thinking blames modern famine on bad government. China’s most recent famine fits this analysis. The immediate cause of the starvation in China during the Great Leap Forward was monumental government stupidity. Specifically, Mao decided that the laws of Western science were invalid in China—a bourgeois plot—and unilaterally decreed a whole new set of agronomic principles drawn from thin air. For instance, wheat and rice were seeded at densities that were orders of magnitude beyond what the soil could support. Crop failure was almost total. Worse, sycophants and functionaries desperate to meet quotas and curry favor up the line refused to report the failures. They faked photos of fields so packed with ripe stalks that peasants could stand on top of the waves of grain, as if walking on water. Party workers fashioned plaster models of giant vegetables as evidence of their miraculous success. Determined to demonstrate this success to the world, the Chinese exported what little grain they had.
We can just as easily trace the blame for contemporary famines in Ethiopia, Bangladesh, and the Sudan—certainly in North Korea—to inept rulers. Modern famine is the result of bad government, but so was ancient famine. Bad government is a part of the syndrome, a chicken-and-egg problem. Population explosion generates the need to grow more food, but agriculture is the cause of that population explosion, and agriculture creates government. The hierarchical, specialized societies that agriculture builds are wholly dependent on the smooth operation of their infrastructure, on stability, on transportation. Dams must be built, canals must flow, roads must be maintained, and government must be established to order those tasks. Government leaders emerge from the social hierarchy that agriculture’s wealth makes possible. Failures occur as frequently as humans fail. To hold agriculture blameless and government responsible for famine is like holding a lion blameless for a child’s death on grounds that it was the lion’s teeth that did the biting. Poverty, government, and famine are coevolved species, every bit as integral to catastrophic agriculture as wheat, bluegrass, smallpox, and brown rats.
The food historian Sophie Coe has argued that famine did not begin to end in Europe until the introduction of New World crops. Much has been made of the effects of Spanish ships returning loaded to the gunwales with gold and silver, and much should be. The loot reorganized not just Europe’s but the world’s economy. China, for instance, found itself heavily drawn into the European trade network in the fifteenth century simply because most of its currency was made of New World silver. The flash of this booty, however, burned out fast. Silver and gold do not go forth and multiply; but seeds do. Long after Spain’s gold—and Spain itself—faded from prominence, maize and potatoes remain. Maize is today the world’s most important crop, potatoes fourth. This was then and is now of special significance to the world’s poor. Most of the maize grown feeds livestock and poor people, and this low-caste bias is even more pronounced in the case of the humble potato. While Europe’s upper crust learned
to sip chocolate and munch tomatoes, the potato reorganized the life of Europe’s peasant—and later its working—class. It’s worth tracing that trail for what it tells us about food’s ability to sort social classes.
The potato was not an overnight success. Still, the conquistadors did recognize its potential when they found it in the Andes no later than 1537, along with a string of indigenous domesticated tubers that are still central to Andean agriculture. They had ferried potatoes back to Europe by 1570, but it would be two centuries before the plant developed into one of Europe’s most consequential foods. That dormant period had everything to do with where the first potatoes were found.
Andean agriculture is the exception to the rule. That is, all of the other agricultures in the world—wheat in the Middle East, rice in Asia, maize in Mexico—sprang up in fertile river valleys close to temperate zones. Andean agriculture developed atop some of the world’s tallest mountains, close to the equator. Thus, the climate in which potatoes evolved was relatively cool, because of the altitude, but with a long, light growing season, because of the latitude. They are photoperiod-sensitive, which is the botanist’s term for being sensitive to the length of day. That is, they take their key signal to shift gears, from building roots and stems to building tubers, from the seasonal change in the length of days.
As a result, the first potatoes brought back from the New World wouldn’t grow well in much of Europe. They required a very long growing season, and would only set tubers in late fall, well after temperate-zone frosts killed the vines. There was an exception, however. Northern Atlantic currents give the British Isles a mild fall climate. The Andean tubers could thrive there. Later, an American breeder developed some Chilean potatoes into varieties more suited to a temperate zone. These are the types that dominate production today and made the spud at home throughout Europe in the nineteenth century. Ireland, however, got an early start, a classic case of agricultural success prefiguring doom.
To a certain degree, England, too, could have benefited from potatoes, as Ireland did. The climate agreed with them, but the social atmosphere did not. Throughout Europe, there persisted a vicious
prejudice against the potato as a food fit only for livestock and the Irish. The French, for instance, regarded the potato as poisonous. In the nineteenth century, when Irish immigrants began bringing potatoes into English working-class towns, social commentary railed against the corrupting influence of this food that promoted “idleness, improvidence, and moral deviations.” In his book The Potato: How the Humble Spud Rescued the Western World, Larry Zuckerman writes:
The indictment was plain. The potato, a coarse food that subverted desires for comfort or cleanliness, stood accused of cheapening lives. Not only did it promote the ruinous cycle of poverty and population, it was partly responsible for the moral illness that helped bring about tuberculosis, typhus, and cholera.
That is, it was not poverty that made those Irish immigrants eat the potato but their eating the potato that made them poor, sick, and dirty. Not that life was all that great for England’s native poor. The disparagement of the potato coincided with that previously reported period when the life expectancy in Manchester was seventeen years. The poor in Britain—which is to say most people in Britain—ate mostly bread. Period. Occasionally there was milk or cheese; meat appeared only on special occasions. Further, bakers stamped their loaves with a “W” or an “H,” indicating “wheaten” or “household.” The pure wheat bread went to the well-to-do. Bread of mixed wheat and rye flour, the household bread, went to the poor. This thin diet based wholly in grain left the British terribly vulnerable to crop failures and the ensuing famines, a situation that really didn’t change until Britons reluctantly adopted potatoes in the nineteenth century. That, and they began exporting starvation to colonies.
The Irish adopted potatoes earlier because they didn’t have much choice. Wheat bread is a simple enough diet, but still requires some resources: the grain must be ground and ovens must be fueled to bake it into loaves. Potatoes are the food for the poorest of the poor because they don’t require even these steps. One digs them from the ground, a dense package of starch that needs no milling but can simply
be tossed into the fireplace to roast and then be eaten like an apple, no fork or plate required. This ease of cooking was critical, because England had clear-cut Ireland’s forests, and, unlike England, Ireland had very little coal. Fuel was peat, and eating potatoes instead of bread helped to conserve that fuel. Spoiled spuds could be tossed to hogs, allowing the poor to raise a bit of meat (for market, of course, not for table).
Said the Irish, the “sauce of a poor man is a little potato [eaten] with a big one.”
Before potatoes arrived, the Irish relied heavily not on wheat but on porridge of oats, a crop bred to grow on the ragged edges of agriculture where wheat won’t. Ireland could grow wheat on its eastern edge, but as with beef, only a few well-to-do Irish would taste it, and for the most part it was exported to Britain. The island’s pitiful, rocky soil, its cool climate, and frequent rains all combined to make growing wheat a difficult assignment, so oats would have to do (and even those failed frequently). The rain-tolerant potato changed this situation rapidly. By the beginning of the nineteenth century, the average Irish person was eating five and a half to six pounds of potatoes per day.
Throughout the eighteenth century, there was a clear trend toward potatoes, accelerated by a couple of major famines. The latter and worst of those occurred in 1740–41 and killed as many as 400,000 people, 10 percent of Ireland’s population. Those famines were brought on by failure of the oat crop, prompting an increasing reliance on potatoes. At the beginning of the period, the poor—the majority of the Irish—were living on milk, potatoes, and oat porridge. Toward the end of the century, the porridge disappeared. One observer reported in 1780 that an Irish person usually ate only milk and potatoes year-round “without tasting bread or meat. Except perhaps at Christmas once or twice.” Meagerly equipped kitchens meant the potatoes were simply boiled and eaten. People commonly kept one fingernail long to strip the peel, the only cutlery required.
In the meantime, potatoes had been the basis of Andean agriculture for thousands of years. Yet there it built a culture of such wealth that
the Spaniards came to dominate the rest of the world simply by looting the Andes. So far as we know, though, the Incas were never in the same position as the Irish. Perhaps they learned the hard way the perils of depending on a single crop. For whatever reason, the diet in the simplest Andean village was (and remains) far more varied than that of the peasants of the advanced Western world. The Incas grew maize, a string of indigenous tubers (especially mashua, ulluco, and oca), and quinoa (a species of goosefoot) for grain. They had vast numbers of llamas and alpacas, and the rich ate both. The peasants, however, were not without protein; they ate guinea pigs, a delicacy that still figures prominently in Andean peasant diets. It was not a monocrop economy, but its mainstay, the potato, created a monocrop economy in Ireland.
But for all of its advantages, the potato has one enormous drawback. Because of its complex array of chromosomes, it doesn’t reliably pass on traits from generation to generation, or even regularly produce seed. Cultivation depends on vegetative propagation. As any backyard gardener knows, one does not grow potatoes by planting seed but by planting a bit of last year’s potatoes. Each new crop is a clone of the last. If you had the sensibilities of a pathogen, you would see enormous opportunities in this. Bits of tissue carry on, generation after generation. Normally viruses and fungi aren’t so lucky. They can’t survive in most seeds, and have to find alternate hosts to weather out the off season. With potatoes, viruses and fungi go along for the ride year after year.
At the time of the Irish potato famine, though, viruses were not on European minds; people were, and the focus was on Ireland. The potato’s “success” could be quickly measured and was apparently evident, even to the Irish of the time. Typical households had six to ten children. A French visitor once asked an Irish family how such poor people were able to raise so many healthy children. The answer was, “It’s the praties, sir.”
The economics of rural life stabilized, allowing earlier marriage. Above all, the hand-to-mouth existence of the countryside prized plentiful, cheap stoop labor—i.e., children, an asset made even more abundant by Ireland’s Catholicism. As a result, between 1780 and
1841 the population of Ireland doubled from four to eight million. It was simply a population explosion, and English observers, many of them infected by the vicious anti-Irish sentiments of the time, noted it warily. In fact, this explosion caused the father of gloom and doom, Thomas Malthus, to focus his studies on the potato and to speculate that Ireland would soon have a population of twenty million. From our vantage, Malthus’s projections have often been ridiculed as unnecessarily grim and, in any event, wrong. They were wrong, though, only because Malthus didn’t account for factors like fungi. The devastation wrought by plant disease balanced Malthus’s equations, a “correction” that was hardly cause for optimism.
The unprecedented boom in population and the country’s dependence on a single, disease-prone crop made Ireland doubly vulnerable in 1845. The fungus in this case (Phytophthora infestans) leads to a disease called late blight. It is still with us, and is still the leading cause of potato-crop losses around the world.
Phytophthora infestans first made its way from North America into the Low Countries of Europe in the 1840s. Because those places were not nearly so dependent on potatoes as Ireland, the spread of the disease and the resulting losses were moderate on the Continent. When it appeared in Ireland, in 1845, it claimed about 40 percent of the crop that year. Late blight causes potato vines to curl, blacken, and die. The diseased tubers are inedible, so they were simply left to rot in the ground. This practice allowed the fungus to weather the winter of 1845–46, then roar to life the following growing season. Ironically, it was aided by earlier fungal and viral diseases. European potatoes had suffered before from a different fungal dry rot and a viral curl, and as a result growers had planted varieties developed in North America that were resistant to these diseases. This narrowed the number of varieties grown in Europe, and it happened that these new varieties were wholly susceptible to late blight. In 1846, weather conditions were ideal for the spread of late blight, and spread it did, knocking out 90 percent of the single crop on which eight million poor people, most of them children, depended. The blight abated slightly in 1847, then came back full force in 1848. As often happens with famine, disease—in this case, cholera—spread through a weakened
population. The usual companion of hard times, government stupidity (or viciousness), followed closely. The English refused any amendment of the Corn Laws, which meant that a starving Ireland continued to export grain to England.
The commonly quoted toll of the Irish potato famine is a million people. Another 1.3 million emigrated during that same period. Over the next sixty years, another 5 million fled the famine-ravaged economy. By 1911, Ireland’s population was 4.4 million, about what it had been in 1780, before the potato-induced explosion.
Dehumanization probably took as great a toll, if a less measurable one. An observer reported that the “bonds of natural affection were loosened,” that parents neglected children, and children parents, and men abandoned wives and children. As Zuckerman notes, one Irishman reported a visit to a village on the western coast, where the famine was worst:
In the first [hovel] six famished and ghastly skeletons, to all appearances dead, were huddled in a corner on some filthy straw, their sole covering what seemed a ragged horse cloth, and their wretched legs hanging about, naked above the knees. I approached in horror, and found by a low moaning they were alive, they were in fever—four children, a woman, and what had once been a man. It is impossible to go through the details. Suffice it to say, that in a few minutes, I was surrounded by at least 200 of such phantoms, such frightful specters as no words can describe.
England overcame its hatred of potatoes, not necessarily by choice, and certainly not quickly. By the beginning of the nineteenth century, some Britons were beginning to replace their beloved wheat with this cheaper, more convenient starch, but no one really liked the idea, particularly not the upper class. One writer, calling potatoes “the root of slovenliness, filth, misery, and slavery,” concluded that he would rather be hanged than eat “the lazy root.” The potato stood as chief target for the proper Englishman’s hatred of the Irish in particular
and the poor in general. England, however, depended on the existence of a vast stock of poor people, and so, by extension, needed the potato.
Enclosure, a practice that had impoverished Ireland by excluding subsistence farmers from farmlands taken for the aristocracy, also occurred in England in the early nineteenth century. Landowners consolidated small peasant holdings, reducing the peasants to wage workers. These newly landless workers had no fields to grow grain, but a hill of potatoes occupies only a few square feet. Thus the potato, the backyard food source, came to fit into their diet. The potato’s means of preparation, however, were even more relevant than the means of growing it.
Wage earners began to congregate in the infamous slums of industrial England, where tenements lacked any sort of cooking facilities. In fact, it was standard practice for a working-class family to rent oven space on Sunday, their day off, to bake a bit of bread. Only a generation before, rural workers had taken something like 40 percent of their nutrition from bread alone, but in the slums, as industrialization gained steam, bread became a luxury. It was replaced by the potato, which could be roasted and eaten on the street. A number of institutions grew from the necessities of this new, portable life. For instance, it was the practice at many factories to hand the mass of workers a lump sum on payday; it was up to them to make change and divide it amongst themselves, and public houses arose for this purpose. In the process of making change, the public house would hold on to a bit in exchange for a pint or two. These same workers—and the family members who came to drag them from the pubs—could immediately trade some of the cash for street food, fried potatoes and sometimes fried fish. The combination is a marriage of convenience that survived as fish-and-chips and puts the lie to the notion that fast food was born in mid-twentieth-century America in a phalanx of deep fryers nestled under a set of golden arches.
The wheat-beef people who first confronted the Hungarian plain six thousand years ago created an agriculture that would sweep the
world. By this measure, their culture was successful. Yet by another measure, that culture had no choice but to sweep the world, being, as it was, the engine of population growth that would so alarm Malthus in the nineteenth century and Paul Ehrlich et al. in our time. The New World provided a safety valve, but so did famine and disease. A million dead in Ireland—and more in earlier famines in England, France, Russia, and Eastern Europe—adjusted the demographics to a more manageable level, as did the hunger-driven emigration of fifty million people from Europe, largely to the neo-Europes, between 1820 and 1930. The introduction of New World foods to Europe also offset some need, allowing population to swell further. These new foods came just as Europe was industrializing, just as it needed a vast, cheap source of labor and new, cheap ways to feed it. The potato filled that role, but there appeared at the same time another new tool for concentrating populations of the poor.
As much as spices and slavery, sugar drove European exploration and imperialism. Sugarcane, a perennial grass, was first domesticated in Southeast Asia. During the Arab agricultural revolution in the Middle East, which coincided with the rise of Islam, it made it to Mediterranean Europe, arriving in Spain with the Moors. The Iberian peninsula was the ragged edge of the tropical grass’s natural range, but the Spaniards were able to grow it, a skill that would give them a leg up in the contest of imperialism. Even before Columbus, the Portuguese had developed colonies at Madeira, and the Spanish in the Canary Islands, and it was quickly discovered that both of those places grew sugar well. The locals, however, did not. The indigenous people of the islands were hostile and proved to be unreliable laborers, largely because they died from European diseases. The Spaniards solved this problem by importing slaves from agricultural areas of Africa, a practice that would dovetail nicely with their new expertise at growing sugar. This initial foray, and the resulting trade in sugar with the rest of Europe, financed Spanish exploration. The model built in the Canaries set the stage for what food historians call a “sugar revolution” in the eighteenth century. Brazil and the West Indies provided ideal sugar climates, and slave traders had perfected their methods to allow the importation of ten million African slaves to the sugar
colonies. This in turn created a glut of sugar, so what had been a curiosity, a luxury for sweetening the ladies’ chocolate in Europe, suddenly became commonplace and cheap. As this happened, the British came to dominate both the trade in sugar and its consumption, by virtue of the territory they held as colonies and their dominance of trade in Senegal.
The spread of slavery, the sugar trade, and the consumption of sugar powered the Industrial Revolution, for reasons particular to the nature of sugar. First, sugar agriculture is, by necessity, tropical. It is not a simple extension of Europe’s wheat-beef base, which wouldn’t work in the tropics (nor would the wheat-beef methods). Temperate agriculture could spread with the Jeffersonian yeoman farmer model because Europe was filled with trained farmers, but the farmers of Europe grew food that required, at most, grinding and cooking. Sugar required a labor-intensive industrial refining that made it very much the first processed food—processed by slaves. And sugar was a remarkably efficient food, producing the most calories per acre of any crop. An acre of sugar will produce the same number of calories as four acres of potatoes, twelve of wheat, or 135 devoted to raising beef.
There is a fundamental tension inherent in civilized economies, one that intensifies as they develop. Farming, pyramid building, and industrialism above all require a huge pool of cheap labor. But that pool must be fed. We have seen that famine, disease, and simple malnourishment can come to the rescue of an overtaxed economy by correcting periodic population imbalances. In this light, famine, poverty, and disease are useful institutions, which is perhaps why Christ was so certain they would always be with us. The more trumpeted tool for this task, though, is efficiency, a favorite word of economists. In this mind-set, food is no longer a pleasure, an aesthetic experience, a bearer of culture and tradition. It is not cuisine but calories. The efficiency of sugar fit nicely with the ascendant dehumanization that was British industrialism.
Sugar gave the homeland cheap food, supported by slave labor in the Caribbean and South America. Its production rested on industrialized plantations that were markets for England’s factories. The
plantations in turn created wealth that became the capital that financed the industrialization of Britain. It was a system that had nothing to do with the well-being of most of the humans involved and everything to do with raising wealth. Writes the anthropologist Sidney Mintz, “Slave and proletarian together powered the imperial economic system that kept one supplied with manacles and the other with sugar and rum.”
The British custom of taking tea as an afternoon break has more to do with sugar than with tea. During the nineteenth century, when the custom arose, it was something like the coffee break in modern workplaces, but not so leisurely: a chance to gulp a quick cup of tea, which was invariably laced with sugar. In this way were the human machines of the factory “nourished”—fueled—without even needing to leave their machines.
Annual per capita sugar consumption rose 2,500 percent in England during the 150 years preceding 1800, but the pace quickened as the nineteenth century brought a proliferation of not only sugared teas but also jams (which were about 60 percent sugar), puddings, and treacle to British tables, especially those of the working class. The result was a 500 percent increase in consumption between 1860 and 1890. By the beginning of the twentieth century, the average Briton was getting about one-sixth of his total nutrition from sugar.
Mintz argues that these figures don’t tell the whole story, in that dependence on sugar was greater not only among the poor, but also among women and children in working-class households. Any meat and bread went to the man of the house as a simple practical measure: the mister needed his strength to work. Women and children typically ate jam, suet pudding with molasses, and sugared tea for at least two meals a day. Says Mintz, “If that figure [one-sixth of calories] could be revised to account for class, age, and intrafamily differentials, the percentage for working-class women and children would be astounding.” This is efficiency too, if one considers the purpose of humanity to provide cheap labor. Like famine, malnutrition promotes infant mortality and suppresses the birthrate, biasing the population toward working adults.
The rationale remains with us in the form of an order of fries and
a Coke. The spirit of fast food in twentieth-century America is an echo of the nineteenth in Britain. Fast food got one of its bigger boosts when Bill Clinton, then president, confessed an affection for Big Macs. More than just a politician’s slumming, this presidential touch lends the legitimacy the system needs. Similarly, fish-and-chips had maintained a nasty reputation in Britain into the twentieth century, but much of that was overcome when Winston Churchill confessed a secret liking for it, thus providing state sanction of a necessary institution.