The idea that we are perfectly evolved for a world that no longer exists, that somehow we have fashioned an environment for ourselves in which we are no longer fit for purpose, does stand up to some scrutiny when it comes to our growing waistlines. Although the slightly smug, ‘I’m only fat because I’m so well adapted for famine’ thrifty gene hypothesis lacks support globally, it does seem to explain one of the stand-out features of the global obesity crisis: the clustering of South Pacific Island nations in the top left-hand corner of the ‘obesity–economy’ graph. The alternative explanation, the drifty gene hypothesis, asserts that a lack of predation caused our upper weight limit to ‘drift’ upwards, unconstrained by strong selection pressures. Prior to the relatively recent environmental shift towards cheap, sugar-rich, fat-enhanced, calorie-dense food available on every street corner, this gradual drifting would have gone more or less unnoticed, although obesity was far from unheard of when people had sufficient access to plenty of food (Henry the Eighth being a prime example).
When we start trying to apply some very broad-brush evolutionary thinking to our diet, though, we soon get into trouble. The paleo diet approach ‘works’ in that a calorie-restricted diet will allow you to lose weight, but the diet assumes that humans have been stuck in an evolutionary rut for the past 12,000 years or so; that we are somehow the same as our Palaeolithic ancestors. As we have already seen in the South Pacific, that is far from the truth. What is more, if the drifty gene hypothesis holds sway across the population, then the even greater release from predation that would have come in the past 12,000 years of settled human societies would hardly have reduced the drift (which is, remember, non-adaptive evolutionary change) in our upper weight limit. Evolution has caused other recent changes related to diet that influence more than just our fat metabolism. Our ability to physically process and digest different foods provides some of the very best examples of recent human evolution. These changes illustrate both how we can evolve relatively rapidly to fit in with environmental changes of our own making, and how very recent environmental changes have simply occurred too quickly for evolution to catch up. In summary, the development of agriculture provides us with a case study of how large-scale environmental changes lead to evolutionary mismatches and are sometimes solved by evolutionary responses, and sometimes not.
The rise of agriculture
Around 12,000 years ago something truly massive happened to the human environment. We finally worked out that growing plants is an awful lot easier than finding them and that keeping animals is a much better arrangement than hunting them down with spears and arrows. We didn’t go from gathering berries and hunting deer to having a fully fledged farming system overnight, though. The transition from our hunter-gatherer phase to one when we were much more reliant on agriculture was relatively slow. It occurred in multiple locations and went hand in hand with other crucial developments like the widespread rise of urban centres and the beginnings of ‘civilisation’ as we know it. Of course, harvesting seasonal bounties from nature, like fruits and nuts, and hunting wild game didn’t disappear from our food-gathering repertoire (and indeed are enjoying something of a resurgence thanks to the recent ‘forager’ movement) but for many, and especially those living in or near major population centres, there was no longer the need to source food directly from nature’s larder full-time. With food production becoming increasingly centralised, we could start on our long but relatively rapid road to modernity. Make no mistake, the development of farming in the so-called Neolithic revolution was a very big deal indeed.
At that time in our history humans were already well dispersed. Africa, Europe, the Middle East and Asia were already populated and humans were well established through Indonesia and Australia. Around 20,000 years ago, the so-called Last Glacial Maximum ended and the ice sheets that covered great swathes of the Northern hemisphere prior to that time began retreating, never again to reach as far south as they had. As the ice sheets melted, humans headed northwards and populations also crossed the land bridge that connected what is now north-eastern Siberia with Alaska. Through this movement the peopling of the Americas began, and it marked the start of the period when Homo sapiens developed a truly global distribution. This widespread distribution inevitably meant that agriculture developed in a series of globally distinct centres, with the developing technology and knowledge spreading out from those centres to more far-flung populations over the course of several thousands of years. Attempts to account for the drivers behind the development of agriculture and its subsequent spread have resulted in a large number of competing, although not necessarily mutually exclusive, hypotheses. Undoubtedly, as we uncover more archaeological evidence and discover more about the climate and environment of the period during which agriculture developed, the details will become clearer.
Agriculture: really bad for our health
Growing our own food and developing the skills and knowledge to keep livestock is one of our greatest achievements, and without agriculture the modern world, and virtually anything we consider as a recent human achievement, would clearly be impossible. Agriculture serves primarily to free people from the daily necessity of finding food. What is more, if food production and distribution are reasonably well managed, and short- to long-term storage of food is possible, then food surpluses can be produced to even out the problems that might be caused by climatic issues such as the droughts and cold snaps discussed in Chapter 2. Taken together, potentially plentiful production and stable storage would seem to provide something of a paradise for a human population used to eking out a living from whatever could be found opportunistically in the environment. It therefore comes as something of a surprise to discover that the agricultural revolution went hand in hand with a dramatic decline in human health.1
Piecing together how we responded to the Neolithic agricultural revolution relies on integrating a wide range of evidence from different sources. Preserved teeth from Neolithic humans provide one such evidentiary source and, being relatively straightforward to interpret, the teeth tell quite a tale. Oral health was markedly poorer in humans subject to an early agricultural diet than in their hunter-gatherer predecessors. Evidence can be found for a reduction in tooth size, crowding of teeth, dental caries and increased occurrence of gum disease. These dental problems likely came about because of an important evolutionary change in human face shape that occurred during this period and that we met first in Chapter 1 when we discussed the characteristics of anatomically modern humans. One defining feature of modern humans is the development of a pronounced chin, which, combined with our relatively short jaws and steep foreheads, makes our proportionally smaller faces almost vertical. Palaeolithic, pre-agricultural humans had heavier, larger skulls than later Neolithic humans, but as we developed towards eating agriculturally derived food our masticatory needs changed. Archaeological evidence of grinding stones and cooking vessels during the Neolithic suggests that we physically processed grains and other foods, just as we do now, to render them more palatable and versatile. Further processing through more advanced Neolithic cooking than was undertaken in the Palaeolithic would have made foods even softer and easier to chew. Because much of our food processing was outsourced from our teeth to our hands, Neolithic skulls no longer had to accommodate the larger masticatory (chewing) muscles required for arduous in-mouth processing. Over time this resulted in evolutionary changes to skull shape, proportions and angles. These changes included adjustments to jaw size that were not followed by changes in our dentition. With the same number of teeth cramming into smaller jaws attached to more gracile skulls, we ended up with dental overcrowding. Teeth that are crammed together create tight spaces in which bacteria can grow and dental caries can form. We entered a world of tooth decay.
There is another, more subtle, influence of processing on tooth wear that is apparent from archaeological material. ‘Microwear’ describes the tiny pits and scratches that form on the surfaces of our teeth and provide nooks and crannies in which the bacteria causing dental caries can reside. Processing food using stones introduces tiny, abrasive grit particles into our food that cause microwear of tooth enamel. When the sort of coarse foods eaten by hunter-gatherers is consumed regularly, this microwear tends to be ‘polished’ out, but eating softer foods that still contain grit prevents this polishing effect and can lead to caries.
Early agriculture didn’t just lead to bad teeth. Hunter-gatherers had much smaller populations than early agricultural communities and a much wider range of dietary options, albeit a range closely constrained by the environment. Despite the vagaries of that environment, hunter-gatherers were actually less likely to suffer from diseases caused by nutritional deficiencies than those who were farming and reliant on a relatively narrow range of foods. Early adopters of agriculture depended on one to three crops, typically chosen from barley, wheat, millet, maize and rice. Cereals like wheat, barley and millet are great for carbohydrates, but they contain relatively few of the vitamins and minerals essential for adequate nutrition. They are, for example, low in iron and calcium, a problem made worse by the fact that they contain phytates, compounds that actually inhibit the absorption of iron, magnesium and, to a lesser extent, calcium from the diet. Maize is deficient in certain important amino acids and also inhibits iron uptake, while a diet that is heavy on rice can lead to vitamin A deficiencies. As well as being low in iron and calcium, in general these staples are low in protein. An obvious source of all the nutrients that these grains lack is meat, but we know that early agricultural communities ate significantly less meat than their predecessors. This has led some to suggest that early agriculture may also have resulted in zinc and vitamin B12 deficiencies. Iron and vitamin B12 deficiencies are primary causes of anaemia, and indeed signs of this disease can be seen in characteristic bone lesions apparent on some Neolithic skeletons.
Skeletons reveal other debilitating effects of the Neolithic revolution. We shrank, becoming shorter and less robust overall, with a stunting of growth apparent at a population level. In part, the lower physical demands of an agricultural lifestyle may have contributed to that reduction in stature, since our skeleton readily adapts to the prevailing conditions in which it must function; bone mass increases as we exert ourselves and put more stresses and strains on the different components of our skeleton. However, the ‘lazy farmer’ hypothesis doesn’t explain many of the observations of Neolithic remains. Clear signs of incomplete enamel formation on teeth, evidence of ‘growth arrest lines’ on bones (visible lines that indicate periods when growth has ceased) and indications of weakened bones (osteopenia and in more serious cases osteoporosis) can all be related to malnutrition to some degree.
Dietary changes clearly caused problems in the early days of Neolithic agriculture, but developing farming also resulted in radical lifestyle shifts that went far beyond being generally less active than our hunter-gatherer predecessors. The development of larger communities and even cities concentrated some human populations, creating areas with both large populations and high population density. Humans are social creatures (more of which in later chapters) and we interact in complex ways. We work, play and live together, greeting each other, hugging, kissing, having sex, all the while speaking to each other with open mouths, spraying saliva as we go. The opportunity for these interactions increases greatly as our population density increases and with interaction comes the transmission of both ideas and diseases.
Smaller and more dispersed populations have far less opportunity for infectious diseases to spread and become epidemics than higher density populations. The towns and cities that developed as a consequence of agriculture put large numbers of people into very close proximity with each other and, coupled with poor sanitation, would have provided ideal conditions for disease outbreaks that would have been more or less unknown previously. Sure enough, analysis of the skeletons of Neolithic humans suggests that they experienced greater physiological stress from infectious diseases, the effects of which would have been exacerbated by poor nutrition.
Early agriculture, depending as it did on a limited range of crops, led to a degree of malnutrition that had potentially serious effects, especially when combined with the social lifestyle changes that were also happening. These early farmers were very clearly ‘unfit for purpose’. Brain power had allowed them to change their environment through a powerful combination of the realisation that crops could be grown and the ability to convert that line of thinking into a practical solution. We might think of the recent changes in our lifestyles as radical, and they undoubtedly are, but the shift to agriculture was arguably far more radical in terms of its overall impact. The major differences are that fewer people were affected (our population was perhaps only 5 million at the time) and that the shift happened across the global population over the course of a few thousand years. Nonetheless, the shift to agriculture provides a very useful example of how we can change our environment, render ourselves unfit for purpose and potentially evolve out of problems we make for ourselves. In many ways then, the Neolithic revolution could be a model for our current situation.
One nutritional problem strongly suggested by Neolithic bone evidence is a lack of calcium. Calcium is a metal that combines with phosphorus and oxygen to produce calcium phosphate. It is calcium phosphate that provides the mineral component of our bones and gives our skeleton strength. Our skeleton is a dynamic system with bone being laid down and resorbed all the time, and diets low in calcium can adversely affect the development and ongoing strength of the skeleton. Calcium gets into our bones via our diet, and patterns in modern-day dietary calcium intake vary greatly across the globe, as a study in the scientific journal Osteoporosis International has shown.2 Northern European countries like the UK, Ireland, Germany and France, and countries that have recently developed through an influx of migrants from these countries, like the USA and Australia, have relatively high dietary calcium. Meanwhile, countries in South, South-east and East Asia, including countries with very high populations such as China, India, Indonesia and Vietnam, have a notably low calcium intake. A daily intake below 400mg of calcium is a known risk factor for developing osteoporosis, and in this study China, Indonesia and Vietnam were comfortably below that level (averages of 338mg, 342mg and 345mg a day respectively), with India only just above it (429mg a day).3
The global pattern of calcium intake provides us with a clue as to how some of our ancestors solved the Neolithic calcium crisis, because in countries where people have a diet rich in calcium most of it comes from dairy. In a study in the US, for example, an estimated 72 per cent of calcium comes either from ‘straight’ dairy (milk, cheese and yoghurt) or from foods to which dairy products have been added (including pizza, lasagne and other ‘cheese on top’ meals, and dairy-based desserts). The remaining calcium comes from vegetables (7 per cent); grains (5 per cent); legumes (4 per cent); fruit (3 per cent); meat, poultry and fish (3 per cent); eggs (2 per cent); and the less helpful category, miscellaneous foods (3 per cent). In the UK, the figure for straight dairy products is slightly lower (50–60 per cent) but we still get the bulk of our calcium from dairy.
The dominance of dairy in our diet is such that for those of us who grew up in the UK, ‘drinking milk’ is pretty much synonymous with ‘healthy bones’. When I started primary school in Devon in the late 1970s we had mid-morning ‘milk time’, when children were more or less force-fed their daily bottle of milk because it was ‘good for bones’. Free milk for schoolchildren had initially come about because of the 1946 Free Milk Act, passed in response to earlier research that had identified the link between low income, malnutrition and under-achievement in schools. Milk for secondary school pupils was stopped for budgetary reasons in 1968 and the same financial pressure removed milk for children over seven in 1971. The Education Secretary at the time was Margaret Thatcher, a fact recognised later by the popular and undeniably catchy anti-Thatcherite chant ‘Thatcher Thatcher Milk Snatcher’. In fact, documents from the time suggest Thatcher may have tried to save free milk, and was overruled by Prime Minister Edward Heath, but regardless of the details the ‘free milk for children’ debate did not end in 1971. Rumbling on through the 1980s, free milk for children lived on via various subsidies and Acts like the 1980 Education Act. Children, including my own, now benefit from free milk when they are under five years old through the Nursery Milk Scheme and subsidised milk thereafter through schemes like Cool Milk. The ingrained mantra, that milk is good for you, means that many parents pay for their children to drink milk at school as well as at home.
It is little surprise that milk has become revered as a healthy drink. For one thing, milk and dairy generally are wonderful sources of calcium. To get the same amount of calcium as you get in a 250ml (9fl oz) glass of milk (322mg) you would need to eat 685g (1½lb) of broccoli, which is at least two whole heads and maybe three if you like to trim off much of the thick stalk. Other non-dairy relatively calcium-rich foods include nuts (about 275g, 10oz to get the same calcium as a glass of milk) and eggs (12, give or take), so no one is denying that you can get calcium through non-dairy sources, but dairy is certainly a super-concentrated and reliable source. It is the consumption of large amounts of dairy that accounts for the high calcium intake in countries at the top of the calcium league. We can also flip that fact around and in so doing account for low calcium intake in countries at the bottom of the league.
Milk and dairy do not feature much at all in the Asian diet and this absence largely explains the low dietary calcium intake observed in those countries. This situation is changing slowly. China in particular is greatly increasing its intake of dairy, especially so in urban areas, but it still lags far behind European countries.4 That said, the shifting balance of the Chinese diet towards higher dairy intake is happening to such an extent that it is becoming a concern to many environmental commentators, because satisfying the demand could have negative impacts on the climate.5 We are still unsure of the wider-scale health risks of a lower calcium diet in these countries because we lack meaningful large-scale studies of bone health in many of them. Where we do have data though, the implications are worryingly clear. A recent study of osteoporosis in China concluded that more than one-third of people over 50 are affected (34.7 per cent), compared with just 6.8 per cent of men and 21.8 per cent of women in the UK and broadly similar figures in France, USA, Germany, Spain, Italy and Sweden.6 While correlation does not imply causation, it is extremely hard to ignore the dairy–calcium–osteoporosis link.
Milk saves the day?
The initial problems of the Neolithic post-agricultural diet were solved in part by growing agricultural ambition. Cultivating a wider variety of crops provided a better range of nutrients, while establishing trade links between different agricultural communities both near and far provided produce (and technology and knowledge) that allowed a more varied and balanced diet to develop. These different improvements took place slowly and incrementally around the world, but around 7,500 years ago a far bigger and more rapid development occurred. Around that time, most likely in central Europe, we started drinking milk or, more accurately, adult humans started drinking the milk of other mammals. Understanding when, how, why and where this dietary shift occurred shows us both how we can evolve to fit a changing environment and why our recently changed environment is not working out for a great many people across the world.
Drinking milk comes naturally to infant mammals, who instinctively root around to find nipples, teats or udders. Indeed, as any parent knows, infant humans will readily attempt to suckle on fingers, ears and noses. Most mammalian infants can be persuaded to suckle milk from a bottle if a suitable teat is provided. I once had the honour of bottle-feeding a beautiful suckling zebra called Mbezi. She greedily took two litres of horse milk (made from powder) in a good deal less than a minute from a soft-drink bottle with a length of rubber gas hose as a substitute zebra udder. The clear lesson is that infant mammals love milk, and so they should. The milk received from the mammalian mother is a highly nutritious food, and its rich mix of proteins, fats and sugar in the form of lactose supports very rapid growth in species as varied as the pygmy shrew and the blue whale. Where things get weird is the drinking of milk as an adult. You simply don’t find this habit in non-human mammals, and the reason for the absence is that most adult mammals cannot properly digest milk. It is, quite literally, baby food.
To digest milk, and specifically lactose (the sugar in milk), requires an enzyme called lactase. Since the only function of lactase is to digest milk, and mammals only drink milk when they are suckling, the activity of that enzyme drops off massively after young are weaned. Evolution tends towards efficiency and if something isn’t needed then it doesn’t tend to be produced. To drink milk as an adult requires the adaptive evolution of what is termed lactase persistence, where lactase activity continues past weaning and into adulthood. Without that evolutionary change, in fact a genetic mutation, drinking milk as an adult will make you ill. This would have been the case in humans during the Neolithic revolution. Although there is evidence that we started to domesticate sheep around 11,000 years ago,7 and goats and cattle only slightly later, we would not have been able to drink their milk, at least not in raw form or in any quantity.
Lactase persistence
It took around 3,500 years for our genetics to catch up with our environment and provide at least some of us with the biochemical tools we needed to digest the wonderful source of calcium (and fat and protein) that we could squeeze out of our developing livestock herds. In other words, although it took a while we were able to evolve our way out of at least some of the problems agriculture had created. This evolutionary solution wasn’t hit upon in all Neolithic populations though, and even now lactase persistence is very far from evenly spread throughout the world. In north-west Europe for example, lactase persistence is high at 89–96 per cent8 but this declines gradually as we move south and east through Europe. Only 17 per cent of Greeks, for example, are predicted to be lactase persistent in a study that combined data from well-studied populations with a theoretical approach that ‘filled in the gaps’ for those populations that had been studied less. The same study also predicted 100 per cent lactase persistence in Ireland.9 Relatively high frequencies of lactase persistence are found in some other populations, including those of sub-Saharan Africa and the Middle East. In Africa, the distribution of lactase persistence is patchy and can be highly variable even between neighbouring populations. For example, a study of nomadic pastoralists (the Beja) living between the Nile and the Red Sea in the Republic of the Sudan, and the neighbouring semi-nomadic cattle-breeding Nilotic peoples in the south of the country, revealed more than 80 per cent of Beja were able to absorb lactose compared to less than 25 per cent of Nilotic peoples.10 In Asia, the pattern is clearer; the overall frequency of lactase persistence is low and more or less uniformly so. In most Chinese populations lactase persistence is less than 5 per cent and a similar frequency is found in East Asian and Native American populations.
It can come as a surprise to many people who have grown up in cultures and nations that revere milk as a healthy food full of calcium, but most adults in the world simply don’t have lactase persistence. Most people in the world cannot drink milk, at least not in the quantities that Europeans tend to. Of course, the pattern we see in lactase persistence in present-day populations pretty much mirrors the pattern we saw in the consumption of dairy around the world. We don’t tend to eat foods that we can’t digest and that make us ill. The pattern can also be used to help explain the story of how and why lactase persistence evolved, and why that relatively recent evolution has left modern populations both fit and in some cases unfit for purpose in the modern environment.
In Europe, lactase persistence is explained by a single mutation that has increased greatly in frequency over the last 7,500 years or so. Genetic studies show that around that time, in a region between the Balkans and Central Europe, the lactase persistence mutation began to increase and spread.11 Since being able to digest lactose (and therefore consume milk) is only an advantage when there is a source of lactose available, it is likely that lactase persistence first began in dairying communities where the ability to consume milk would have provided a strong advantage. Consequently, we can also infer that cultural evolution of dairying as a farming practice co-evolved with the ability to drink milk; genes and culture co-evolved, with each encouraging the subsequent evolution of the other.8 Communities that could drink milk would likely become more dependent on dairy, and their farming culture would then further change to accommodate this dietary preference. The presence of more milk in the environment and the advantages that it provided would lead to a selective advantage to the lactase persistence gene.12 Fast forward a few millennia and we see some adult populations where regular consumption of large quantities of cow, sheep and goat baby-food is the norm.
In Africa and Asia, lactase persistence is both far rarer and more complex. Four known mutations are associated with it, and there are likely to be more. These mutations occur at different frequencies in different populations. The reasons for these population differences could be linked to the type and strength of the selection pressures for lactase persistence in different populations. In Europe it has been proposed that selection for milk drinking might have been linked to the fact that the lactose and vitamin D in milk enhance the absorption of calcium (from both the milk and other dietary sources). This might have been especially beneficial in regions of low light like Northern Europe because of the possibility of vitamin D deficiency. Vitamin D is essential for calcium uptake, and it can be synthesised in the skin from cholesterol in a chemical reaction that requires exposure to UVB radiation from the sun. Consuming dairy, with its high concentrations of both calcium and vitamin D, could have helped to compensate for lower levels of vitamin D from sunlight in gloomy Scandinavia, for example.11 As appealing as this one-stop-shop ‘calcium assimilation hypothesis’ is, it is not supported by a study of the spread of lactase persistence through Europe. This study concluded that a dietary source of vitamin D was simply not necessary to account for lactase persistence in Europe. Instead, the presence of a single mutation in this case probably reflects human history. Milk and products made from it are a nutritious source of calcium, and following the evolution of lactase persistence in the Balkans, there was a wave of population expansion across Europe from that region. This, at least according to some researchers, explains why across Europe lactase persistence is a consequence of the same single mutation. In contrast, in Africa and Asia a number of different selection pressures have been proposed for lactase persistence, including milk drinking as a source of hydration, as a simple source of calories and nutrition, and even as a potential mechanism to reduce the effects of malaria.13 The diversity in lactase persistence in these populations is possibly also the consequence of their varied farming practices that did not rely so much on dairy. This differed from Europe, where dairy farming expanded more widely over the whole population.
Working out the precise details of the story of lactase persistence evolution is very much a scientific work in progress, combining genetics with knowledge of early human populations, their culture and migration. Regardless of the details, the evolution of lactase persistence would clearly have required a fortunate set of circumstances bringing together the latent ability of the environment to produce milk from animals kept primarily for meat, with the appearance of a mutation that allows adults to drink milk. Without that interaction between cultural farming practices and genetics, such evolution is much less likely to occur. In China, for example, where overall lactase persistence is rare, dairy farming and milk consumption were also historically rare. For whatever reason, that gene–culture coupling never got off the ground.
The evolution of lactase persistence shows us very clearly that we can and have evolved in response to the changes in our environment that we have ourselves brought about. However, it also teaches us that these evolutionary solutions occur relatively slowly, across timescales of thousands of years. The evolution of fixes for mismatches is subject to the vagaries and differences of both our own imposed environment and the wider environment. The fragmented nature of human populations and the chance appearance of the necessary mutations mean that these changes have tended to occur unevenly across the globe. Even today, despite globalisation, the relative ease of long-distance travel and a gradual increase in wealth and economic freedom, ‘humans’ are not a single population. We are still fragmented to a large degree into different populations, and the environment and selection pressures differ greatly between, and even within, those populations. Any evolutionary changes that might occur today to get us out of trouble will still be influenced by the same basic forces and processes as in the past. The evolution of lactase persistence gives weight to the idea that we could evolve our way out of self-imposed problems, but it gives us really no hope at all that we will, at least in any useful timeframe.
Globalisation and the dairy problem
In the past, the fact that some strange people in far-off lands consumed milk would be little more than an amusing piece of trivia with which to impress your neighbours; an early version of the ‘crazy foreigner’ trope. The development of communications and transport technology over the past 50 years or so, however, has given a great many of us a far more sophisticated awareness of the rest of the world than was ever possible before. These technological developments have moved in step with the development of advanced medical science, the general push towards epidemiological medicine in developed nations and the widely acknowledged and often (though not always) evidenced link between health, nutrition and diet. That these advances have very often occurred in nations that also have lactase persistence means that the firm identification of milk as a healthy food, the association of milk consumption with a Western lifestyle and the global dissemination of the ‘milk is good’ message were inevitable.
The modern world allows a global reach for the message that ‘milk is good’, but this reach is a clear mismatch with the global pattern of the evolution of lactase persistence. This mismatch occurs at the level of nations and even entire continents but can also occur within nations with a high frequency of lactase persistence. The fact remains that across the world around two-thirds of people cannot consume raw milk, and even within Northern European populations the ability is not ubiquitous.
The recent rise of dairy in China provides a useful case study to show how very recent self-imposed changes in our environment can be mismatched with our evolutionary past. We already know that China consumes far less dairy than European nations or the USA, but from a very low level China’s dairy consumption has been steadily increasing.14 The UN Food and Agriculture Organisation estimated that China’s consumption of milk increased from 26 kilocalories per person per day in 2002 to 43 kilocalories in 2005, which although still very low in comparison to Western nations does represent a rise of around two-thirds. The reason for this rise, which has only increased further over the past 10–15 years, is difficult to pin down precisely, although political pressure has certainly played a part. When, in 2007, Chinese premier Wen Jiabao said, ‘I have a dream to provide every Chinese person, especially children, sufficient milk each day’, we can be sure that he wasn’t talking to himself. Xuē Xīnrán, a British-Chinese journalist and author who writes under the pen-name Xinran, explored the rise of milk-drinking in her columns in the UK newspaper the Guardian. Published as a book in 2006, What the Chinese Don’t Eat explains the ‘dairification’ of China as an aspirant phenomenon. ‘Until China opened up, Chinese people had no idea about international standards. This is why people in the 1980s believed McDonald’s was the best Western food,’ she says. ‘They believe that Westerners had a better life based on meat and milk.’ That foreign lifestyles can influence domestic choices is undeniable, but whether this is a factor in the rise of China as a fledgling milk-drinking nation is debatable. Professor James Watson of Harvard University is an anthropologist who specialises in food and eating in China. He dismisses the idea that it is admiration for the West that has driven the rise in milk consumption, instead proposing that simple availability has been the key. In the past, because so few people could drink milk, no one produced milk. A nation that lacks lactase persistence is a nation that lacks dairy. The modern world has put dairy within most people’s reach and this drives demand, at least according to Watson. He sums up this idea rather neatly as ‘it [the rise in milk consumption] doesn’t indicate they are becoming more Western, it just means they like ice cream.’ But with all we now know about milk and lactase, they really shouldn’t like ice cream, and neither should most people across the world. As wonderful as milk is for many, two-thirds of people just don’t have the genetic tools to digest it. We tend to term such people ‘lactose intolerant’, although given they are in the majority it might be better to call them normal.
Intolerance
Without the right genetic tools in place to handle lactose as an adult, consuming milk and dairy will make you ill. The symptoms of lactose intolerance are not pleasant and include diarrhoea, stomach cramps, abdominal pain, bloating, nausea and flatulence. The triggering of these symptoms tends to be dose-dependent, with some people reacting poorly to very small quantities of dairy, like some milk in a cup of coffee, while others are able to tolerate much more before their body’s reaction to lactose becomes a major problem. Not all dairy is equal, and processed dairy products like butter, yoghurt and cheese contain lower levels of lactose than raw milk, sometimes far lower. The process of making butter typically removes a great deal of lactose, and the fermentation processes involved in creating yoghurt and cheese also serve to reduce lactose, in some cases to less than 10 per cent of that found in the equivalent volume of ‘raw’ milk. This reduction in lactose, coupled with the dose-dependent nature of the lactose reaction, means that processed dairy can be consumed more easily by many people, and it is a rise in the consumption of these products and especially cheese that is a major factor in the rise of dairy globally.
Cheese contains calcium, which is a very good thing, but it is also a high-fat, calorie-dense food that is all too easy to add to a great many dishes. As sure as night follows day, the rise in dairy consumption has been linked by some to a rise in obesity in China, although other factors including the more widespread availability of Western-style fast food have also been implicated. If dairy is to blame for a rise in obesity then, with a knowing nod to the previous chapter, the story provides a nice example of how the complexity of the modern environment playing against our evolutionary history can cause us problems. Technology creates a world where the message that ‘milk is good’ spreads to populations that have not evolved to digest it; as a work-around, processed milk is consumed in quantities great enough to cause obesity, which is a far greater risk to health than low calcium levels.
The concept of processing milk to make it more palatable to those without lactose persistence is far from new. Archaeology combines with chemistry in unravelling the history of cheese-making, with analysis of carbon and nitrogen isotope ratios (see Chapter 2) from fatty residues on pottery fragments allowing scientists to determine whether those residues are from meat, fish, milk or fermented milk products. This technique was applied to pottery fragments from the Dalmatian coast of Croatia in 2018 and pushed back the earliest known cheese-making in the Mediterranean region to 7,200 years ago. It is likely that we will continue to push this date back even earlier with more discoveries and better analytical techniques.15 The jump to making cheese from milk was presumably accidental at first (and hats off to those early pioneers, bravely tucking into the first cheeses) but would have given early dairy farmers two key advantages. First, cheese has a far better shelf life than raw milk, and second, cheese could be consumed by adults. The ability of adults to eat cheese would have driven an increase in dairying and, as we have already seen, this cultural change would have gone hand in hand with the evolution of lactase persistence.
Lactase persistence, and the flip-side of the coin, lactose intolerance, show how evolution has both prepared some humans and failed to prepare others for the modern world of near-ubiquitous dairy availability. It remains to be seen how the rise of dairy in lactose-intolerant nations will influence overall health, but it seems hard to argue that a rise in dietary calcium won’t reduce osteoporosis, or that increasing cheese consumption won’t contribute to obesity. Whether either will have any long-term effect on human evolution, though, is questionable. Osteoporosis predominantly affects older people who have already passed on their genes before the problems hit. Evolution is largely blind to events that occur after reproduction ends, although having healthy and active grandparents could provide an evolutionary advantage, especially in the modern world. People with active parents who can help with childcare might be tempted to have more children than they would otherwise. If those grandparents are healthy and active because they have strong bones as a consequence of being lactase persistent then at least some of their children and grandchildren will likely also be genetically lactase persistent. More children and more grandchildren mean a greater increase in the frequency of lactase persistence.
The modern world is seemingly providing an environment favouring healthy and active grandparents. We are seeing increases both in ‘grandparental care’, where grandparents assume primary custodial care of their grandchildren, and in grandparents acting as part-time childminders.16 Full custodial care of grandchildren is driven by a range of complex societal factors but some of these, including addiction (Chapter 8) and economic uncertainties, can be clearly linked to the recent overall environment we have created. The ‘grandparents as childminders’ scenario is primarily driven by more straightforward changes in our recent economic and social environment. Higher house prices and other cost-of-living pressures over the last few decades often mean that two-parent households require both parents to work. Furthermore, social changes have taken us away from the ‘traditional’ household of the first half of the twentieth century and earlier, when mothers would be expected either not to have a career or to give up that career for child care. These changes in our environment exert all kinds of everyday financial and social pressures on us, but perhaps they are also exerting a selection pressure for grandparents with strong bones able to deal with the rigours of late-life child care.
As well as the increase in dairy farming, the agricultural revolution saw a rise in the availability and consumption of another component of our diet that is linked in the modern world to ‘intolerance’. The ability to process the grain (in fact the seeds) of wild grasses into early flours by grinding and the realisation that grains can be sown, collected and re-sown were really all it took for cereal cultivation to develop. Once it had developed, perhaps 10,000 years ago in the Fertile Crescent of the Middle East (encompassing the modern-day countries of Iraq, Israel, Palestine, Syria, Lebanon, Egypt and parts of Turkey and Iran), wheat cultivation quickly began to spread. With that spread came developments in farming technology that we still recognise today. Early farmers would have selected for better varieties both accidentally and deliberately, producing domesticated cereals with traits like increased grain production. Crop rotation, whereby different crops are grown in sequence to limit the depletion of soil nutrients, was developed, and leaving land fallow to allow recovery is even mentioned as a rule in the Book of Leviticus.
Cereal grains, especially whole grains (as opposed to highly processed and refined products like white flour), can be a reasonable source of protein and fibre, B vitamins, antioxidants and minerals like zinc and magnesium, although as we have already seen, early adopters of agriculture suffered from symptoms consistent with a lack of some of these nutrients. As well as providing many important dietary components in one hit, a diet rich in wholegrain cereals has been firmly linked to a number of important positive benefits that include a reduction in type 2 diabetes17 and protection against various cancers.18 However, increasingly in the modern world we seem to be revealing a major problem with cereals: a protein called gluten. Gluten takes its name from the Latin word for glue and it features heavily in the endosperm, the main component of seeds like wheat, barley, rye and oats. The glutens (for there are many related proteins in the group) have a unique set of properties that give bread dough its combination of adhesion (holding itself together) and elasticity (allowing the dough to rise).
Although the majority of us can consume gluten with no issues whatsoever, in some people gluten can trigger a whole range of symptoms and conditions that are grouped under the umbrella term gluten-related disorders (GRD). These include coeliac disease, non-coeliac gluten sensitivity, gluten ataxia, dermatitis herpetiformis and wheat allergy. Despite their vagueness and the fact that their use is discouraged medically,19 the terms ‘gluten intolerance’ and ‘gluten sensitivity’ are still commonly used to refer to the set of symptoms brought on by consuming gluten. These symptoms can include diarrhoea, abdominal pain, bloating and nausea; indeed, many of the symptoms we have met already with lactose intolerance. Lactase persistence can mostly be linked to a single gene and a reasonably straightforward evolutionary scenario. GRD evolution is similar in some respects but more complex in others, although its rise can once again be linked to recent environmental changes of our own making.
Gluten intolerance and our immune system
Coeliac disease (CD) is an autoimmune disorder, with the immune system ‘overreacting’ when gluten is ingested. This abnormal response primarily causes inflammation in the small intestine, leading to that distinct and unpleasant suite of symptoms. Over time, this inflammation eventually causes damage to the lining of the small intestine and poor absorption of nutrients. In childhood, poor absorption caused by inflammation of the bowel lining can result in problems with growth and development, which are further serious consequences of CD. In adults, poor absorption can lead to iron-deficiency anaemia and symptoms including bone or joint pain, fatigue, seizures and (of particular interest from an evolutionary perspective) infertility. CD, then, is both a serious and a permanent condition that affects around 1 per cent of people globally, although its broad range of symptoms is shared with many other diseases and this means that CD is easily missed. There is also strong evidence that CD has been increasing worldwide and that the increase is not just because we’re getting ever better at detecting and diagnosing it (which we undoubtedly are).20
CD is a genetic disease and more than 95 per cent of people with it have one of two types of a specific protein involved with our immune response. HLA-DQ proteins are present on the outer surface of some cells and form part of the complex system of signalling that occurs between our cells and our immune system. These HLA-DQ proteins bind to other proteins (antigens) derived from ‘invaders’, such as a disease-causing bacterium, and present them to immune system T cells. It is a way that cells can flag that they are in trouble and attract attention from the immune system. Such signalling mechanisms also allow the immune system to learn to distinguish between ‘self’ and ‘non-self’. HLA-DQ proteins then play a big part in assisting our immune system in attacking foreign invaders while tolerating our own cells. Sometimes, though, it can all go horribly wrong.
There are seven different variants of the HLA-DQ protein, numbered DQ2 and then DQ4 through to DQ9. The proteins are coded for by different variants (called alleles) of the HLA-DQ gene, and 95 per cent of people with CD have either the DQ2 or the DQ8 form. If you have neither DQ2 nor DQ8 then you are very unlikely to have the disease.21 In the gut, gluten is broken down into lengths of amino acids called peptides and the DQ2 and DQ8 forms of the protein bind much more tightly than other DQ variants to these peptides. This tight binding means that those people with DQ2 and DQ8 forms of the protein are far more likely to activate T cells, and so trigger an autoimmune response, if gluten is present. It is important to note that while most people who have CD also have the DQ2 or DQ8 variants of the gene, not everyone who has those variants develops CD. This shows that the onset and persistence of CD symptoms are environmentally triggered by some factor related to exposure to gluten proteins.
The only current course of treatment for CD is a gluten-free diet. This lessens the symptoms and in time can promote healing of the intestine, although there is evidence that damage to the small intestine may well remain even after adoption of a gluten-free diet. Given that CD levels hover below or around 1 per cent of the population, and that not all CD sufferers have the same level of symptoms, it perhaps seems strange that the terms ‘gluten-free’ and GF have gained such prominence recently. Supermarkets have entire GF sections, restaurants promote GF menus and the popular press is stuffed full of ‘going gluten free’ articles. The reason behind that rise is not primarily an increase in CD, although it has become more readily diagnosed and has increased in frequency in recent years. In fact, a different, and difficult, condition called non-coeliac gluten sensitivity (NCGS) is the reason why bread and pasta have become the latest dietary villains.
When we speak casually of ‘gluten intolerance’ it is almost certain that we mean NCGS. The most common of the gluten-related disorders, NCGS has an estimated prevalence as high as 13 per cent and symptoms are similar to CD.22 The big difference between CD and NCGS is that there are no diagnostic biomarkers for NCGS; in other words, we can’t run a clinical test to identify it. Diagnosis of NCGS rests on testing whether patients have CD, testing whether they have the much rarer condition of being allergic to wheat and then asking if the symptoms clear up when the patient stops eating gluten. If the answers are no, no and yes then bingo, they have NCGS. It was first discussed in the 1970s, but it is over the past decade that NCGS has really come to the fore.
The simple fact is that we still don’t have a clear understanding of NCGS, and it has taken some time to be widely accepted by medical practitioners. Undoubtedly the absence of any biomarker that could provide a definitive clinical test (as we have for CD) has been a big factor in the relative reluctance to diagnose NCGS until recently. Indeed, it has been dismissed as a ‘fad’ by some and it is still questioned as a clinical entity by others, although this is changing as we learn more about it and about related diseases. The symptoms of NCGS are also similar to those of another problematic and still poorly understood condition, irritable bowel syndrome (IBS). Patients can find themselves caught in what some have called a ‘no man’s land’, with symptoms that are explained neither by CD nor by IBS. Exactly what causes NCGS is up for debate and it may well be that different forms of NCGS exist, triggered by different factors. Proteins other than gluten in cereals might be triggers in some sufferers, and carbohydrates, collectively termed FODMAPs (fermentable oligosaccharides, disaccharides, monosaccharides and polyols), have also been implicated.
It is interesting to note that the rise of the internet has created an environment where people are much more able to self-diagnose, and in no condition is this more apparent than NCGS. Self-diagnosing NCGS based on personal experience and then treating it by adopting a gluten-free diet is made even easier by the huge rise of popular articles on the topic. One study identified a ratio of 4,598:1 of Google citations versus PubMed citations (a major medical science literature database) on NCGS, and the concomitant rise in the availability of GF products facilitates and arguably encourages self-diagnosis.23
With lactase persistence, the pattern of occurrence around the world gave us insight into its evolution and some of the issues that it, or the lack of it, may cause in the modern world. GRDs are far more global in their distribution and, with an overall frequency of 0.9 per cent, a large number of studies from around the world support the notion that CD is one of the most common lifelong disorders affecting humans. Despite such widespread occurrence, patterns in the prevalence of CD, the frequency of HLA-DQ2 and the consumption of wheat do vary geographically. This variation can give us evolutionary insight but it also throws up an evolutionary paradox.
The paradox derives from two correlations. First, there is a correlation between the consumption of wheat and the frequency of HLA-DQ2, such that in regions where we see a high consumption of wheat we see a higher frequency of the form of the gene that causes CD. Second, there is also a correlation between the frequency of CD-causing genes and the duration of wheat consumption that shows the history of CD is related to the spread of wheat cultivation following the agricultural revolution in the Fertile Crescent. The Palaeolithic pre-agricultural diet would not have contained such a high proportion of gluten-containing cereals, and diseases like CD would have had little chance to express themselves. What we find after the development of agriculture though, is the increase of a condition that would have been a clear disadvantage to those who had it. This seems especially the case when we remember that in the early days of the agricultural revolution we weren’t exactly thriving on our new diet and any additional pressure, like being nearly crippled with bloating and abdominal pain from eating cereals, would hardly have made sufferers evolutionarily fitter.
There was no gluten-free aisle in the Neolithic supermarket and, with an increasing reliance on grains since that time, it is therefore a paradox that we find CD in modern populations at all. It might be expected that CD would have been selected out early in our agricultural history and, if still present, that the longer a population has had agriculture (and therefore presumed selection against CD), the lower the prevalence of CD should be. In fact, and paradoxically, that is not the case. CD is just as common in the UK, where cereal cultivation started around 4,000 years ago, as it is in Turkey, which led the way in cereal cultivation a full 6,000 years earlier. The frequency of HLA-DQ2 is actually higher in Turkey and Iran (where there is high wheat consumption that has continued for a long time) than it is in Finland and Ireland, both of which are late adopters and relatively low consumers.24 The fact that despite clear negative effects on health CD has not been selected out is known as the ‘evolutionary paradox of CD’.25 In a sense it is two paradoxes rolled into one, since not only do we need to account for the puzzling persistence of CD, but we must also account for the fact that CD is increasing and doing so in areas with a high level of wheat consumption.
Resolving the paradox
One mechanism that could account for the continued existence of CD and the fact that seemingly ‘bad’ genes can end up increasing in frequency is called antagonistic pleiotropy. This describes the situation where one gene influences more than one trait, and where at least one of those traits is beneficial and at least one is detrimental. In the case of CD, HLA-DQ2 and HLA-DQ8 are part of a distinct gene family that is located in a chromosomal region packed full of genes associated with our immune system. Chromosomes are those ‘wobbly X’ structures that form from the molecules of DNA that make up our genome. We have 46 of them in 23 pairs (one from each pair coming from each parent) and most of the time they are unravelled and invisible, only ravelling to form those distinctive tightly packed structures at a particular time during the period when cells divide to form new cells. It has been proposed that the close physical association on chromosomes of HLA genes with another gene family essential for immune-system function known as KIR (killer cell immunoglobulin-like receptors) might have led to these two gene families evolving together as an integrated system of genes.26 Strong selection for a good immune system associated these genes together and in the absence of gluten the HLA-DQ2 and HLA-DQ8 variants work just fine. It is only when we went and invented cereal-based agriculture and loaded our gut full of gluten that these variants became a problem. By that point though, the advantages they gave to immune systems of the past had locked them into an ‘if they’re coming, then I’m coming along too’ relationship with KIR genes. So, HLA-DQ genes overall give benefit by aiding in immunity against some pathogens, but some variants (HLA-DQ2 and HLA-DQ8) have a cost that was only expressed once we changed to a dietary environment dominated by gluten.
Another potential clue for the persistence of HLA-DQ2 (which as we’ve learnt accounts for 95 per cent of CD cases) is that the gene has been shown to provide some protection against dental caries, which you’ll recall were a major health implication of early agriculture (Chapter 2). Dental caries result from a three-way interaction between the owner of the decaying teeth, their diet and the bacterial cultures they support on their teeth. The increase in carbohydrates in the post-agricultural diet, together with increased tooth crowding, led to a higher frequency of tooth decay, which can eventually prevent individuals from eating properly. The ‘DQ2-dental caries protection’ hypothesis proposes that people with the HLA-DQ2 mutation would have had some protection against dental caries and might therefore have had higher survival and higher reproduction than those who lacked this mutation. We still need to find out much more about this association and we do not yet know the mechanism (it may be linked to the clearance of ‘sticky’ gluten peptides from the mouth), but if it turns out to be correct then the mutation that causes CD may initially have been subject to positive selection because it protected against one of the harmful effects (tooth decay) of the exact same gluten-rich diet that triggers CD.27
Antagonistic pleiotropy and positive selection for protection against dental caries can account for the persistence of CD, but they do not explain the second component of the paradox: the fact that CD has been increasing in recent times even after we allow for improved diagnosis. Evidence from a study in Sweden seems to point us towards what is now a familiar category of explanation for this phenomenon: very recent changes leading to a mismatch between our modern environment and that in which we evolved.
Sweden experienced a rise in CD in children under two years old between 1984 and 1996 known as the ‘Swedish epidemic’. Rates increased three-fold, eventually reaching a level higher than had been seen in any other country at that time. There was then a sudden drop, with CD in that age group returning to the baseline level of the early 1980s. Analysis of the epidemic revealed that the rise was associated with two factors. First, there was an increase at that time in the amount of gluten present in early weaning foods. Second, there was a shift in the pattern of exclusive breastfeeding and the timing of the introduction of gluten to the diet. The study showed that children who were still being breastfed at the time when gluten was introduced had a lower risk of developing CD.28 The problem is that recent studies using larger populations and randomised trials showed no increased risk of CD in relation to the timing of gluten introduction and no protective benefit of breastfeeding. With every step forward in our understanding we seem to take another step back and it is instructive that such a prevalent disease, which appears on the face of it to be rather simple, can be so complex. While we are gaining good ground in terms of understanding the disease itself, getting to grips with the environmental landscape in which it is triggered is still proving to be difficult.
Another factor that has been linked to CD is how you are born, specifically whether you were delivered by caesarean section or vaginally. Again, the situation is complex with some studies revealing a link between delivery modality and CD, and other studies finding no link. It may seem curious that the mode of delivery could be thought to influence the triggering of a disease, but the connection here is with the bacteria that dwell within you.
The colonisation of our gut with bacteria that have, of late, become known as ‘friendly bacteria’ happens as we grow and develop. Genetic factors and our environment, notably our diet, all have profound effects on our inner ecosystem, but both mode of delivery (caesarean section versus vaginal) and early feeding (breastfeeding versus formula) also have a role in determining which bacteria colonise our gut. As we are learning, our inner ecosystem has profound effects on our immune system. Given the clear links between that system and CD, it is obvious that we should be looking for links between the gut microbiome (as our bacteria are collectively known) and CD. You will not be surprised to learn that the situation is again complex, but the picture that is beginning to emerge is that CD is linked to a reduction in beneficial bacteria species and an increase in potentially harmful species.29 The change that our recent environmental shifts have wrought on our inner ecosystem is something we shall return to in the next chapter.
The evolution of agriculture was a massive environmental change for Neolithic humans, and it led initially to a considerable mismatch between our evolutionary history and our dietary environment. In time, accommodating a radically different diet led to evolutionary changes whose influence is very much apparent today, when we consider our ability, or otherwise, to digest the two principal foods that agriculture brought upon us: dairy and cereals. The ability of some of us to continue to digest lactose in adulthood has combined with technological, historical and social factors to produce a ‘dairy-rich environment’ across the world that clashes with the evolutionary history of many and brings potential problems (lactose-induced illness and obesity) as well as benefits. Agriculture also made possible a gluten-rich environment that ironically may have initially selected for genes that now combine with aspects of our modern environment to cause serious problems for significant numbers of people. In both cases, our recent trend towards globalisation and the ‘homogenisation’ of human culture towards a Western-style diet combine with the high relative availability and desirability of certain foods to create what is quite literally a toxic environment for some. As well as showing how environmental shifts lead to mismatches with our evolutionary past, what these stories of dietary intolerance really highlight is just how diverse different groups of people across the planet still are, even in the modern world.