9

THE SIX CAUSES OF THE EPIDEMIC

NCDs run rampant across the globe, and we only now seem to be getting a clue about fundamental aspects of human biology—was there some awful conspiracy? I don’t think so. There were no colossal errors or horrific judgments that got us here. Hindsight is always easy. Instead, there were well-intentioned people. In almost every case, people were drawing upon new inventions, new ideas about what could be helpful for humans, new opportunities for jobs and affordable housing, and they were helping societies make progress as well as they knew how. The rewards for decisions that were made primarily during the twentieth century were immediately obvious. The risks were never realized until the twenty-first century.

The combined changes in our diet, living conditions, medical treatments, and approaches to human safety had downsides that are only now becoming apparent. Professionals were either not aware of or greatly underestimated the risks associated with various practices and lifestyles. Health guidance that was provided by professionals failed to recognize that such practices could damage the microbiome and, in turn, make our immune systems a living nightmare of self-destruction, cancer, and disease. These professionals simply did not know. It is sad but understandable given the old biology.

Now, sitting in the middle of the NCD epidemic, we can easily look back and understand how seemingly useful and harmless practices ushered in disease and disability. Many contributing factors led us down the path toward more and more people growing up with a deficient or damaged microbiome and later-life NCDs. There are six prime factors that led the way to this pivotal point in health and medicine:

  1. Antibiotic overreach
  2. The food revolution and diet
  3. Urbanization
  4. Birth delivery mode
  5. Misdirected efforts at human safety
  6. Mammalian-only human medicine

Unfortunately, to date, these practices have yet to be overturned or adequately modified. In some cases the risks are better known now. But only baby steps have been taken to address them. Making changes at both personal and institutional levels will be a critical part of any new medicine.

These are not the only factors involved. However, they are the factors we will need to address in the medical revolution if we are to right the ship of human superorganism wholeness and health.

Here is what we need to do.

1. Antibiotic Overreach

Antibiotic overreach includes both the inappropriate use of antibiotics (as a routine supplement in animal feeds and for probable viral infection in humans) and an underappreciation of the costs to health involved in antibiotic use. While antibiotics saved lives during the twentieth century and continue to today, too much of a good thing is often a problem, and that is certainly the case with antibiotics. But this problem is not like a child getting a tummy ache from eating too much ice cream. No, this is a much bigger, longer-lasting problem. Inappropriate antibiotic use is more like taking an action that results in losing one of your organs or an arm or a leg. The cost to your health is high, so it had better be worth it.

In a recent study, the antibiotic amoxicillin was found to be the most frequently prescribed drug for US infants and children. How did that even happen? It happened because if your child has a bacterial infection, antibiotics can usually clear it. They work. Physicians assumed that if the infection turned out to be caused by a virus, the worst that could happen was that the antibiotic would not work and might contribute a bit to antibiotic resistance among bacteria. But the thinking was that inappropriately prescribed antibiotics had no real downside for the patient. No harm, no foul was the idea. But we now know there is harm and potentially a significant amount of it. Under the old biology, bacteria were generally seen as evil, and widespread killing of our bacteria by antibiotics was no problem. One round of antibiotics can damage your microbiome and cause your entire metabolism to change, along with the interconnected functions of your tissues and organs. How could people know that killing off more of your bacteria than just the one causing the illness was actually destroying an important part of you?

Originally, antibiotics were used when illness was a matter of life or death, as with cholera, typhoid fever, staph infections from wounds, or tuberculosis. Now they are prescribed far more routinely at ordinary doctor's visits, for example for infant ear infections, known as acute otitis media (AOM). This is despite the fact that a high percentage of those infections are caused by viruses, which are not susceptible to antibiotics. The majority of these prescriptions are written by general practitioners. Such overuse has led to pressure to discontinue the practice when possible, in part because AOM often resolves spontaneously without complications.

Overuse of antibiotics also increased antibiotic resistance in pathogens, giving rise to methicillin-resistant Staphylococcus aureus (MRSA) infections and other so-called superbugs. The threat is real and growing. The US Centers for Disease Control and Prevention recently estimated that at least two million people in the United States become infected with superbugs annually, resulting in tens of thousands of deaths each year. As a result, there is now a race to develop new antibiotics that will work. Yet all of this is happening without an effective medical plan to replace the nontargeted, beneficial bacteria that are also destroyed during antibiotic treatment. We did not want to lose these beneficial bacteria nor can we really afford to lose them. Antibiotic treatment of children can have several major effects: loss of key bacterial species that are required for the entire microbiome to mature correctly, loss of diversity within the microbiome, loss of key metabolic functions needed by mammalian cells and tissues, and increased later-life susceptibility to serious infections. Losing useful bacteria opens up space for harmful pathogens to move in and claim our body’s territory as their own.

The health problems linked to loss of major parts of the microbiome in children include obesity, diabetes, cardiovascular disease, neurological problems, allergic and autoimmune diseases, depression, and cancer. In the past, we didn't know about the health ramifications of damaging our microbiome. But we do now.

The problems stemming from antibiotic overuse were not restricted just to humans and prescription drugs. My first responsibility as a newly minted Cornell professor was to develop naturally healthier chickens. I got firsthand experience and insight into the world of globalized food production and sustainable agriculture. Poultry represents the number one meat protein source consumed not just in the United States but globally as well. To paraphrase a famous Cornell professor and public educator, Carl Sagan, there are billions and billions of them, but in the case of my early career, they were chickens rather than stars.

One thing I quickly learned is that, in both the research strains and agricultural production chickens, we control virtually every aspect of their environment. We determine what they eat, drink, and breathe (i.e., air quality), as well as their housing type, space, vaccinations, and lighting. This exquisite control and the desire to gain optimal production (whether in terms of eggs or meat) led to the discovery in poultry of many of the essential vitamins and minerals our bodies need. In fact, Leo Norris, a foundational nutritionist from Cornell who worked on poultry, first described riboflavin and magnesium deficiencies from observing their effects in chickens. My observations and thoughts about Cornell poultry were generally far more pedestrian. I once worked with a veterinarian who had previously treated the Triple Crown–winning racehorse Secretariat, and wondered why our specially bred birds did not have the same monetary value as a pedigree thoroughbred racehorse or at least a prize bull.

At the end of World War II, there was a significant need for cheap animal protein sources. It was the same decade when poultry feeds were being supplemented with both folic acid and vitamin B12. In this mid-twentieth-century climate of animal feed supplementation, the decision to include antibiotics in poultry and livestock feeds was viewed as simply adding one more ingredient that would enhance growth and/or productivity. Folic acid, B12, and antibiotics were lumped into the same general category of additives for modifying the diets of production animals. Now, almost seventy years later, we can see what damage having only part of the scientific story (the old biology) can do.

This became the widespread scourge of routine antibiotic use in animal feed; the drugs were everywhere and in virtually every feed. The antibiotics took a variety of chemical forms, including some containing toxic substances such as arsenic. Even when the levels of antibiotics used in feeds were subtherapeutic (lower doses than would be used for treating actual bacterial diseases), they were still being released both into the animals and into our environment, causing pernicious effects. By the 1950s, studies were reporting bacterial resistance to the antibiotics used in poultry feed. The genes that conveyed this resistance did not stay in poultry flocks; they could be transmitted to microbes capable of infecting humans.

Alarm bells began to sound as early as the late 1960s in the UK. But it was not until the 1980s and later that US science and health organizations began taking note of the potential risk. Sweden was the first country to ban antimicrobials for the purpose of promoting animal growth (1986), and Denmark banned a series of antibiotics for that purpose during the 1990s. After 2000 the World Health Organization initiated global opposition to using antibiotics in animal feed. But there was little regulatory activity in the United States. Some producers began to reduce or eliminate the use of antibiotics voluntarily in the face of the accumulating scientific evidence pointing to a larger environmental health concern. But it wasn't until September 2014 that one of the largest US producers, Perdue Farms, announced it would no longer use antibiotics for growth promotion via egg injections. However, that still leaves the issue of antibiotics in feed, and some reports suggest that antibiotic use in feed continues among major producers.

More than twenty years ago, near the end of my poultry career, I became concerned about various practices in animal agriculture, including the routine use of antibiotics in animal feed. I coauthored a paper advocating for a dietary and natural immune approach to animal agricultural management that would reduce bacterial pathogen load during production, and followed that paper up with arguments against antibiotic use in animal feed in a June 25, 1998, article published in the Christian Science Monitor and other media sources. At the time, I was among an increasing number of scientists speaking out against the routine use of antibiotics as an agricultural food supplement. The three major points I made nearly twenty years ago were:

  1. Antibiotic resistance is real, and the biology of the process alone told us we should not put antibiotics from billions and billions of chickens into our environment and food chain when the growth and health of chickens could be achieved in other less damaging ways.
  2. Massive antibiotic administration via animal feed was an unnatural defense process for the chickens themselves. The animals were not being bred and managed based on an integrated natural health plan; they were being loaded up with drugs for as long as regulations allowed. Because the chickens were constantly being fed antibiotics, their immune systems never got exposed to infectious bacteria and never developed a protective immune response against them. Part of the normal immune response against pathogens caused muscle loss. Producers hate muscle loss because it means less meat per bird and lower profits. Antibiotics were given in feed in part to produce chickens with larger breasts. But this strategy meant that far less attention went to actually having healthy chickens across all breeds (laying birds and meat birds) and stages of life. If you could just add more and more chemicals and drugs to the birds, then innate resistance to infections and other diseases became a lower priority in both breeding and environmental management. A "we'll just drug them up" approach took over, giving rise to an ever-increasing reliance on more and more chemicals and drugs that would enter our food chain.
  3. The massive antibiotic approach was incomplete. Regulations require that the drugs be removed from the animals days to weeks before they are processed for your dinner table. This was necessary to reduce the levels of antibiotics remaining in the meat and/or eggs to what were deemed safe levels for the consumer. Of course, "safe" was determined before we understood what constant low-level antibiotics coming through the food chain might do to our microbiome. Because the poultry producers had been all-in for antibiotics, once the drugs were withdrawn, the animals (and the farmers' livelihoods) were vulnerable to infections and diseases. Since the animals' inherent immunity had not been needed for the vast majority of their lives, the birds arriving at processing plants were more likely to be chock-full of newly emerged infectious agents that took over once the antibiotics were removed. A race was on to see whether the animals could make it through the processing plant before the spread of infection required a high percentage of birds to be condemned. But the newly growing bacteria would still be there, since the antibiotics-for-all approach placed no value on the birds' natural resistance to disease. An increasing number of the bacteria present at processing would be antibiotic resistant as well. Antibiotic residues affecting your own microbiome, and aggressive bacteria interested in your gut, were probably not at the top of your list when you last shopped at the grocery store.

In the late 1990s I noticed a trend among the more visionary and influential poultry farmers in New York State: they voluntarily chose to wean their farms off antibiotics in feed. Farm by farm, a gradual shift has been under way toward a different type of environmental management of poultry microbiomes. Antibiotics in animal feed have been on their way out, even if consumers had to make it so with their purchasing choices. Regulatory agencies largely stood on the sidelines watching.

2. The Food Revolution and Diet

In general, the world has never had so much food and yet has never starved our microbial partners so much. It is an unusual story of snatching failure from success: through technology we have created a wonderland of food, yet we have gotten it very wrong when it comes to what we really need to eat as a superorganism.

I grew up between the early 1950s and the ’70s, during a perfect storm for food in the United States. It was a time of amazing technological advancement and the shedding of many old country traditions that were rooted in local crop consumption and the need to store food for survival through harsh winters. It was a time when my family went from pre-TV to having a black-and-white TV, and during a shift from few frozen foods to many frozen foods as fully prepared frozen dinners supported two-career parents. (We won’t talk about the composition and taste of some of those early, complete frozen dinners.)

The interstate highway system project under President Dwight Eisenhower had led to new opportunities for moving people and, most important, food. Early attempts to transport food without it spoiling relied on dry ice. But a breakthrough in 1939 changed how we access and connect with our food. In that year Cincinnati-born Frederick Jones filed a major patent. Jones was a largely self-taught inventor and the holder of sixty-one patents. Most of his inventions involved sound equipment for motion pictures or innovations in refrigeration. It was Jones, along with his partner Joseph Numero, who developed a mechanical system called the Thermo King for refrigerating tractor-trailer trucks. Most of the recognition for Jones’s remarkable contributions was received posthumously. In 1991, he became the first African American to be awarded the US National Medal of Technology. The Thermo King refrigeration units were attached to the underside of the truck trailer. With the improving highway system, semis could begin to roam the country, delivering meats, fruits, dairy products, and vegetables with shorter delivery times. For the first time, food grown elsewhere could be stocked on shelves everywhere.

During the first half of the twentieth century, food traveled by train in iced railcars. Many boxcars were really iceboxes on rails. This was not particularly efficient and required teams of icers at icing stations along each rail route. The icers were mobilized much like today's pit crews at auto races. They would refill bunkers on each car with ice, usually working through hatches on the roof. The system was labor intensive and still had its limitations. By the mid-twentieth century, mechanically refrigerated train cars, essentially Thermo King units for trains, were an important technological development that gradually replaced ice-packed transportation. This paralleled what was happening in the trucking industry, not just in the United States but also in other countries.

Thermo King self-contained refrigeration units could be moved from trains to trucks and even to ships, opening up marine transport as well. Food and other perishables, including medicines, could be moved vast distances under refrigeration. In the 1950s, there was less pressure to use fermentation to store food safely, and the fermented foods, so important to the health of our ancestors, lost their place on our dinner tables. The technology was wonderful, but we had no clue that we were literally losing a microbial part of ourselves in the process.

Frozen foods became another convenient food choice, another option when the range of fresh foods was limited. That technology was aided by New Yorker Clarence Birdseye's invention and commercialization of the flash-freeze process. He had watched indigenous Arctic people such as the Inuit use quick-freeze methods for preserving fish and realized that he could simulate those conditions, preserving the freshness, form, and palatability of food when thawed months later. The technology eventually found its way into what later became the General Foods corporation.

Because both refrigerated and frozen food could now be shipped across the United States, in my 1950s childhood home in San Antonio we could enjoy apples from New York and Washington state, potatoes from Idaho, peaches from Georgia, avocados from Texas's Rio Grande Valley, and berries from California. But there was an absence of probiotic-containing food everywhere one looked. Of course, we weren't really looking either. Like most Americans in the 1950s, we were enjoying the food revolution, the increased availability of food choices across seasons, without realizing that a fundamental food component was being lost: probiotic microbes. In fact, with yogurts and kefir not yet common in the US, it was a biological wasteland for the human superorganism. Never before in human culture had our diet options been so extensive yet contributed so little to our internal biological diversity.

Of course, the intention behind the inventions that provided ready access to even nonseasonal food and reduced the need for the long-term storage methods of the past was that we could avoid starvation, work longer hours away from our homes, and keep ourselves relatively healthy. But what we did not realize is that the food of our ancestors, much of it stored using fermentation to protect against agriculturally barren winters, was better at supporting our microbiome. It fed us not only nutrients but also what we now know as probiotics. Additionally, many of those foods contained what are now called prebiotics, food components that feed the majority microbial part of us. In Part Three of this book, I will discuss prebiotics and probiotics in more depth.

Because we were unaware of the importance of our microbiome, including its care and feeding, a paradox was established. Food was now available year-round, and food could be shipped from areas of plenty to areas facing starvation due to drought or conflict. But in the changes we were making, we were unknowingly depleting and starving our commensal microbes. We unintentionally became incomplete as a superorganism and increasingly dysfunctional and unhealthy as a result.

Food choices are interconnected with the food and agricultural revolutions that have changed much of how our food is produced, stored, and made available. In addition to what we are no longer eating, we are eating some things our ancestors either did not have available or chose not to eat. That is not to imply that the food of our ancestors was always inherently better. It is simply to say that in one or two generations we changed our diet in unprecedented ways. Some of our food choices are also linked with the last category of this chapter: safety. In practice, food safety has usually meant the elimination of something that could poison the individual or cause infections (e.g., pathogenic bacteria). This is a place where impact on the microbiome has not been considered until the past few years, and there is a lot of catching up to do.

For people who are often away from home, convenience is a big part of food availability. A great deal of research has gone into food processing to create tasty products almost ready for consumption rather than lots of raw ingredients requiring hours of preparation. In hindsight, the processing may have come at a cost when it comes to complex formulations that can have unanticipated or undesirable effects on the consumer. If the full extent of breast milk’s positive effect on the microbiome was misunderstood, then it is safe to say that processed foods are a big unknown relative to their impact on the microbiome. In cases where they have been examined, the findings raise concerns.

When it comes to promoting the health of the superorganism via diet, there are several recent research-based findings. First, many of the vitamins we need are produced by the lactic acid bacteria in probiotic foods as well as by specific commensals in our gut such as bifidobacteria. Because our mammalian cells do not make most of our vitamins, at a minimum we should seed our microbiome and then eat a diet that feeds the gut microbes that produce the vitamins we need. Stanford researchers recently described how many diets lack the critical carbohydrates needed by our microbiota. In fact, they argue that we have essentially starved out our useful microbes with a westernized diet.

That is certainly a part of how we got to where we are now. But the good news is that dietary components supporting needed gut microbiota are now well known, as are foods that are harmful for our microbiome. Truly holistic diets that support the completed self can be pursued.

3. Urbanization

Urbanization has had an interesting and varying effect on human health for centuries. Expanding urbanization increases health risks, although the reasons are somewhat different today than in the past. In the past, unsanitary conditions, combined with people huddled together in ways that allowed easy spread of infectious diseases, made cities a perilous place for health. Today, many aspects of modern city life appear to cause a disrupted microbiome, a dysfunctional, inflammation-prone immune system, and a totally different set of diseases.

The trend to move to cities is nothing new. One of my scholarly hobbies is research on Scottish history, in particular the history of the goldsmiths of Edinburgh, Scotland, where the Incorporation of Goldsmiths has existed for more than five hundred years. You can learn a lot about urban living from the records and stories of highly skilled craftsmen who lived and worked in a town’s center. Edinburgh is an ancient city that was originally built on steep hillsides above a swamp, the whole of which was defended by its monumental castle from the early 1100s. Scotland itself had a largely agrarian culture until the Industrial Revolution. Extant historic details of city life as well as city death can be quite revealing.

During the seventeenth and eighteenth centuries, Edinburgh was a busy town with overcrowded tenements and various places of business crammed into tall buildings along a few streets. These tall buildings were even placed along the side of the largest church, St. Giles’ Cathedral, sitting in Parliament Square along the Royal Mile running from the castle to Holyrood Palace. In its heyday the Royal Mile of Edinburgh was like a mini Manhattan, only with more wood. People of various social classes were intermingled, and residences and shops were all nearby in the cramped spaces. The town’s affectionate nickname was Auld Reekie (probably meaning Old Smoky due to numerous wood and coal fires but often interpreted to have meant Old Smelly). It was a very polluted and unsanitary hill. Human waste was thrown out of windows of dwellings onto the streets, eventually flowing down the streets to the swamp. You wanted to live as high up in those buildings as possible. Infectious diseases were rampant.

Amid these filthy conditions, many things of beauty were being crafted. One of the eighteenth century's most famous Scottish goldsmiths was the second-generation master James Ker, who was noted for his marvelous gold and silver wares. James Ker was famous for navigating the perilous political waters following Scotland's 1745-46 Jacobite rebellion, which ended with the Battle of Culloden. He rose to the lofty status of representing Edinburgh in the British Parliament in 1747. In those days, election to Parliament was a position normally reserved for the town's most well-respected merchants.

James Ker's family life almost never happened. His goldsmith father, Thomas Ker, lived in a cellar under his shop in Edinburgh's Parliament Square, with the only outside ventilation coming through a grate that opened directly into the sewage-draining side of the steep street. It was described as a "miserable and unhealthy hovel." Of thirteen children, James was the only son to survive infancy, and only because the family had relocated to healthier housing. Ironically, James Ker had little more success raising the next generation of Ker children to adulthood, though he and his wives certainly tried. He had twenty children by two wives, with only five known to have survived. The majority of those surviving childhood were born after James Ker was wealthy enough to buy a country estate called Bughtrig. His daughter Violet married his talented apprentice William Dempster and forged the basis of a long, lucrative partnership while Ker was in politics and in London. One of his few surviving sons, Robert Ker, became a famous science writer. Robert was born away from Edinburgh, on a country estate where his mother was staying at the time. That quite possibly saved his life.

Of course, in Edinburgh during the two generations of Ker goldsmiths (1650s–1760s), infectious diseases (communicable diseases) were the leading cause of death. In stark contrast, in 2013 the leading causes of death in Edinburgh were noncommunicable diseases like cancer and circulatory diseases, which made up more than half of all deaths.

What happened in Edinburgh between the eighteenth and twenty-first centuries is a microcosm of what is happening all over the world, and overcrowding is still involved. City planning took over in nineteenth-century Edinburgh with the draining of the swamp and construction of New Town to accommodate more people wanting to urbanize. Sanitation improved with these changes. As a result, instead of massive amounts of human waste causing widespread infections and death, it is the chemical by-products of urban human activities that are now assaulting and breaking down our bodies.

More people than ever live in urban areas and major cities rather than rural areas. According to a 2014 United Nations report titled "World Urbanization Prospects," 54 percent of the world's population now lives in urban areas, and this is expected to increase to 66 percent by the year 2050 in association with the creation of new megacities, each with more than ten million people. Cities have been a massive drawing card for jobs, services, and a variety of entertainment. The general formula has been that more people per square mile equals more jobs, more commerce, more transportation opportunities (if you need convincing, just compare the destinations available flying out of metropolitan New York City airports with those of regional upstate New York airports), and more stuff to do. The centuries-old demand for urban living has led to modern-day university degree programs and jobs for urban planners who create integrated city spaces to accommodate all of the activities of large populations. These human-made spaces have been called the "built environment." Considering the amount of planning that has gone into cities like San Francisco, New York City, Tokyo, Beijing, Seattle, London, Rome, and São Paulo, one would think they must be the healthiest places on earth. Well, not exactly.

A major concern is that urban planners may have all the bells and whistles you could think of surrounding a major metropolis, such as services, entertainment districts, green spaces, hiking and cycling trails, parks, residential-density planning, public transportation, etc., but they have somehow missed a key asset for those choosing to live in the city: protection against the world’s most common killer, NCDs.

Recently, researchers have recognized that people living in urban areas, regardless of the city or continent, have significantly elevated percentages of noncommunicable diseases compared with those living outside cities. This can include age groups that normally don't see high rates of mortality. Professor Arline Geronimus of the University of Michigan Population Studies Center noted that young to middle-aged residents of some impoverished areas within cities suffer extraordinarily excessive mortality, with chronic disease contributing heavily to the deaths. Of course the question is, why? There are some pretty good leads.

Several components associated with urban living could contribute to problems with the microbiome as well as increased risks of noncommunicable diseases. One of the most studied is air pollution. The fine particulate matter (PM) concentrated in urban air is a significant concern. Researchers have associated exposure to PM with elevated systemic inflammation. As described in the prior chapter, inability to shut down inflammation when appropriate is a major component contributing to onset or maintenance of NCDs. Specifically, living near major roadways, as happens in all major cities, significantly elevates the risk of both heart disease and asthma. For asthma, the developmental timing of the exposure and the sex of the child influence the risk following exposure to the air pollution of cities. Additionally, there is evidence that exposure to traffic-related urban air pollution increases the risk of obesity. Again the promotion of inflammation by urban air pollution is thought to be involved.

If normal metropolitan areas are not enough of a concern, China is moving to create megacities with unprecedented numbers of people and related activities concentrated into comparatively small geographical areas. Two megacities are planned. One in the Pearl River delta is intended to have more people than Canada or Australia; another involving Beijing, which will be called Jing-Jin-Ji, will have an estimated 130 million people. Is it a good idea? The answer might depend upon what one wants out of it. Anyone who witnessed the air pollution connected with the recent Summer Olympics held in China can imagine what a multifold increase in the concentration of particulate matter would mean for human health.

Rather than creating megacities that would be expected to increase destruction of the human microbiome and elevate rates of NCDs even further, a return to a lower-density, country-type life might be a healthier direction. That fits with a scientific idea that has been discussed for some time in various forms, called the hygiene hypothesis. In fact, the urban versus rural effect on the immune system and on the risk of multiple NCDs has been known for some time. It might have you singing, "Farm livin' is the life for me" like Eddie Albert in the 1960s show Green Acres.

Green Acres was the poster child for country living in the small town of Hooterville, where Eddie Albert, Eva Gabor, and of course Arnold the Pig lived. (If you are too young to remember that show, just wait: Both a Broadway musical and a movie version are planned.) Beyond air pollution, the urban environment removed us from contact with farm animals, rural surroundings, and the foods that support both the microbiome and a well-regulated immune system. The urban versus rural effect was first noticed in comparisons made in Germany of the health risks for children who grew up on a farm versus those who grew up in a nearby city. Despite other factors being similar, farm versus city living was associated with significant differences in risk for a specific category of immune-related NCDs: allergic disease and asthma. At the same time, the farm versus city environment has been shown to affect immune development. In a recent article in the journal Science, researchers from Belgium and the Netherlands showed at least part of the basis for the immune-protective effects of early life on a farm. They found that the microbial products in dust from farms with animals can program the immune system differently, such that it is better balanced, generates less unhealthy inflammation, and requires a higher threshold of allergen exposure before any type of allergic response is produced.

Urbanization is a superorganism health problem. That can be changed, but it means radically altering how cities are structured and operated. Somehow we must either change the environments of cities or move out to the country.

4. Birth Delivery Mode

The method by which a baby is born is one of the most significant factors affecting a baby’s microbiome. While newborns are exposed to some bacterial products during prenatal development, the single most important seeding event for the microbiome is at birth through vaginal delivery. That provides the foundation for microbes throughout the body (mouth, gut, urogenital), and they will co-mature with the developing immune system. Not surprisingly, birth delivery mode is also a significant factor affecting the newborn’s immune status and risk of noncommunicable diseases.

When a child is delivered by cesarean section, the microbiome is not properly seeded and adequate microbial colonization is delayed unless a complementary therapy is used. With our understanding of the new biology and the view that self-incompletion is essentially a birth defect, the consequences of not establishing our majority microbial cells and genes at birth are becoming obvious. Cesarean delivery can be medically necessary and should be used when that is the case. However, in a recent journal article I had the opportunity to consider the origins of cesarean delivery and the evolution of the practice across the years, leading to the historically high rate of global cesarean deliveries in the present.

Cesarean delivery was originally employed only to save the baby when the mother had just died or was dying. In fact, an ancient Roman decree called the Lex Caesarea stated that cesarean delivery would be attempted on all dead or dying women who were with child. The idea that it could be used with both mother and baby surviving is a comparatively modern use of the medical procedure. The first purported instance of mother and baby both surviving came from Switzerland in 1500, although the event was not recorded until the 1580s.

Once antiseptics, anesthesia, and antibiotics made cesarean delivery survivable and safer, its use as the birth mode of choice exploded. This opened up the possibility of elective operations. Planned delivery dates had certain inherent advantages for all parties involved. With everything paving the way for elective cesarean deliveries as an option, if not a preference, there has been a steady rise in elective cesarean deliveries in both developed and developing countries. There was a 53 percent increase in cesarean deliveries in the United States between 1996 and 2007. The increase occurred across all states and ethnic groups. In Sweden there was a threefold increase in elective cesarean births just between the years 1997 and 2006, and in England the rate doubled between 1990 and 2008. Recent reported rates for cesarean delivery are as follows: England, 24 percent; US, 33 percent; parts of India, 40 percent; Brazil, 32 to 48 percent, depending on the mother's country of origin; and China, 46 percent. Of course, the massive rise in elective cesarean deliveries was based on the assumption that there was little downside to the procedure, particularly once the immediate surgical risk was past. But the risk-benefit estimates were wrong because the understanding of biology and the approach to safety testing were wrong. Now we know better.

Invariably, throughout the twentieth and twenty-first centuries we have used short-term measures in determining what is safe. That is fine when we are considering infectious diseases, pandemics, and acute poisonings. But it is far from adequate when considering lifelong safety. It is almost as if we have been willing to check a week after an operation or environmental exposure, and if no problem could be detected, then the event was considered to be safe. But if the new biology teaches us anything, it is that what you measure and when you measure are both crucial. Merely waking up the morning after a dinner hosted by the ruthless Lucrezia Borgia might come as a relief, but that measurement of survival is not the best predictor of health across a lifetime (or even a month in the dangerous world of Italian Renaissance politics). Developmental programming happens, epigenetic regulation happens, and no-see-um issues with the microbiome, the immune system, and the neurological system happen much like a ticking time bomb. We would never know there was a problem just by looking at the newborn given the usual measures that have been used in Western medicine and safety evaluation.

The missed health risks connected to cesarean delivery are twofold, based on the latest results. First, the surgical procedure, like most surgeries, includes presurgical administration of antibiotics to prevent postsurgical infections. The antibiotics compromise, if not destroy, the mother's microbiome, which needs to be passed to the baby, and also impair the bacterial signals the baby receives from those maternal microbes just before delivery. Essentially, a mother is given a drug affecting 99 percent of the genetic component she should be passing on to her baby, and in standard medical practice, to date, nothing is done to correct this. Second, the C-section bypasses the transfer of the coating of microbes from the mother's vagina that would normally seed the baby's gut. Cesarean delivery interferes with the birth of the human superorganism. This presents future health challenges if nothing is done to biologically complete the baby.

The timing and nature of the founding microbes in body locations like our gut are critical for both our physiological maturation and the developmental programming of later health. Fredrik Bäckhed and his research colleagues in Sweden recently compared mother and infant microbiomes across the first year of life. Their conclusions? Below are the highlights.

  1. A baby born vaginally has a microbiome that looks like its mother’s. Based on analyses of stool samples, bacterial species in the baby’s gut were a 72 percent match for those in the mother’s gut, while the bacteria in cesarean-delivered babies had only a 41 percent match with those in their mothers.
  2. More bacteria in cesarean-delivered babies were derived from sources outside the mother (e.g., hospital workers and surfaces) as well as from the mother's skin and mouth. However, the bacteria of the skin and mouth are not the ones normally needed in the lower gut to promote co-maturation and the most effective metabolism. The profile in vaginally delivered babies features bacteria of the genera Bacteroides, Bifidobacterium, Parabacteroides, Escherichia, and Shigella. In contrast, C-section-delivered babies were installed with bacteria of the genera Enterobacter, Haemophilus, Staphylococcus, Streptococcus, and Veillonella.
  3. As the cesarean-delivered babies developed, they were missing Bacteroides bacteria or had far fewer of them compared with vaginally delivered babies.
  4. As the babies aged, the microbiome of cesarean-delivered babies appeared to look more like that of adults sooner than the vaginally delivered babies’ microbiome. It is as if with the C-section they had missed some earlier developmental progressions. At each developmental stage during the first year of life, the C-section signature bacterial species differed from those in vaginally delivered babies.
  5. The microbes in cesarean-delivered babies carried a greater proportion of genes for antibiotic resistance than those of vaginally delivered babies when first born, and the difference was still significant at four months of age. That could affect the babies’ capacity to receive effective future treatment with antibiotics. In many ways, that is not surprising since more of the microbes in cesarean-delivered babies either came from or had more exposure to the hospital environment than those in the mother’s birth canal.
  6. The early infant microbiome, when complete, is designed to receive and process breast milk as the initial food source.
  7. Metabolism by the early infant microbiome is central to the production of key vitamins, iron, and amino acids required by the brain for its development.

An editorial article accompanying the Bäckhed group’s paper on Swedish moms and their babies was titled “Birth of the Infant Gut Microbiome: Moms Deliver Twice!” I absolutely agree with this.

An additional, recently described effect of cesarean delivery involved an attempt to use maternal probiotics to boost microbial transfer to the baby through both colostrum and breast milk. A collaborative research group from Italy examined the effects of giving a daily probiotic mixture containing lactobacilli and bifidobacteria to women during late pregnancy and early lactation on the microbes found in both colostrum and breast milk. The probiotic bacteria levels were significantly elevated in both colostrum and mature breast milk in the women who had delivered vaginally, but there were no significant increases in the probiotic bacteria in the colostrum or breast milk of women who had given birth by C-section. In this case, birth delivery mode influenced the levels of probiotic-ingested bacteria that were subsequently available for transfer to the baby through colostrum and breast milk. That was an unexpected finding.

Cesarean delivery is certainly associated with effects on both immune maturation and the risk of many NCDs, and the types of immune problems observed suggest that they underlie the elevated NCD risk. For example, one of the needed changes for the newborn's immune system is for the Th1 branch of acquired immunity to catch up with the types of responses that promote allergic diseases (Th2). There is a bias toward Th2 prenatally, and that imbalance has to be corrected through further maturation after birth to provide infants with immune balance. In general, Th1 responses are most useful in fighting viruses and cancer, while Th2 responses are the biggest help in fighting parasites and certain kinds of bacteria. Ultimately, the infant needs both types of responses, in balance, to fight diseases and maintain tissue integrity. Imbalances between Th1 and Th2 capabilities usually end in disease.

Researchers at the Swedish Institute for Communicable Disease Control showed that cesarean delivery not only causes problems with the gut microbes but also keeps the Th1 branch of immunity suppressed in the infant. The immune system is not brought into balance. Similar results have been found by other researchers measuring immune hormones and other factors needed to help with Th1 immune responses. Measures of airway inflammation are also increased in cesarean- versus vaginally delivered children. These studies indicate that the infant immune system is imbalanced in cesarean-delivered children and that a higher level of tissue inflammation is likely during certain environmental exposures.

As expected with childhood immune disorders promoted by cesarean delivery, NCDs linked with immune problems occur more frequently in these children and adults. A study out of Denmark examined two million children born between 1977 and 2012 for possible birth-delivery-mode-associated disease. After correcting for other factors, the researchers found that asthma, systemic connective tissue disorders, juvenile arthritis, inflammatory bowel disease, immune deficiencies, and leukemia occurred more frequently in cesarean-delivered children than in those from vaginal births. Not surprisingly, with this extra disease burden, C-section-delivered children were hospitalized more often than vaginally delivered children. The researchers suggested that a common immune mechanism likely exists in this case. Other studies have reported a variety of NCDs to be at elevated risk with cesarean delivery. These include obesity, autism spectrum disorders and ADHD, high blood pressure, celiac disease, IgE-mediated risk of food allergies, and atopic dermatitis. It should be noted that some of these disease associations involved other factors as well. For example, with atopic dermatitis, it was the combination of cesarean delivery, antibiotic use, and certain immune gene variants that produced a significantly elevated risk of disease. Also, with celiac disease, a host immune genetic component affects who is at greatest risk from cesarean delivery. That suggests that not all babies face exactly the same risk of a given NCD with cesarean delivery. It may explain, in part, why different diseases show up in different cesarean-delivered children. Finally, there is elevated risk for multiple NCDs linked with C-sections. However, that does not mean that a specific child will develop an NCD if born via cesarean. It simply means that, as a group, cesarean-delivered babies will develop significantly more chronic diseases as they age. When it is your child with one or more of these diseases, the population statistics blur.

The combined evidence of mother-child microbial transfers, infant immune maturation, and risk of later-life NCDs suggests that cesarean delivery, when elective and not medically necessary, is to be avoided.

5. Misdirected Efforts at Human Safety

Attempts to protect human health, though well-intentioned, have often gone awry. With regulatory agencies and safety testing regulations fully in place, how is it that:

  1. The drug thalidomide was given to thousands of pregnant women for morning sickness and thought to be safe, only to be withdrawn from the market later after producing massive numbers of serious birth defects?
  2. Asbestos was thought to be a wonderfully safe insulation material and was installed in a majority of commercial buildings and many homes, only to be removed and remediated later as a major health hazard?
  3. Safety levels for lead were set, only for it to be discovered later that apparently safe levels were reducing IQs and damaging the immune systems of exposed children?
  4. Bisphenol A and phthalates were included in thousands of plastic products such as baby bottles, only to be recalled later and banned in some countries because of endocrine disruption and toxicity affecting numerous physiological systems?
  5. Flame-retardant chemicals such as polybrominated diphenyl ethers (PBDEs) were included in children’s pajamas and furniture as a new improved safety measure only to be recalled later because the chemicals produce multiorgan toxicity and cancer?

The problem with chemicals and drugs being extensively introduced and then withdrawn decades later, once safety levels were reevaluated, is that millions of pregnant women and children end up exposed to unsafe, NCD-promoting levels in the meantime. Carl Cranor details these and other problems in his book Legally Poisoned.

Why is safety testing off-kilter, leaving massive gaps in the protection of human health? For one, you need only compare the ingenuity, creativity, and investment that go into new drug development with the science applied to safety evaluation to see a problem. Research to discover new drugs uses state-of-the-art science, looking anywhere and everywhere for new health solutions. In contrast, regulated safety assessment of chemicals and drugs requires the lengthy building of consensus among all stakeholders, including the pharmaceutical and chemical industries. It is glacial-speed bureaucracy at its most tedious. Nothing moves fast, and by the time any consensus is actually reached, the issue under consideration may be a decade old and no longer even relevant. Not surprisingly, it is old biology all the way. There is huge momentum toward maintaining the status quo, such that changes in required testing protocols are more the exception than the rule.

Here is a mind-numbing example related to my own area of work. Among many new drugs termed “biologics” used to treat NCDs, some are designed to correct specific imbalances in immune hormones (cytokines) connected with NCDs. The presence of the cytokine imbalance can reflect disease status, and clinicians will track cytokine levels as a way to monitor the effectiveness of treatments with some biologic drugs. So, actual human patients were already being given cytokine drugs to change cytokine levels and better manage some NCDs. However, when it came to measuring the exact same cytokines and using those measurements to determine if new drugs or chemicals might be a health concern and cause detrimental immune changes, the same cytokine measurements were deemed too new, and their relevance to immune status too uncertain, to be used to determine safety. I remember my response at the time was “You can’t have it both ways.” If it is good enough to dictate when you inject cytokines into people for therapy, it is good enough to tell you about the status of the immune system. But of course that requires the same level of science be applied to drug safety as is applied to drug development and therapy.

That is only one issue explaining how we could expose generations to chemicals and drugs that have the potential to cause NCDs before yanking them from the market after problems develop. The second challenge of misdirected safety testing concerns the microbiome being our filter and gatekeeper. All of our required human safety testing, to date, has been based on the model that we are only human mammals, and safety for our mammalian cells and tissues is the only concern. Under the new biology this limitation is a problem. We are only evaluating safety for a minority component of us. The reality is that human safety testing that does not take into account our microbiome does not adequately protect us. Things that we previously considered safe may not be if they are harmful for our partner microbes.

A team of researchers recently used a commonly employed laboratory mouse strain (not unlike those used in safety testing) to ask about the safety of a type of very common food additive: food emulsifiers. Emulsifiers are used to make food thick and smooth. After all, you would not want lumpy ice cream or gravy or sauce. In this case, the two food emulsifiers tested were among those most widely used in foods, polysorbate 80 and carboxymethylcellulose. Polysorbate 80 can be found in ice cream, chewing gum, gelatin, and food shortenings. But it is also in other products where the consumer might be exposed in several different ways. These include some vitamins, soap, shampoo, cosmetics, medicines, and vaccines. Products with carboxymethylcellulose include ice cream, laxatives, diet pills, textile sizing agents, detergents, and some artificial tears. Importantly, carboxymethylcellulose can also be used in dressings or drug-delivery systems following some surgeries. In other words, you are likely to encounter these two chemicals virtually every day of your life, and some doctors might even put them on or inside you.

These emulsifiers were found to alter gut microbe populations by thinning the mucus layer and increasing inflammation, eventually leading to inflammation-driven NCDs in the mice. The food and consumer product additives had been deemed safe based on old-biology-driven safety testing. But with information from this new-biology-based research, it appears that they are very likely to promote inflammation-driven NCDs. And by the way, that explosion of out-of-control inflammation and NCDs is exactly what has been happening globally for the past few decades. Smoothie anyone?

What else don't we know? Are products based on genetically modified organisms (GMOs) safe? Have they been tested for detrimental effects on the microbiome? If not, then it remains an open question. In fact, at least one recent journal article reported that the herbicide trade-named Roundup, made by Monsanto, has an inhibitory effect on several probiotic bacteria, including Lactobacillus species. Many chemicals and drugs that were previously deemed safe cause problems for our microbiome. We need to redo much safety testing with the new biology and the microbiome front and center. Safety testing needs to be relevant to the human superorganism.

6. Mammalian-Only Human Medicine

The way human medicine has been practiced is an important piece of the puzzle that explains why our microbial partners have been devastated and why we have, to this point, lost the battle against NCDs. Modern medicine, as currently practiced, with its mammalian-only focus on the human patient, is the sixth cause of the NCD epidemic. To date, it has featured antibiotic overreach, C-sections promoted as a safe birth delivery mode, a misunderstanding of diet to benefit the whole human, and the application of misdirected or incomplete safety information. The default has tended to be: See a doctor while sick, leave with an antibiotic. Late-twentieth-century medicine treated food as a weight-loss tool without realizing that the food needed to feed the microbes, too, and that the microbes needed to be in place, or the patient was likely to have weight issues regardless of dietary fads. Medical practice tended to steer clear of social choices like urbanization. After all, that is a personal choice, even if it is one that was practiced by the masses based on an incomplete understanding of health benefits and risks. During the twentieth century, location-based medical advice was usually limited to suggestions to leave a stressful job or run away from regional allergens. Medicine fully embraced cesarean delivery as a safe procedure for both mother and baby. It relied on human safety assessment that we now realize is both incomplete and often misdirected. It is incomplete because it focused only on the mammalian part of humans, and it is misdirected because it never measured the very things that are the most relevant indicators of the current epidemic of NCDs. Unless medical practices change to include the microbiome, progress on the other categories will not be enough.

If we hope to ever truly get humans healthy, we will need to shift our current model of medicine, fully embrace the biology of the human superorganism, and treat that patient.