CHAPTER 4

The Power of Epigenetics

What Obesity and Other Non-communicable Diseases Have in Common

I am frequently asked how much X, Y, or Z contributes to a condition such as obesity. In other words, how much do genetics, dietary preferences, or behavior contribute to someone being overweight or obese? Sugar, for example, has gotten a lot of bad press recently. Some have argued that sugar is the main culprit in our obesity epidemic and that we can trace the rise in sugar consumption to the rise in that number on the scale, as well as in other health challenges.

The truth is we do not know the exact “ingredients” of obesity, especially on an individual level. We may never know precisely what drives person A into obesity while person B remains at a normal weight. The question is made more difficult by the incredible variation among individuals, which is becoming increasingly apparent in studies of diet and metabolism.19 Person A may consume twice the amount of sugar as person B but never have an issue with weight and its related health problems. To answer that question, we would need to follow tens of thousands of people from preconception through adulthood while measuring many parameters: what they eat, how much they exercise, where they live, what chemicals they were exposed to in utero and during early life, and so on. We almost had a “National Children’s Study” in the United States that would have addressed many of these issues, but unfortunately, it was abruptly canceled in 2014 amid allegations that it was poorly planned and flawed in design.102 National Institutes of Health (NIH) director Francis Collins suggested that the study would emerge from the ashes in a new form, but this has not happened yet. Fortunately, equivalent studies are under way in Japan and elsewhere that will help us understand the relative contributions of different factors to obesity and other health concerns.

Irrespective of how many factors contribute to obesity, one thing is certain: your DNA is not your destiny. That is, your health does not depend simply on what genes you were born with. The new science of epigenetics reveals how your environment (broadly defined) and the choices you make can change how your genes are expressed. In some cases, these changes can be passed on to your children, grandchildren, and beyond. We often hear people say, “I take after my mother” or “I’m built like my father.” Obviously this must be true to some degree, because you inherit one copy of all your genes from your mother and another copy from your father (except in boys, who can inherit their X chromosome only from mom and their Y chromosome only from dad). But more correctly stated: Your life has been directly influenced not just by the genes you inherited, but also by what you were exposed to while your mom was pregnant and during your formative years, as well as by the experiences of your parents and grandparents—what they ate, how they lived, and what they experienced. In this view, your health was at least partly influenced before you were born and maybe even before your parents were born.

The debate between nature (your DNA) and nurture (your environment) has persisted throughout modern history, particularly among psychologists. However, it is clearly a contrived debate—today, it is obvious that both genes and environment are important, as are the interactions between them. The toxicology community has jumped into this type of contrived debate as well. In 2017, I was invited to an industry conference to debate the topic “Which is more fattening, the pizza or the pizza box?” Obviously both the pizza and the pizza box are fattening, but for different reasons. One hopes that the organizers were aware of how silly and irrelevant such a debate would have been, but you never know.iv Both nature and nurture have long been considered for their influence on the incidence rates of diseases, from obesity to cancer. About 18 percent of diseases have been associated with specific genetic causes (for example, sickle cell anemia and hemophilia). The rest are multifactorial (requiring the action of multiple defective genes) or unexplained, despite the enormous amount of effort that has been expended on linking genes to diseases around the world. One explanation would be that we simply have not tried hard enough to find the disease-causing genes, despite having sequenced the genomes of humans and all the major groups of mammals and after hundreds, if not thousands, of so-called genome-wide association studies (GWAS) that attempt to link DNA sequences with disease incidence. A better explanation would be that there are epigenetic factors that have not yet been adequately considered. What do I mean by that? Before we go there, it will help to begin with a short primer on genetics so that we are all on the same page.

THE DISCOVERY OF DNA HERALDS A NEW ERA

The twentieth century was no doubt the century of the genome, with the discovery of DNA (deoxyribonucleic acid, the genetic material that underlies most life on earth). But the story of the genome goes much further back in history, and a number of books have been written on the topic.v The Austrian Augustinian monk Gregor Johann Mendel was a passionate gardener in the nineteenth century who gained posthumous fame as the father of genetics because his careful experiments predicted the existence of discrete units of heredity that gave rise to particular traits. Breeding peas in the garden at St. Thomas’s Abbey, Mendel noted the effects of crossing different strains of the common pea plant. He demonstrated the transmission of characteristics in a predictable way by inherited “factors” that would later be called genes. He showed that characteristics are passed from one generation to the next and that each parent contributes one of the factors underlying each characteristic in an individual. He further showed that genes could come in different versions, resulting in dominant and recessive traits. Mendel’s work was not widely appreciated during his lifetime but was rediscovered in the early 1900s and refined into what we now know as Mendelian genetics by the work of geneticists such as Thomas Hunt Morgan.

While it became well accepted that there were units of heredity termed genes, it was not known what type of molecule (DNA, RNA, or protein) served as the “genetic material.” Oswald Avery, working at what was then the Rockefeller Institute (now Rockefeller University), showed conclusively in the 1940s that DNA, rather than protein, was the genetic material. It was known that natural DNA contained phosphate, deoxyribose sugars, and nitrogen-containing bases (purines and pyrimidines), but the structure of DNA and how it could be replicated accurately were not known. The next big breakthrough came from the Austrian chemist Erwin Chargaff, who determined how the “ingredients” of DNA matched up chemically. Working at Columbia University, Chargaff showed that in any naturally occurring DNA, the amount of the purine base adenine (A) equaled the amount of the pyrimidine base thymine (T), and the amount of the purine base guanine (G) equaled the amount of the pyrimidine base cytosine (C). This became known as Chargaff’s first rule (A=T, G=C). Why this was so went unexplained until Francis Crick and James Watson, working at Cambridge University, together with Maurice Wilkins at King’s College London, deduced the structure of DNA.106
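
To make the rule concrete, here is a minimal sketch in Python that counts the bases in a double-stranded stretch of DNA and checks that A matches T and G matches C. The sequence and the helper function are made up for illustration; they are not data from any real genome.

```python
# A toy check of Chargaff's first rule: pooling both strands of a
# double-stranded DNA gives equal counts of A and T, and of G and C.

from collections import Counter

def check_chargaff(double_stranded_dna: str) -> None:
    counts = Counter(double_stranded_dna.upper())
    print(f"A={counts['A']}  T={counts['T']}  G={counts['G']}  C={counts['C']}")
    print("A equals T:", counts['A'] == counts['T'])
    print("G equals C:", counts['G'] == counts['C'])

# A made-up example: one strand plus its complementary strand
strand = "ATGCGGCTA"
complement = strand.translate(str.maketrans("ATGC", "TACG"))
check_chargaff(strand + complement)   # both equalities hold exactly
```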

Watson and Crick, with an assist from Wilkins, are most commonly identified as the discoverers of the structure of DNA, but their work built on that of many others, including Rosalind Franklin, also from King’s College, whose X-ray photographs of DNA led Watson and Crick to their “aha moment.” What Crick, Watson, and Wilkins brought to light in 1953, and what earned them the Nobel Prize in Physiology or Medicine in 1962, was that DNA was organized as an antiparallel double helix. This double helix had a sugar-phosphate backbone, held together by bonds between the 5′ and 3′ carbon atoms of successive deoxyribose sugars. The purine and pyrimidine bases extended inward from this backbone, and the two strands were latched together by hydrogen bonds between opposing A and T or G and C bases.

Imagine holding the ends of a flexible wooden ladder and twisting it, and you will have something like the basic structure of DNA (not exactly, but close enough). In this model, the paired bases serve as the rungs of the ladder. This structure immediately suggested a mechanism for duplicating the DNA molecule—that each strand could serve as a template for a new one. How this happens was deduced by Matthew Meselson and Franklin Stahl at the California Institute of Technology, who showed in 1958 that the DNA double helix unwinds and each strand serves as a template for a new, complementary strand.107 In this way, DNA can be precisely copied without changing its structure, with the exception of occasional errors or mutations. The DNA code is read as the sequence of A, C, G, and T nucleotides that constitute each strand of the DNA. The nucleotide sequence is a key structural element of the genes that individually, or in combination, determine everything from your hair color to your predisposition for certain diseases.
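
As a simple illustration of the template idea, here is a minimal sketch in Python showing how the base-pairing rules alone determine the strand built on a template. It is a conceptual toy with a made-up sequence, not a model of the actual replication machinery.

```python
# Toy illustration of template-directed copying: each base on the template
# dictates its partner (A with T, G with C) on the newly made strand.

PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complementary_strand(template: str) -> str:
    return "".join(PAIR[base] for base in template.upper())

template = "ATGCGTTA"                      # a made-up template strand
new_strand = complementary_strand(template)
print(new_strand)                          # TACGCAAT
# (Real strands are antiparallel, so the new strand is conventionally written
# in the reverse direction; the pairing rule is the essential point here.)
```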

Watson and Crick’s Nature paper transformed the life sciences and ushered in the era of molecular genetics—suddenly, learning the underlying DNA sequence seemed very important. It was widely assumed that once we knew the sequence of the entire human genome, it would be possible to understand virtually everything about how the human body worked, what genes were responsible for our individual traits, disease susceptibilities, and so on. However, it took some time for the technology required to determine individual DNA sequences to be developed. Walter Gilbert at Harvard University and Fred Sanger from Cambridge University shared the 1980 Nobel Prize in Chemistry for developing the first robust methods for determining DNA sequences. This opened the door for sequencing the genomes of viruses, bacteria, and so on.

The pace of genome sequencing has been breathtaking: from the first genomic sequence in 1978 of a small virus called ΦX174 (5,386 bases encoding eleven genes), which infects bacteria (commonly called a bacteriophage), to the “complete” sequence of the human genome, roughly three billion base pairs, in 2003. It took a mere twenty-five years from the first sequence to the human genome and only fifty years from Watson and Crick’s Nature paper describing the structure of DNA.106 There continues to be a concerted effort to determine which DNA sequences put one at a greater or lesser risk for diseases of various sorts, but there is still much to be learned about how genes function and how genes interact with the environment. The costs to sequence DNA have come down significantly in the past decade with the advent of better technologies, falling from almost $10 million in 2008 to close to $1,000 today (the first human genome took about $2.7 billion to complete). This was largely driven by the “$1000 genome” project funded by the NIH.

DNA AND DISEASE

We have grown accustomed to thinking that our individual DNA sequences almost completely control our health and wellness; this view is common even among members of the medical community. But it is an oversimplification, if not largely mistaken. DNA sequence is just one part of the puzzle. DNA says more about our risk than our fate. DNA sequence controls probabilities for the most part, not necessarily destinies. Of course, any one of us can have a mutation or gene that encodes an absolute outcome, such as hemophilia, cystic fibrosis, or muscular dystrophy. But those types of conditions are rare because the majority are “recessive” mutations. That is, development of the disease requires two mutant copies of the gene, one from each parent. Dominantly inherited diseases—those that are caused by a single mutant copy of a gene—are much less common. The exception is genes encoded on the X chromosome in men (such as the gene for red-green color blindness): because men have only a single X chromosome, one mutant copy is enough to cause the condition.
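
To make the copy-number logic concrete, here is a minimal sketch in Python (the function and labels are hypothetical, for illustration only): recessive conditions require every copy a person carries to be mutant, dominant conditions need only one, and a single mutant copy on a man's lone X chromosome is therefore enough.

```python
# Toy illustration of inheritance modes; not a clinical or diagnostic tool.

def affected(alleles, mode):
    """alleles: the 'normal'/'mutant' copies a person carries.
    mode: 'recessive' or 'dominant'."""
    mutants = sum(1 for a in alleles if a == "mutant")
    if mode == "dominant":
        return mutants >= 1
    # recessive: every copy present must be mutant
    return mutants == len(alleles)

# Autosomal gene: two copies
print(affected(("mutant", "normal"), "recessive"))   # False: unaffected carrier
print(affected(("mutant", "mutant"), "recessive"))   # True
print(affected(("mutant", "normal"), "dominant"))    # True

# X-linked gene in a male: only one copy, so one mutant allele is enough
print(affected(("mutant",), "recessive"))            # True
```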

One way to get an idea of your potential risk for any genetic disease is to subscribe to one of the many personal genetic testing services and see how many traits you are at risk for. I predict that most readers will find very little, if anything. I did “23andMe” before the FDA arrogantly prevented them from telling you much about your personal disease profile in 2013. I am happy to report that, for what it’s worth, I have no major susceptibilities for any disease or condition (my risks ranged from about half of normal to twice normal across a variety of conditions, with most within plus or minus 50 percent of normal).

Your risk of this or that trait might be about 1.4 times higher or lower than “normal,” but it is very unlikely that you will discover many DNA sequence variations strongly linked with any disease. Having said that, I have a friend who discovered from her 23andMe profile that she had a mutation in a gene linked to progressive hearing loss, and sure enough, she had noticed that it had become progressively harder for her to hear seminar speakers over the past few years. So the take-home message is that these types of testing services have some value, but what they will most frequently tell you is that your susceptibility for some condition is slightly higher or lower than normal. This might call for some modest lifestyle modifications, but rarely more than that.

Another thing that is virtually impossible to explain in the context of alterations to our DNA sequence is the staggering rise in the frequency of non-communicable diseases (NCDs) in the past thirty years or so.108 By non-communicable, I mean diseases that are not caused by infectious agents such as viruses (for instance, colds and influenza), bacteria (cholera, tuberculosis, pneumonia), fungi (cryptococcal meningitis, “Valley fever”), or protozoans (brain-eating amoebas). According to the World Health Organization, a whopping 70 percent of global deaths in 2015 were due to NCDs.109 The rise in NCDs is especially puzzling considering the sheer number of transformative medical breakthroughs over the last sixty or so years. The discovery of antibiotics, widespread recognition of the value of environmental hygiene, and increased use of vaccines against devastating diseases such as polio provided a great boost in life span. Other developments aided this life extension, including improvements in diagnostics and medical care, a decline in smoking, and increased access to medical care and pharmaceuticals. Sadly, we have taken a sudden turn for the worse. For the first time in more than two decades, life expectancy for Americans declined in 2015. The report released by the National Center for Health Statistics in 2016 cited rising deaths from heart disease and stroke, diabetes, accidents, drug overdoses, and other conditions.110 A very troubling study showed that life expectancy at five years of age in mid-Victorian England (1850–1870) was as good as or better than it is today if one eliminates infectious diseases from consideration as a cause of death.111 On top of that, the incidence of degenerative diseases such as cancer, cardiovascular disease, and diabetes in mid-Victorians was 10 percent of what we experience today. Despite all our technological advances, the horizon is looking fairly bleak at the moment, and we need to understand why.

Non-communicable diseases are now the number one cause of death in the world. The World Health Organization calls NCDs “a slow-motion catastrophe.” Some astonishing numbers:

• Leukemia and brain cancers: more than a 20 percent increase since 1975.

• Asthma: doubled between 1980 and 1995 and has remained elevated since.

• Autism: increased 1,000 percent in the past three decades.

• Infertility: 40 percent more women had difficulty conceiving and maintaining a pregnancy in 2002 than in 1982 (the rate doubled in women aged eighteen to twenty-five years).

• Autoimmune disorders: according to a new study, the prevalence and incidence of autoimmune diseases, such as lupus, celiac disease, and type 1 diabetes, are on the rise, and researchers at the Centers for Disease Control and Prevention (CDC) are unsure why.

• We already mentioned type 2 diabetes, but also: between 2001 and 2009, the incidence of type 1 diabetes increased by 23 percent, according to the American Diabetes Association.

A century may seem like a long time compared with our usual life span, but it is just a moment on an evolutionary scale. Thirty years is an evolutionary instant. There is no possibility that human genetics has changed throughout the world rapidly enough to account for changes that happened over a thousand-year time frame, let alone a thirty-year time frame. This is unequivocal evidence that something other than alterations in the sequence of our DNA is driving the worldwide increase in NCDs. One common explanation we hear is that our modern lifestyles are to blame. If you travel as much as I do, you will clearly see that this does not work either: lifestyles in various countries are very different, yet NCD rates are rising nearly everywhere. Another striking way to think about this new reality is to consider that infant obesity almost doubled in a mere twenty years. We can’t really blame six-month-old infants for “unhealthy lifestyles.” Nor can we use the standard explanations of unhealthy lifestyle for the epidemic of obesity in domestic cats and dogs, urban feral rats, and five other species of animals that David Allison and his colleagues observed.14 (See chapter 1.) We do not know what is triggering the rapid rise of type 2 diabetes, childhood asthma, autism, attention deficit hyperactivity disorder (ADHD), and infertility, but these increases must be due to environmental, dietary, and behavioral factors. Clearly, disease is more than just genes. The environment, and how our genomes and “epigenomes” interact with it, has a prominent role to play.

THE POWER OF EPIGENETIC EFFECTS

Do you remember when you first heard about evolution in biology class? Perhaps you even remember hearing about Charles Darwin’s theory of “natural selection” compared with Jean-Baptiste Lamarck’s theory that characteristics one acquires in life could be transmitted to one’s offspring. Lamarck was the French naturalist who, you’ll recall, proposed in 1802 that the environment can directly alter phenotype in a heritable manner—a mechanism for evolution in which species pass traits acquired during their lifetimes to their offspring. The example I gave earlier and that you most likely read about in school describes how antelopes could have evolved into giraffes by stretching their necks to reach the higher leaves on a tree. The “stretchiest” animals would pass slightly longer necks to their offspring. Every generation would lead to a slightly longer neck until the antelope had become a giraffe. Of course, if this were true, one wonders why the increases in neck size stopped at giraffe length instead of continuing to increase as the lower leaves were eaten.

This theory of evolution differed from Charles Darwin’s later thesis that organisms cannot alter their genetic material on demand. Whereas Lamarck’s premise says adaptations appear as needed in response to the environment and the acquired traits are then passed on to offspring, Darwin’s theory of evolution by natural selection holds that evolutionary changes in organisms result from differential procreation or survival in response to a changing environment. That is, natural selection acted on preexisting variations in the population. The organisms that were best equipped to prosper in current environments would produce more and fitter offspring, eventually dominating the population. Those organisms harboring variations that made them less fit eventually became underrepresented in the population. In this view, antelopes with the longest necks would have better access to food, enabling them to reproduce more and eventually replace the short-necked ones.

Most biologists have dismissed the possibility of Lamarckian inheritance because it does not immediately make sense and there was no ready mechanism to explain it.vi While Darwin’s theory of natural selectionvii has long dominated evolutionary theory, there is now room for a version of Lamarck’s ideas. As is often the case in science, very smart people are rarely completely wrong. It is hard to believe that antelopes grew their necks as Lamarck proposed, but what about the effects of a changing environment on living populations? Wouldn’t it make evolutionary sense if organisms also had a rapid way to respond to environmental changes? Lamarck got the details completely wrong but was probably correct in a broad sense—epigenetic inheritance is essentially the inheritance of acquired characteristics.

It is important to note that while evolution by natural selection is well supported by the fossil record and experimental data, Darwin also got some details wrong. For example, Darwin believed in a “blending” model of inheritance that was fashionable in the mid-1800s. Basically, blending inheritance holds that variation among individuals is random but bounded by the traits of the parents. The “blood,” or hereditary traits, of both parents came together in the offspring, just as two colors of paint blend when mixed. While there are some instances where this might appear to be true, it fails for continuously graded traits such as height. For example, blending predicts that a tall father and a short mother would have children somewhere in between the two heights. If blending inheritance were the underlying mechanism, then offspring in every generation would always be less extreme than their parents for every trait, and the population would approach an average over time. This is completely inconsistent with what we see in the real world and with the idea of gradual evolution by natural selection. Blending was an idea that Darwin wrestled with in his writings.

The term “epigenetics” (literally, “on top of genetics”) was first coined by the British developmental biologist Conrad Hal (C. H.) Waddington in 1942. Waddington had proposed a theory of “genetic assimilation” to explain some intriguing experimental results he had seen. Genetic assimilation referred to a trait (phenotype) that was originally produced in response to an environmental condition but that later became fixed in the genome. Waddington believed that a cell during development was like a ball rolling down a hill that was filled with gullies. Near the top of the hill, the ball rolled into one or another gully, perhaps as a result of hitting a rock or being displaced by a gust of wind. Once the ball was in a gully, it was difficult to get back out—not so different from embryonic cells, which start out with the ability to form all of the types of cells in the body (what we call pluripotent) but are later restricted in their ability to transition from one cell type to another (for instance, from muscle to brain). The influences acting on the cell were “epigenetic” since they were on top of the existing genetic information. Waddington found that he could induce extreme phenotypes in fruit flies (an extra set of wings) by treating them with ether and that he could breed these flies and increase the frequency of these effects until they could be seen in the absence of ether treatment. Although Waddington showed that these changes became more stable over successive generations as a result of selection, he was strongly criticized by colleagues such as the evolutionary biologist Ernst Mayr, who believed that Waddington was invoking Lamarckian inheritance. Waddington was prescient and his ideas have found support in the work of Mike Skinner, in our work, and in that of other laboratories.

Epigenetics simply refers to changes in gene expression without changes to the underlying DNA sequence. Unlike standard genetics, which studies changes in the sequence of the DNA letters (A, T, C, and G) that make up our genes, epigenetics examines changes that do not alter the sequence of the DNA code. Rather, epigenetic changes alter when and how genes are expressed. Epigenetic “tags” play an important role in whether chromatin—the material of which our chromosomes are composed—is accessible to the complex of proteins that controls gene expression (known as “the transcriptional machinery”). If the chromatin is not accessible, nearby genes will not be expressed, even if all of the other necessary factors are available. These epigenetic tags can be added, removed, or changed in response to environmental factors. In turn, the presence or absence of these tags can play a key role in whether a gene is expressed or not. Think about it for a moment. A gene can be rendered nonfunctional by a mutation (a change in the DNA sequence) that causes a truncated or defective protein to be produced. An epimutation (a change in the epigenetic tags) that prevents a gene from being expressed causes exactly the same effect—absence of a functional protein—but by an entirely different mechanism. Or an epimutation can lead to a gene being expressed at a time and place where it would normally not be expressed. Either of these can alter how an organism functions and responds to its environment.
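
To make the mutation-versus-epimutation point concrete, here is a deliberately simplified toy model in Python. The sequences, the methylation flag, and the crude “premature stop” check are all illustrative assumptions, not real genomics; the point is that both routes end in the same place, with no functional protein.

```python
# Toy model: a mutation (changed DNA letters) and an epimutation (a silencing
# tag on the promoter) both result in no functional protein, via different routes.

from dataclasses import dataclass

@dataclass
class Gene:
    sequence: str                       # the DNA "letters" (A, C, G, T)
    promoter_methylated: bool = False   # an epigenetic tag on the regulatory region

def makes_functional_protein(gene: Gene) -> bool:
    expressed = not gene.promoter_methylated      # a heavily methylated promoter is silenced
    intact = "TAA" not in gene.sequence[:6]       # crude stand-in for a premature stop codon
    return expressed and intact

normal     = Gene("ATGGCTGGA")                              # expressed and intact
mutated    = Gene("ATGTAAGGA")                              # premature stop in the sequence
epimutated = Gene("ATGGCTGGA", promoter_methylated=True)    # intact sequence, but silenced

for label, g in [("normal", normal), ("mutation", mutated), ("epimutation", epimutated)]:
    print(label, "->", "functional protein" if makes_functional_protein(g) else "no functional protein")
```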

Epigenetic forces help explain why identical twins can grow up to look and behave somewhat differently from each other and why each possesses a different assortment of risk factors, despite harboring precisely the same DNA sequence. Their DNA may be identical, but the sum of the epigenetic changes they acquire in response to their environment throughout life alters how that identical DNA is expressed. Want an example? While more than 90 percent of identical twins are very close to the same height, fewer than half of identical twins share traits such as alcoholism, diabetes, breast cancer, and rheumatoid arthritis. This is strong evidence that epigenetic effects can be profound.

The field of epigenetics has only recently gained traction thanks to advances in DNA-sequencing technology that allow us to detect these “epimutations.” Although we have known for quite a while now that the environment can affect gene expression, a new and provocative finding is that these epigenetic effects induced by chemical exposures can be passed on to future generations. Mike Skinner of Washington State University was the first to show this,91 but my lab90 and other labs113 have also shown that environmental effects on physiology (in our case, obesity) can be passed on to future generations.

What all this means is that some of your genes, right now, could be behaving in ways inherited from your parents and even grandparents. What we eat, how much we exercise, where we live, whom we interact with, how much sleep we get, and even the aging process can all eventually cause changes in the epigenome around important genes, modulating whether and to what extent those genes are expressed over time. We refer to the changes in the epigenome as epigenetic “tags” or epimutations and to the type of inheritance this produces as epigenetic.

I should point out that compared with genetics, the field of epigenetics is still in its infancy and continues to suffer from the anti-Lamarckian bias in genetics. The heritability of epigenetic changes, particularly those induced by environmental factors, remains controversial because no one has convincingly proved the underlying mechanism, although our work suggests that at least some effects of EDCs result in long-range changes in the structure of DNA that affect the expression of many genes, and Mike Skinner’s work has identified specific changes in DNA methylation that may be heritable. Epigenetic inheritance is well documented in plants and worms, but less so in humans.

One prominent epigenetic mechanism is DNA methylation, which usually occurs on cytosine nucleotides. A methyl group is a carbon atom bonded to three hydrogens that can be attached to the 5 position of the cytosine ring, producing what we call 5-methyl-cytosine. The presence or absence of blocks of 5-methyl-cytosines affects the structure of DNA. Methylation in the parts of a gene important for its expression (regulatory regions such as promoters and enhancers) can strongly influence whether the gene is expressed or not. Usually, more methylation in a gene promoter inhibits its expression, and vice versa. For example, both mutation of the INK4A tumor suppressor gene and hypermethylation of the INK4A promoter can cause malignant melanoma because both lead to loss of the INK4A protein.114 While the concept of DNA methylation and the regulation of gene expression is well established, there are many who believe that DNA methylation cannot be heritable, at least not in mammals, because DNA methylation is mostly erased during the development of sperm and egg (so-called germ cell reprogramming). This is an exciting and controversial area of study at the moment.
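
As a back-of-the-envelope illustration of the promoter rule of thumb, here is a minimal sketch in Python, with made-up methylation calls and a hypothetical cutoff (real analyses are far more sophisticated), that scores the fraction of methylated CpG sites in a promoter and flags likely silencing.

```python
# Toy scoring of promoter methylation: a heavily methylated promoter is
# treated as likely silenced, per the rule of thumb in the text.

def methylation_fraction(cpg_calls):
    """cpg_calls: one boolean per CpG site, True if read as 5-methyl-cytosine."""
    return sum(cpg_calls) / len(cpg_calls) if cpg_calls else 0.0

def likely_silenced(cpg_calls, threshold=0.7):
    # The 0.7 cutoff is a hypothetical placeholder, not an established value.
    return methylation_fraction(cpg_calls) >= threshold

normal_promoter          = [False, False, True, False, False, False]   # made-up data
hypermethylated_promoter = [True, True, True, True, False, True]       # made-up data

print(likely_silenced(normal_promoter))           # False -> gene likely expressed
print(likely_silenced(hypermethylated_promoter))  # True  -> gene likely silenced
```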

Another important mechanism involves changes to proteins that help “package” DNA, such as the histones around which DNA is wrapped. Histone proteins can be modified in various places by the addition of methyl groups, acetyl groups (two-carbon groups), and phosphates. The particular combination of histone modifications is called “the histone code.” The histone code can be fairly complex, providing precise fine-tuning of gene expression. Some histone methylation can be inherited, at least in mice, and this is another prime candidate for transmitting heritable epigenetic changes.

A third type of epigenetic change involves the expression of what we call “non-coding RNAs” (or ncRNAs). These RNA molecules, which come in both small and large flavors, are not translated into proteins. Without digging deeply into the biochemical details of their function, these ncRNAs can alter gene expression and have been shown to play a critical role in normal development and biological processes. For example, the non-coding RNA MIR31HG interferes with the expression of the INK4A protein mentioned above; higher MIR31HG expression leads to lower INK4A expression in melanoma patients.115 Whether and how changes in the expression of ncRNAs can be passed on for multiple generations remains to be determined.

The environmental factors that can influence gene expression (and presumably evolution and physiology) are wide and varied, from ecological parameters such as temperature and light to stress and nutritional details such as caloric restriction or high-fat diets. A multitude of environmental chemicals, from phytochemicals—biologically active compounds found in plants—to synthetic toxicants, can also influence observed characteristics and health. Future research will further help us understand all the steps leading from environmental triggers to inherited changes in chromatin structure and gene expression. We also know that environmental factors can influence epigenetic tags in children and developing fetuses in utero. This is where my work has homed in on the impact that endocrine disrupting chemicals can have on a growing organism whose cellular programming could be altered permanently by epigenetic forces.

DEVELOPMENTAL ORIGINS OF DISEASE

London-born David Barker was among the pioneers to link chronic disease in adulthood to growth patterns in early life. In 1979, Barker became professor of clinical epidemiology at the Medical Research Council Environmental Epidemiology Unit, now the MRC Lifecourse Epidemiology Unit at the University of Southampton. As he was studying birth and death records in the United Kingdom, he noted a striking relationship between low birth weight and a risk for dying from coronary heart disease as an adult. Barker developed a hypothesis, eponymously named the Barker Hypothesis (aka the fetal origins hypothesis), that nutrition and growth in early life are important factors in determining whether a child will grow up to be more or less vulnerable to metabolic and cardiovascular disorders. Barker proposed that a baby born to a mother who was malnourished during pregnancy would be more susceptible later in life to chronic diseases such as diabetes, high blood pressure, heart disease, and obesity, because the fetus had adapted itself to a nutritionally poor environment. His observations culminated in a Lancet paper published in 1989 in which he reported that among 5,654 men from Hertfordshire, UK, those with the lowest weights at birth and at one year of age had the highest death rates from heart disease.116

Barker was not the first researcher to record the relationship between early life conditions and later disease. The Norwegian doctor Anders Forsdahl initially formulated the hypothesis in 1977, when he noted that a mother’s living conditions during pregnancy and the first years of a child’s life have an important impact on that child’s risk of chronic disorders later in life, especially cardiovascular diseases.117 As a community physician, Forsdahl had firsthand experience of the poverty and the high incidence of cardiovascular disease among the people he served. Barker is often credited with establishing the theory of fetal programming, but he was merely popularizing a hypothesis formulated by Forsdahl (although Barker does cite Forsdahl in his Lancet publication). In one of Barker’s last public speeches, he stated: “The next generation does not have to suffer from heart disease or osteoporosis. These diseases are not mandated by the human genome. They barely existed 100 years ago. They are unnecessary diseases. We could prevent them had we the will to do so.”

Barker had his critics and opponents, but his work ultimately inspired others to study the fetal origins of adult illnesses and disorders. Peter Gluckman of the University of Auckland and Mark Hanson of the University of Southampton showed how important the developmental period is in the future health of an individual. They went beyond the fetal origins model and pioneered a new area of medical research called the Developmental Origins of Health and Disease, or the DOHaD paradigm.118 They wrote a 2006 book, Mismatch, in which they explain why we suffer from “lifestyle” diseases now.119

Just what are the “lifestyle” diseases of developmental origin? That is, what are the diseases that can be affected by exposures during the fetal and developmental period? All the chronic ones I have already mentioned: cardiovascular and pulmonary maladies, including asthma; neurological ailments, including ADHD, autism, and neurodegenerative conditions such as dementia; immune/autoimmune conditions; endocrine diseases; disorders of reproduction and fertility; cancer; and, yes, metabolic disorders, including obesity and diabetes. And here is a critical fact: Developmental programming is almost entirely epigenetic because the DNA sequence is not changed. Put simply, epigenetic changes are the mechanism through which developmental origins of disease happen. Prenatal and early life experience programs the body throughout the rest of life.

ROBUST BUT FRAGILE AT THE SAME TIME

Development is a unique process. On the one hand, despite how complex living organisms are, with many opportunities for things to go wrong, development largely works well because there are built-in redundancies and backup plans for potential errors that could otherwise lead to serious malformations, defects, or problems. The famous German embryologist Hans Spemann (who won the Nobel Prize in Physiology or Medicine in 1935) said that development has both a belt and suspenders (to hold its metaphoric pants up). Cells have elaborate mechanisms to repair DNA as well as mechanisms to kill themselves in an orderly way if too many mutations are detected—a process called apoptosis. Having said that, I should add that development is also exquisitely sensitive during certain stages, during which small changes can have major outcomes. These are called “critical windows” of sensitivity. One entertaining example is in frog embryology—if you tilt a frog embryo at a critical time before the fertilized egg divides into two cells, you will get an embryo with two perfect heads; do the same thing thirty minutes later and the embryo will be completely normal. Perturbations that occur during critical windows can lead to dramatic birth defects. For example, 80 percent of babies whose mothers were prescribed thalidomide to control nausea from morning sickness during pregnancy were born with severe limb defects if mom took the drug between twenty and thirty-six days after fertilization (that is, in the first month or so of pregnancy). However, if thalidomide was taken by mom outside of this critical window, the children did not have these birth defects.120

Developmental perturbations can also lead to more subtle alterations that cause adverse health outcomes and increased risk for disease that may not be observable until years later. For example, the nutrition of an expectant mother and her exposure to chemicals and tobacco products can have far-reaching and permanent effects on her offspring. Reduced fetal growth, which is often one of the results of exposure to nicotine in utero, is strongly associated with chronic conditions later in life, such as heart disease, diabetes, and obesity. There are many good reasons why physicians recommend certain guidelines to pregnant women: avoid drug X, take prenatal vitamins containing folate, don’t smoke or drink alcohol, avoid exposure to Zika-infected mosquitoes, and so on. Insults to fetuses can have devastating and permanent repercussions.

A shortcoming of many epidemiological studies is their inability to test cause and effect, as we can in animal studies. Occasionally, a tragic industrial accident, war, or other unexpected event offers the opportunity to study cause and effect in a human population (cohort). Among the first studies to provide undeniable clues to a cause-and-effect relationship between the prenatal experience and lifelong consequences was the Dutch Famine Birth Cohort Study, which found that the children of pregnant women exposed to famine were more vulnerable to diabetes, obesity, cardiovascular disease, kidney damage, and other health problems as adults. Known as the Hongerwinter (“Hunger winter”) in Dutch, the Dutch Famine took place near the end of World War II during the winter of 1944–1945 in the Nazi-occupied part of the Netherlands, especially in Amsterdam and the densely populated western provinces. A Nazi blockade cut off food and fuel shipments from farm areas, leaving millions starving. Rations were as low as four hundred to eight hundred calories a day—less than a quarter of what an adult should consume. Those who survived, some 4.5 million people, relied on soup kitchens until the Allies arrived and liberated the area in May 1945, ending the famine. The children of women who were pregnant during the famine were smaller, as expected, considering the severe caloric restriction. Surprisingly, however, when these children grew up and had children, those children were also shorter (but not lighter) than average and had increased fat and poorer health later in life.

The Dutch Famine Birth Cohort Study121 was initiated by the Departments of Clinical Epidemiology and Biostatistics, Gynecology and Obstetrics, and Internal Medicine of the Academic Medical Centre in Amsterdam, in collaboration with the MRC Environmental Epidemiology Unit of the University of Southampton in the United Kingdom. The study has expanded to include other universities in the Netherlands (Leiden) and in the United States (Columbia University). Barker was in fact part of the study, which began publishing its results in 1998.

Data from this study showed that the famine experienced by the mothers caused epigenetic changes that were manifested later in life as a predisposition to disease and that some of these effects were passed to the next generation.122 There are some data supporting the possibility that these effects are carried by DNA methylation, but this topic remains controversial at the moment. The DOHaD hypothesis suggests that the type and availability of nutrients during pregnancy and infancy have profound impacts on the individual’s life. Another way to state this is to say that the effects of prenatal nutrition flow across generations. A baby develops from an egg that was formed inside its mother while the mother herself was developing, and being nourished, in the womb of the maternal grandmother. This also means that a mother’s exposures to hazardous chemicals while pregnant, and her baby’s exposures while developing, can affect not only the future health of that child, but also the future health of that child’s biological children. This is a sobering thought. It is one thing to suffer the consequences of our own choices, but quite another matter entirely to impose these effects on our descendants down the generations. I should also add that when it comes to epigenetic changes, both nutritional impacts (or deficiencies) and chemical exposures can be powerful forces.

An important distinction between the fetal origins hypothesis championed by David Barker and the DOHaD hypothesis formulated by Hanson and Gluckman is the realization that developmental programming does not stop at birth but continues throughout early life, probably at least until adolescence. One strong example comes from the famous Swedish Överkalix study.123 This study is named after a small, isolated municipality in northeast Sweden where the nutrition of the local population was strongly dependent on the annual harvest. What made this study possible were the detailed records available for harvests, food prices, births, and causes of death. The groups of Marcus Pembrey from University College London and Lars Olov Bygren from the Karolinska Institute found that drastic changes in nutrition during the prepubescent period (eight to twelve years old) affected the longevity of grandchildren in the paternal lineage. Surprisingly and counterintuitively, when the paternal grandfather had an abundance of food (that is, was overfed) just prior to puberty, his grandsons had a fourfold increased risk of death from cardiovascular disease and type 2 diabetes. Conversely, when food availability was restricted during the paternal grandfather’s prepubescent period (that is, he was undernourished), his grandsons were less likely than normal to die from cardiovascular disease and type 2 diabetes.

To summarize the somewhat complex science we have just discussed, the available results show that nutritional factors, chemical exposures, and stress during fetal development and at least until adolescence can lead to permanent effects on the health and well-being of individuals. In some cases, these effects can be passed on to future generations through mechanisms we are just beginning to understand. This is why Jerry Heindel talks about a good start (or a bad start) lasting a lifetime. When we think of malnutrition, the most frequent image that comes to mind is one of a skeletal child in the developing world with a protruding belly, on the precipice of death. However, many children in so-called developed nations are suffering from a different but also harmful type of malnourishment: they are overfed with processed foods containing numerous EDCs from both agrochemical and industrial sources, while at the same time lacking essential nutrients such as high-quality protein, healthy fats, and the good carbohydrates found in whole grains, fruits, and vegetables. They experience an excess of calories from products that are high in white flour and refined sugars but lack real nutrients and vitamins. This is called “high-calorie malnutrition.”

The fact that the human body, especially a young, developing one, is fragile but robust at the same time is critical, especially when we consider the impact of one particular class of chemicals known to reprogram biology and ultimately affect weight: obesogens. As with nutritional insults such as famine or overfeeding, we are now beginning to understand how obesogen exposure in utero and during life as a newborn, toddler, or even teenager can alter gene expression, leading to a predisposition to obesity. But as you probably already know, obesogens alone are not to blame for our obesity epidemic. Other things can come into play as well, complicating matters. Let’s go there next.