CHAPTER 2

The Biological Breakthrough That Is Changing Everything

Conventional wisdom has long held that our health is 70 percent heredity and 30 percent everything else. The breakthrough discovery at the heart of the functional medicine revolution flips that ratio on its head, and in doing so it pulls the rug out from under genetic determinism. Our genes are not our fate; no disease or dysfunction—or particular strength or level of longevity, for that matter—is foreordained.

Like the physiological discoveries of the late nineteenth century, today’s biological breakthrough has fundamentally altered our understanding of how the human organism works and will change medical practice just as thoroughly. The word “breakthrough,” however, seems to connote in many people’s minds a stunning revelation that comes out of left field and, in an instant, makes everything clear. Science doesn’t actually work that way. Remember the scientific method, which you probably first learned about back in elementary school, with its painstaking process of observation, hypothesis, experiment, testing, modifying, retesting, and retesting again and again and again? That’s how science works, and the breakthrough understanding of the relationship between our genes and chronic disease happened in just that way, building on the work of scientists from decades—even centuries—ago.* In fact, it is still happening; the story continues to unfold as the research presses on.

But to begin the story, let’s go back to the latter part of the nineteenth century. As a biochemist, I have always thought that if time travel really were possible and if I were given my choice of an era to be transported back to, this is the one I would choose. Florence Nightingale, dispatched to the Crimean War in 1854, had found that more soldiers were dying of infectious disease than in battle and had commenced a methodical statistical analysis of British servicemen that fingered poor hygiene and sanitation as the culprits. She was perhaps the first epidemiologist, and her advocacy of sanitary living conditions would be felt worldwide. Charles Darwin had returned from his epic voyage on the Beagle and in 1859 had published On the Origin of Species, laying out the process of natural selection and explicating the theory of evolution. Gregor Mendel had published his experiments on inherited traits in peas in 1865. In France, Louis Pasteur was setting forth the principles of what would become known as microbiology, while in Germany, Robert Koch was discovering the bacterial origin of disease. What a time of discovery that was!

If I could choose the place to be transported to, I think I would ask for London, where all of these discoveries were being discussed and debated—in a language I understand!—by some of the finest minds of all time. The biologist Thomas Huxley was a prominent spokesman for Darwin’s theory of evolution, while his fellow biologist William Bateson, translator of Gregor Mendel’s work on inheritance and the man credited with coining the word “genetics,” disagreed, rejecting the Darwinian account of gradual change through natural selection. Archibald Edward Garrod, later Sir Archibald, the son of one of Queen Victoria’s personal physicians and himself a physician and biochemist whose studies of inborn errors of metabolism revealed genetic diseases of infancy, is credited with having integrated Mendelian genetics and Darwinian natural selection into medical thinking. The Huxleys, the Batesons, and the Garrods regularly dined together, discussing their discoveries and arguing about natural selection and genetic inheritance. Can you imagine how energetic those dinner conversations must have been?

Out of this time of great discovery and debate came the two dominant concepts that shaped medical thinking from that time to our own: that our species evolves through the process of natural selection, and that we inherit dominant and recessive characteristics through our genes. Thread the two concepts together, filter them through the lens of medical practice and its need to apply remedies, and our perception of health and disease comes down to the notion that some people are born with fit genes and therefore flourish, while others are born with disease-producing genes and therefore get sick. Under this rubric, medical practice becomes focused on saving the less lucky among us from the inheritance of the disease-producing genes over which we have little control.

By the early twentieth century, further work had found that our genetic characteristics are carried on our chromosomes, of which humans have twenty-three pairs. One half of each pair of chromosomes is provided by the biological mother through her egg, the other half by the biological father through his sperm. So half our genes come from each parent. Actually, as we’ll see shortly, some additional genetic material, mitochondrial DNA, is contributed only by the mother. Nevertheless, the overall perception held: our genetic inheritance, lucky or unlucky, comes from both parents in pretty much a fifty-fifty split. If that inheritance produces diseases in us, we have only our lineage to blame.

The perception was reinforced in the early twentieth century by Garrod’s work on inborn errors of metabolism and by the subsequent recognition that a number of diseases found in infancy—phenylketonuria (PKU), sickle-cell anemia, Tay-Sachs disease, and Gaucher’s disease among them—could each be traced to a defect in a single gene. The idea that one characteristic from one gene could so dramatically affect a life fed right into the idea that your lifetime pattern of health and disease was preordained at conception. The embryo that would become you was already coded with the diseases that would afflict you; whether those diseases were disastrous or benign was the luck of the draw. If it wasn’t your “fault” that you weren’t among the fittest—it was natural selection, after all—there was also little you could do about it; it was the hand you had been dealt. The term for it is “genetic determinism,” and it left a lot of people feeling doomed to suffer the same chronic diseases their parents suffered because those diseases were inscribed on at least one half of their genome.

Against such diseases, it’s little wonder that the therapeutic model was based on the perception that medicine existed to do battle with disease. Health-care providers fighting a bad-luck genome were seen as heroic soldiers on the front lines of combat, striving to win the war on smallpox or diphtheria or cancer, or to overcome each individual’s genetically determined flaws and weaknesses through whatever therapeutic measures might be required—whatever it might take to get the job done. When all you have is a hammer, everything is a nail. And the model did indeed hit the nail on the head many times over many years, achieving great victories in the treatment of acute disease with aggressive, short-term interventions. But as we have seen, it did little to manage chronic diseases safely and cost-effectively.

In truth, there had long been studies that chipped away at the armor-like certainty of both dominant shapers of medical thinking—the luck-of-the-draw survival of the fittest and the genetic determinism that said you were stuck with the hand you had drawn. Some researchers came up with findings suggesting that we are not locked into our genes and that there is not just one single track—or rather, two parental tracks—along which genetic information arrives to influence our patterns of health and disease.

By the late twentieth century, serious doubt had been cast on the rigidly fifty-fifty split between maternal and paternal genetic inheritance. Much of the doubt was propelled by the finding of that additional genetic material, the mitochondrial DNA contributed solely by the mother. Mitochondria are the energy powerhouses of the cell, the place in which food-derived molecules are transformed into cellular energy. In the male, mitochondria are concentrated in the midpiece of the sperm’s tail, and since the tail is shed once an egg is fertilized (and the few paternal mitochondria that do slip in are broken down), essentially none of those mitochondria get incorporated into the embryo. In short, mitochondria come from Mom; that is, the genetics of our cellular energy centers are connected particularly closely to our mothers, with no contribution to this particular function from fathers. So if we’re 50 percent our fathers, it looks like we’re slightly more than 50 percent our mothers—and the slightly more than 50 percent affects a pretty important function.

Half a century later, in the mid-1960s, a young faculty member at Boston University, Lynn Margulis,* propounded what she called the endosymbiotic theory of the mitochondrion. Margulis held that mitochondria were originally free-living bacteria that, well over a billion years ago, had taken up residence inside a host cell ancestral to all complex life, had then adapted to being there, and have remained there ever since thanks to a mutually beneficial symbiosis: the cells provide a harmonious residence, and the bacteria/mitochondria pay rent by producing energy for the cells. It is certainly true that the structure of mitochondrial DNA is very much like that of bacteria and that the lineage of mitochondrial DNA traces back farther than that of our own nuclear genes—two facts that seem to support Margulis’s theory.

For Margulis, these findings of internal symbiosis, called endosymbiosis, confirmed her advocacy of the Gaia hypothesis, which suggests that the competitive, survival-of-the-fittest concept of evolution is “incomplete,” in Margulis’s word. To complete it, the Gaia hypothesis suggests that it is cooperation rather than competition—networking rather than the struggle of the unfit against the fit—that is the true driving force of evolution. Or perhaps cooperation eventually grows out of the competition, the struggle of unfit against fit finally settling into a network as each adapts to the naturally selected “best” of the other. In this view, evolution is a self-regulating symbiosis between organisms and their environmental surroundings, and it is this self-regulation that keeps life on the planet ticking over, even as conditions change.

That is just what had happened with those ancient bacteria: they had come to inhabit a host cell, stayed on, and adapted until they had become integral contributors to the evolution of that cell, morphing into the mitochondria that enable organisms like us to use energy far better than organisms without mitochondria. To Margulis, that fact supported the idea of symbiotic harmony, not combat, as the “success factor” in shaping evolution.

What does this idea of self-regulating symbiosis tell us about our genetic inheritance? Simply put, it tells us that we’re carrying genetic information that doesn’t just derive from Mom and Dad and isn’t split evenly between them. Our mitochondrial DNA carries genetic information that dates back well over a billion years to those ancient bacteria; it is capable of adapting, it is sensitive to environmental influences, and it can affect our health.

Suppose, for example, that your mother and father are the two liveliest and most energetic people you know, yet you’re constantly exhausted. If you didn’t inherit the fatigue gene from their chromosomes, maybe you got it from the way your mitochondrial DNA interacts with your environment or with your particular lifestyle. So while your exhaustion wasn’t genetically predetermined, it’s there nonetheless, but because it is susceptible to factors of your environment and your behavior, you can change it by changing those factors. And that is something we humans have the power to do.

This thinking was a shot across the bow of the notion that our health is predetermined by our genes, and as the twentieth century proceeded and the twenty-first got under way, other discoveries further challenged the primacy of genetic determinism. Where chronic diseases in particular were concerned, the concept turned out to be only partly true. In fact, it began to look like only some 30 percent of the common chronic diseases resulted from genetic inheritance, while 70 percent were shown to come from something else—namely, from the influence of environment on genetic expression.

Then along came the mapping of the human genome, the core scientific event of our era, to really illumine for us this relationship between environment—the resources and faculties we draw on in our daily lives and our habits of behavior vis-à-vis these resources and faculties—and genetic expression. The result has been a profound change in what we know about disease and how we think about medicine.

THE HUMAN GENOME PROJECT: A NEW BOOK OF REVELATION

The mapping of the human genome was initiated in 1990 with $3 billion in U.S. government funding; from 1993 it proceeded under the direction of Dr. Francis Collins. Eight years into the project, a private group, Celera, headed by Dr. Craig Venter, announced it would compete with the government-sponsored research, using a different approach that ultimately cost about 10 percent of what taxpayers had paid. But it was thanks to both groups that the first-draft sequencing, as it was called—that is, the decoding of the human genome—was successfully completed in 2000. And although the results of the project would not be finalized until 2003, the moment was marked in 2000 by an announcement in the Rose Garden of the White House joining representatives of both groups and presided over, appropriately, by the president of the United States, Bill Clinton.

The achievement represented one of the largest international science collaborations in history. It was the culmination of efforts begun back in 1953 when James Watson and Francis Crick discovered the structure of DNA—the iconic double helix of genetic code locked in two spiraling chains of nucleic acids coiled around a single axis. From Watson and Crick to the White House Rose Garden in just under fifty years represents a truly extraordinary milestone in our understanding of the chemical nature of our book of life, and the research this culmination set in motion continues to yield insight after insight to this day.

It started by shattering assumptions. Scientists had assumed that the mapping would confirm, first, that humans had many more genes than other species and, second, that all human genomes were fundamentally similar to one another. So where common diseases were concerned—in particular, chronic diseases—the genetic variants causing the diseases must also be common. It would mean an easy path ahead for medicine. All that had to happen was for research to find the common genetic root, and it would be a snap to generate a universal treatment—the pill or injection that would do the trick. Medical researchers and practitioners held their breath waiting for the mapping project to give them the go-ahead to find, develop, and administer these cures.

It didn’t happen. The research told a far different story, and that is the breakthrough that is changing everything. It is changing our notions of genetic determinism, demonstrating that we are not fated to suffer the same diseases or disorders as our parents. It is showing us how our genetic expression works and the factors that can influence it. It is even showing us how our genetic expression can be changed by mechanisms outside or unrelated to our underlying DNA.

Let’s take it a step at a time.

THERE’S MORE TO THE STORY OF EVOLUTION THAN WE THOUGHT

First, at the start of the project, there was that assumption that the human genome would comprise nearly 100,000 genes, many more than other species. After all, we humans are both bigger and more complex than most other organisms, and if you think of our genes as the stories that make up our book of life, then it seemed logical that what was “written” in our chromosomes—our genetic code—would also be relatively more complex, with a greater number of genes than the average squirrel or the tree the squirrel lives in.

We were in for a surprise. The total number of genes in the human genome is approximately 25,000—fewer than are found in a number of plants. The genome of the pinot grape, for example, has nearly 30,000 genes—making us seem perhaps less complex than the wines we drink.

There were other surprises in store. As the deciphering went on, we learned that the human genome is about 96 percent identical to that of the chimpanzee, a finding that focused scientific scrutiny like a laser on that 4 percent difference. The reason? Chimps rarely develop the cancers and heart attacks that plague us, and they don’t appear to experience autism. Finding out why not could offer clues to why we do.

Within the human species, the deciphering showed that while your genes and mine are very similar in the larger sense—again, as the stories that make up our book of life—there are small differences between us in the individual words that encode the stories. These differences, it turns out, can be significant. The difference of just a single nucleotide—in effect, a single letter in a single word of a single gene—can influence our susceptibility to disease. The difference might, for example, make me, with a particular nucleotide in a specific position in the particular gene, less susceptible to a particular disease than you, whose corresponding gene has a different nucleotide at that position. On the other hand, where another disease is concerned, I might be more susceptible than you, because of that very same nucleotide in that very same position in that very same gene. There are more than 3 million of these variants in humans; they are called single nucleotide polymorphisms, SNPs—routinely pronounced “snips”—and they are evidence that while we may have a mere 25,000 genes, there is plenty of variation in their construction.
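The idea of a SNP, a single-letter difference between otherwise matching sequences, is easy to make concrete. Here is a toy sketch in Python; the two 25-letter “genes” are invented for illustration, not real human sequence.

```python
def find_snps(seq_a: str, seq_b: str) -> list:
    """Return the positions at which two equal-length DNA sequences differ."""
    return [i for i, (a, b) in enumerate(zip(seq_a, seq_b)) if a != b]

def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percentage of positions at which the two sequences agree."""
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Two invented 25-letter "genes" differing by a single nucleotide
gene_you = "ATGGCGTACGTTAGCTAGGCATACG"
gene_me  = "ATGGCGTACGATAGCTAGGCATACG"

print(find_snps(gene_you, gene_me))         # [10] -- one SNP, at position 10
print(percent_identity(gene_you, gene_me))  # 96.0
```

One changed letter out of twenty-five yields 96 percent identity, which is coincidentally about the same figure as the human-chimp comparison; in real genomes the sequences are billions of letters long, but the principle of counting single-position differences is the same.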

What do these 3 million SNPs have to do with our patterns of health and disease? It is the way they influence specific functions that is so crucial. Take the example of one SNP, methylenetetrahydrofolate reductase polymorphism—MTHFR polymorphism for short. Yes, even in its short form, the name is over the top, but the importance of this SNP is hard to overstate. It turns out to be extremely influential in controlling the metabolism of the important B vitamin known as folic acid. Folic acid is pivotal to numerous functions of the heart, brain, blood cells, and immune system as well as being an essential component of fetal development. People who carry the variant SNPs of the MTHFR gene—and whose metabolizing of folic acid is therefore modified in some way—have been found to have higher risk of depression, heart disease, and dementia. Many are unable to function normally unless they get a higher dietary intake of folic acid—or its relative, 5-methyltetrahydrofolate—from food or from a nutritional supplement. This discovery of nutrient-dependent SNPs has spawned the new field of nutrigenomics, which is aimed at determining the correct intake of specific nutrients that will meet the genetically determined needs of the individual. With 3 million SNPs already identified, and with a goodly number of them no doubt nutrient-dependent, nutrigenomics would seem fertile territory indeed for empowering us to take some control over managing our own health.
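The nutrigenomic idea, matching nutrient intake to genotype, can be sketched as a simple lookup. The genotype labels below refer to the real MTHFR C677T variant, and 400 micrograms is a commonly cited adult reference intake for folate, but the multipliers and resulting amounts are hypothetical placeholders for illustration only, not clinical recommendations.

```python
# Hypothetical sketch of a nutrigenomic adjustment. The multipliers below
# are invented for illustration; real requirements for MTHFR variants must
# come from the medical literature, not from this toy table.
BASELINE_FOLATE_MCG = 400  # a commonly cited adult reference intake

# MTHFR C677T genotypes; C is the typical allele, T the variant allele
HYPOTHETICAL_MULTIPLIER = {
    "CC": 1.0,   # no variant copies: baseline metabolism assumed
    "CT": 1.25,  # one variant copy: modestly reduced enzyme activity assumed
    "TT": 1.5,   # two variant copies: further reduced activity assumed
}

def suggested_folate_mcg(genotype: str) -> float:
    """Scale the baseline intake by a (hypothetical) genotype multiplier."""
    return BASELINE_FOLATE_MCG * HYPOTHETICAL_MULTIPLIER[genotype]

for g in ("CC", "CT", "TT"):
    print(g, suggested_folate_mcg(g))  # 400.0, 500.0, 600.0
```

The point of the sketch is structural, not numerical: nutrigenomics replaces a one-size-fits-all RDA with a function of the individual’s genotype.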

Certainly, the idea that nutritional therapy should be tailored to individual needs is not new. Back in 1950, renowned biochemist Dr. Roger Williams of the University of Texas published a famous paper in The Lancet entitled “The Concept of Genetotrophic Disease.” In it, Williams suggested that the recommended dietary allowance (RDA) levels of nutrients then established by the Food and Nutrition Board were essentially useless because, as Williams proposed, the nutrient needs of individuals differ far more substantially from person to person than the RDAs took into account. I was in a seminar with Dr. Williams in 1974 when he delivered his classic statement that “nutrition is for real people; statistical humans are of little interest.” He meant by this that while most medical and nutritional training focuses on the so-called 70-kilogram human—a 155-pound male accepted as statistically average—there is in fact no such thing as a statistically average person. (Well, maybe a few men qualify.) But there is something fundamentally flawed in making the 70-kilogram human male the focus of tens of thousands of published research studies, because the results of such studies simply won’t be germane to the vast bulk of us, who are as individual as snowflakes. To Williams, applying the rule of the statistical average to the needs of the individual is therefore a path to failure in treatment because it is based on this flaw of understanding.

It means that if you toss back a multivitamin at breakfast, you’re probably not doing yourself any harm, but you may not be providing yourself with the levels of specific nutrients that you need for optimal function—and if you think you are, you are laboring under a delusion. On the other hand, there may well be specific genomic reasons for you to be complementing or supplementing your intake of one or another nutrient in that multivitamin, but in an amount and for a purpose that can make a difference to your health, not that of the 70-kilogram fantasy. The completion of the Human Genome Project and additional studies on SNPs have proved Roger Williams right.

So was Dr. Linus Pauling, the only person to have been awarded two individual Nobel Prizes in different fields. His landmark paper in the journal Science in 1949 concerned the cause of sickle-cell anemia, a disease well known to be genetically linked. It was in this 1949 paper that Pauling first used the term “molecular disease” in describing how the genes of the person with sickle-cell trait created a form of hemoglobin in the blood cells that was just slightly different from normal hemoglobin. This small change in the structure of hemoglobin, Pauling contended, was what had so huge an impact on the individual’s health.

Pauling advanced this concept further in 1968 in another article in Science, this one entitled “Orthomolecular Psychiatry,” proposing that certain forms of mental illness were the result of an alteration in the metabolism, which in some cases could be a result of insufficient nutrient intake—that the people suffering these forms of illness were not getting the nutrients they needed to meet their genetically determined requirements for proper brain function.

Pauling made up the word “orthomolecular,” based on the Greek prefix ortho, for “upright” or “correct,” using it in the general sense to describe a medicine that would mix and match substances native to human physiology—vitamins, minerals, nutrients, hormones, metabolites, and cellular building blocks—to the right levels for an individual’s optimal health and function. It was an early instance of a kind of medicine that contrasted sharply with the standard allopathic approach. Allopathic medicine uses drugs that are not native to the human body to block, inhibit, antagonize, or alter specific physiological functions. Pauling was convinced that the more scientists learned about the origin of chronic diseases and about the relationship between genes and environment, the more medicine would focus on adjusting the balance of substances associated with healthy metabolic function through orthomolecular therapy, and the less it would depend upon allopathic substances to alter physiological functions.

Again, the Human Genome Project and the research it has inspired have proved Pauling correct, although he did not live to see the genomic revolution confirm his thinking. Among people with the MTHFR SNP who suffered from depression but who had become resistant to antidepressant drugs, oral administration of therapeutic levels of the active form of folic acid, 5-methyltetrahydrofolate, resulted in successful management of the depression, just as Pauling had suggested in 1968.

Imagine the implications.

THE GENETIC ORIGINS OF CHRONIC ILLNESS

Researchers were at first stumped by the lower-than-expected number of genes in the human genome, along with the greater-than-anticipated diversity of human function. In exploding the assumption that we had a lot of common genetic variants, the first-draft sequencing also put the kibosh on the notion that these common variants were the roots of chronic disease—and therefore on the idea that it would be possible to develop a drug to put each chronic disease out of business. The high number of SNPs in the human genome pretty much put that hope out of business instead.

What’s in the gap? What’s between the relatively low number of genes and the relatively high number of variants—and what does it all have to do with chronic disease? That has been a topic of intense scrutiny since the publication of the Human Genome Project’s results. And what all the scientific detective work that has unfolded over the years made clear is that although the human genome has fewer genes than expected, it has the largest amount of what was originally called “junk DNA” of any organism, plant or animal, on the planet.

Junk DNA takes up more than half the real estate in the human genome. We called it junk because we thought, incorrectly as it turns out, that it was just the remnants of ancient infections that were simply floating around uselessly in the genome—like tiny, worthless shards of ancient pottery at an archaeological site.

In fact, however, while not part of the specific coding of our genes—it is called noncoding DNA—what we used to think of as junk DNA actually contains the information that controls the expression of our genes. Think of it as the executive function of the genome, regulating the changing expression of our genes and determining how genetic characteristics function in each of us. The “junk DNA” label yielded to the term “promoter regions” within the human genome, and so far from being junk, the promoters are absolutely central because what they control is the translation of our genotype into our phenotype. Genotype is our genetic makeup, the potential of various traits to develop in us; phenotype is what happens when our genotype interacts with the environment, realizing the potential of particular traits and thereby giving rise to observable characteristics in the way we look, act, feel, and perform. Our genes represent many possible phenotypes. What makes the difference is how the genes are expressed. What controls that process is information encoded in the promoter regions.

This also goes a long way toward explaining that 4 percent difference between us and chimps; in fact, the difference is qualitative, not quantitative. For while it is true that the genes of the chimpanzee are more than 96 percent the same as human genes, the information encoded in the promoter regions of the human genome is far more complex than what is encoded in the chimpanzee genome. It is this complexity, which sets humans apart from all other plants and animals, that is the great differentiator, and the complexity is determined by the sophistication of the genetic messages contained in the promoter regions of our genome—messages that control how genes express themselves and that can be influenced by such factors as environment, lifestyle, and diet. Recent discoveries about the rapid evolution of animals in the Cambrian period some 500 million years ago suggest that this was when changes began in the promoter regions of the genome, rather than just in the genes per se, changes that eventually would lead to the differentiating complexity of the human genome—the complexity that divides us from chimps. So the clue as to why chimps rarely get cancer and don’t appear to suffer autism lies in the complexity of our genetic expression versus theirs, a complexity susceptible to changes in environmental factors and one that can take us to wholly different disease and health patterns—and wholly different health outcomes—from those of all other species.

This is a whole new ball game of understanding. If our genetic expression can be changed, we’re not hardwired for disease. If signals from the environment can change genetic expression—if they can be converted by processes in the cell to inform the promoter regions that translate genotype into phenotype—then we have the power to affect our genetic expression, and thus our health, by changing the environmental factors sending the signals.

Apply the new understanding to chronic disease and you can see how profound it is, for it implies that there is no specific gene for a specific chronic disease. Rather, there are families of genes that may be susceptible to expressing a particular chronic disease process if the genes are exposed to factors that regulate their expression to create the phenotype of that disease. It sounds circular, I know, but put it to work explaining the global epidemic of obesity, for instance, and you can see how powerful it is. It tells us that there is no one specific gene that causes obesity. Instead, there is a family of genes that, when exposed to specific environmental and lifestyle factors, can be modified in their expression to turn on the storage of fat. What pushes that expression-changing button in one person, however, may be very different from what pushes it in another and thereby sends that person into obesity, as we’ll see in greater detail in Chapter 10.
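The logic of that obesity example, same genes but a different environmental message yielding a different expression, can be caricatured in a few lines of Python. The gene name, the signals, and the rule are all invented for illustration; real gene regulation involves many genes and far subtler signaling.

```python
# Toy caricature of genotype vs. phenotype: the genotype is fixed, and the
# environmental signal decides which of its potential expressions is realized.
GENOTYPE = {"gene_family": "FAT_STORAGE", "potential": ("low", "high")}

def phenotype(environment: dict) -> str:
    """Translate the fixed genotype into a phenotype, given the environment."""
    # Invented rule: chronic caloric surplus plus inactivity flips expression
    # toward fat storage; the genes themselves never change.
    if environment["caloric_surplus"] and not environment["active"]:
        return "high"   # expression turned toward fat storage
    return "low"        # same genes, different message, different outcome

env_sedentary = {"caloric_surplus": True, "active": False}
env_active    = {"caloric_surplus": True, "active": True}

print(phenotype(env_sedentary))  # high
print(phenotype(env_active))     # low
```

Note what never varies between the two calls: the genotype. Only the environmental dictionary changes, which is exactly the chapter’s point that altering the message, not the genes, alters the outcome.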

This new understanding demands a new health-care paradigm; it constitutes the scientific basis for altering the way we think about and deliver medical therapies. If we cannot change our genes—and we cannot—we can nevertheless change the messages that our genes receive from the environment that regulates genetic expression. It means that what we are talking about is health care that is personally tailored to the individual. The more we know about the factors that modulate genetic expression in the individual, the more effective the program we can design to optimize that individual’s health and maximize his or her organ reserve. I call it personalized lifestyle medicine.

HOW TO CHANGE GENETIC EXPRESSION: CHANGE THE MESSAGE THE ENVIRONMENT TRANSMITS

Can we really control our genetic expression? After all, isn’t it true that a number of diseases of infancy are closely linked to genetic inheritance? Absolutely. That is why fetal screening looks for such genetic disorders as Down syndrome if certain family or individual risk factors are present. But what we have learned is that most of these genetic diseases can vary in severity from mild to extreme. In the case of the sickle-cell trait, which is certainly genetically linked, it has been found that if those with a mild version of the trait take certain drugs—namely, hydroxyurea and sodium butyrate, which boost production of fetal hemoglobin—this alters the genetic expression of the trait and reduces the risk of the disease. The same holds true for phenylketonuria, one of the most common genetically linked diseases of infancy. Historically, children born with PKU risked intellectual disability and early death. It is now known that both can be avoided if the child in his or her early years is fed a controlled diet low in the amino acid phenylalanine. In both these cases, the genes underlying the sickle-cell trait and PKU have not been changed—they can’t be. But the environment the genes are exposed to has been altered, and the result has been significantly reduced risk that the diseases will take hold and flourish. Simply put, the environment is sending a different message, and the genes are expressing themselves differently in response.

We are also seeing results in applying this principle to autism, the incidence of which is growing dramatically around the world. Fifty years ago, what we today define as autism was found in one child out of more than 8,000. The Centers for Disease Control and Prevention (CDC) now report that this once rare disorder affects one out of 50 children in the United States, a prevalence as of 2012 of 2 percent, up from 1.6 percent in 2007—an increase so stunning it raises the question: Are more children being affected or are more children with autism being detected? The likely answer is that both phenomena are at work, but the dominant reality is a real increase in prevalence—and not only in the United States but in other industrialized countries as well. The CDC and the American Academy of Pediatrics found it sufficiently disquieting that they jointly released an “Autism ALARM” over what they judged to have been a tenfold increase in incidence of autism over the last decade of the twentieth century and the first decade of the twenty-first.
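The prevalence figures cited above are easy to convert and compare with a few lines of arithmetic; this quick sketch simply restates the chapter’s own numbers as percentages.

```python
def one_in_n_to_percent(n: int) -> float:
    """Convert a 'one child in n' rate to a percentage prevalence."""
    return 100.0 / n

print(one_in_n_to_percent(8000))  # 0.0125 -- the rate of fifty years ago
print(one_in_n_to_percent(50))    # 2.0    -- the CDC's 2012 figure

# Relative growth between the 2007 (1.6%) and 2012 (2.0%) figures cited above
growth = (2.0 - 1.6) / 1.6 * 100
print(round(growth, 1))           # 25.0 -- a 25 percent rise in five years
```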

Of course, everyone—scientists above all—wants to know why. What is the explanation for the stunning growth of this disorder, and what is the cause of it? A determined scientific search for the gene that codes for autism is well under way. Dr. James Watson—the Nobel laureate of double-helix fame—established a research group at his Cold Spring Harbor Laboratory on Long Island, New York, that is searching for the genetic bases of autism and schizophrenia. Around the world, many other prominent geneticists are doing the same. Already, this extensive genetic screening has found more than three hundred genes with some relationship to autism, but not one of them is strong enough to be termed the autism gene.

In the meantime, many scientists and health-care providers are leaning to the view articulated by Dr. Michael Stone, a remarkable family physician in Ashland, Oregon, who has had significant clinical experience with autism. Stone told participants at a medical meeting in 2013 that “once you know one child with autism, you know one child with autism.” In other words, because autism presents in so many ways and with such different severities, there is no one gene causing a single disease called autism. In fact, autism is more appropriately termed autistic spectrum disorder, ASD, and the origin of the disorder is likely to differ from child to child. For parents and caregivers of children on the spectrum, what it comes down to is the recognition that a range of environmental factors interacting with multiple genetic susceptibilities is likely to have caused autism in their child, and by changing the child’s diet, environment, and therapies, they can alter the genetic expression of characteristics that are termed autism.

THE AUTISM DILEMMA AND THE LIFESTYLE MEDICINE APPROACH TO ASD

Dr. Bernard Rimland, trained as an experimental psychologist, was the father of a son, Mark, born with infantile autism in 1956, when the condition was very rare. Rimland dedicated the remainder of his life to trying to better understand the origin and treatment of the condition. He wrote the classic Infantile Autism: The Syndrome and Its Implications for a Neural Theory of Behavior, published in 1964, which debunked the then-dominant theory of the origin of autism advanced by University of Chicago professor Bruno Bettelheim. Bettelheim, a legendary and highly regarded child psychologist, had theorized that autism was the result of a traumatized and loveless childhood—a theory now almost universally discredited, thanks primarily to Rimland’s work.

In 1967, Rimland founded the Autism Research Institute in San Diego, California, to conduct scientific studies and advance the frontier of knowledge on autism. The institute’s first task was to mobilize a group of researchers and clinicians to begin accumulating a database of clinical case studies by organizing the parents of autistic children to become sources of information. More than 26,000 parents responded, supplying a wealth of information to the database. It was this more than anything else that concentrated the focus of doctors, parents, and researchers on the need to act aggressively and to broaden the scope of their collaboration in order to find a successful approach for managing what was becoming an epidemic.

I met Dr. Rimland in 1973 and am proud to consider myself one of his students. He was a man of charm, wit, determination, and intelligence, with a larger-than-life personality that drew people to him. Rimland served as the primary technical director on autism for the 1988 movie Rain Man; his son, Mark, who had become an accomplished artist, was one of the models for the character portrayed by Dustin Hoffman, and Hoffman interviewed Mark as part of his preparation for playing the role. Dr. Rimland died at the age of seventy-eight in 2006.

One of the leaders in the group of researchers and physicians he gathered about him was Dr. Sidney MacDonald Baker. A onetime professor at the Yale Medical School, Baker worked extensively on difficult pediatric medical cases. He is the kind of physician medical professionals think of as a doctor’s doctor. Once physicians meet him and get to know how he approaches patient management, they want him to become their personal physician. In the early 1990s, in collaboration with Dr. Rimland and many other forward-looking physicians, researchers, and patient advocates, Baker was instrumental in founding the Defeat Autism Now organization—DAN. Over the past twenty years this organization has focused on the development of successful approaches to the management of autistic spectrum disorders. DAN doctors recognize that autistic spectrum disorder is not a simple, genetically determined disorder but rather a complex constellation of conditions resulting from the unique interaction of a child’s genetic constitution with specific environmental damage that can occur at conception, in utero, or more frequently in infancy.

So matters stood on autism in the first decade of the twenty-first century. While early psychological explanations for the disease had been dismissed and most doctors and researchers agreed that the multiple expressions of the disorder probably meant multiple causes, a few innovators in the field were actively pursuing assertive new approaches to treatment. Against this background, a most remarkable medical detective story unfolded.

In 1998, Dr. Andrew Wakefield, an academic pediatric gastroenterologist affiliated with the prestigious Royal Free Hospital in London, published a paper, with his associates, that set the world of medicine whirling. The paper, in the February 28 issue of The Lancet, identified a problem with the digestive system of children as strongly associated with autism-like conditions. The paper described an enlargement of the lymph glands of the intestines as suggestive of a connection between autism and the alteration of immune function in the digestive tract. The most controversial assertion in the paper was that the onset of the autistic symptoms and the presumed alteration in the immune system of the children was associated with vaccination for measles, mumps, and rubella—the MMR vaccine.

Wakefield’s suggestion that the principal environmental trigger was the MMR vaccination ignited a high-profile and often heated debate between those who support childhood vaccination and those who do not. After several years of controversy, most of the paper’s co-authors published a partial retraction in The Lancet’s March 6, 2004 issue, withdrawing the interpretation that vaccination had caused the onset of the autistic symptoms. In 2010, after an inquiry by the UK General Medical Council concluded that Wakefield had acted dishonestly and unethically in conducting the research, The Lancet retracted the paper in full, and Dr. Wakefield was struck off the British medical register, ending his career in British medicine.

Putting aside the specific question of immunization and the ethical concerns over his research, Dr. Wakefield did open up a major area of investigation—namely, the possibility of a connection between intestinal immune activation and autism. It is a field of research that has moved medicine from the notion that autism is strongly genetically determined to the present view that environmental influence on genetic expression plays a major role in the origin of the disorder.

The question is: What are the major environmental factors that we should be concerned about vis-à-vis autism? One key area in which evidence is accumulating is that of specific allergy-producing foods. Among these are gluten-containing grains and cow’s milk proteins, which are often introduced into a child’s diet at a time when the immune system has not yet developed tolerance for them. Obviously, the evidence in no way suggests that all cases of autism are due to exposure to wheat or cow’s milk. But it raises the possibility that autism might be seen as a condition in which the child’s immune function has been altered by specific exposures the child cannot tolerate, and that such alterations have induced altered gene expression in the brain that translates into behavior we call autistic.

This major change in thinking about the origin of a chronic condition was the model that Dr. Baker and his colleagues had in mind when they formed DAN in the 1990s; it would shape the way they looked for a solution to the autism epidemic. It is also the conclusion articulated by Dr. R. F. Tuchman at the Miami Children’s Hospital Dan Marino Center, who has written widely in the field of autism. In the Revista de Neurología in 2013, Dr. Tuchman asserted the likelihood that “there are risk genes and early environmental risk factors for autistic spectrum disorders that contribute to an altered trajectory of brain and behavioral development.” Clearly, we are at a major turning point in how we think about not just conditions like autism but the whole family of chronic diseases.

Exemplifying this new pathway forward is the pioneering work of Dr. S. Jill James, a distinguished pediatrics researcher based in Arkansas. Dr. James and her colleagues have focused on better understanding the connection between specific genes and environmental factors associated with autism. For example, her team has identified specific genetic characteristics that reduce a child’s ability to properly metabolize folic acid, which is so critical for brain function. The team’s studies have found that in a child carrying that particular impairment, cow’s milk protein, gluten, and other allergenic substances can alter brain function and development.

How that happens is complicated, but it distills down to a very important takeaway for the management of autism. It’s the lesson that was expressed most powerfully by Sidney Baker in the late 1990s in defining his approach: “Take away the things that are a problem and provide the things that are missing.” In short, eliminate those things in the child’s environment that are causing alteration in his or her nervous system function, and add back the things the child needs more of—the active form of folic acid, 5-methyltetrahydrofolate, or the active form of vitamin B12, methylcobalamin. This is a very different strategy from relying on medications to manage the child’s autistic symptoms.

It is historically noteworthy that it was back in 1988 that Bernard Rimland, in reviewing the clinical experiences reported by parents of autistic children, first presented observational data from many parents on the positive effect of giving supplemental amounts of vitamin B6, folic acid, and vitamin B12 to their autistic children. The detailed work of Dr. James and her colleagues has now identified exactly how this approach might be valuable to certain autistic children with specific genetic susceptibilities.

The work continues, offering new ideas and fresh opportunities for parents with autistic children to improve their children’s brain function and behavior. Above all, the new work is changing the conversation—and thereby changing people’s perception of autism. In this regard, one of the most exciting approaches is that of Dr. Martha Herbert of Harvard and Massachusetts General Hospital. Dr. Herbert is a remarkably talented researcher and autism activist, and in her 2012 book, The Autism Revolution: Whole-Body Strategies for Making Life All It Can Be, she sets out a strategic approach to the disorder that takes advantage of all the recent revolutionary discoveries on the relationship of genes and environment in autism. Dr. Herbert dismisses the one-size-fits-all assumption about autism and its treatment, replacing it with an analysis of the specific factors that might contribute to the disorder given specific genetic susceptibilities. The book is an exemplar of the new medicine, coupling genomic understanding with personalized lifestyle intervention.

GENETIC DETERMINISM AND ALZHEIMER’S DISEASE

The same logic that for too long has governed our perception of autism as genetically controlled holds true for our perception of Alzheimer’s dementia—namely, that it is inherited. While it is true that certain forms of early-onset Alzheimer’s disease are strongly linked to genetic inheritance, these constitute less than 5 percent of all diagnosed cases. The late-onset form of the disease, typically affecting people over the age of sixty-five, is far more common and is not strongly linked to any specific genetic inheritance factor; rather, like autism, it is a product of the interaction of certain genes with environmental and lifestyle factors.

One of the genetic markers for Alzheimer’s disease that has gotten a lot of press is the ApoE4 gene. People who inherit this gene from their mother, their father, or both have been found to have increased risk for both Alzheimer’s and heart disease. Many have taken carrying this gene to be the mark of an inescapable fate; naturally, a lot of people would rather not know they face a disaster they can do nothing about.

But that is not the full story—or the true story—of the ApoE4 gene, as ongoing research has shown. It is now recognized that this gene does not create Alzheimer’s disease by itself; rather, it describes a susceptibility to the disease that the individual’s choices of lifestyle and diet can affect. Simply put, a person with the ApoE4 gene is highly susceptible to the dangerous effects of a diet high in saturated fats and sugar and of a sedentary lifestyle. So the ApoE4 gene isn’t telling the individual who carries it that he or she will die of Alzheimer’s. On the contrary, it is saying instead, “Go out and design your lifestyle behavior and your diet in ways that reduce your risk for Alzheimer’s and heart disease.” It’s a warning, a lesson, and a directional signal all in one.

Increasingly, this understanding is being confirmed in the many different approaches to Alzheimer’s research. Neuroscientists like Dr. Dale Bredesen of UCLA* have looked at the Alzheimer’s disease–producing processes in animals and have identified more than thirty different causes of the disease, all of them related to lifestyle, diet, and environmental factors. Dr. Bredesen, a dedicated physician researcher who has long worked on neurodegenerative diseases and is now a leading Alzheimer’s researcher, has said that the more he finds out about the disease, the more convinced he is that we will never find a single drug to treat it, so varied is its origin from person to person. As Bredesen reminds us, more than $5 billion has been spent on developing Alzheimer’s drugs thus far in the twenty-first century, and not one of the drugs has proved successful in safely and effectively treating the disease.

That is why Dr. Bredesen has designed a clinical study that examines what happens at the intersection of an individual’s genetic uniqueness and his or her environment. The study involves early-stage Alzheimer’s disease patients and is evaluating a new drug under controlled conditions of diet, lifestyle, and environment. The aim is to manage all of the more than thirty identified factors associated with the development of Alzheimer’s, on the premise that its solution will never come from a single drug.

Bredesen’s ideas about the origin of the disease are shared by a fellow Alzheimer’s disease clinical research expert, Dr. Suzanne Craft. Dr. Craft, a neuropsychologist, and her research team have identified what she has termed “diabetes of the brain” or “type 3 diabetes” as the origin of Alzheimer’s in certain individuals. Specifically, people whose diets encourage poor control of blood sugar have been found to experience a particularly high incidence of Alzheimer’s disease. The conclusion is that all those desserts and refined white-starch foods over many years create imbalances in brain metabolism that are linked to Alzheimer’s disease. Craft’s research shows that intervention in the form of a diet low in sugars and refined starches improves brain function in Alzheimer’s patients. “Our results suggest,” Craft wrote in a 2011 study published in Archives of Neurology, “that diet may be a powerful environmental factor that modulates Alzheimer’s disease risk through its effects on central nervous system concentrations of Aβ42, lipoproteins, oxidative stress, and insulin.”

So far, Dr. Craft’s suggestion is just that: a promising indication, not yet completely proved. Yet all of these new paths of inquiry and research in the field of Alzheimer’s have something very important to say to people who have the ApoE4 gene. They affirm the biological breakthrough that is changing everything: the recognition that genes in and of themselves don’t control the appearance of late-onset Alzheimer’s disease; the environment in which the genes are expressed does. Among the major factors related to the risk of Alzheimer’s disease uncovered by these new paths of inquiry and research are these:

  1.  ApoE4 gene and a diet high in saturated fats

  2.  Chronic inflammation

  3.  Elevated level of homocysteine in the blood

  4.  Insulin resistance and type 3 diabetes

  5.  Poor tolerance for exercise—that is, the person tires quickly and cannot sustain the activity

  6.  Lack of brain stimulation

  7.  Exposure to toxic substances

All of these risk factors for Alzheimer’s disease are in our control. Each can be modified. Change your diet, change various factors of your lifestyle, and in effect you are treating the cause of the underlying alteration in physiology that is associated with the development of Alzheimer’s disease. Such diet and lifestyle changes are precisely the type of clinical interventions that Bredesen and other neurology specialists are now recommending for people with early-stage memory loss and cognitive impairment.

Medicine is changing. Haltingly, perhaps, and unevenly, it is incorporating the breakthrough discoveries of the era and learning how to personalize the management of chronic disease. With Alzheimer’s as with autism and the other debilitations we dread, our genetic inheritance tells us more about how we should live than about the chronic disease we are doomed to suffer. We are not doomed at all.

The breakthroughs that are bringing about this changed approach to chronic disease management tell an amazing story, but the story is not complete until we understand perhaps its most amazing subplot—epigenetics.

EPIGENETICS: THE GENETIC WILD CARD

One more time: the revolution that is changing health care is the recognition that our genes do not hardwire us for chronic disease but instead offer a menu of what we can be under differing environmental conditions. This is empowering. It means that at any age, we have the choice about what information to send to our genes from our environment, diet, and lifestyle.

Let’s be clear. When I use the word “environment,” I am not just talking about trees and streams, although our natural surroundings, like our physically constructed surroundings, are certainly part of the environment. What I am referring to is all the things present in our lives that we draw on to satisfy our needs and desires—hand creams and toothpaste, nail polish and hair gel; the fabrics we wear; the stuff we buy for cleaning our house; whether we heat up dinner in a BPA plastic dish in the microwave or start the meal from scratch by handpicking ingredients at the farmers’ market. In other words, our environment comprises all the assets and amenities, comforts and conveniences, practices and arrangements we employ in the choices we make about behavior.

When I use the word “diet,” I mean an individual’s way of eating—kinds of foods, amount of food, diversity of food, even personal taste. In short, given what is available for this individual to eat, what are his or her choices, tastes, habits, likes and dislikes, and approach when it comes to food and drink?

Given all those environmental factors and available ways of eating, what choices does the individual make in terms of behavior, habits, and way of living? That’s what I mean by “lifestyle”—the totality of actions, functions, and kinds of conduct that define how an individual operates in his or her daily life.

On all three of these planes, what we do sends messages to our genes, but empowering as that is, it sets up an important question. If all the cells in our body have the same genes, then how are they able to differentiate their function? Surely we need to know how that works if we’re going to send the right message to the right genes?

Heart cells, for example, must function in a specific way to keep the heart pumping. That means those cells express only heart-related information from the genome’s book of life. Ditto liver cells, muscle cells, brain cells: Each cell must function in a specific way, and it is therefore required to express only selective genomic communications. In order to be sure that any change we make in environment, diet, or lifestyle gets transmitted to the right cells, we need to know how this works.

The first step in telling us how came from the science of developmental biology and specifically from the so-called father of this field of science, Edinburgh University professor of animal genetics Conrad Waddington, who died in 1975 at the age of sixty-nine. A renaissance figure in biology who made fundamental contributions to paleontology, genetics, embryology, systems biology, and the new field of developmental biology, Waddington was almost equally talented in the arts and somehow found time to indulge his passions for painting and writing poetry.

In a period of amazing scientific creativity in the early 1940s, he coined the term “epigenetics” to describe how animals develop from a fertilized egg to a fully formed organism. The Greek prefix epi signifies something over and above, so epigenetics refers to things that reside above the control of the expression of the genome—in the epigenome. In today’s terms, we might say that the genes are the computer hardware and the epigenetic controlling factors are the software that tells the computer hardware how to perform. In coining the term in those pre-computer days, Waddington was giving a name to his understanding that the environment in which the fertilized egg develops influences the organism, and that environmental stress during the period of development can change the way the organism will end up functioning. In this as in many things, Waddington was ahead of his time, since his ideas on epigenetics came well before there was universal understanding of the importance of genes in controlling development.

In fact, not until the latter part of the twentieth century was it recognized that in the development of an embryo, only certain genes are expressed in certain cells; that’s how a single genome message gets differentiated into multiple cellular functions. Exactly how this works was not well understood until the start of the twenty-first century, but it was accepted that Waddington’s epigenetics was the mechanism of the differentiation. In essence, epigenetics is the genome’s gatekeeper, regulating which genes are expressed in which parts of the body at any given time.

Full understanding of the epigenetic mechanism—and of how it relates to the origin of chronic disease—had to await the publication of the Human Genome Project’s draft sequence in 2000. A most unexpected discovery, by Drs. Randy Jirtle and Robert Waterland at the Department of Radiation Oncology at the Duke University Medical Center, is what shed the clarifying light on the subject. The two were engaged in a study of the influence of maternal nutrition on fetal development when they came upon an earlier observation by British molecular biologists Robin Holliday and John Pugh. In 1975, Holliday and Pugh had proposed that methyl groups attached to the genome silence the expression of certain genes so they cannot be read. A methyl group is a simple chemical unit, a carbon atom with three hydrogen atoms “fastened” to it, and the body makes these groups out of some of those same B vitamins mentioned previously—folic acid and B12—along with B6, choline, and betaine.

Jirtle and Waterland decided to explore the impact of high supplemental doses of these gene-silencing nutrients on the developing fetus of a pregnant agouti mouse. Their aim was to find out whether changing the nutritional environment could stimulate the epigenetic mechanism to influence fetal outcome; specifically, they wanted to see what particular genetic expression might be silenced by the methyl groups. The agouti mice they used for the study were excellent test animals—genetically well scrutinized and inbred to get fat, contract diabetes and cancer, and die young. Their fur color was always yellow.

The first unexpected outcome from the study was that the offspring born to the pregnant mice supplemented with the methyl nutrients were not yellow but had mottled brown fur. Even more interestingly, these offspring did not get fat, did not contract diabetes or cancer, did not die young like their mothers. They had the same genes as their parents, but the expression of their genes had been altered by the methyl nutrient supplements to result in agoutis that lived 30 percent longer than their parents and without the chronic disease bred into those parents. Some lines of genetic expression had indeed been muted.

Jirtle and Waterland published their findings in Molecular and Cellular Biology in 2003 and won awards for the enormous impact the findings had on the field of nutrition and for giving birth to the field of nutritional epigenetics. Beyond this, their discoveries have profoundly affected medicine and the understanding of how environment and lifestyle exposures can imprint the epigenome and control the expression of our book of life. As I write this, more than 3,000 subsequent research papers on environmental epigenetics have been published as a result of the agouti mouse study, and researchers around the world continue to confirm the significance of the work.

I’ve had the chance to talk with Dr. Jirtle frequently over the years; he is the first to point out the obvious, which is that humans are not agouti mice. It is a long way from a mouse to human fetal development, but the power of the original finding is that epigenetics is a process shared across all animal species, and since nutrition and other environmental factors can influence this process, what happens in the epigenome can directly affect health and disease patterns in an individual.

And what can happen there ranges well beyond simply silencing genes. While some environmental factors can blot out parts of the record in our epigenome, others are like sticky Post-it notes that emphasize a particular part of the record, saying, “Read here.” Still others can physically reorganize the way our chromosomes are packaged and read. In less time than it takes to say deoxyribonucleic acid, we’ve gone from thinking of ourselves as genetically predetermined and pretty well fated for certain illnesses to having a whole new console of buttons to turn on and off to affect our health patterns and life span.

That’s an essential realization, because forecasts about the health of children being born today are truly alarming. Indications are that specific chronic diseases are rising in prevalence in this group much faster than would have been expected in traditional models of disease prevalence. The first evidence of this alarming trend came in 2005 in a study in the New England Journal of Medicine suggesting that, given certain chronic disease problems now appearing in children in the United States, it is possible that the current generation of children will be the first in history to have a shorter life span than their parents.

How is this possible? We spend more per person on health care than any other country in the world. We talk about health, write about health, go to the gym to improve our health, worry about our health constantly. Could it be that twenty-first-century environmental factors are altering our epigenetic mechanism so that our functional health is declining? Could this help explain the rising incidence of such childhood health problems as asthma, allergy, autism, attention disorders, hyperactivity, type 2 diabetes, autoimmune arthritis, and obesity?

This question becomes even more significant when we take into account the work of Drs. Moshe Szyf and Michael Meaney of McGill University. McGill is the very institution, of course, where Dr. Hans Selye coined the term “stress” to define environmental factors that alter the fight-or-flight arousal system; the word has since become the most used—and perhaps the most useful—English word in medicine. Drs. Szyf and Meaney have taken our understanding of stress to a whole new level, determining that an individual’s social environment influences his or her epigenetic makeup and, as a result, changes both the physiological and behavioral response to stress. This is particularly the case with young children. In a sense, what the Szyf-Meaney work tells us is that traumatic events can imprint the genome with epigenetic marks that alter the way we respond to stress over a long period of time.

What does this mean to children born and raised in conditions of fear, anger, and violence? I have had the opportunity to ask the question of Dr. Szyf himself; not surprisingly, he says that one of his greatest fears is what is happening to children born where such conditions are the norm during the pregnancy of their mothers and throughout their childhood. His fear, quite simply, is that the epigenetic marks left by such exposure not only will adversely affect the mental and physical health of the current generation but may be transmissible to the next generation as well, as recent animal studies have suggested. The studies define a whole new field, transgenerational epigenetics, exploring whether and how characteristics can be passed down to the next generation by epigenetic inheritance, not via traditional genetics.

Transgenerational epigenetics studies may also show how and why chronic disease patterns may change much faster than one would expect—that is, if your expectation is that the only way to create change in offspring is through genetic natural selection over millions of years. Rather, epigenetics indicates that there are two ways that the health of a population can change over time. The first is the traditional inheritance of a disease-producing gene like that of sickle-cell anemia or phenylketonuria. The second is epigenetic in origin and can occur much more rapidly through the epigenetic response to a change in the environment. Animal studies, like those by Dr. Michael Skinner of Washington State University, have shown that exposure to environmental toxins can result in epigenetic changes transmissible to the next generation and can create increased incidence of chronic disease. More and more, we are seeing that nutrition, social stress, and environmental toxins can all influence the epigenome and alter gene expression patterns—thus increasing the risk of chronic disease.

The good news is that we can correct the influences that are altering our genetic expression and putting us at risk for chronic disease. That is the great lesson of the biological breakthrough represented in the mapping of the human genome and in the explosion of research it set off—that the translation process from genotype to phenotype is complex and dynamic as our genome responds to messages received from the environment. Therefore, to the extent that we can change the messages, we can also shape the response—and thereby affect our health outcomes.

How can we change the messages? By changing what is in our environment, what substances we take into our bodies, and what we do with our bodies for exercise and a sense of well-being. By making changes that take away the things that are a problem and that provide the things that are missing, we may indeed improve our health outcome.

This is all within our power. Modifying, altering, or radically redoing environment, diet, and lifestyle is entirely within the reach of every individual. How do we harness the understanding of the stunning biological breakthroughs of our era to make the changes that can prevent or reverse the chronic diseases that steal our organic reserve, vitality, and longevity? As the next chapter shows, the answer is personal.