Humans are for the most part microbial. Scientists have estimated that by cell count you have ten times more microbial cells than mammalian cells in your body. When geneticists compared microbial genes to mammalian genes, they found we are even more microbial genetically; while humans have only about 22,000 mammalian genes, we carry approximately ten million microbial genes. The totality of your microbial genes, contributed by bacteria, viruses, fungi, and parasites, has been called your second genome.
This second genome is important beyond just the numerical comparison. To change the mammalian genome means changing the chromosomes in every cell of the body. A chromosome is a threadlike structure found in all living cells that is made up of nucleic acids and proteins and carries genes. A mammalian gene makeover would be a daunting task given the number of chromosomes and cells that would have to be changed. But while it is difficult to change mammalian genes, it’s comparatively easy to change the microbial genes. Basically, all you have to do is change your body’s microbial mix and you change your microbial genes. That fact provides a powerful new strategy for improving our health and well-being.
Another challenge for changing the mammalian genome is the fact that many mammalian genes work in groups. The attempt to change one gene can set off a chain reaction in other genes, resulting in some genes not being fully expressed. This may cause some functions to be altered or even go missing. Even if a gene change could be made cleanly, it most likely would not correct most noncommunicable diseases. Conditions like asthma, diabetes, obesity, and autism seem likely to require changes in many genes and metabolic pathways involving the immune system as well as organs and tissues. In any case, the overriding limitation of mammalian gene therapy is that it targets less than 1 percent of our total genes (first and second genomes combined).
Think of the possibilities of targeting your microbial genes—99 percent of your total genes—by changing your gut microbes or your skin microbes. That is not science fiction. Researchers and clinicians have already accomplished what is called a “proof of concept.” They can make these changes, and they have methods that have already worked. As I take you through this chapter on the human superorganism, I will be stressing our genes, both microbial and mammalian; how they have affected our history and our present status; and how they are likely to affect our future and our children’s future.
Right there inside you and on you, you are the world, just as Michael Jackson and Lionel Richie and a host of other artists sang back in 1985. You are a microcosm of thousands of species from this world. You are not alone. You are more than you ever thought you were. What is in you is also shared with millions of people from places all over the world. People you have never met. Places you have never been. Yet you are related to them microbially. Your microbes are related to those people living continents away and decades before you.
In the past, we thought about shared microbes only in a very negative context, since they most often led to infections that swept the globe. This was how plague, smallpox, typhoid, tuberculosis, and polio became epidemics. But the resident microbes that don’t normally cause disease and that support your body’s maturation and function have circled the globe as well. Most of us don’t think about our relatedness to a visitor from a continent other than that of our ancestors. But in total gene composition, including your microbial genes, a relationship is almost inescapable.
The connection with family is strong and has persisted through the millennia. Blood relations have been the basis of communities, tribes, and clans in many ancestral cultures. Blood relatives might wear clothing or other adornments signifying their loyalty, be it a Scottish tartan, a pattern of beads, or a specifically styled tattoo. Heraldic symbols on shields and crests, sometimes even family mottoes, labeled families as they marched into battle. In times when even many nobles were illiterate, they would also use these symbols as their official mark on contracts and letters.
Kinship has been a basis of politics for a very long time and continues even today—you can see it from Kazakhstan to Kennebunkport. To most of us, such affiliation is just the way the world works.
Microbial heritage and loss were not things our ancestors could see, like the blood red of hemoglobin, or even understand until recently. Yet as a growing number of biologists around the world are beginning to recognize, microbial loss probably matters more than kinship for the human legacy parents leave their children—and their children’s children. Drawing on his research, New York University professor Martin Blaser recently suggested in his book Missing Microbes that we cannot afford the loss of microbial diversity that has been created, in part, by the overuse of antibiotics. The benefits of establishing and maintaining a healthy, family-based microbiome are clearly set out in his rigorously argued book.
Much like those ancestors who wore a Scottish tartan, we should wear our microbial colors with pride and strive to protect and preserve them. As we shift our focus from our first genome, consisting of mammalian genes, to our majority second genome, our microbial genes, our own perspective on ancestry and legacy is likely to shift as well.
The battle of the sexes at the center of various culture wars is likely to be upended by the idea of ourselves as superorganisms. That is how fundamental the new perspective is. Relationships between men and women, husbands and wives, appear in a new light. My wife and I jointly authored a history paper detailing the underappreciated role of women in the history of goldsmithing in Scotland. We found that women heavily influenced who got to apprentice to which silversmith, based on the women’s family ties, and what wares the silversmiths produced. The men then simply trained and produced what was required.
Throughout history there have been two predominant inheritance types—patrilineal, dominated by the father’s family, and matrilineal, led by the mother’s family. Whether newlyweds lived with the father’s family, termed patrilocal, or with the mother’s family, called matrilocal, also had implications. There were also rules determining who paid whom for the privilege of marrying, who lived where after marriage, and who inherited the family’s wealth and property.
While these rules may have originated at the tribal level, similar rules extended to rulers (e.g., kingship) and to who decided who ruled (e.g., some Native American tribal chiefs were chosen by the women). While both male- and female-dominated societies have existed and do exist today, approximately 80 percent are patriarchal. Anthropologists argue that war may be a driving force behind this. Matrilineal descent, in which mother-daughter inheritance dominated, often arose where there was less certainty over paternity. Protecting the integrity of the family’s mammalian genome in the bloodline became the driving force behind patrilineal lineage.
The rules of kinship and succession certainly made life interesting. Take a specific case told over and over in movies, TV shows, and even operas. Henry VIII, king of England, had a lot of wives.
One could argue that English Protestantism arose because Henry VIII was unable to encourage more of his Y-chromosome-bearing sperm to perform their duty with his queens’ eggs. Yet given the biological understanding of the 1500s, the women took the blame instead. Even if he had had an adult male heir, the real genetic inheritance would have been through his wife’s microbial genome. Since Henry did not have a long-surviving male heir, his daughter Elizabeth was eventually crowned queen of England. The daughter of Henry’s second wife, Anne Boleyn, Elizabeth was three when her mother was executed. Since the microbial genome makes up 99 percent of our genes and is inherited largely from the mother, Elizabeth was more a queen in Anne’s lineage than in that of heir-obsessed Henry.
Anne Boleyn’s reported craving for apples during her pregnancy probably helped to craft the microbiome eventually donated to Elizabeth. Two weeks before delivery she retired to a chamber that has been described as a cross between a chapel and a padded cell. Ironically, as the baby’s delivery approached, only women were allowed into Anne’s chamber. That would seem to suggest that any bystander microbes donated via skin-to-skin contact with Elizabeth came from ladies of the court and not from King Henry. At three P.M. on September 7, 1533, Elizabeth was born via natural delivery, and the baby’s biology was completed by Anne Boleyn’s donated microbiome. Observers noted that the baby Elizabeth got Henry’s red hair and Anne’s dark eyes. Of course, what they didn’t realize at the time is that Elizabeth carried far more genes from Anne’s body than from Henry’s when she eventually ascended the English throne.
If you actually tally up the genetic contributions of Henry versus Anne to Elizabeth, using the figures of about 99 percent microbial genes and only 1 percent mammalian genes, it turns out that Henry donated only 0.5 percent of Elizabeth’s total genes, with Anne Boleyn providing 99.5 percent, minus a few microbial genes that came from birth attendants and the wet nurses who might have breast-fed Elizabeth. Whose ancestral baby bottom graced the English throne? Mostly Anne’s.
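The tally above is simple arithmetic. As a minimal sketch (using the chapter’s rough 99/1 split and assuming the 1 percent of mammalian genes is divided evenly between the two parents), the 0.5 versus 99.5 percent division works out as follows:

```python
# Illustrative tally of Elizabeth's total gene contributions, using the
# chapter's rough figures: ~99% of total genes microbial (inherited largely
# from the mother) and ~1% mammalian (assumed split evenly between parents).
mammalian_share = 0.01   # first genome, ~1% of total genes
microbial_share = 0.99   # second genome, ~99% of total genes

henry = mammalian_share / 2                    # father's half of the mammalian genes
anne = mammalian_share / 2 + microbial_share   # mother's half plus the microbiome

print(f"Henry: {henry:.1%}")  # 0.5%
print(f"Anne:  {anne:.1%}")   # 99.5%
```

The split is deliberately coarse; it ignores the few microbial genes contributed by birth attendants and wet nurses, but it makes the scale of the imbalance plain.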
Though Elizabeth had a lengthy, powerful rule, she never married nor produced a royal heir, and succession to her throne created significant contention. Her cousin Mary, Queen of Scots, vied for that honor, and their dramatic confrontations are legendary. Were that happening today, they would command a reality series in their own right. To keep Mary from gaining the English crown, Elizabeth had her imprisoned and eventually executed. In an ironic twist of fate, Mary’s son, James, who became king of Scotland while still a baby, became Elizabeth’s successor. And Mary lived on in James via the microbes she contributed to his gut during delivery and the skin microbes she exchanged with him when she held him. Most of the microbes and genes that passed from Anne Boleyn to her daughter, Elizabeth, or from Mary, Queen of Scots, to her son, James, had nothing to do with mammalian genetic kinship. The microbes carried far more biological information. Maybe it’s time for a new opera about Henry and Anne Boleyn.
You don’t have to go back multiple centuries to find societies in which the inheritance of power, money, and even fame were governed by the mammalian male line. It prevails in some cultures and societies today, and its pernicious influence may even be growing. It stems from what I call 1 percent thinking: the idea that a male heir passing chromosomes from generation to generation is the true test of a family’s worth, the true bloodline. It has virtually nothing to do with biology.
A preference for sons occurred in several agriculturally based cultures spanning the globe because sons tended to earn more money. The dowry system, in which families with daughters had to pay the groom’s family for their daughters to marry, also arguably led to a devaluing of women in general. In the twentieth century, this archaic tradition meshed with new technologies in an alarming way. During the 1980s, prenatal sex identification changed things, mostly in China but elsewhere in the world, too. If a baby’s sex could be identified in utero and sex-specific abortions were possible, the outcome was awful but perhaps not surprising: population selection against women. Female fetuses were aborted while male fetuses were carried to term. This, of course, has long-term biological consequences.
The view that the male offspring continues the family line is based on the pseudoscientific idea that a continuous line of males passes on the true family genetics. But again, chromosomes passed by males across generations only comprise less than 1 percent of the genes that are a part of us. And here is a twist: The 99 percent of microbial genes passed from generation to generation are largely inherited through the women in a family. Apparently, some cultures like the ancient Picts in Scotland got it just about right.
China, with the world’s largest population, implemented a one-child-per-family program in 1979. With some variations in different provinces, couples could have only one child, though two were permitted if the first was a girl. The policy was intended to last one generation, but it persisted. Given the culturally ingrained preference for a male heir and the ramifications of limited family size, the outcome was a critical overabundance of male children and a shortage of females. According to the Population Reference Bureau, China has a present excess of forty-one million bachelors, a figure expected to grow to fifty-five million by the year 2020. Given the desperate nature of the situation, China relaxed its one-child policy in 2013.
In India the situation is no better. A 2013 New York Times article looked at the “man problem” in India. Its author concluded that the excess numbers of unmarried men had led to increased violence against women. And the sex-selection problem during pregnancy is quite extreme in certain regions of the country, particularly where dowries are still culturally required for girls. While laws have been passed both to discourage sex selection and to do away with dowries, enforcement has been problematic.
So many historic and present-day conflicts, wars, views of succession, examples of racism, and even sex selection of offspring have been based on what we now understand are biological half-truths—all given currency by the dominance of the idea that you are what others can see or peek at. Advertising in our glossy online culture reinforces this image-based approach to human evaluation. We love body images. But that mammalian body image is not the real you. Essentially, you aren’t just a body; you are a superorganism. When you want to find your core, when you want to understand what is deep inside you, when you want to control your health and moods and interactions with others better, you must seek out the genetic 99 percent of you that is microbial.
The seeding of the newborn’s microbiome occurs largely at birth. Prenatally, the baby is exposed to some microbes, such as bacteria associated with the placenta, and this no doubt helps with prenatal immune maturation.
The placenta harbors a much smaller community of bacteria than the gut does, including members of the phyla Firmicutes, Tenericutes, Proteobacteria, Bacteroidetes, and Fusobacteria. The microbiome of the placenta seems to most closely resemble that of the mouth. The diversity of the placenta’s microbes also seems to be related to the baby’s prenatal development. In a recent study from Beijing, China, researchers found that the placental microbiomes associated with normal-weight versus low-birth-weight babies differed significantly. Lower-birth-weight babies had placentas that were comparatively barren in terms of bacterial diversity and also had a reduced percentage of Lactobacillus bacteria.
Maternal environment, including diet, stress, and drugs (e.g., antibiotics), plays a large part in crafting the array of microbes that will seed the baby. The birth event itself is the single most important step in the seeding process. It is during vaginal delivery that the baby is exposed both to microbes in the vagina and to those from the mother’s cecum, a portion of the large intestine near the appendix. Bacteria that can grow with or without oxygen, such as members of the family Enterobacteriaceae, are among the first to appear in the newborn’s gut, and these are replaced shortly thereafter by several different types of oxygen-hating bacteria (Bifidobacterium, Bacteroides, and Clostridium).
These are the founding microbes that are the first co-partners of the newborn. These beautifully simple but ancient organisms include bacteria, viruses, and eukaryotic microbes (cells that have a nucleus) such as fungi, including yeast. Because the baby’s physiological systems are actively maturing during the first few months to years of life, interactions with these founding microbes exert a lasting impact on organ and tissue development.
Skin-to-skin contact between the mother and her baby and breast-feeding help to complete the microbiome seeding process. Both the skin and breast-feeding transfer specific microbes, many of which differ from those transferred during vaginal delivery. In premature babies, skin-to-skin contact is often referred to as kangaroo care, where the baby is carried against the mother’s skin. This not only helps with skin microbiome seeding but also seems to help premature infants catch up in their maturation. Changes in the infant microbiome will occur as the baby grows and matures and the baby’s sources of food become more diverse.
Breast milk is the ideal food for the baby, with few exceptions, one being milk contaminated with unusually high levels of toxic chemicals that could harm the baby. In addition to providing specific immunological factors that help to protect the baby from infections, breast milk is unique in that it contains certain sugars (oligosaccharides) that our mammalian cells cannot digest but that our microbes need. It is specially designed to feed the newly seeded microbes in the baby’s gut and support their maturation over the early stages of an infant’s life. That human breast milk contains food designed exclusively for the microbes is itself an indicator of just how important our microbiome is to us. Additionally, breast milk appears to be a source of extra microbes transferred via breast-feeding, so that it functions as a type of probiotic food.
Breast milk contains several hundred species of bacteria, and these microbes, plus the microbial food (prebiotics) found in breast milk, help to guide maturation of the infant’s gut. In fact, breast milk is probably the first probiotic food the baby will consume. The exact composition of the breast milk microbiome differs based on several factors, including whether the mother delivered vaginally or by cesarean section. Not surprisingly, lactobacilli are prominent in breast milk, along with other lactic-acid-producing bacteria. But these are only the tip of the iceberg. Other bacteria, such as Bifidobacterium species and Staphylococcus aureus, are found as well. Antibiotic treatment during pregnancy or lactation can affect the concentration of bacteria in breast milk. Additionally, the milk of mothers who delivered vaginally had greater bacterial diversity, with fewer Staphylococcus species, than the milk of mothers who had an elective C-section. Just like other probiotics in food or supplements, the microbes within human milk can alter the baby’s metabolism and may even take up longer-term residence in the baby’s gut.
In turn, when the gut microbes of the baby are fed their preferred food, they will produce breakdown products (i.e., metabolites) from the breast milk that the baby needs to grow and mature. Again, breast milk contains unique food designed not for the baby’s mammalian cells but to be used by the baby’s microbes to produce vitamins and other metabolites that a baby needs. Obviously, formulas and other breast milk substitutes that do not adequately feed the baby’s newly founded microbiome can alter the course of microbiome development and also can result in developmental problems for the baby’s physiological systems. This is something that developers of formulas back in the twentieth century simply did not understand. They were operating under the old biology.
Other body sites of the baby exposed to the environment, such as the airways and the urogenital tract, are also populated with microbes shortly after birth. In general, far more is known about the microbes of the gastrointestinal tract than those inhabiting the other body sites. This is simply a reflection of the amount of microbiome-related research that has focused on the gut compared with the skin, airways, and urogenital sites.
As the baby grows and matures, the microbiome grows and matures as well. It is a true partnership, with the microbes of each body site fine-tuned to coexist at that particular site and in communication with those particular cells in the baby’s body. Each life stage of the growing child exhibits changes in the physiological systems as well as in the mix of microbes. What happens at these early stages with the microbiome is absolutely critical for later-life health. That is because the baby is very sensitive to being developmentally programmed for gene activity from conception through the first couple of years of life. Those developmental windows, which I have termed critical windows of vulnerability in prior publications, are when attention to the care and feeding of the microbiome can yield the biggest dividends. It turns out that each physiological system (e.g., immune, respiratory, neurological) has its own specific developmental windows of vulnerability that are very sensitive to environmental influences, including those affected by the microbiome. This means that getting a well-balanced microbiome in place early has added health advantages.
The microbial world is far more than meets the eye. Soil and some plants harbor bacteria that have the capacity to “fix” nitrogen. That means they can take nitrogen gas from the atmosphere and turn it into a form (e.g., ammonia) that plants like peas, soybeans, and alfalfa can use and that eventually enriches the soil. In return the mutualistic nitrogen-fixing bacteria living among the root hairs of some plants get energy sources from the plants. They also cycle the building blocks of protein (amino acids) with the plant, each helping the other out.
The earth itself appears to be encased in a microbial bubble. Recent studies suggest that the range of environmental microbes extends into earth’s upper atmosphere, under remarkably harsh conditions. In fact, it is thought that these microbes are likely to affect, if not control, climate. Analysis of recent hurricanes showed that the bacterial communities within hurricane cells differed from the usual bacterial composition of the upper atmosphere. Patterns can reveal similarities otherwise overlooked: a hurricane is a perturbation of the atmosphere that alters the pattern of atmospheric microbes from the norm, and perhaps a perturbation of the human microbiome likewise produces a hurricane in the body, in the form of a noncommunicable disease.
One of the current questions is whether microbes can survive in space. A newly discovered, extremely tolerant bacterium has shown up twice in different space agency facilities where highly sterilized materials were being prepared for launch: once at the Kennedy Space Center in Florida and a second time at the European Space Agency facility in French Guiana. In fact, part of the name given to this new family of bacteria translates from the Latin as “clean.” Some evidence suggests that certain bacteria have the capacity to survive the harsh conditions of space. In experiments conducted on the International Space Station, spores of a particular bacterium (Bacillus pumilus) that had previously been isolated on spacecraft were able to survive real space exposure. The bacterial cells subsequently produced by viable spores had an increased resistance to the most damaging type of ultraviolet radiation.
Whether we superorganisms originated here on earth or elsewhere, it seems clear that our earliest ancestors grew up with microbes as an integral part of their lives. A multidisciplinary team of diligent researchers from the anthropology, computer science, natural resources, and biochemistry departments of several US universities compared the microbes present in fecal samples found in archeological digs of extinct early human communities. They found not only that the microbial analyses were possible but that these samples matched present-day human microbiomes rather well. The similarities were greatest, however, between ancient human predecessors and present-day humans residing in agricultural communities. Urban living appears to have shifted our microbiome significantly from what has been found so far among our most ancient predecessors.
Not surprisingly, as human behavior and food supplies changed in our early existence, so, too, did our microbiome. Scientists in Australia have analyzed the DNA of the oral microbiome from ancient teeth and compared bacterial species across different eras of human civilization. They found the transition from a hunter-gatherer society to one based on agriculture was directly associated with a shift in the types of bacteria found in the mouth. Our microbiome matches our fundamental lifestyle and has for a very long time.
The idea that we owe our continued existence to microbes and will not function well or be healthy without our microbial partners is not totally heretical. The groundwork was firmly laid by the work of famed biologist, National Academy of Sciences inductee, and then Boston University professor Lynn Margulis. Margulis was a visionary in her own right, and she married Carl Sagan, the astronomer, Cornell professor, soon-to-be host of TV’s Cosmos series, and most popular scientist of his generation. Can you imagine the dinner table conversations? Margulis and Sagan made quite a scientific power couple, though, in fact, they went their separate ways just before real fame struck.
In 1967 Margulis first suggested that ancient bacteria were so critical for our cells’ function that our cells had captured these bacteria and incorporated them into their cellular structure. In this process, known as endosymbiosis, different domains of life became intermixed. As mammals with nucleated cells, we are part of the Eukaryota domain of life. Our cells literally ate organisms from the Bacteria domain of life (not surprisingly, made up of bacteria) and then kept them inside as part of new hybrid cells. These new cells retained the bacteria, including the bacterial genes, virtually intact. These bacterial remnants are the mitochondria, which sit outside each mammalian cell’s nucleus, where our chromosomes live. Virtually all cells with a nucleus have mitochondria.
Even plants have an organelle, the chloroplast, that is thought to have originally been a type of bacteria. Both mitochondria and chloroplasts generate energy for their cells in ways entirely different from those the host cells otherwise use. As a result, the new hybrid organism gained both power and adaptability.
The essence of the merged-species idea is summarized in Acquiring Genomes, a book written by Lynn Margulis and her son Dorion Sagan. Margulis believed that species evolution proceeded more through interspecies deals than through anything else. Obviously, this did not sit well with strict Darwinian admirers, who favored slower, mutation-based change. The question is, why wait and hope for mutations when you can beg, borrow, or steal a whole useful genome, or at least some advantageous bacterial genes? In fact, there is good evidence that gene exchange happens quite often. Molecular evidence suggests that many of the chromosomal genes in eukaryotes probably originated in archaea and bacteria. In other words, we are chimeras. It would seem that some of the functions found in the human genome originated with our bacterial ancestors.
When you add the horizontal transfer of bacterial genes to mammals, including humans, to the trillions of microbes that call the human body home, we become a rather impressive superorganism. We are a holobiont, like a coral reef with its wide diversity of organisms working together to create a whole that is greater than the sum of its parts.
Indeed, maybe we superorganisms were once more like a sort of coral reef warmed by geothermal energy beneath a frozen ocean on a moon orbiting Jupiter than like individual Olympian demigods.
My roles as a research toxicologist focused on the immune system, director of Cornell’s university-wide toxicology program, and senior fellow in Cornell’s Center for the Environment required considerable thought about safety evaluation: the protection of human health—including its history, present-day status, and future evolution—as well as of the broader ecosystem. The fundamental tenet of toxicology, and of environmental safety in general, was voiced back in the 1500s by the Swiss physician, alchemist, and polymath Paracelsus, and it has driven the entire field ever since: “the dose makes the poison.” The real-life effect of this mantra is that what is safe and even useful at one dose might make you sick or kill you at a higher dose. This remains a driving force in modern-day toxicology and is applied through various government-driven safety regulations around the globe. It holds for all of toxicology with only a few exceptions. For example, at the moment scientists and regulators are wondering whether there is truly a safe level of human exposure to some heavy metals, such as lead. As our capacity to measure the adverse effects of lead exposure has increased significantly over the past decade, a safe level has yet to be found.
What we call safe is only as good as the methods we use to evaluate safety. While the science and practice of toxicology have saved countless lives and evolved significantly from the earliest days of food tasters, the field is not without historic shortcomings. In fact, the history of toxicology is full of unpleasant surprises and has led to the conclusion that what we don’t know can kill us.
Lead in ancient pewter ware and glass, and the human exposure that resulted, is thought to have helped hasten the decline of the Roman Empire. According to the eighteenth-century Scottish physician and chemist William Cullen, in the Middle Ages arsenic was a go-to poison for politically based assassinations. The Borgia family relied on it heavily, and it is thought that Napoleon Bonaparte died of arsenic poisoning. Mercury used in industrial advances led to many unintended consequences. The term “mad as a hatter” derives from the heavy mercury exposure in the hatmaking trade, where workers inhaled the vapors associated with the felting process. But other craftsmen faced equally unsuspected risks. The advent of silver-plating technology in Britain (e.g., London, Birmingham, Edinburgh) during the early nineteenth century landed many a goldsmith in either an insane asylum or an early grave.
The twentieth-century play Arsenic and Old Lace depicted arsenic as a weapon of homicide, and the story was later adapted into a movie starring Cary Grant. More exotic toxins, including those from trees, were featured even in nineteenth-century literature and romantic operas. For example, the tropical manchineel tree, with its many toxic chemicals, is a major plot element in Meyerbeer’s final operatic work, L’Africaine, where it serves as a marathon opera-ending method for the lead soprano’s suicide. A second toxic tree of literary fame is the Asian upas, found in places like Java, whose chemicals can trigger heart failure. The tree does indeed produce a highly toxic substance, but it usually has to be concentrated before creating the type of widespread killing that captured the literary imagination of Erasmus Darwin, the grandfather of Charles Darwin; the Russian poet Alexander Pushkin; and others.
During my time as toxicology director, I sometimes authored blurbs for the New York Times science section Q&A regarding public health toxicology issues. The questions ranged from “Why can you eat blue cheese and not die?” to the toxicity of some fruit pits (e.g., apricots). It turns out the latter make a chemical called amygdalin that, when mixed with stomach acid, produces the poison cyanide. Little did I know that that article would eventually lead to the identification of an imported health food product loaded with amygdalin that had been jeopardizing the health of Manhattan consumers.
Natural toxic chemicals exist as well. Poison dart frogs make a poison used by indigenous populations on their arrow tips. Moldy grains can be contaminated with aflatoxin, resulting in disease and death for those who consume the contaminated food. Given all these toxins in our environment, how come we aren’t all already dead?
Recently, the microbiome has taken on new importance as a type of protective wardrobe that is able to connect us seamlessly with our external environment. You can think of it a little like a Batman or Spider-Man suit. It helps to make you who you are.
It prescreens or filters everything we see in the environment outside ourselves (foods, drugs, chemicals, other microbes), and it is our gatekeeper, determining what gets through to our mammalian cells, tissues, and organs. You can also think of it as our universal translator for a world we otherwise would view as highly threatening. Professor Ellen Silbergeld of the Johns Hopkins School of Public Health and I jointly published a paper describing the gatekeeping function of the microbiome. Other researchers such as Peter Turnbaugh and colleagues have described the importance of the microbiome in interactions with substances in the world external to us (called xenobiotics). The microbiome links our external and internal environments with communication occurring in both directions. If the microbiome is absent, deficient, or defective, our living, breathing, dynamic connection to the world is in trouble. Our very existence then becomes an us-versus-the-environment war with an underdeveloped, untrained immune system as the sole arbiter. The microbiome knows both our insides and our outsides. When it fails, we are left with a system unable to recognize what is us and what is external to us. The consequences are enormous, and we can see them all around us.
In many ways your microbiome should fit like a glove. It can be and should be a perfect match for your mammalian self, such that the two components work hand in hand. As I was preparing this chapter, I came across an analogy from the sporting news.
In the world of competitive athletics, one’s garb can help make the athlete. This is particularly true when speed, agility, and/or endurance are involved. Use of skintight suits can provide an aerodynamic advantage while supporting the individual athlete’s maximum physical performance. It creates a competitive edge. Space-age technology goes into these competitive uniforms. At the 2008 Summer Olympics in Beijing, the US men’s swimming team’s special Speedo suits were the rage, and the Spyder-designed suits worn by gold medalist Lindsey Vonn and other US skiers were thought to be an advantage at the 2010 Vancouver Winter Olympics.
But technology in the absence of individual, personalized suitability is not always the answer. Take, for example, the highly favored US speed skating team, which utilized newly designed, specially crafted, high-tech suits requiring that measurements be taken well in advance of the competition. The new suits, called Mach 39, arrived just before the start of the 2014 Winter Olympics in Sochi, meaning that the athletes had not been able to wear them in competition. In contrast, the Dutch team brought competition-proven suits along with their tailor, who would make daily individualized adjustments to the suits and/or equipment. In the end the US team grossly underperformed, causing them to change suits mid-Olympics, while the Dutch team stunned the world with their medal dominance in that sport. Just as Olympic athletic wear is neither happenstance nor off-the-rack but specifically tailored to fit the individual competitor and ultimately to enhance his or her performance, so your microbiome appears to be uniquely matched to your specific mammalian genome. To your body, it should feel like an old friend. From a biological perspective, this makes sense, since the two have to work together just as life on a coral reef has to in order to thrive. With natural, vaginal childbirth, you would grow from a fertilized egg containing a mixed selection of your parents’ mammalian chromosomes as well as the microbes acquired from your mother—microbes that had lived with your mother’s mammalian genes. Human studies have shown how your microbiome complements your host mammalian genes.
One way to examine genetic versus environmental effects in humans is through studies of identical twins, babies who come from the same fertilized egg and are therefore genetically identical. Fraternal twins come from two different eggs fertilized by two different sperm, though they develop side by side in their mother’s womb. Twins can also share a placenta or each have their own. For scientific studies, identical twins are golden because their mammalian genetics are a known factor.
But if twins are good, triplets may be even better. A study was conducted in Cork, Ireland, looking at the gut microbiome of three sets of triplets. The babies were followed from birth to one year of age. In each set of triplets, two babies were from the same sperm and egg (developing as identical twins and carrying identical mammalian chromosomes) while a third was from a different sperm and different egg (and different in some mammalian genes). This is called the fraternal triplet. All babies were born by elective cesarean delivery, meaning that microbial seeding was not through the mother’s vagina. They were fed a mixture of breast milk and formula.
A major focus was on one set of healthy triplets in which none of the babies received antibiotics. At one month of age, the microbiomes of the two babies from the same sperm and egg were very similar, while the third (fraternal) baby, who had developed from a different sperm and egg but been carried in the same mother, differed from the two siblings in the fecal profile of gut microbes. By one year of age, these differences had largely disappeared among the three healthy triplets. This finding suggests that our own mammalian genes can have some effect on the microbes that match up to complete us. It is a mini marriage of the two sets of mammalian and microbial genes, at least during the first weeks of life.
The triplets where some babies got antibiotics produced a different outcome. In these two sets of triplets, the disruption of the babies’ microbiome by the antibiotics had a much greater effect on the babies’ mix of gut microbes than did the mammalian genetics (sperm and egg differences). The particular egg or sperm each baby developed from became largely irrelevant in the face of the antibiotic treatment.
For understanding how our mammalian and microbial components fit together in a superorganism, it can be useful to ask questions like, who drives our bus? That question of who controls our complex body is, at least for me, an unanticipated part of this new biology. John Cryan of Cork and his colleagues recently published a paper suggesting the possibility that our microbiome may act more like the puppet master, a Geppetto, to our puppet, Pinocchio. They describe how microbes can dramatically affect brain function and behavior. Exactly who is in charge, our microbiome or our mammalian self, remains an open question. However, the work of Cryan and others is showing us that (1) our gut microbes can produce staggering, mind-altering effects as potent as any drug’s, and (2) these effects are likely to be useful for future therapeutic purposes.
Mechanisms that demonstrate a psychological consequence to the nature of the microbes with which we share our lives raise an array of unsettling existential questions. Perhaps foremost among them is: How many suicides have microbes caused?
As we consider the rest of the new biology that has emerged and how it applies to our health, one thing should become very clear. The microbiome plays a pivotal role in our possibility for a healthful life. Whether you think your microbes are driving your bus or are simply occupying a majority of seats on your bus, they are part of your personal life’s journey. You will soon have the capacity to exert some level of control over your own personal microbiome. What might you do with that?