5

THE REINVENTION OF LIFE AND DEATH

FOR THE FIRST TIME IN HISTORY, THE DIGITIZATION OF PEOPLE IS CREATING a new capability to change the being in human being. The convergence of the Digital Revolution and the Life Sciences Revolution is altering not only what we know and how we communicate, not just what we do and how we do it—it is beginning to change who we are.

Already, the outsourcing and robosourcing of the genetic, biochemical, and structural building blocks of life itself are leading to the emergence of new forms of microbes, plants, animals, and humans. We are crossing ancient boundaries: the boundary that separates one species from another, the divide between people and animals, and the distinction between living things and man-made machinery.

In mythology, the lines dividing powers reserved for the gods from those allowed to people were marked by warnings; transgressions were severely punished. Yet no Zeus has forbidden us to introduce human genes into other animals; or to create hybrid creatures by mixing the genes of spiders and goats; or to surgically embed silicon computer chips into the gray matter of human brains; or to provide a genetic menu of selectable traits for parents who wish to design their own children.

The use of science and technology in an effort to enhance human beings is taking us beyond the outer edges of the moral, ethical, and religious maps bequeathed to us by previous generations. We are now in terra incognita, where the ancient maps sometimes noted, “There Be Monsters.” But those with enough courage to sail into the unknown were often richly rewarded, and in this case, the scientific community tells us with great confidence that in health care and other fields great advances await us, even though great wisdom will be needed in deciding how to proceed.

When humankind takes possession of a new and previously unimaginable power, the experience often creates a mixture of exhilaration and trepidation. In the teachings of the Abrahamic religions, the first man and the first woman were condemned to a life of toil when they seized knowledge that had been forbidden them. When Prometheus stole fire from the gods, he was condemned to eternal suffering. Every day, eagles tore into his flesh and consumed his liver, but every night his liver was regenerated so he could endure the same fate the next morning.

Ironically, scientists at Wake Forest University are now genetically engineering replacement livers in their laboratory bioreactors—and few would argue that their groundbreaking work is anything but good. The prospects for advances in virtually all forms of health care are creating exhilaration in many fields of medical research—though it is obvious that the culture and practice of medicine, along with all of the health care professions and institutions, will soon be as disruptively reorganized as the typewriter and long-playing record businesses were before it.

“PRECISION HEALTH CARE”

With exciting and nearly miraculous potential new cures for deadly diseases and debilitating conditions on the research horizon, many health care experts believe that it is inevitable that the practice of medicine will soon be radically transformed. “Personalized medicine,” or, as some now refer to it, “precision medicine,” is based on digital and molecular models of an individual’s genes, proteins, microbial communities, and other sources of medically relevant information. Most experts believe it will almost certainly become the model for medical care.

The ability to monitor and continuously update individuals’ health functions and trends will make preventive care much more effective. The new economics of health care driven by this revolution may soon make the traditional insurance model based on large risk pools obsolete because of the huge volume of fine-grained information about every individual that can now be gathered. The role of insurance companies is already being reinvented as these firms begin to adopt digital health models and mine the “big data” being created.

Pharmaceuticals, which are now aimed at large groups of individuals manifesting similar symptoms, will soon be targeted toward genetic and molecular signatures of individual patients. This revolution is already taking place in cancer treatment and in the treatment of “orphan diseases” (those that affect fewer than 200,000 people in the U.S.; the definition varies from country to country). This trend is expected to broaden as our knowledge of diseases improves.

The use of artificial intelligence—like IBM’s Watson system—to assist doctors in making diagnoses and prescribing treatment options promises to reduce medical errors and enhance the skills of physicians. Just as artificial intelligence is revolutionizing the work of lawyers, it will profoundly change the work of doctors. Dr. Eric Topol, in his book The Creative Destruction of Medicine, writes, “This is much bigger than a change; this is the essence of creative destruction as conceptualized by [Austrian economist Joseph] Schumpeter. Not a single aspect of health and medicine today will ultimately be spared or unaffected in some way. Doctors, hospitals, the life science industry, government and its regulatory bodies: all are subject to radical transformation.”

Individuals will play a different role in their own health care as well. Numerous medical teams are working with software engineers to develop more sophisticated self-tracking programs that empower individuals to be more successful in modifying unhealthy behaviors in order to manage chronic diseases. Some of these programs facilitate more regular communication between doctors and patients to discuss and interpret the continuous data flows from digital monitors that are on—and inside—the patient’s body. This is part of a broader trend known as the “quantified self” movement.

Other programs and apps create social networks of individuals attempting to deal with the same health challenges—partly to take advantage of what scientists refer to as the Hawthorne effect: the simple knowledge that one’s progress is being watched by others tends to improve the progress actually made. For example, some people (I do not include myself in this group) are fond of the new scales that automatically tweet their weight so that everyone who follows them will see their progress or lack thereof. New companies are being built around translating landmark clinical trials (such as the Diabetes Prevention Program) from resource-intensive studies into social and digital media programs. Some experts believe that global access to large-scale digital programs aimed at changing destructive behaviors may soon make it possible to significantly reduce the incidence of chronic diseases like diabetes and obesity.

THE NEW ABILITIES scientists have gained to see, study, map, modify, and manipulate cells in living systems are also being applied to the human brain. These techniques have already been used to give amputees the ability to control advanced prosthetic arms and legs with their brains, as if they were using their own natural limbs—by connecting the artificial limbs to neural implants. Doctors have also empowered paralyzed monkeys to operate their arms and hands by implanting a device in the brain that is wired to the appropriate muscles. In addition, these breakthroughs offer the possibility of curing some brain diseases.

Just as the discovery of DNA led to the mapping of the human genome, the discovery of how neurons in the brain connect to and communicate with one another is leading inexorably toward the complete mapping of what brain scientists call the “connectome.”* Although the data processing required is an estimated ten times greater than that required for mapping the genome, and even though several of the key technologies necessary to complete the map are still in development, brain scientists are highly confident that they will be able to complete the first “larger-scale maps of neural wiring” within the next few years.

The significance of a complete wiring diagram for the human brain can hardly be overstated. More than sixty years ago, Teilhard de Chardin predicted that “Thought might artificially perfect the thinking instrument itself.”

Some doctors are using neural implants to serve as pacemakers for the brains of people who have Parkinson’s disease—and provide deep brain stimulation to alleviate their symptoms. Others have used a similar technique to alert people with epilepsy to the first signs of a seizure and stimulate the brain to minimize its impact. Others have long used cochlear implants connected to an external microphone to deliver sound signals to the auditory nerve and on into the brain. Interestingly, these devices must be activated in stages to give the brain a chance to adjust to them. In Boston, scientists at the Massachusetts Eye and Ear Infirmary connected a lens to a blind man’s optic nerve, enabling him to perceive color and even to read large print.

Yet for all of the joy and exhilaration that accompany such miraculous advances in health care, there is also an undercurrent of apprehension for some, because the scope, magnitude, and speed of the multiple revolutions in biotechnology and the life sciences will soon require us to make almost godlike distinctions between what is likely to be good or bad for the entire future of the human species, particularly where permanently modifying the gene pool is concerned. Are we ready to make such decisions? The available evidence would suggest that the answer is “not really,” but we are going to make them anyway.

A COMPLEX ETHICAL CALCULUS

We know intuitively that we desperately need more wisdom than we currently have in order to responsibly wield some of these new powers. To be sure, many of the choices are easy because the obvious benefits of most new genetically based interventions make it immoral not to use them. The prospect of eliminating cancer, diabetes, Alzheimer’s, multiple sclerosis, and other deadly and fearsome diseases ensures these new capabilities will proceed at an ever accelerating rate.

Other choices may not be as straightforward. The prospective ability to pick traits like hair and eye color, height, strength, and intelligence to create “designer babies” may be highly appealing to some parents. After all, consider what competitive parenting has already done for the test preparation industry. If some parents are seen to be giving their children a decisive advantage through the insertion of beneficial genetic traits, other parents may feel that they have to do the same.

Yet some genetic alterations will be passed on to future generations and may trigger collateral genetic changes that are not yet fully understood. Are we ready to seize control of heredity and take responsibility for actively directing the future course of evolution? As Dr. Harvey Fineberg, president of the Institute of Medicine, put it in 2011, “We will have converted old-style evolution into neo-evolution.” Are we ready to make these choices? Again, the answer seems to be no, yet we are going to make them anyway.

But who is the “we” who will make these choices? These incredibly powerful changes are overwhelming the present capacity of humankind for deliberative collective decision making. The atrophy of American democracy and the consequent absence of leadership in the global community have created a power vacuum at the very time when human civilization should be shaping the imperatives of this revolution in ways that protect human values. Instead of seizing the opportunity to drive down health costs and improve outcomes, the United States is decreasing its investment in biomedical research. The budget for the National Institutes of Health has declined over the past ten years, and the U.S. education system is falling behind in science, math, and engineering.

One of the early pioneers of in vitro fertilization, Dr. Jeffrey Steinberg, who runs the Los Angeles Fertility Institutes, said that the beginning of the age of active trait selection is now upon us. “It’s time for everyone to pull their heads out of the sand,” says Steinberg. Marcy Darnovsky of the Center for Genetics and Society said that the discovery in 2012 of a noninvasive process to sequence a complete fetal genome is already raising “some scenarios that are extremely troubling,” adding that among the questions that may emerge from wider use of such tests is “who deserves to be born?”

Richard Hayes, executive director of the Center for Genetics and Society, expressed his concern that the debate on the ethical questions involved with fetal genomic screening and trait selection thus far has primarily involved a small expert community and that, “Average people feel overwhelmed with the technical detail. They feel disempowered.” He also expressed concern that the widespread use of trait selection could lead to “an objectification of children as commodities.… We support the use of [preimplantation genetic diagnosis (PGD)] to allow couples at risk to have healthy children. But for non-medical, cosmetic purposes, we believe this would undermine humanity and create a techno-eugenic rat race.”

Nations are competitive too. China’s Beijing Genomics Institute (BGI) has installed 167 of the world’s most powerful genomic sequencing machines in its Hong Kong and Shenzhen facilities, a sequencing capacity that experts say will soon exceed that of the entire United States.

Its initial focus is finding genes associated with higher intelligence and matching individual students with professions or occupations that make the best use of their capabilities.

According to some estimates, the Chinese government has spent well over $100 billion on life sciences research over just the last three years, and has persuaded 80,000 Chinese Ph.D.’s trained in Western countries to return to China. One Boston-based expert research team, the Monitor Group, reported in 2010 that China is “poised to become the global leader in life science discovery and innovation within the next decade.” China’s State Council has declared that its genetic research industry will be one of the pillars of its twenty-first-century industrial ambitions. Some researchers have reported preliminary discussions of plans to eventually sequence the genomes of almost every child in China.

Multinational corporations are also playing a powerful role, quickly exploiting the many advances in the laboratory that have profitable commercial applications. Having invaded the democracy sphere, the market sphere is now also bidding for dominance in the biosphere. Just as Earth Inc. emerged from the interconnection of billions of computers and intelligent devices able to communicate easily with one another across all national boundaries, Life Inc. is emerging from the ability of scientists and engineers to connect flows of genetic information among living cells across all species boundaries.

The merger between Earth Inc. and Life Inc. is well under way. Since the first patent on a gene was allowed by a Supreme Court decision in the U.S. in 1980, more than 40,000 gene patents have been issued, covering 2,000 human genes. Tissues have been patented as well, including some taken from patients and used for commercial purposes without their permission. (Technically, in order to receive a patent, the owner must transform, isolate, or purify the gene or tissue in some way. In practice, however, the gene or tissue itself becomes commercially controlled by the patent owner.)

There are obvious advantages to the use of the power of the profit motive and of the private sector in exploiting the new revolution in the life sciences. In 2012, the European Commission approved the first Western gene therapy drug, known as Glybera, for the treatment of a rare genetic disorder that prevents the breakdown of fat in blood. In August 2011, the U.S. Food and Drug Administration (FDA) approved a drug known as crizotinib for the targeted treatment of a rare type of lung cancer driven by a gene mutation.

However, the same imbalance of power that has produced dangerous levels of inequality in income is also manifested in the unequal access to the full range of innovations important to humanity flowing out of the Life Sciences Revolution. For example, one biotechnology company—Monsanto—now controls patents on the vast majority of all seeds planted in the world. A U.S. seed expert, Neil Harl of Iowa State University, said in 2010, “We now believe that Monsanto has control over as much as 90 percent of [seed genetics].”

The race to patent genes and tissues is in stark contrast to the attitude expressed by the discoverer of the polio vaccine, Jonas Salk, when he was asked by Edward R. Murrow, “This vaccine is going to be in great demand. Everyone’s going to want it. It’s potentially very lucrative. Who holds the patent?” In response, Salk said, “The American people, I guess. Could you patent the sun?”

THE DIGITIZATION OF LIFE

In Salk’s day, the idea of patenting life science discoveries intended for the greater good seemed odd. A few decades later, one of Salk’s most distinguished peers, Norman Borlaug, implemented his Green Revolution with traditional crossbreeding and hybridization techniques at a time when the frenzy of research into the genome was still in its early stages. Toward the end of his career, Borlaug referred to the race in the U.S. to lock down ownership of patents on genetically modified plants, saying, “God help us if that were to happen, we would all starve.” He opposed the dominance of the market sphere in plant genetics and told an audience in India, “We battled against patenting … and always stood for free exchange of germplasm.” The U.S. and the European Union both recognize patents on isolated or purified genes. Recent cases in the U.S. appellate courts continue to uphold the patentability of genes.

On one level, the digitization of life is merely a twenty-first-century continuation of the story of humankind’s mastery over the world. Alone among life-forms, we have the ability to make complex informational models of reality. Then, by learning from and manipulating the models, we gain the ability to understand and manipulate the reality. Just as the information flowing through the Global Mind is expressed in ones and zeros—the binary building blocks of the Digital Revolution—the language of DNA spoken by all living things is expressed in four letters: A, T, C, and G.

Even leaving aside its other miraculous properties, DNA’s information storage capacity is incredible. In 2012, a research team at Harvard led by George Church encoded a book with more than 50,000 words into strands of DNA and then read it back with no errors. Church, a molecular biologist, said a billion copies of the book could be stored in a test tube and be retrieved for centuries, and that “a device the size of your thumb could store as much information as the whole internet.”
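
To make the idea concrete, here is a minimal sketch in Python (purely illustrative; it is not the encoding scheme used in the Harvard experiment) of how ordinary text can be written into, and read back from, the four-letter alphabet of DNA, with each base standing in for two binary digits:

    # Toy illustration only: map every two bits of a text's binary form onto
    # one DNA base (A, C, G, or T) and back. Real DNA storage schemes add
    # addressing and error correction and use different bit-to-base rules.
    BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
    BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

    def text_to_dna(text: str) -> str:
        """Encode UTF-8 text as a strand of bases, two bits per base."""
        bits = "".join(f"{byte:08b}" for byte in text.encode("utf-8"))
        return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

    def dna_to_text(strand: str) -> str:
        """Decode a strand produced by text_to_dna back into text."""
        bits = "".join(BASE_TO_BITS[base] for base in strand)
        data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
        return data.decode("utf-8")

    message = "Could you patent the sun?"
    strand = text_to_dna(message)
    assert dna_to_text(strand) == message   # the round trip is lossless
    print(len(message), "characters ->", len(strand), "bases")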

At a deeper level, however, the discovery of how to manipulate the designs of life itself marks the beginning of an entirely new story. In the decade following the end of World War II, the double helix structure of DNA was discovered by James Watson, Francis Crick, and Rosalind Franklin. (Franklin was, historians of science now know, unfairly deprived of recognition for her seminal contributions to the scientific paper announcing the discovery in 1953. She died before the Nobel Prize in Physiology or Medicine was awarded to Watson, Crick, and Maurice Wilkins in 1962.) In 2003, exactly fifty years later, the human genome was sequenced.

Even as the scientific community wrestles with the challenges of all the data involved in DNA sequencing, scientists are beginning to sequence RNA (ribonucleic acid), which they are finding plays a far more sophisticated role than simply serving as a messenger system to convey the information that is translated into proteins. The proteins themselves—which among other things actually build and control the cells that make up all forms of life—are being analyzed in the Human Proteome Project, which must deal with a further large increase in the amount of data involved. Proteins take many different forms and are “folded” in patterns that affect their function and role. After they are “translated,” proteins can also be chemically modified in multiple ways that extend their range of functions and control their behavior. The complexity of this analytical challenge is far beyond that involved in sequencing the genome.

“Epigenetics” involves the study of heritable changes in gene expression that do not involve a change in the underlying DNA sequence. The Human Epigenome Project has made major advances in the understanding of these changes. Several pharmaceutical products based on epigenetic breakthroughs are already helping cancer patients, and other therapeutics are being tested in human clinical trials. The decoding of the underpinnings of life, health, and disease is leading to many exciting diagnostic and therapeutic breakthroughs.

In the same way that the digital code used by computers contains both informational content and operating instructions, the intricate universal codes of biology now being deciphered and catalogued make it possible not only to understand the blueprints of life-forms, but also to change their designs and functions. By transferring genes from one species to another and by creating novel DNA strands of their own design, scientists can insert them into life-forms to transform and commandeer them to do what they want them to do. Like viruses, these DNA strands are not technically “alive” because they cannot replicate themselves. But also like viruses, they can take control of living cells and program behaviors, including the production of custom chemicals that have value in the marketplace. They can also program the replication of the DNA strands that were inserted into the life-form.

The introduction of synthetic DNA strands into living organisms has already produced beneficial advances. More than thirty years ago, one of the first breakthroughs was the synthesis of human insulin to replace less effective insulin produced from pigs and other animals. In the near future, scientists anticipate significant improvements in artificial skin and synthetic human blood. Others hope to engineer changes in cyanobacteria to produce products as diverse as fuel for vehicles and protein for human consumption.

But the spread of the technology raises questions that are troubling to bioethicists. As the head of one think tank studying this science put it, “Synthetic biology poses what may be the most profound challenge to government oversight of technology in human history, carrying with it significant economic, legal, security and ethical implications that extend far beyond the safety and capabilities of the technologies themselves. Yet by dint of economic imperative, as well as the sheer volume of scientific and commercial activity underway around the world, it is already functionally unstoppable … a juggernaut already beyond the reach of governance.”

Because the digitization of life coincides with the emergence of the Global Mind, whenever a new piece of the larger puzzle being solved is put in place, research teams the world over instantly begin connecting it to the puzzle pieces they have been dealing with. The more genes that are sequenced, the easier and faster it is for scientists to map the network of connections between those genes and others that are known to appear in predictable patterns.

As Jun Wang, executive director of the Beijing Genomics Institute, put it, there is a “strong network effect … the health profile and personal genetic information of one individual will, to a certain extent, provide clues to better understand others’ genomes and their medical implications. In this sense, a personal genome is not only for one, but also for all humanity.”

An unprecedented collaboration in 2012 among more than 500 scientists at thirty-two different laboratories around the world resulted in a major breakthrough in the understanding of DNA bits that had been previously dismissed as having no meaningful role. They discovered that this so-called junk DNA actually contains millions of “on-off switches” arrayed in extremely complex networks that play crucial roles in controlling the function and interaction of genes. While this landmark achievement resulted in the identification of the function of 80 percent of DNA, it also humbled scientists with the realization that they are a very long way from fully understanding how genetic regulation of life really works. Job Dekker, a molecular biophysicist at the University of Massachusetts Medical School, said after the discovery that every gene is surrounded by “an ocean of regulatory elements” in a “very complicated three-dimensional structure,” only one percent of which has yet been described.

The Global Mind has also facilitated the emergence of an Internet-based global marketplace in so-called biobricks—DNA strands with known properties and reliable uses—that are easily and inexpensively available to teams of synthetic biologists. Scientists at MIT, among them synthetic biologist Ron Weiss, along with the BioBricks Foundation, have catalyzed the creation of the Registry of Standard Biological Parts, which is serving as a global repository, or universal library, for thousands of DNA segments—segments that can be used as genetic building blocks of code free of charge. In the same way that the Internet has catalyzed the dispersal of manufacturing to hundreds of thousands of locations, it is also dispersing the basic tools and raw materials of genetic engineering to laboratories on every continent.
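
As a loose software analogy (using invented part names and placeholder sequences, not actual registry entries or a real interface), the registry can be pictured as a shared catalog that teams query by identifier and then concatenate into larger genetic “programs”:

    # Hypothetical sketch: the part identifiers, roles, and sequences below
    # are invented placeholders, not real registry data or a real API.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Part:
        part_id: str   # catalog identifier
        role: str      # e.g., "promoter", "coding sequence", "terminator"
        sequence: str  # DNA bases (toy placeholder)

    REGISTRY = {
        "P_0001": Part("P_0001", "promoter", "TTGACA" + "A" * 20),
        "C_0040": Part("C_0040", "coding sequence", "ATG" + "GCT" * 20 + "TAA"),
        "T_0015": Part("T_0015", "terminator", "GC" * 20),
    }

    def assemble(*part_ids: str) -> str:
        """Concatenate cataloged parts, in order, into a single construct."""
        return "".join(REGISTRY[pid].sequence for pid in part_ids)

    construct = assemble("P_0001", "C_0040", "T_0015")
    print(len(construct), "bases in the assembled construct")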

THE GENOME EFFECT

The convergence of the Digital Revolution and the Life Sciences Revolution is accelerating these developments at a pace that far outstrips even the speed with which computers are advancing. To illustrate how quickly this radical change is unfolding: the cost of sequencing the first human genome ten years ago was approximately $3 billion, yet in 2013 detailed digital genomes of individuals are expected to be available at a cost of only $1,000 per person.

At that price, according to experts, genomes will become routinely used in medical diagnoses, in the tailoring of pharmaceuticals to an individual’s genetic design, and for many other purposes. In the process, according to one genomic expert, “It will raise a host of public policy issues (privacy, security, disclosure, reimbursement, interpretation, counseling, etc.), all important topics for future discussions.” In the meantime, a British company announced in 2012 that it will imminently begin selling a small disposable gene-sequencing machine for less than $900.

For the first few years, the cost reduction curve for the sequencing of individual human genomes roughly followed the 50 percent drop every eighteen to twenty-four months that has long been measured by Moore’s Law. But at the end of 2007, the cost of sequencing began to drop at a significantly faster pace—in part because of the network effect, but mainly because multiple advances in sequencing technology allowed dramatic increases in the volume of DNA that can be quickly analyzed. Experts believe that these extraordinary cost reductions will continue at breakneck speed for the foreseeable future. As a result, some companies, including Life Technologies, are producing synthetic genomes on the assumption that the pace of discovery in genomics will continue to accelerate.
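
A back-of-the-envelope calculation, using only the figures cited above (a roughly $3 billion first genome and an expected $1,000 genome in 2013, about a decade apart), shows how far the actual drop has outrun a pure Moore’s Law pace; the sketch below is illustrative only:

    # Sketch using only the figures cited in the text: if sequencing costs had
    # merely halved every 18-24 months from ~$3 billion, a genome would still
    # cost tens of millions of dollars a decade later, not ~$1,000.
    START_COST = 3_000_000_000   # approximate cost of the first human genome
    YEARS = 10                   # approximate elapsed time cited in the text

    def cost_after(years: float, halving_period_months: float) -> float:
        """Cost after repeated halvings at the given period."""
        halvings = years * 12 / halving_period_months
        return START_COST / (2 ** halvings)

    for months in (18, 24):
        print(f"Moore's-Law pace ({months}-month halving): ${cost_after(YEARS, months):,.0f}")
    print("Figure cited in the text for 2013: $1,000")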

By contrast, the distillation of wisdom is a process that normally takes considerable time, and the molding of wisdom into accepted rules by which we can guide our choices takes more time still. For almost 4,000 years, since the introduction by Hammurabi of the first written set of laws, we have developed legal codes by building on precedents that we have come to believe embody the distilled wisdom of past judgments made well. Yet the great convergence in science being driven by the digitization of life—with overlapping and still accelerating revolutions in genetics, epigenetics, genomics, proteomics, microbiomics, optogenetics, regenerative medicine, neuroscience, nanotechnology, materials science, cybernetics, supercomputing, bioinformatics, and other fields—is presenting us with new capabilities faster than we can discern the deeper meaning and full implications of the choices they invite us to make.

For example, the impending creation of completely new forms of artificial life capable of self-replication should, arguably, be the occasion for a full discussion and debate about not only the risks, benefits, and appropriate safeguards, but also an exploration of the deeper implications of crossing such an epochal threshold. In the prophetic words of Teilhard de Chardin in the mid-twentieth century, “We may well one day be capable of producing what the Earth, left to itself, seems no longer able to produce: a new wave of organisms, an artificially provoked neo-life.”

The scientists who are working hard to achieve this breakthrough are understandably excited and enthusiastic, and the incredibly promising benefits expected to flow from their hoped-for accomplishment seem reason enough to proceed full speed ahead. As a result, it certainly seems timorous to even raise the sardonic question “What could go wrong?”

MORE THAN A little, it seems—or at least it seems totally reasonable to explore the question. Craig Venter, who had already made history by sequencing his own genome, made history again in 2010 by creating the first live bacteria made completely from synthetic DNA. Although some scientists minimized the accomplishment by pointing out that Venter had merely copied the blueprint of a known bacterium, and had used the empty shell of another as the container for his new life-form, others marked it as an important turning point.

In July 2012, Venter and his colleagues, along with a scientific team at Stanford, announced the completion of a software model containing all of the genes (525 of them—the smallest number known), cells, RNA, proteins, and metabolites (small molecules generated in cells) of an organism—a free-living microbe known as Mycoplasma genitalium. Venter is now working to create a unique artificial life-form in a project that is intended to discover the minimum amount of DNA information necessary for self-replication. “We are trying to understand the fundamental principles for the design of life, so that we can redesign it—in the way an intelligent designer would have done in the first place, if there had been one,” Venter said. His reference to an “intelligent designer” seems intended as implicit dismissal of creationism and reflects a newly combative attitude that many scientists have understandably come to feel is appropriate in response to the aggressive attacks on evolution by many fundamentalists.

One need not believe in a deity, however, in order to entertain the possibility that the web of life has an emergent holistic integrity featuring linkages we do not yet fully understand and which we might not risk disrupting if we did. Even though our understanding of hubris originated in ancient stories about the downfall of men who took for themselves powers reserved for the gods, its deeper meaning—and the risk it often carries—is rooted in human arrogance and pride, whether or not it involves an offense against the deity. As Shakespeare wrote, “The fault, dear Brutus, is not in our stars, but in ourselves.” For all of us, hubris is inherent in human nature. Its essence includes prideful overconfidence in the completeness of one’s own understanding of the consequences of exercising power in a realm that may well have complexities that still extend beyond the understanding of any human.

Nor is the posture of fundamentalism unique to the religious. Reductionism—the belief that scientific understanding is usually best pursued by breaking down phenomena into their component parts and subparts—has sometimes led to a form of selective attention that can cause observers to overlook emergent phenomena that arise in complex systems, and in their interaction with other complex systems.

One of the world’s most distinguished evolutionary biologists, E. O. Wilson, has been bitterly attacked by many of his peers for his proposal that Darwinian selection operates not only at the level of individual members of a species, but also at the level of “superorganisms”—by which he means that adaptations serving the interests of a species as a whole may be selected even if they do not enhance the prospects for survival of the individual creatures manifesting those adaptations. Wilson, a former Christian, is not proposing “intelligent design” of the sort believed in by creationists. He is, rather, asserting that there is another layer to the complexity of evolution that operates on an “emergent” level.

Francis Collins, a devout Christian who headed the U.S. government’s Human Genome Project (which announced its results at the same time that Craig Venter announced his), has bemoaned the “increasing polarization between the scientific and spiritual worldviews, much of it, I think, driven by those who are threatened by the alternatives and who are unwilling to consider the possibility that there might be harmony here.… We have to recognize that our understanding of nature is something that grows decade by decade, century by century.”

Venter, for his part, is fully confident that enough is already known to justify a large-scale project to reinvent life according to a human design. “Life evolved in a messy fashion through random changes over three billion years,” he says. “We are designing it so that there are modules for different functions, such as chromosome replication and cell division, and then we can decide what metabolism we want it to have.”

ARTIFICIAL LIFE

As with many of the startling new advances in the life sciences, the design and creation of artificial life-forms offers the credible promise of breakthroughs in health care, energy production, environmental remediation, and many other fields. One of the new products Venter and other scientists hope to create is synthetic viruses engineered to destroy or weaken antibiotic-resistant bacteria. These synthetic viruses—or bacteriophages—can be programmed to attack only the targeted bacteria, leaving other cells alone. They use sophisticated strategies not only to kill the targeted bacteria but also to hijack them, before they die, into replicating more copies of the synthetic virus, which go on killing other targeted bacteria until the infection subsides.

The use of new synthetic organisms for the acceleration of vaccine development is also generating great hope. These synthetic vaccines are being designed as part of the world’s effort to prepare for potential new pandemics like the bird flu (H5N1) of 2007 and the so-called swine flu (H1N1) of 2009. Scientists have been particularly concerned that the H5N1 bird flu is now only a few mutations away from developing an ability to pass from one human to another through airborne transmission.

The traditional process by which vaccines are developed requires a lengthy development, production, and testing cycle of months, not days, which makes it nearly impossible for doctors to obtain adequate supplies of the vaccine after a new mutant of the virus begins spreading. Scientists are using the tools of synthetic biology to accelerate the evolution of existing flu strains in the laboratory, and they hope to be able to predict which new strains are most likely to emerge. Then, by studying their blueprints, scientists hope to preemptively synthesize vaccines that will be able to stop whatever mutant of the virus subsequently appears in the real world and to stockpile supplies in anticipation of the new virus’s emergence. Disposable biofactories are being set up around the world to decrease the cost and time of manufacturing vaccines. It is now possible to set up a biofactory in a remote rural village where the vaccine is needed quickly to stop the spread of a newly discovered strain of virus or bacteria.

Some experts have also predicted that synthetic biology may supplant 15 to 20 percent of the global chemical industry within the next few years, producing many chemical products more cheaply than they can be extracted from natural sources, along with pharmaceuticals, bioplastics, and other new materials. Some predict that this new approach to chemical and pharmaceutical manufacturing will—by using the 3D printing technique described in Chapter 1—revolutionize the production process by utilizing a “widely dispersed” strategy. Since most of the value lies in the information, which can easily be transmitted to unlimited locations, the actual production process by which the information is translated into synthetic biology products can be located almost anywhere.

These and other exciting prospects expected to accompany the advances in synthetic biology and the creation of artificial life-forms have led many to impatiently dismiss any concerns about unwanted consequences. This impatience is not of recent vintage. Ninety years ago, English biochemist J. B. S. Haldane wrote an influential essay that provoked a series of futurist speculations about human beings taking active control of the future course of evolution. In an effort to place in context—and essentially dismiss—the widespread uneasiness about the subject, he wrote:

The chemical or physical inventor is always a Prometheus. There is no great invention, from fire to flying, which has not been hailed as an insult to some god. But if every physical and chemical invention is a blasphemy, every biological invention is a perversion. There is hardly one which, on first being brought to the notice of an observer from any nation which has not previously heard of their existence, would not appear to him as indecent and unnatural.

By contrast, Leon Kass, who chaired the U.S. Council on Bioethics from 2001 to 2005, has argued that the intuition or feeling that something is somehow repugnant should not be automatically dismissed as antiscientific: “In some crucial cases, however, repugnance is the emotional expression of deep wisdom, beyond reason’s power completely to articulate it.… We intuit and we feel, immediately and without argument, the violation of things that we rightfully hold dear.”

In Chapter 2, the word “creepy” was used by several observers of trends unfolding in the digital world, such as the ubiquitous tracking of voluminous amounts of information about most people who use the Internet. As others have noted, “creepy” is an imprecise word because it describes a feeling that itself lacks precision—not fear, but a vague uneasiness about something whose nature and implications are so unfamiliar that we feel the need to be alert to the possibility that something fearful or harmful might emerge. There is a comparably indeterminate “pre-fear” that many feel when contemplating some of the onrushing advances in the world of genetic engineering.

An example: a method for producing spider silk has been developed by genetic engineers who insert genes from orb-weaving spiders into goats, which then secrete the spider silk—along with milk—from their udders. Spider silk is incredibly useful because it is both elastic and, by weight, five times stronger than steel. The spiders themselves cannot be farmed because of their antisocial, cannibalistic nature. But the insertion of their silk-producing genes into the goats allows not only a larger volume of spider silk to be produced, but also the farming of the goats.§

In any case, there is no doubt that the widespread use of synthetic biology—and particularly the use of self-replicating artificial life-forms—could potentially generate radical changes in the world, including some potential changes that arguably should be carefully monitored. There are, after all, too many examples of plants and animals purposely introduced into a new, nonnative environment that then quickly spread out of control and disrupted the ecosystem into which they were introduced.

Kudzu, a Japanese plant that was introduced into my native Southern United States as a means of combating soil erosion, spread wildly and became a threat to native trees and plants. It became known as “the vine that ate the South.” Do we have to worry about “microbial kudzu” if a synthetic life-form capable of self-replication is introduced into the biosphere for specific useful purposes, but then spreads rapidly in ways that have not been predicted or even contemplated?

Often in the past, urgent questions raised about powerful new breakthroughs in science and technology have focused on potentially catastrophic disaster scenarios that turned out to be based more on fear than reason—when the questions that should have been pursued were about other more diffuse impacts. For example, on the eve of the 1954 hydrogen bomb test at Bikini Atoll, a few scientists raised the concern that the explosion could theoretically trigger a chain reaction in the ocean and create an unimaginable ecological Armageddon.

Their fearful speculation was dismissed by physicists who were confident that such an event was absurdly implausible. And of course it was. But other questions focused on deeper and more relevant concerns were not adequately dealt with at all. Would this thermonuclear explosion contribute significantly to the diversion of trillions of dollars into weaponry and further accelerate a dangerous nuclear arms race that threatened the survival of human civilization?

For the most part, fears of microbial kudzu (or of its microscopic mechanical counterparts—self-replicating nanobots, or so-called gray goo) are now often described as probably overblown, although Helen Wallace, executive director of the NGO watchdog organization GeneWatch, told The New York Times Magazine, “It’s almost inevitable that there will be some level of escape. The question is: Will those organisms survive and reproduce? I don’t think anyone knows.”

But what about other questions that may seem less urgent but may be more important in the long run: if we robosource life itself, and synthesize life-forms that are more suited to our design than the pattern that has been followed by life for 3.5 billion years, how is this new capability likely to alter our relationship to nature? How is it likely to change nature? Are we comfortable forging full speed ahead without any organized effort to identify and avoid potential outcomes that we may not like?

One concern that technologists and counterterrorism specialists have highlighted is the possibility of a new generation of biological weapons. After all, some of the early developments in genetic engineering, we now know, were employed by the Soviet Union in a secret biological weapons program forty years ago. If the exciting tools of the Digital Revolution have been weaponized for cyberwar, why would we not want some safeguards to prevent the same diversion of synthetic biology into bioweapons?

The New and Emerging Science and Technology (NEST) high-level expert group of the European Commission wrote in 2005 that “The possibility of designing a new virus or bacterium à la carte could be used by bioterrorists to create new resistant pathogenic strains or organisms, perhaps even engineered to attack genetically specific sub-populations.” In 2012, the U.S. National Science Advisory Board for Biosecurity attempted to halt the publication of two scientific research papers—one in Nature and the other in Science—that contained details on the genetic code of a mutated strain of bird flu that had been modified in an effort to determine what genetic changes could make the virus more readily transmissible among mammals.

Citing concerns about publishing the detailed design of a virus that was only a few mutations away from a form that could spread through human-to-human transmission, biosecurity officials tried to dissuade the scientists from publishing the full genetic sequence that accompanied their papers. Although publication was allowed to proceed after a full security review, the U.S. government remains actively involved in monitoring genetic research that could lead to new bioweapons. Under U.S. law, the FBI screens the members of research teams working on projects considered militarily sensitive.

HUMAN CLONING

Among the few lines of experiments specifically banned by the U.S. government are those involving federally funded research into the cloning of human beings. As vice president, not long after the birth of the first cloned sheep, Dolly, in 1996, when it became clear that human cloning was imminently feasible, I strongly supported this interim ban pending a much fuller exploration of the implications for humanity of proceeding down that path, and called for the creation of a new National Bioethics Advisory Commission to review the ethical, moral, and legal implications of human cloning.

A few years earlier, as chairman of the Senate Subcommittee on Science, I had pushed successfully for a commitment of 3 percent of the funding for the Human Genome Project to be allocated to the study of its ethical, legal, and social implications (the grants are now referred to as ELSI grants), in an effort to ensure careful study of the difficult questions that were emerging more quickly than their answers. This set-aside has become the largest government-financed research program into ethics ever established. James Watson, the co-discoverer of the double helix, who by then had been named to head the Genome Project, was enthusiastically supportive of the ethics program.

The ethics of human cloning has been debated almost since the very beginning of the DNA era. The original paper published by Watson and Crick in 1953 included this sentence: “It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.” As chairman of the Science Investigations Subcommittee in the U.S. House of Representatives, I conducted a series of hearings in the early 1980s about the emerging science of cloning, genetic engineering, and genetic screening. Scientists were focused at that stage on cloning animals, and fifteen years later they succeeded with Dolly. Since then, they have cloned many other livestock and other animals.

But from the start of their experiments, the scientists were clear that all of the progress they were making on the cloning of animals was directly applicable to the cloning of people—and that it was only ethical concerns that had prevented them from attempting such procedures. Since 1996, human cloning has been made illegal in almost every country in Europe, and the then director-general of the World Health Organization called the procedure “ethically unacceptable as it would violate some of the basic principles which govern medically assisted reproduction. These include respect for the dignity of the human being and the protection of the security of human genetic material.”

Nevertheless, most anticipate that with the passage of time, and further development and refinement of the technique, human cloning will eventually take place—at least in circumstances where a clear medical benefit can be gained without causing a clear form of harm to the individual who is cloned or to society at large. In 2011, scientists at the New York Stem Cell Foundation Laboratory announced that they had cloned human embryos by reprogramming an adult human egg cell, engineering it to return to its embryonic stage, and then created from it a line of identical embryonic stem cells that reproduced themselves. Although the DNA of these cells is not identical to that of the patient who donated the egg cell, they are identical to one another, which facilitates the efficacy of research carried out on them.

Several countries, including Brazil, Mexico, and Canada, have banned the cloning of human embryos for research. The United States has not done so, and several Asian countries seem to have far fewer misgivings about moving forward aggressively with the science of cloning human embryos—if not actual humans. From time to time, there are reports that one or another fertility doctor working at some secret laboratory, located in a nation without a ban on human cloning, has broken this modern taboo. Most if not all of these stories, however, have been suspected of being fraudulent. There has as yet been no confirmed birth of a human clone.

In general, those in favor of proceeding with experiments in human cloning believe that the procedure is not really different from other forms of technological progress, that it is inevitable in any case, and is significantly more promising than most experiments because of the medical benefits that can be gained. They believe that the decision on whether or not to proceed with a specific clone should—like a decision on abortion—be in the hands of the parent or parents.

Those who oppose cloning of people fear that its use would undermine the dignity of individuals and run the risk of “commoditizing” human beings. Cloning does theoretically create the possibility of mass-producing many genetic replicas of the same original—a process that would be as different from natural reproduction as manufacturing is from craftsmanship.

Some base their argument on religious views of the rights and protections due to every person, though many who oppose human cloning base their views not on religion, but on a more generalized humanist assertion of individual dignity. In essence, they fear that the manipulation of humanity might undermine the definition of those individuals who have been manipulated as fully human. This concern seems to rest, however, on an assumption that human beings are reducible to their genetic makeup—a view that is normally inconsistent with the ideology of those who make the protection of individual dignity a top priority.

Both the temporary delay in the public release of details concerning how to create dangerous mutations in the H5N1 bird flu virus and the temporary ban on government-funded research into human cloning represent rare examples of thoughtful—if controversial—oversight of potentially problematic developments in order to assess their implications for humanity as a whole. Both represented examples of U.S. leadership that led to at least a temporary global consensus. In neither case was there a powerful industry seeking to push forward quickly in spite of the misgivings expressed by representatives of the public.

ANTIBIOTICS BEFORE SWINE

Unfortunately, when there is a strong commercial interest in influencing governments to make a decision that runs roughshod over the public interest, business lobbies are often able to have their way with government—which once again raises the question: who is the “we” that makes decisions about the future course of the Life Sciences Revolution when important human values are placed at risk? In the age of Earth Inc., Life Inc., and the Global Mind, the record of decision making includes troubling examples of obsequious deference to the interests of multinational corporations and a reckless disregard of sound science.

Consider the shameful acquiescence by the U.S. Congress in the livestock industry’s continuing absurd dominance of antibiotic use in the United States. In yet another illustration of the dangerous imbalance of power in political decision making, a truly shocking 80 percent of all U.S. antibiotics are still allowed to be legally used on farms in livestock feed and for injections in spite of grave threats to human health. In 2012, the FDA began an effort to limit this use of antibiotics with a new rule that will require a prescription from veterinarians.

Since the discovery of penicillin by Alexander Fleming in 1928, antibiotics have become one of the most important advances in the history of health care. Although Fleming said his discovery was “accidental,” the legendary Irish scientist John Tyndall (who first discovered that CO2 traps heat) reported to the Royal Society in London in 1875 that a species of Penicillium had destroyed some of the bacteria he was working with, and Ernest Duchesne wrote in 1897 on the destruction of bacteria by another species of Penicillium. Duchesne had recommended research into his discovery but entered the army and went to war immediately after publication of his paper. He died of tuberculosis before he could resume his work.

In the wake of penicillin, which was not used in a significant way until the early 1940s, many other potent antibiotics were discovered in the 1950s and 1960s. In the last few decades, though, the discoveries have slowed to a trickle. The inappropriate and irresponsible use of this limited arsenal of life-saving antibiotics is rapidly eroding their effectiveness. Pathogens that are stopped by antibiotics mutate and evolve over time in ways that circumvent the effectiveness of the antibiotic.

Consequently, doctors and other medical experts have urged almost since the first use of these miracle cures that they be used sparingly and only when they are clearly needed. After all, the more they are used, the more opportunities the pathogens have to evolve through successive generations before they stumble upon new traits that make the antibiotics impotent. Some antibiotics have already become ineffective against certain diseases. And with the slowing discovery of new antibiotics, the potency of the ones we use in our current arsenal is being weakened at a rate that is frightening to many health experts. The effectiveness of our antibiotic arsenal—like topsoil and groundwater—can be depleted quickly but regenerated only at an agonizingly slow rate.

One of the most serious new “superbugs” is multidrug-resistant tuberculosis, which, according to Dr. Margaret Chan, director-general of the World Health Organization, is extremely difficult and expensive to treat. At present, 1.34 million people die from tuberculosis each year. Of the 12 million cases in 2010, Chan estimated that 650,000 involved strains of TB that were multidrug-resistant. The prospect of a “post-antibiotic world” means, Chan said, “Things as common as strep throat or a child’s scratched knee could once again kill.” In response to these concerns, the FDA formed a new task force in 2012 to support development of new antibacterial drugs.

But in spite of these basic medical facts, many governments—including, shockingly, the United States government—allow the massive use of antibiotics by the livestock industry as a growth stimulant. The mechanism by which the antibiotics cause a faster growth rate in livestock is not yet fully understood, but the impact on profits is very clear and sizable. The pathogens in the guts of the livestock are evolving quickly into superbugs that are immune to the impact of antibiotics. Since the antibiotics are given in subtherapeutic doses and are not principally used for the health of the livestock anyway, the livestock companies don’t particularly care. And of course, their lobbyists tendentiously dispute the science while handing out campaign contributions to officeholders.

Last year, scientists confirmed that a staphylococcus germ that was vulnerable to antibiotics jumped from humans to pigs whose daily food ration included tetracycline and methicillin. Then, the scientists confirmed that the same staph germ, after becoming resistant to the antibiotics, found a way to jump back from pigs into humans.

The particular staph germ that was studied—CC398—has spread in populations of pigs, chickens, and cattle. Careful analysis of the genetic structure of the germ proved that it was a direct descendant of an antibiotic-susceptible germ that originated in people. It is now present, according to the American Society for Microbiology, in almost half of all meat that has been sampled in the U.S. Although it can be killed with thorough cooking of the meat, it can nevertheless infect people if it cross-contaminates kitchen utensils, countertops, or pans.

Again, the U.S. government’s frequently obsequious approach to regulatory decision making when a powerful industry exerts its influence stands in stark contrast to the approach it takes when commercial interests are not yet actively engaged. In the latter case, it seems to be easier for government to sensitively apply the precautionary principle. But this controversy illustrates the former case: those who benefit from the massive and reckless use of antibiotics in the livestock industry have fought a rearguard action for decades and have thus far been successful in preventing a ban or even, until recently, a regulation limiting this insane practice.

The European Union has already banned antibiotics in livestock feed, but in a number of other countries the practice continues unimpeded. The staph germ that has jumped from people to livestock and back again is only one of many bacteria that are now becoming resistant to antibiotics because of our idiotic acceptance of the livestock industry’s insistence that it is perfectly fine for them to reduce some of their costs by becoming factories for turning out killer germs against which antibiotics have no effect. In a democracy that actually functioned as it is supposed to, this would not be a close question.

Legislators have also repeatedly voted down a law that would prevent the sale of animals with mad cow disease (bovine spongiform encephalopathy, or BSE)—a neurodegenerative brain disease caused by eating beef contaminated during the slaughtering process by brain or spinal cord tissue from an animal infected by the pathogen (a misfolded protein, or prion) that causes the disease. Animals with later stages of the disease can carry the prions in other tissues as well. When an animal on the way to the slaughterhouse begins stumbling, staggering, and falling down, it is fifty times more likely to have the disease.

The struggle in repeated votes in the Congress has been over whether those specific animals manifesting those specific symptoms should be diverted from the food supply. At least three quarters of the confirmed cases of mad cow disease in North America were traced to animals that had manifested those symptoms just before they were slaughtered. But the political power and lobbying muscle of the livestock industry has so intimidated and enthralled elected representatives in the U.S. that lawmakers have repeatedly voted to put the public at risk in order to protect a tiny portion of the industry’s profits. The Obama administration has issued a regulation that embodies the intent of the law rejected by Congress. However, because it is merely a regulation, it could be reversed by Obama’s successor as president. Again, in a working democracy, this would hardly be a close question.

THE INABILITY OF Congress to free itself from the influence of special interests has implications for how the United States can make the difficult and sensitive judgments that lie ahead in the Life Sciences Revolution. If the elected representatives of the people cannot be trusted to make even obvious choices in the public interest—as in the mad cow votes or the decisions on frittering away antibiotic resistance in order to enrich the livestock industry—then where else can these choices be made? Who else can make them? And even if such decisions are made sensitively and well in one country, what is to prevent the wrong decision being made elsewhere? And if the future of human heredity is affected in perpetuity, is that an acceptable outcome?

EUGENICS

The past record of decisions made by government about genetics is far from reassuring. History sometimes resembles Greek mythology: our past mistakes, like the gods’ prohibitions, mark important boundaries with warnings. The history of the eugenics movement 100 years ago provides such a warning: a profound misunderstanding of Darwinian evolution was used as the basis for misguided efforts by government to engineer the genetic makeup of populations according to racist and other unacceptable criteria.

In retrospect, the eugenics movement should have been vigorously condemned at the time—all the more so because of the stature of some of its surprising proponents. A number of otherwise thoughtful Americans came to support active efforts by their government to shape the genetic future of the U.S. population through the forcible sterilization of individuals who they feared would otherwise pass along undesirable traits to future generations.

In 1922, a “model eugenical sterilization law” (originally written in 1914) was published by Harry Laughlin, superintendent of the recently established “Eugenics Record Office” in New York State, to authorize sterilization of people regarded as

(1) Feeble-minded; (2) Insane, (including the psychopathic); (3) Criminalistic (including the delinquent and wayward); (4) Epileptic; (5) Inebriate (including drug-habitues); (6) Diseased (including the tuberculous, the syphilitic, the leprous, and others with chronic, infectious and legally segregable diseases); (7) Blind (including those with seriously impaired vision); (8) Deaf (including those with seriously impaired hearing); (9) Deformed (including the crippled); and (10) Dependent (including orphans, ne’er-do-wells, the homeless, tramps and paupers.)

Between 1907 and 1963, over 64,000 people were sterilized under laws similar to Laughlin’s design. He argued that such individuals were burdensome to the state because of the expense of taking care of them. He and others also made the case that the advances in sanitation, public health, and nutrition during the previous century had led to the survival of more “undesirable” people who were reproducing at rates not possible in the past.

What makes the list of traits in Laughlin’s “model law” bizarre as well as offensive is that he obviously believed they were heritable. Ironically, Laughlin was himself an epileptic; thus, under his own model legislation, he would have been a candidate for forced sterilization. Laughlin’s malignant theories also had an impact on U.S. immigration law. His work on evaluating recent immigrants from Southern and Eastern Europe was influential in shaping the highly restrictive quota system of 1924.

As Jonathan Moreno points out in his book The Body Politic, the eugenics movement was influenced by deep confusion over what evolution really means. The phrase “survival of the fittest” did not originate with Charles Darwin but with Herbert Spencer, whose rival theory of evolution was based on the crackpot ideas of Jean-Baptiste Lamarck; it was Darwin’s cousin Sir Francis Galton who coined the term “eugenics” and gave the movement its scientific veneer. Lamarck argued that characteristics developed by individuals after their birth were passed on to their offspring in the next generation.

A similar bastardization of evolutionary theory was also promoted in the Soviet Union by Trofim Lysenko, who was responsible for preventing the teaching of mainstream genetics during the three decades of his reign over Soviet science. Geneticists who disagreed with Lysenko were secretly arrested; some were found dead in unexplained circumstances. Lysenko’s warped ideology demanded that biological theory conform with Soviet agricultural needs, much as some U.S. politicians today insist that climate science be changed to conform with their desire to promote the unrestrained burning of oil and coal.

Darwin actually taught that it was not necessarily the “fittest” who survived, but rather those that were best adapted to their environments. Nevertheless, the twisted and mistaken version of Darwin’s theory reflected in Spencer’s formulation helped to give rise to the notion of Social Darwinism, which led, in turn, to misguided policy debates that in some respects continue to this day.

Some of the early progressives were seduced by this twisted version of Darwin’s theory into believing that the state had an affirmative duty to check the spread of unfavorable Lamarckian traits, traits they mistakenly believed were becoming more common because prior state interventions had made life easier for these “undesirables” and enabled them to multiply.

The same flawed assumptions led those on the political right to a different judgment: the state should pull back from all those policy interventions that had, in the name of what they felt was misguided compassion, led to the proliferation of “undesirables” in the first place. There were quite a few reactionary advocates of eugenics, and at least one organization from that era survives into the twenty-first century: the Pioneer Fund, described as a hate group by the Southern Poverty Law Center. Its founding president, incidentally, was none other than Harry Laughlin.

Eugenics also found support, historians say, because of the socioeconomic turmoil of the first decades of the twentieth century—rapid industrialization and urbanization, the disruption of long familiar social patterns, waves of immigration, and economic stress caused by low wages and episodic high unemployment. These factors combined with a new zeal for progressive reform to produce a wildly distorted view of what was appropriate by way of state intervention in heredity.

This episode is now regarded as horribly unethical, in part because thirty years after it began, the genocidal crimes of Adolf Hitler discredited all race-based, and many genetics-based, theories that were even vaguely similar to those of Nazism. Nevertheless, some of the subtler lessons of the eugenics travesty have not yet been incorporated into the emerging debate over current proposals that some have labeled “neo-eugenics.”

One of the greatest challenges facing democracies in this new era is how to ensure that policy decisions involving cutting-edge science are based on a clear and accurate understanding of the science involved. In the case of eugenics, the basic misconception, traceable to Lamarck, about what is and is not heritable contributed to an embarrassing and deeply immoral policy that might have been avoided if policymakers and the general public had been debating on the basis of accurate science.

It is worth noting that almost a century after the eugenics tragedy, approximately half of all Americans still say they do not believe in evolution. The judgments that must be made within the political system of the United States in the near future—and in other countries—are difficult enough even when based on an accurate reading of the science. When this inherent difficulty is compounded by flawed assumptions concerning the science that gives rise to the need to make these decisions, the vulnerability to mistaken judgments goes up accordingly.

As will be evident in the next chapter, the decisions faced by civilization where global warming is concerned are likewise difficult enough when they are based on an accurate reading of the science. But when policymakers base arguments on gross misrepresentations of the science, the degree of difficulty goes up considerably. When gross and willful misunderstandings of the science are intentionally created and reinforced by large carbon polluters who wish to paralyze the debate over how to reduce CO2 emissions, they are, in my opinion, committing a nearly unforgivable crime against democracy and against the future well-being of the human species.

In a 1927 opinion by Justice Oliver Wendell Holmes Jr., the U.S. Supreme Court upheld one of the more than two dozen state eugenics laws. The case, Buck v. Bell, involved the forcible sterilization of a young Virginia woman who was allegedly “feeble-minded” and sexually promiscuous. Under the facts presented to the court, the young woman, Carrie Buck, had already had a child at the age of seventeen. In affirming the state’s right to perform the sterilization, Holmes wrote: “Society can prevent those who are manifestly unfit from continuing their kind.… Three generations of imbeciles are enough.”

A half century after the Supreme Court decision, which has never been overturned, the director of the hospital where Buck had been forcibly sterilized tracked her down when she was in her eighties. He found that, far from being an “imbecile,” Buck was lucid and of normal intelligence. Upon closer examination of the facts, it became obvious that they were not as represented in court. Young Carrie Buck was a foster child who had been raped by a nephew of one of her foster parents, who then committed her to the Virginia State Colony for Epileptics and Feebleminded in order to avoid what they feared would otherwise be a scandal.

As it happens, Carrie’s mother, Emma Buck—the first of the three generations referred to by Justice Holmes—had also been committed to the same asylum under circumstances that are not entirely clear, although testimony indicated that she had syphilis and was unmarried when she gave birth to Carrie. In any case, the superintendent of the Virginia Colony, Albert Priddy, was eager to find a test case that could go to the Supreme Court and provide legal cover for the forced sterilizations that his and other institutions already had under way. He declared Buck “congenitally and incurably defective”; Buck’s legal guardian picked a lawyer to defend her in the case who was extremely close to Priddy and a close friend since childhood to the lawyer for the Colony, a eugenics and sterilization advocate (and former Colony director) named Aubrey Strode.

Historian Paul Lombardo of Georgia State University, author of an extensively researched book on the case, concluded that the entire proceeding was “based on deceit and betrayal.… The fix was in.” Buck’s appointed defense counsel put forward no witnesses and no evidence, and conceded the description of his client as a “middle-grade moron.” Harry Laughlin, who had never met Carrie Buck, her mother, or her daughter, testified to the court in a written statement that all three were part of the “shiftless, ignorant, and worthless class of anti-social whites of the South.”

As for the third generation of Bucks, Carrie’s daughter, Vivian, was examined when she was a few weeks old by a nurse who testified: “There is a look about it that is not quite normal.” The baby girl was taken from her family and given to the family of Carrie’s rapist. After making the honor roll in school, Vivian died of measles in the second grade. Incidentally, Carrie’s sister, Doris, was also sterilized at the same institution (more than 4,000 sterilizations were performed there), though doctors lied to her at the time, telling her the operation was for appendicitis. Like Carrie, Doris did not learn until much later in her life why she was unable to have children.

The “model legislation” put forward by Laughlin, which was the basis for the Virginia statute upheld by the Supreme Court, was soon thereafter used by the Third Reich as the basis for its sterilization of more than 350,000 people, just as the psychology-based marketing text written by Edward Bernays was used by Goebbels in designing the propaganda program surrounding the launch and prosecution of Hitler’s genocide. In 1936, the University of Heidelberg awarded Laughlin an honorary degree for his work in the “science of racial cleansing.”

Shamefully, eugenics was supported by, among others, President Woodrow Wilson, Alexander Graham Bell, Margaret Sanger, who founded the movement in favor of birth control—an idea that was, at the time, more controversial than eugenics—and by Theodore Roosevelt after he left the White House. In 1913, Roosevelt wrote in a letter,

It is really extraordinary that our people refuse to apply to human beings such elementary knowledge as every successful farmer is obliged to apply to his own stock breeding. Any group of farmers who permitted their best stock not to breed, and let all the increase come from the worst stock, would be treated as fit inmates for an asylum. Yet we fail to understand that such conduct is rational compared to the conduct of a nation which permits unlimited breeding from the worst stocks, physically and morally, while it encourages or connives at the cold selfishness or the twisted sentimentality as a result of which the men and women who ought to marry, and if married have large families, remain celibates or have no children or only one or two.

Sanger, for her part, disagreed with the methods of eugenics advocates, but nevertheless wrote that they were working toward a goal she supported: “To assist the race toward the elimination of the unfit.” One of Sanger’s own goals in promoting contraception, she wrote in 1919, was, “More children from the fit, less from the unfit—that is the chief issue of birth control.”

The United States is not the only democratic nation with a troubling history of forced sterilization. Between 1935 and 1976, Sweden forcibly sterilized more than 60,000 people, including “mixed-race individuals, single mothers with many children, deviants, gypsies and other ‘vagabonds.’ ” For forty years, from 1972 to 2012, Sweden required sterilization before a transgendered person could officially change his or her gender identification on government identification documents. However, the Stockholm Administrative Court of Appeal found the law unconstitutional in December 2012. Sixteen other European countries continue to have similar laws on the books, including France and Italy. Only a few countries are considering revisions to the laws, despite the lack of any scientific or medical basis for them.

In Uzbekistan, forced sterilizations apparently began in 2004 and became official state policy in 2009. Gynecologists are given a quota of the number of women per week they are required to sterilize. “We go from house to house convincing women to have the operation,” said a rural surgeon. “It’s easy to talk a poor woman into it. It’s also easy to trick them.”

In China, the issue of forced abortions has resurfaced with the allegations by escaped activist Chen Guangcheng, but the outgoing premier Wen Jiabao has publicly called for a ban not only on forced abortion but also on “fetus gender identification.” Nevertheless, many women who have abortions in China are also sterilized against their will. In India, although forcible sterilization is illegal, doctors and government officials are paid a bonus for each person who is sterilized. These incentives apparently lead to widespread abuses, particularly in rural areas where many women are sterilized under false pretenses.

The global nature of the revolution in biotechnology and the life sciences—like the new global commercial realities that have emerged with Earth Inc.—means that any single nation’s moral, ethical, and legal judgments may not have much impact on the practical decisions of other nations. Some general rules about what is acceptable, what is worthy of extra caution, and what should be prohibited have been tentatively observed, but there is no existing means for arriving at universal moral judgments about these new unfolding capabilities.

CHINA AND THE LIFE SCIENCES

As noted earlier, China appears determined to become the world’s superpower in the application of genetic and life science analysis. The Beijing Genomics Institute (BGI), which is leading China’s commitment to genomic analysis, has already completed the full genomes of fifty animal and plant species, including silk worms, pandas, honeybees, rice, soybeans, and others—along with more than 1,000 species of bacteria. But China’s principal focus seems to be on what is arguably the most important, and certainly the most intriguing, part of the human body that can be modified by the new breakthroughs in life sciences and related fields: the human brain and the enhancement and more productive use of human intelligence.

Toward this end, in 2011 the BGI established China’s National Gene Bank in Shenzhen, where it has been seeking to identify which genes are involved in determining intelligence. It is conducting a complete genomic analysis of 2,000 Chinese schoolchildren (1,000 prodigies from the nation’s best schools, and 1,000 children considered of average intelligence) and matching the results with their achievements in school.

In the U.S., such a study would be extremely controversial, partly because of residual revulsion at the eugenics scandal, and partly because of a generalized wariness about linking intelligence to family heritage in any society that values egalitarian principles. In addition, many biologists, including Francis Collins, who succeeded James Watson as the leader of the Human Genome Project, have said that it is currently scientifically impossible in any case to link genetic information about a child to intelligence. However, some researchers disagree and believe that eventually genes associated with intelligence may well be identified.

Meanwhile, advances in mapping the neuronal connections of the human brain continue to come significantly faster than the rate of progress described by Moore’s Law in the manufacturing of integrated circuits. Already, the connectome of a species of nematode, which has only 302 neurons, has been completed. Nevertheless, with an estimated 100 billion neurons in an adult human brain and at least 100 trillion synaptic connections, the challenge of fully mapping the human connectome is a daunting one. And even then, the work of understanding the human brain’s functioning will have barely begun.

In that regard, it is worth remembering that after the completion of the first full sequencing of the human genome, scientists immediately realized that the map of genes was only their introduction to the even larger task of mapping all of the proteins that are expressed by the genes—which themselves adopt multiple geometric forms and are subject to significant biochemical modifications after they are translated by the genes.

In the same way, once the connectome is completed, brain scientists will have to turn to the role of proteins in the brain. As David Eagleman, a neuroscientist at the Baylor College of Medicine in Houston, puts it, “Neuroscience is obsessed with neurons because our best technology allows us to measure them. But each individual neuron is in fact as complicated as a city, with millions of proteins inside of it, trafficking and interacting in extraordinarily complex biochemical cascades.”

Still, even at this early stage in the new Neuroscience Revolution, scientists have learned how to selectively activate specific brain systems. Exploiting advances in the new field of optogenetics, scientists identify opsins, light-sensitive proteins from green algae (or bacteria), and insert their corresponding genes into cells, where the opsins become optical switches for neurons. By also inserting genes for other proteins that glow green, researchers have been able to switch a neuron on and off with blue light and then observe its effects on other neurons under green light. The science of optogenetics has quickly advanced to the point where researchers are able to use these switches to manipulate the behavior and feelings of mice by controlling the flow of ions (charged particles) into neurons, effectively turning them on and off at will. One of the promising applications may be the control of symptoms associated with Parkinson’s disease.

Other scientists have inserted multiple genes from jellyfish and coral that produce different fluorescent colors—red, blue, yellow, and gradations in between—into many neurons in a process that then allows the identification of different categories of neurons by having each category light up in a different color. This so-called “brainbow” allows a much more detailed visual map of neuronal connections. And once again, the Global Mind has facilitated the emergence of a powerful network effect in brain research. When a new element of the brain’s intricate circuitry is deciphered, the knowledge is widely dispersed to other research teams whose work in decoding other parts of the connectome is thereby accelerated.

WATCHING THE BRAIN THINK

Simultaneously, a completely different approach to studying the brain, functional magnetic resonance imaging (fMRI), has led to exciting new discoveries. This technique, which is based on the more familiar MRI scans of body parts, tracks the flow of blood to neurons in the brain when they fire. When neurons are active, they take in blood containing the oxygen and glucose needed for energy. Since there is a slight difference in magnetization between oxygenated blood and oxygen-depleted blood, the scanning machine can identify which areas of the brain are active at any given moment.

By correlating the images made by the machine with the subjective descriptions of thoughts or feelings reported by the individual whose brain is being scanned, scientists have been able to make breakthrough discoveries about where specific functions are located in the brain. This technique is now so far advanced that experienced teams can actually identify specific thoughts by seeing the “brain prints” associated with those thoughts. The word “hammer,” for example, has a distinctive brain print that is extremely similar in almost everyone, regardless of nationality or culture.
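
To make the idea of a “brain print” more concrete, here is a minimal sketch, in Python and with entirely synthetic data, of the kind of pattern classification such teams rely on. The voxel count, the word list, and the model are illustrative assumptions rather than the actual pipeline used in any particular study; the point is simply that a classifier trained on labeled activity patterns can then guess which word a new scan corresponds to.

```python
# Illustrative sketch only: a pattern classifier standing in for fMRI
# "brain print" decoding. All data here are synthetic; real studies work
# with preprocessed scans and carefully cross-validated models.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

N_VOXELS = 500                        # hypothetical number of voxels per scan
WORDS = ["hammer", "house", "apple"]

# Pretend each word evokes a characteristic activity pattern plus noise.
prototypes = {w: rng.normal(size=N_VOXELS) for w in WORDS}

def simulate_scan(word):
    """Return a noisy copy of the word's prototype activity pattern."""
    return prototypes[word] + rng.normal(scale=0.8, size=N_VOXELS)

# Training data: many labeled scans per word.
X = np.array([simulate_scan(w) for w in WORDS for _ in range(40)])
y = np.array([w for w in WORDS for _ in range(40)])

classifier = LinearSVC().fit(X, y)

# A new, unlabeled scan: the classifier guesses which word was being thought of.
print(classifier.predict([simulate_scan("hammer")]))   # expected: ['hammer']
```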

One of the most startling examples of this new potential was reported in 2010 by neuroscientist Dr. Adrian Owen, when he was at the University of Cambridge in England. Owen performed fMRI scans on a young woman who was in a vegetative state with no discernible sign of consciousness and asked her questions while she was being scanned. He began by asking her to imagine playing tennis, and then asking her to imagine walking through her house. Scientists have established that people who think about playing tennis demonstrate activity in a particular part of the motor cortex portion of the brain, the supplementary motor area. Similarly, when people think about walking through their own home, there is a recognizable pattern of activity in the center of the brain in an area called the parahippocampal gyrus.

After observing that the woman responded to each of these questions by manifesting exactly the brain activity one would expect from someone who is conscious, the doctor then used these two questions as a way of empowering the young woman to “answer” either “yes” by thinking about playing tennis, or “no” by imagining a stroll through her house. He then patiently asked her a series of questions about her life, the answers to which were not known to anyone on the medical team. She answered virtually all of the questions correctly, leading Owen to conclude that she was in fact conscious. After continuing his experiments with many other patients, Owen speculated that as many as 20 percent of those believed to be in vegetative states may well be conscious with no way of connecting to others. Owen and his team are now using noninvasive electroencephalography (EEG) to continue this work.

Scientists at Dartmouth College are also using an EEG headset to interpret thoughts and connect them to an iPhone, allowing the user to select pictures that are then displayed on the iPhone’s screen. Because the EEG sensors are attached to the outside of the head, the device has more difficulty interpreting the electrical signals generated inside the skull, but the researchers are making impressive progress.

A LOW-COST HEADSET developed some years ago by an Australian game company, Emotiv, translates brain signals and uses them to empower users to control objects on a computer screen. Neuroscientists believe that these lower-cost devices are measuring “muscle rhythms rather than real neural activity.” Nevertheless, scientists and engineers at IBM’s Emerging Technologies lab in the United Kingdom have adapted the headset to allow thought control of other electronic devices, including model cars, televisions, and switches. In Switzerland, scientists at the Ecole Polytechnique Fédérale de Lausanne (EPFL) have used a similar approach to build wheelchairs and robots controlled by thoughts. Four other companies, including Toyota, have announced they are developing a bicycle whose gears can be shifted by the rider’s thoughts.
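
For readers curious about what “translating brain signals” can mean at its simplest, the sketch below, in Python, turns a one-second window of a scalp signal into a binary command by comparing the signal’s power in one frequency band against a threshold. The sampling rate, frequency band, and threshold are invented for illustration and are not drawn from Emotiv’s, IBM’s, or EPFL’s actual systems, which add filtering, artifact rejection, and per-user calibration; and as noted above, consumer headsets may be picking up muscle rhythms rather than true neural activity.

```python
# Illustrative sketch only: mapping a raw scalp signal to an on/off command
# by measuring relative power in one frequency band. All parameters below
# are assumptions chosen for the example, not values from any real headset.
import numpy as np

FS = 256            # samples per second (assumed)
BAND = (8.0, 12.0)  # the "alpha" band, in Hz, a common choice in EEG work
THRESHOLD = 2.0     # relative power above which we issue the "on" command

def band_power(signal, fs, low, high):
    """Average spectral power of `signal` between `low` and `high` Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].mean()

def decode_command(window):
    """Map one second of signal to 'switch on' or 'do nothing'."""
    relative = band_power(window, FS, *BAND) / band_power(window, FS, 1.0, 40.0)
    return "switch on" if relative > THRESHOLD else "do nothing"

# Example: a synthetic one-second window dominated by a 10 Hz rhythm.
t = np.arange(FS) / FS
window = np.sin(2 * np.pi * 10 * t) + 0.2 * np.random.randn(FS)
print(decode_command(window))   # expected: "switch on"
```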

Gerwin Schalk and Anthony Ritaccio, at the Albany Medical Center, are working under a multimillion-dollar grant from the U.S. military to design and develop devices that enable soldiers to communicate telepathically. Although this seems like something out of a science fiction story, the Pentagon believes that these so-called telepathy helmets are sufficiently feasible that it is devoting more than $6 million to the project. The target date for completion of the prototype device is 2017.

“TRANSHUMANISM” AND THE “SINGULARITY”

If such a technology is perfected, it is difficult to imagine where more sophisticated later versions of it would lead. Some theorists have long predicted that the development of a practical way to translate human thoughts into digital patterns that can be deciphered by computers will inevitably lead to a broader convergence between machines and people that goes beyond cyborgs to open the door on a new era characterized by what they call “transhumanism.”

According to Nick Bostrom, the leading historian of transhumanism, the term was apparently coined by Aldous Huxley’s brother, Julian, a distinguished biologist, environmentalist, and humanitarian, who wrote in 1927, “The human species can, if it wishes, transcend itself—not just sporadically, an individual here in one way, an individual there in another way—but in its entirety, as humanity. We need a name for this new belief. Perhaps transhumanism will serve: man remaining man, but transcending himself, by realizing new possibilities of and for his human nature.”

The idea that we as human beings are not an evolutionary end point, but are destined to evolve further—with our own active participation in directing the process—is an idea whose roots are found in the intellectual ferment following the publication of Darwin’s On the Origin of Species, a ferment that continued into the twentieth century. This speculation led a few decades later to the discussion of a new proposed endpoint in human evolution—the “Singularity.”

First used by Teilhard de Chardin, the term “Singularity” describes a future threshold beyond which artificial intelligence will exceed that of human beings. Vernor Vinge, a California mathematician and computer scientist, captured the idea succinctly in a paper published twenty years ago, entitled “The Coming Technological Singularity,” in which he wrote, “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.”

In the current era, the idea of the Singularity has been popularized and enthusiastically promoted by Dr. Ray Kurzweil, a polymath, author, inventor, and futurist (and cofounder with Peter Diamandis of the Singularity University at the NASA Research Park in Moffett Field, California). Kurzweil envisions, among other things, the rapid development of technologies that will facilitate the smooth and complete translation of human thoughts into a form that can be comprehended by and contained in advanced computers. Assuming that these breakthroughs ever do take place, he believes that in the next few decades it will be possible to engineer the convergence of human intelligence—and even consciousness—with artificial intelligence. He recently wrote, “There will be no distinction, post-Singularity, between human and machine or between physical and virtual reality.”

Kurzweil is seldom reluctant to advance provocative ideas simply because many other technologists view them as outlandish. Another close friend, Mitch Kapor, also a legend in the world of computing, has challenged Kurzweil to a $20,000 bet (to be paid to a foundation chosen by the winner) involving what is perhaps the most interesting long-running debate over the future capabilities of computers, the Turing Test. Named after the legendary pioneer of computer science Alan Turing, who first proposed it in 1950, the Turing Test has long served as a proxy for determining when computers will achieve human-level intelligence. If after conversing in writing with two interlocutors, a human being and a computer, a person cannot determine which is which, then the computer passes the test. Kurzweil has asserted that a computer will pass the Turing Test by the end of 2029. Kapor, who believes that human intelligence will forever be organically distinctive from machine-based intelligence, disagrees. The potential Singularity, however, poses a different challenge.

More recently, the silicon version of the Singularity has been met by a competitive challenge from some biologists who believe that genetic engineering of brains may well produce an “Organic Singularity” before the computer-based “Technological Singularity” is ever achieved. Personally, I don’t look forward to either one, although my uneasiness may simply be an illustration of the difficult thinking that all of us have in store as these multiple revolutions speed ahead at an ever accelerating pace.

THE CREATION OF NEW BODY PARTS

Even though the merger between people and machines may remain in the realm of science fiction for the foreseeable future, the introduction of mechanical parts as replacements for components of the human body is moving forward quickly. Prosthetics are now being used to replace not only hips, knees, legs, and arms, but also eyes and other body parts that have not previously been replaceable with artificial substitutes. Cochlear implants, as noted, are used to restore hearing. Several research teams have been developing mechanical exoskeletons to enable paraplegics to walk and to confer additional strength on soldiers and others who need to carry heavy loads. Most bespoke in-ear hearing aids are already made with 3D printers. The speed with which 3D printing is advancing makes it inevitable that many other prosthetics will soon be printed.

In 2012, doctors and technologists in the Netherlands used a 3D printer (described in Chapter 1) to fabricate a lower jaw out of titanium powder for an elderly woman who was not a candidate for traditional reconstructive surgery. The jaw was designed in a computer with articulated joints that match a real jaw, grooves to accommodate the regrowth of veins and nerves, and precisely designed depressions for her muscles to be attached to it. And of course, it was sized to perfectly fit the woman’s face.

Then, the 3D digital blueprint was fed into the 3D printer, which laid down titanium powder one ultrathin layer at a time (thirty-three layers per millimeter, or roughly thirty micrometers per layer), fusing each new layer with a laser beam, in a process that took just a few hours. According to the woman’s doctor, Dr. Jules Poukens of Hasselt University, she was able to use the printed jaw normally after awakening from her surgery, and one day later was able to swallow food.

The 3D printing of human organs is not yet feasible, but the emerging possibility has already generated tremendous excitement in the field of transplantation because of the current shortage of organs. However, well before the 3D printing of organs becomes feasible, scientists hope to develop the ability to generate replacement organs in the laboratory for transplantation into humans. Early versions of so-called exosomatic kidneys (and livers) are now being grown by regenerative medicine scientists at Wake Forest University. This emerging potential for people to grow their own replacement organs promises to transform the field of transplantation.

Doctors at the Karolinska Institute in Stockholm have already created and successfully transplanted a replacement windpipe by inducing the patient’s own cells to regrow in a laboratory on a special plastic “scaffolding” that precisely copied the size and shape of the windpipe it replaced. A medical team in Pittsburgh has used a similar technique to grow a quadriceps muscle for a soldier who lost his original thigh muscle to an explosion in Afghanistan, by implanting into his leg a scaffold made from a pig’s urinary bladder (stripped of living cells), which stimulated his stem cells to rebuild the muscle tissue as they sensed the matrix of the scaffolding being broken down by the body’s immune system. Scientists at MIT are developing silicon nanowires a thousand times smaller than a human hair that can be embedded in these scaffolds and used to monitor how the regrown organs are performing.

As one of the authors of the National Organ Transplant Act in 1984, I learned in congressional hearings about the problems of finding enough organ donors to meet the growing need for transplantation. And having sponsored the ban on buying and selling organs, I remain unconvinced by the argument that this legal prohibition (which the U.S. shares with all other countries besides Iran) should be removed. The potential for abuse is already obvious in the disturbing black market trade in organs and tissues from people in poor countries for transplantation into people living in wealthy countries.

Pending the development of artificial and regenerated replacement organs, Internet-based tools, including social media, are helping to address the challenge of finding more organ donors and matching them with those who need transplants. In 2012, The New York Times’s Kevin Sack reported on a moving example of how sixty different people became part of “the longest chain of kidney transplants ever constructed.” Recently, Facebook announced the addition of “organ donor” as one of the items to be updated on the profiles of its users.

Another 3D printing company, Bespoke Innovations of San Francisco, is using the process to print more advanced artificial limbs. Other firms are using it to make numerous medical implants. There is also a well-focused effort to develop the capacity to print vaccines and pharmaceuticals from basic chemicals on demand. Professor Lee Cronin of the University of Glasgow, who leads one of the teams focused on the 3D printing of pharmaceuticals, said recently that the process they are working on would place the molecules of common elements and compounds used to formulate pharmaceuticals into the equivalent of the cartridges that feed different color inks into a conventional 2D printer. With a manageably small group of such cartridges, Cronin said, “You can make any organic molecule.”

One of the advantages, of course, is that this process would make it possible to transmit the 3D digital formula for a pharmaceutical or vaccine to widely dispersed 3D printers around the world, so that the drug could be manufactured on site, with negligible incremental cost for tailoring it to each individual patient.

The pharmaceutical industry relied historically on large centralized manufacturing plants because its business model was based on the idea of a mass market, within which large numbers of people were provided essentially the same product. However, the digitization of human beings and molecular-based materials is producing such an extraordinarily high volume of differentiating data about both people and things that it will soon no longer make sense to lump people together and ignore medically significant information about their differences.

Our new prowess in manipulating the microscopic fabric of our world is also giving us the ability to engineer nanoscale machines for insertion into the human body—with some active devices the size of living cells that can coexist with human tissue. One team of nanotechnologists at MIT announced in 2012 that they have successfully built “nanofactories” that are theoretically capable of producing proteins while inside the human body when they are activated by shining a laser light on them from outside the body.

Specialized prosthetics for the brain are also being developed. Alongside pacemakers for hearts, comparable devices can now be inserted into brains to compensate for damage and disorders. Doctors are already beginning to implant computer chips and digital devices on the surface of the brain and, in some cases, deeper within the brain. By cutting a hole in the skull and placing a chip that is wired to a computer directly on the surface of the brain, doctors have empowered paralyzed patients with the ability to activate and direct the movement of robots with their thoughts. In one widely seen demonstration, a paralyzed patient was able to direct a robot arm to pick up a cup of coffee, move it close to her lips, and insert the straw between her lips so she could take a sip.

Experts believe that it is only a matter of time before the increased computational power and the reduction in size of the computer chips will make it possible to dispense with the wires connecting the chip to a computer. Scientists and engineers at the University of Illinois, the University of Pennsylvania, and New York University are working to develop a new form of interface with the brain that is flexible enough to stretch in order to fit the contours of the brain’s surface. According to the head of R&D at GlaxoSmithKline, Moncef Slaoui, “The sciences that underpin bioelectronics are proceeding at an amazing pace at academic centers around the world but it is all happening in separate places. The challenge is to integrate the work—in brain-computer interfaces, materials science, nanotechnology, micro-power generation—to provide therapeutic benefit.”

Doctors at Tel Aviv University have equipped a rat with an artificial cerebellum, attached to its brain stem, that interprets information coming from the rest of the rat’s body. By using this information, they are able to stimulate motor neurons to move the rat’s limbs. Although the work is at an early stage, experts in the field believe that it is only a matter of time before artificial versions of entire brain subsystems are built. Francisco Sepulveda, at the University of Essex in the U.K., said that the complexity of the challenge is daunting but that scientists see a clear pathway to success. “It will likely take us several decades to get there, but my bet is that specific, well-organized brain parts such as the hippocampus or the visual cortex will have synthetic correlates before the end of the century.”

Well before the development of a synthetic brain subsystem as complex as the hippocampus or visual cortex, other so-called neuroprosthetics are already being used in humans, including prosthetics for bladder control, relief of spinal pain, and the remediation of some forms of blindness and deafness. Other neuroprosthetics expected to be introduced in the near future will, according to scientists, be able to stimulate particular parts of the brain to enhance focus and concentration, or, with the flip of a switch, to stimulate the neural connections associated with “practice” in order to enhance the ability of a stroke victim to learn how to walk again.

“MODIFY THE KID”

As implants, prosthetics, neuroprosthetics, and other applications of cybernetics continue to improve, the discussion about their implications has broadened from their use as therapeutic, remedial, and reparative devices to include the implications of using prosthetics that enhance humans. For example, the brain implants described above that help stroke victims learn more quickly how to walk again could also be used by healthy people to enhance concentration at times of their choosing, whether to learn a brand-new skill or to sharpen focus when they feel it is particularly important.

The temporary enhancement of mental performance through the use of pharmaceuticals has already begun, with an estimated 4 percent of college students routinely using attention-focusing medications like Adderall, Ritalin, and Provigil to improve their test scores on exams. Studies at some schools found rates as high as 35 percent. After an in-depth investigation of the use of these drugs in high schools, The New York Times reported that there was “no reliable research” on which to base a national estimate, but that a survey of more than fifteen schools with high academic standards yielded an estimate from doctors and students that the percentage of students using these substances “ranges from 15 percent to 40 percent.”

The Times went on to report, “One consensus was clear: users were becoming more common … and some students who would rather not take the drugs would be compelled to join them because of the competition over class rank and colleges’ interest.” Some doctors who work with low-income families have started prescribing Adderall for children to help them compensate for the advantages that children from wealthy families have. One of them, Dr. Michael Anderson, of Canton, Georgia, told the Times he thinks of it as “evening the scales a little bit.… We’ve decided as a society that it’s too expensive to modify the kid’s environment. So we have to modify the kid.”

A few years ago, almost 1,500 people working as research scientists at institutions in more than sixty countries responded to a survey on the use of brain-enhancing pharmaceuticals. Approximately 20 percent said that they had indeed used such drugs, with the majority saying they felt they improved their memory and ability to focus. Although inappropriate use and dangerous overuse of these substances has caused doctors to warn about risks and side effects, scientists are working on new compounds that carry the promise of actually boosting intelligence. Some predict that the use of the improved intelligence-enhancement drugs now under development may well become commonplace and carry as little stigma as cosmetic surgery does today. The U.S. Defense Advanced Research Projects Agency is experimenting with a different approach to enhance concentration and speed the learning of new skills, by using small electrical currents applied from outside the skull to the part of the brain used for object recognition in order to improve the training of snipers.

ENHANCING PERFORMANCE

At the 2012 Olympics, South Africa’s Oscar Pistorius made history as the first double amputee track athlete ever to compete. Pistorius, who was born with no fibulas in his lower legs, both of which were amputated before he was one year old, learned to run on prosthetics. He competed in the 400-meter sprint, where he reached the semifinals, and the 4 × 400 relay, in which the South African team reached the finals.

Some of Pistorius’s competitors expressed concern before the games that the flexible blades attached to his prosthetic lower legs actually gave him an unfair advantage. The retired world record holder in the 400-meter sprint, Michael Johnson, said, “Because we don’t know for sure whether he gets an advantage from the prosthetics, it is unfair to the able-bodied competitors.”

Because of his courage and determination, most were cheering for Pistorius to win. Still, it’s clear that we are already in a time of ethical debate over whether artificial enhancements of human beings lead to unfair advantages of various kinds. When Pistorius competed two weeks later in the Paralympics, he himself lodged a protest against one of the other runners whose prosthetic blades, according to Pistorius, were too long compared to his height and gave him an unfair advantage.

In another example from athletics, the use of a hormone called erythropoietin (EPO)—which regulates the production of red blood cells—can give athletes a significant advantage by delivering more oxygen to the muscles for a longer period of time. One former winner of the Tour de France has already been stripped of his victory after he tested positive for elevated testosterone. He has admitted use of EPO, along with other illegal enhancements. More recently, seven-time Tour de France winner Lance Armstrong was stripped of his championships and banned from cycling for life after the U.S. Anti-Doping Agency released a report detailing his use of EPO, steroids, and blood transfusions, doping by other members of his team, and a complex cover-up scheme.

The authorities in charge of the Olympics and other athletic competitions have been forced into a genetic and biochemical arms race to develop ever more sophisticated methods of detecting new enhancements that violate the rules. What if the gene that produces extra EPO is spliced into an athlete’s genome? How will that be detected?

At least one former Olympic multiple gold medal winner, Eero Mäntyranta, the Finnish cross-country skier, was found years later to have a natural mutation that caused his body to produce more than the average amount of EPO, and thus more red blood cells. Clearly, that cannot be considered a violation of Olympic rules. Mäntyranta competed in the 1960s, before the gene splicing technique was available. But if future Olympians show up with the same mutation, it may eventually be impossible to determine whether it is natural or has been artificially spliced into their genomes. Such splicing could be detected now, but scientists say that once the procedure is perfected, Olympic officials may not be able to make a ruling without genetic testing of the athlete’s relatives.

In another example, scientists have now discovered ways to manipulate a protein called myostatin that regulates the building of muscles. Animals in which myostatin is blocked develop unnaturally large and strong muscles throughout their bodies. If athletes are genetically engineered to enhance their muscle development, does that constitute unfair competition? Isn’t that simply a new form of doping, like the use of steroids and oxygen-rich blood injections? Yet here again, some people—including at least one young aspiring gymnast—have a rare but natural mutation that prevents them from producing a normal volume of myostatin, and results in supernormal musculature.

The convergence of genetic engineering and prosthetics is also likely to produce new breakthroughs. Scientists in California announced a new project in 2012 to create an artificial testicle, which they refer to as a human “sperm-making biological machine.” Essentially a prosthesis, the artificial testicle would be injected every two months with sperm cells engineered from the man’s own adult stem cells.

Some of the earliest applications of genetic research have been in the treatment of infertility. In fact, a great deal of the work since the beginning of the Life Sciences Revolution has focused on the beginning and the end of the human lifecycle—the reinvention of life and death.

THE CHANGING ETHICS OF FERTILITY

The birth in England of the first so-called test tube baby in 1978, Louise Brown, caused a global debate about the ethics and propriety of the procedure, a debate that in many ways established a template for the way publics react to most such breakthroughs. In the first stage, there is a measure of shock and awe, mingled with an anxious flurry of speculation as newly minted experts try to explore the implications of the breakthrough. Some bioethicists worried at the time that in vitro fertilization might somehow diminish parental love and weaken generational ties. But set against the unfocused angst and furrowed brows is the overflowing joy of the new parents whose dreams of a child have at last been realized. Soon thereafter, the furor dies down and fades away. As one U.S. bioethicist, Debra Mathews, put it, “People want children and no one wants anyone else to tell them they can’t have them.” Since 1978, more than five million children have been born through in vitro fertilization and related procedures to people who would otherwise have been unable to have them.

During numerous congressional hearings on advances in life sciences research in the 1970s and 1980s, I saw this pattern repeated many times. Even earlier, in 1967, the first heart transplant by Dr. Christiaan Barnard in South Africa also caused controversy, but the joy and wonder of what was seen as a medical miracle put an end to the debate before it gained momentum. A doctor assisting in the operation, Dr. Warwick Peacock, told me that when the transplanted heart finally began to beat, Barnard exclaimed, “My God, it’s working!” Later on, the first cloning of livestock and the commercialization of surrogate motherhood also caused controversies with very short half-lives.

Now, however, the torrent of scientific breakthroughs is leading to new fertility options that may generate controversies that don’t fade as quickly. One new procedure involves the conception of an embryo and the use of preimplantation genetic diagnosis (PGD) to select a suitable “savior sibling” who can serve as an organ, tissue, bone marrow, or umbilical cord stem cell donor for his or her sibling. Some bioethicists have raised concerns that the instrumental purpose of such conceptions devalues the child, though others ask why this must necessarily be the case. In theory, the parents can love and value both children equally even as they pursue a medically important cure for the first with the assistance of the second. Whether truly informed consent on the part of the donor child is plausible in this scenario is another matter.

Scientists and doctors at the Department of Reproductive Medicine at Newcastle University in England have outlined a procedure for creating “three-parent babies,” to allow couples at high risk of passing on an incurable genetic illness carried in the mother’s faulty mitochondrial DNA to have a healthy child. If a third person who does not carry the defective genes allows her genes to be substituted for that portion of the embryo’s genome (the donor must be female, because mitochondrial DNA is inherited from the mother), then the baby will escape the feared genetic condition. Ninety-eight percent of the baby’s DNA would come from the mother and father; only 2 percent or so would come from the gene donor. However, this genetic modification is one that will affect not only the baby, but all of its offspring, in perpetuity. As a result, the doctors have asked for a government review to determine whether the procedure is acceptable under Britain’s laws.

When choices such as these are in the hands of parents rather than the government, most people adopt a different standard for deciding how they feel about the procedure in question. The great exception is the continuing debate over the ethics of abortion. In spite of the passionate opposition to abortion among many thoughtful people, the majority in most countries seem to override whatever degree of uneasiness they have about the procedure by affirming the principle that it is a decision that should properly be left to the pregnant woman herself, at least in the earlier stages of the pregnancy.

Nevertheless, the dispersal of new genetic options to individuals is, in some countries, leading to new laws regulating what parents can and cannot do. India has outlawed genetic testing of embryos, or even blood tests, that are designed to identify the gender of the embryo. The strong preference by many Indian parents that their next child be male, particularly if they already have a daughter, leads to the abortion of an estimated 500,000 female fetuses each year and a growing imbalance in the ratio of males to females in the population. (Among the many cultural factors that have long been at work in producing the preference for baby boys is the high cost of the dowry that must be paid by the parents of a bride.) The 2011 provisional census in India, which showed a further steep decline in the child sex ratio, led the Indian government to launch a new campaign to better enforce the prohibition against the sex selection of children.

Most of the prenatal gender identification procedures in India utilize ultrasound machines rather than riskier procedures such as amniocentesis, and the prevalence of advertising for low-cost ultrasound clinics is a testament to the popularity of the procedure. Although sex-selective abortions are illegal in India, proposed bans on ultrasound machines have not gained support, in part because of their other medical uses. Some couples from India—and other countries—are now traveling to Thailand, where the successful “medical tourism” industry is offering preimplantation genetic diagnosis procedures to couples intent on having a baby boy. A doctor at one of these clinics said that he has never had a request for a female embryo.

Now a scientific breakthrough allows the testing of fetal DNA in blood samples taken from pregnant mothers; experts say the test is 95 percent accurate in determining gender seven weeks into the pregnancy, and becomes even more accurate as the pregnancy proceeds. One company making test kits, Consumer Genetics Inc., of Santa Clara, California, requires women to sign an agreement not to use the test results for sex selection; the company has also announced that it will not sell the kits in India or China.

In 2012, researchers at the University of Washington announced a breakthrough in the sequencing of almost the entire genome of a fetus from the combination of a blood sample from the pregnant woman and a saliva sample from the father. Although the process is still expensive (an estimated $20,000 to $50,000 for one fetal genome—last year, the cost was $200,000 per test), the cost is likely to continue falling very quickly. Soon after this breakthrough was announced, a medical research team at Stanford announced an improved procedure that does not require a genetic sample from the father and is expected to be widely available within two years for an estimated $3,000.

While so much attention has been focused on the gender screening of embryos, tremendous progress has been made in screening for genetic markers that identify serious disorders treatable through early detection. Of the roughly four million babies born in the United States each year, for example, approximately 5,000 have genetic or functional disorders amenable to treatment if discovered early. Since newborn babies are routinely screened on the day of their birth for more than twenty diseases, the new ease with which genetic screening can be done on embryos is, in one sense, just an extension of a process already performed immediately after birth.

The ethical implications are quite different, however, because of the possibility that knowledge of some condition or trait in the embryo could lead the parents to choose an abortion. Indeed, the termination of pregnancies involving fetuses with serious genetic defects is common around the world. A recent U.S. study, for example, found that more than 90 percent of American women who learn that the fetus they are carrying has Down syndrome choose to terminate their pregnancies. The author of an article provocatively titled “The Future of Neo-Eugenics,” Armand Leroi of Imperial College in the U.K., wrote, “The widespread acceptance of abortion as a eugenic practice suggests that there might be little resistance to more sophisticated methods of eugenic selection and, in general, this has been the case.”

Scientists say that within this decade, they expect to develop the ability to screen embryos for such traits as hair and eye color, skin complexion, and a variety of other traits—including some that have been previously thought of as behavioral but which some scientists now believe have heavy genetic components. Dr. David Eagleman, a neuroscientist at the Baylor College of Medicine, notes, “If you are a carrier of a particular set of genes, the probability that you will commit a violent crime is four times as high as it would be if you lacked those genes.… The overwhelming majority of prisoners carry these genes; 98.1 percent of death-row inmates do.”

If prospective parents found that set of genes in the embryo they were considering for implantation, would they be tempted to splice them out, or select a different embryo instead? Will we soon be debating “distributed eugenics”? As a result of these and similar developments, some bioethicists are expressing concern that what Leroi called “neo-eugenics” will soon confront us with yet another round of difficult ethical choices.

In vitro fertilization clinics are already using preimplantation genetic diagnosis (PGD) to scan embryos for markers associated with hundreds of diseases before implantation. Although the United States has more regulations in the field of medical research than most countries, PGD is still completely unregulated. Consequently, it may be only a matter of time before a much wider range of criteria—including cosmetic or aesthetic factors—is offered for parents to select in the screening process.

One question that has already arisen is the ethics of disposing of embryos that are not selected for implantation. If they are screened out as candidates, they can be frozen and preserved for potential later implantation—and that is an option chosen by many women who undergo the in vitro fertilization procedure. However, often several embryos are implanted simultaneously in order to improve the odds that one will survive; that is the principal reason why multiple births are far more common with in vitro fertilization than in the general population.

The United Kingdom has set a legal limit on the number of embryos that doctors can implant, in order to decrease the number of multiple births and avoid the associated complications for mothers and babies—and the additional cost to the health care system. As a result, one company, Auxogyn, is using digital imaging, in conjunction with a sophisticated algorithm, to monitor developing embryos every five minutes, from the moment they are fertilized until one of them is selected for implantation. The purpose is to select the embryo that is most likely to develop in a healthy way.

As a practical matter, most people recognize that it is only a matter of time before the vast majority of frozen embryos are discarded—which raises the same underlying issue that motivates the movement to stop abortions: is an embryo in the earliest stages of life entitled to all of the legal protections available to individuals after they are born? Again, whatever misgivings they may have, the majority in almost every country have concluded that even though embryos mark the first stage of human life, the practical differences between an embryo, or fetus, and an individual are significant enough to leave the choice on abortion with the pregnant woman. That view is consistent with a parallel view, also held by the majority in almost every country, that the government does not have the right to require a pregnant woman to have an abortion.

The furor over embryonic stem cell research grows out of a related issue. Even if it is judged appropriate for women to have the option of terminating their pregnancies—under most circumstances—is it also acceptable for the parents to give permission for “experimentation” on the embryo to which they have given the beginning of a life? Although this controversy is far from resolved, the majority of people in most countries apparently feel that the scientific and medical benefits of withdrawing stem cells from embryos are so significant that they justify such experiments. In many countries, the justification is linked to a prior determination that the embryos in question are due to be discarded in any case.

The discovery of nonembryonic stem cells (induced pluripotent, or iPS cells) by Shinya Yamanaka at Kyoto University (who was awarded the 2012 Nobel Prize in Medicine) is creating tremendous excitement about a wide range of new therapies and dramatic improvements in drug discovery and screening. In spite of this exciting discovery, however, many scientists still argue that embryonic stem cells may yet prove to have unique qualities and potential that justify their continued use. Researchers at University College London have already used stem cells to successfully restore some vision to mice with an inherited retinal disease, and believe that some forms of blindness in humans may soon be treatable with similar techniques. Other researchers at the University of Sheffield have used stem cells to rebuild nerves in the ears of gerbils and restore their hearing.

In 2011, fertility scientists at Kyoto University caused a stir when they announced that they had transplanted embryonic mouse stem cells into the testicles of infertile mice and successfully produced sperm. When the sperm was extracted and used to fertilize mouse eggs, and the fertilized eggs were transferred to the uteri of female mice, the result was normal offspring that could then reproduce naturally. Their work builds on a 2006 breakthrough in England, in which biologists at the University of Newcastle upon Tyne first converted stem cells into functioning sperm cells that produced live offspring, though the offspring had genetic defects.

One reason these studies drew such attention is that the same basic technique, as it is developed and perfected, may soon make it possible for infertile men to have biological children—and opens the possibility for gay and lesbian couples to have children that are genetically and biologically their own. Some headline writers also savored the speculation that, since there is no reason in theory why women could not produce their own sperm cells using this technique, men might one day be unnecessary for reproduction: “Will Men Become Obsolete?” On the lengthening list of potentially disquieting outcomes from genetic research, this possibility appears destined to linger near the bottom, though I am certainly biased in making that prediction.

LIFESPANS AND “HEALTHSPANS”

Just as scientists working on fertility have focused on the beginning of life, others have been focused on the end of life—and have been making dramatic progress in understanding the factors affecting longevity. They are developing new strategies, which they hope will achieve not only significant extensions in the average human lifespan, but also the extension of what many refer to as the “healthspan”—the number of years we live a healthy life without debilitating conditions or diseases.

Although a few scientific outliers have argued that genetic engineering could increase human lifespans by multiple centuries, the consensus among aging specialists is that an increase of up to 25 percent is the more likely range of what is possible. According to most experts, evolutionary theory and numerous studies of human and animal genetics lead to the conclusion that environmental and lifestyle factors contribute roughly three quarters of the aging process, and that genetics makes a more modest contribution of somewhere between 20 and 30 percent.

One of the most famous studies of the relationship between lifestyle and longevity showed that extreme caloric restriction extends the lives of rodents dramatically, although there is debate about whether this lifestyle adjustment has the same effect on longevity in humans. More recent studies have shown that rhesus monkeys do not live longer with severe caloric restrictions. There is a subtle but important distinction, experts on all sides point out, between longevity and aging. Although they are obviously related, longevity measures the length of life, whereas aging is the process by which cell damage contributes over time to conditions that bring the end of life.

Some highly questionable therapies, such as the use of human growth hormone in an effort to slow or reverse unwanted manifestations of the aging process, may well have side effects that shorten longevity, such as triggering the onset of diabetes and the growth of tumors. Other hormones that have been used to combat symptoms of aging—most prominently, testosterone and estrogen—have also led to controversies about side effects that can shorten longevity for certain patients.

However, excitement was also stirred by a 2010 Harvard study showing that the aging process in mice could be halted and even reversed by the use of the enzyme telomerase, which maintains the telomeres, the protective caps on the ends of chromosomes, and keeps them from being damaged. Scientists have long known that telomeres get shorter as cells age and that this shortening can ultimately halt the renewal of cells through replication. As a result of the Harvard study, scientists are exploring strategies for protecting telomeres in order to retard the aging process.

Some researchers are optimistic that extensive whole genome studies of humans with very long lifespans may yet lead to the discovery of genetic factors that can be used to extend longevity in others. However, most of the dramatic extensions in the average human lifespan over the last century have come from improvements in sanitation and nutrition, and from medical breakthroughs such as the discovery of antibiotics and the development of vaccines. Further improvements in these highly successful strategies are likely to further improve average lifespans—probably, scientists speculate, at the rate of improvement we have become used to—about one extra year per decade.

In addition, the continued global efforts to fight infectious disease threats are also extending average lifespans by reducing the number of premature deaths. Much of this work is now focused on malaria, tuberculosis, HIV/AIDS, influenza, viral pneumonia, and multiple so-called “neglected tropical diseases” that are barely known in the industrialized world but afflict more than a billion people in developing tropical and subtropical countries.

THE DISEASE FRONT

There has been heartening progress in reducing the number of people who die of AIDS each year. In 2012, the number fell to 1.7 million, significantly down from its 2005 peak of 2.3 million. The principal reason for this progress is greater access to pharmaceuticals—particularly antiretroviral drugs—that extend the lifespan and improve the health of people who have the disease. Efforts to reduce the infection rate continue to be focused on preventive education, the distribution of condoms in high-risk areas, and accelerated efforts to develop a vaccine.

Malaria deaths have also been reduced significantly over the past decade through a carefully chosen combination of strategies. Although the largest absolute declines were in Africa, 90 percent of all malaria deaths, according to the U.N., still take place in Sub-Saharan Africa—most of them involving children under five. An ambitious effort in the 1950s to eradicate malaria did not succeed, but a few of those working hard on eradication today, including Bill Gates, now believe that the goal may actually be realistic within the next few decades.

The world did succeed in eliminating the terrible scourge of smallpox in 1980. And in 2011 the U.N. Food and Agriculture Organization succeeded in eliminating a second disease, rinderpest, a relative of measles that killed cattle and other animals with cloven hooves. Because it was an animal disease, rinderpest never garnered the global attention that smallpox commanded, but it was one of the deadliest and most feared threats to those whose families and communities depend on livestock.

For all of the appropriate attention being paid to infectious diseases, the leading causes of death in the world today, according to the World Health Organization, are chronic diseases that are not communicable. In the last year for which statistics are available, 2008, approximately 57 million people died in the world, and almost 60 percent of those deaths were caused by chronic diseases, principally cardiovascular disease, diabetes, cancer, and chronic respiratory diseases.

Cancer is a special challenge, in part because it is not one disease, but many. The U.S. National Cancer Institute and the National Human Genome Research Institute have been spending $100 million per year on a massive effort to create a “Cancer Genome Atlas,” and in 2012 one of the first fruits of this project was published in Nature by more than 200 scientists who detailed genetic peculiarities in colon cancer tumors. Their study of more than 224 tumors has been regarded as a potential turning point in the development of new drugs that will take advantage of vulnerabilities they found in the tumor cells.

In addition to genomic analyses of cancer, scientists are exploring virtually every other conceivable strategy for curing cancers. They are investigating new possibilities for shutting off the blood supply to cancerous cells, dismantling their defense mechanisms, and boosting the ability of natural immune cells to identify and attack cancer cells. Many are particularly excited about new strategies that involve proteomics—the decoding of all of the proteins expressed by cancer genes in the various forms of the disease—and about the targeting of epigenetic abnormalities.

Scientists explain that while the human genome is often characterized as a blueprint, it is actually more akin to a list of parts or ingredients. The actual work of controlling cellular functions is done by proteins that carry out a “conversation” within and between cells. These conversations are crucial in understanding “systems diseases” like cancer.

One of the promising strategies for dealing with systemic disorders like cancer and chronic heart diseases is to strengthen the effectiveness of the body’s natural defenses. And in some cases, new genetic therapies are showing promise in doing so. A team of scientists at the Gladstone Institute of Cardiovascular Disease, affiliated with the University of California, San Francisco, has dramatically improved cardiac function in adult mice by reprogramming cells to restore the health of heart muscle.

IN MANY IF not most cases, though, the most effective strategy for combating chronic diseases is to make changes in lifestyles: reduce tobacco use, reduce exposure to carcinogens and other harmful chemicals in the environment, reduce obesity through better diet and more exercise, and—at least for salt-sensitive individuals—reduce sodium consumption in order to reduce hypertension (or high blood pressure).

Obesity—which is a major causal factor in multiple chronic diseases—was the subject of discouraging news in 2012 when the British medical journal The Lancet published a series of studies indicating that one of the principal factors leading to obesity, physical inactivity and sedentary lifestyles, is now spreading from North America and Western Europe to the rest of the world. Researchers analyzed statistics from the World Health Organization to demonstrate that more people now die every year from conditions linked with physical inactivity than die from smoking. The statistics indicate that one in ten deaths worldwide is now due to diseases caused by persistent inactivity.

Nevertheless, there are good reasons to hope that new strategies combining knowledge from the Life Sciences Revolution with new digital tools for monitoring disease states, health, and wellness may spread from advanced countries as cheaper smartphones are sold more widely throughout the globe. The use of intelligent digital assistants for the management of chronic diseases (and as wellness coaches) may have an extremely positive impact.

In developed nations, there are already numerous smartphone apps that help users keep track of how many calories they consume, what kinds of food they are eating, how much exercise they are getting, how much sleep they are getting (some new headbands also track how much deep sleep and REM sleep they get), and even how much progress they are making in dealing with addictions to substances such as alcohol, tobacco, and prescription drugs. Mood disorders and other psychological maladies are also addressed by self-tracking programs. During the 2012 Summer Olympic Games in London, biotech companies seeking to improve their health-tracking devices persuaded a number of athletes to use glucose and sleep monitors, and to receive genetic analyses intended to tailor their individual nutrition.

Such monitoring is not limited to Olympians. Personal digital monitors of patients’ heart rates, blood glucose, blood oxygenation, blood pressure, body temperature, respiratory rate, body fat levels, sleep patterns, medication use, exercise, and more are growing more common. Emerging developments in nanotechnology and synthetic biology also hold out the prospect of more sophisticated continuous monitoring from sensors inside the body. Nanobots are being designed to monitor changes in the bloodstream and vital organs, reporting information on a constant basis.

Some experts, including Dr. H. Gilbert Welch of Dartmouth, the author of Overdiagnosed: Making People Sick in the Pursuit of Health, believe that we are in danger of going too far in the monitoring and data analysis of individuals who track their vital signs and more: “Constant monitoring is a recipe for all of us to be judged ‘sick.’ Judging ourselves sick, we seek intervention.” Welch and others believe that many of these interventions turn out to be costly and unnecessary. In 2011, for example, medical experts advised doctors to stop routinely using the prostate-specific antigen (PSA) test for prostate cancer precisely because the resulting interventions were apparently doing more harm than good.

The digitizing of human beings, with the creation of large files containing detailed information about their genetic and biochemical makeup and their behavior, will also require attention to the same privacy and information security issues discussed in Chapter 2. For the same reasons that this rich data is potentially so useful in improving the efficacy of health care and reducing medical costs, it is also seen as highly valuable to insurance companies and employers who are often eager to sever their relationships with customers and employees who represent high risks for big medical bills. Already, a high percentage of those who could benefit from genetic testing are refusing to have the information gathered for fear that they will lose their jobs and/or their health insurance.

In 2008, the United States passed a federal law, the Genetic Information Nondiscrimination Act, which prohibits the disclosure or improper use of genetic information. But enforcement is difficult and trust in the law’s protection is low. The fact that insurance companies and employers usually pay for the majority of health care expenditures—including genetic testing—further reinforces patients’ and employees’ fear that their genetic information will not remain confidential. Many believe that flows of information on the Internet are vulnerable to disclosure in any case. Meanwhile, the U.S. law governing health records, the Health Insurance Portability and Accountability Act, fails to guarantee patients access to data gathered from their own medical implants, even as companies seek to profit from personalized medical information.

Nevertheless, these self-tracking techniques—part of the so-called self-quantification movement—offer the possibility that behavior modification strategies traditionally associated with clinics can be individualized and carried out outside an institutional setting. Expenditures for genetic testing are rising quickly as prices for the tests continue to fall and as the wave of personalized medicine gathers speed.

The United States may have the most difficulty in making the transition to precision medicine because of the imbalance of power and unhealthy corporate control of the public policy decision-making process, as described in Chapter 3. This chapter is not about the U.S. health care system, but it is worth noting that the glaring inefficiencies, inequalities, and absurd expense of the U.S. system are thrown into sharp relief by the developing trends in the life sciences. For example, many health care plans have not covered disease prevention and wellness promotion, because providers are principally compensated for expensive interventions after a patient’s health is already in jeopardy. The health care reform law signed by President Obama requires coverage of preventive care under U.S. health plans for the first time.

As everyone knows, the U.S. spends far more per person on health care than any other country while achieving worse outcomes than many other countries that pay far less, and still, tens of millions of Americans do not have reasonable access to health care. Lacking any other option, they often wait until their condition is so dire that they have to go to the emergency room, where the cost of intervention is highest and the chance of success is lowest. The recently enacted reforms will remedy some of these defects, but the underlying problems are likely to grow worse—primarily because insurance companies, pharmaceutical companies, and other health care providers retain almost complete control over the design of health care policy.

THE STORY OF INSURANCE

The business of insurance began as far back as ancient Rome and Greece, where life insurance policies were similar to what we now know as burial insurance. The first modern life insurance policies were not offered until the seventeenth century in England. The development of extensive railroad networks in the United States in the 1860s led to limited policies protecting against accidents on railroads and steamboats, and that led, in turn, to the first insurance policies protecting against sickness in the 1890s.

Then, in the early 1930s, when advances in medical care began to drive costs above what many patients could pay on their own, the first significant group health insurance policies were offered by nonprofits: Blue Cross for hospital charges and Blue Shield for doctors’ fees. All patients paid the same premiums regardless of age or preexisting conditions. The success of the Blues led to the entry into the marketplace of private, for-profit health insurance companies, who began to charge different premiums to people based on their calculation of the risk involved—and refused to offer policies at all to those who represented an unacceptably high risk. Soon, Blue Cross and Blue Shield were forced by the new for-profit competition to also link premiums to risk.

When President Franklin Roosevelt was preparing his package of reforms in the New Deal, he twice took preliminary steps—in 1935 and again in 1938—to include a national health insurance plan as part of his legislative agenda. On both occasions, however, he feared the political opposition of the American Medical Association and removed the proposal from his plans lest it interfere with what he regarded as more pressing priorities in the depths of the Great Depression: unemployment compensation and Social Security. The introduction of legislation in 1939 by New York Democratic senator Robert Wagner offered a quixotic third opportunity to proceed, but Roosevelt chose not to support it.

During World War II, with wages (and prices) controlled by the government, private employers began to compete for employees, who were scarce due to the war, by offering health insurance coverage. Then after the war, unions began to include demands for more extensive health insurance as part of their negotiated contracts with employers.

Roosevelt’s successor, Harry Truman, sought to revive the idea of national health insurance, but the opposition in Congress—once again fueled by the AMA—ensured that it died with a whimper. As a result, the hybrid system of employer-based health insurance became the primary model in the United States. Because older Americans and those with disabilities had a difficult time obtaining affordable health insurance within this system, new government programs, most notably Medicare, were eventually created to help both groups.

For the rest of the country, those who needed health insurance the most had a difficult time obtaining it, or paying for it when they could find it. By the time the inherent flaws and contradictions of this model were obvious, the American political system had degraded to the point that the companies with an interest in seeing this system continued had so much power that nothing could be done to change its basic structure.

With rare exceptions, the majority of legislators are no longer capable of serving the public interest because they are so dependent on campaign contributions from these corporate interests and so vulnerable to their nonstop lobbying. The general public is effectively disengaged from the debate, except to the extent that they absorb constant messaging from the same corporate interests—messages designed to condition their audience to support what the business lobbies want done.

GENETICALLY ENGINEERED FOOD

The same sclerosis of democracy is now hampering sensible adaptations to the wave of changes flowing out of the Life Sciences Revolution. For example, even though polls consistently show that approximately 90 percent of American citizens believe that genetically engineered food should be labeled, the U.S. Congress has adopted the point of view advocated by large agribusiness companies—that labeling is unnecessary and would be harmful to “confidence in the food supply.”

However, most European countries already require such labeling. The recent approval of genetically engineered alfalfa in the U.S. provoked a larger outcry than many expected, and the “Just Label It” campaign has become the centerpiece of a new grassroots push for labeling genetically modified (GM) food products in the United States, which plants twice as many acres of GM crops as any other country. Voters in California defeated a 2012 referendum to require such labeling after corporate interests spent $46 million on negative commercials, five times as much as the proponents spent. Nevertheless, since approximately 70 percent of the processed foods sold in the U.S. contain at least some ingredients from GM crops, this controversy will not go away.

By way of background, the genetic modification of plants and animals is, as enthusiastic advocates often emphasize, hardly new. Most of the food crops that humanity has depended upon since before the dawn of the Agricultural Revolution were genetically modified during the Stone Age by careful selective breeding—which, over many generations, modified the genetic structure of the plants and animals in question to manifest traits of value to humans. As Norman Borlaug put it, “Neolithic women accelerated genetic modifications in plants in the process of domesticating our food crop species.”

By using the new technologies of gene splicing and other forms of genetic engineering, we are—according to this view—merely accelerating and making more efficient a long-established practice that has proven benefits and few if any detrimental side effects. And outside of Europe (and India) there is a consensus among most farmers, agribusinesses, and policymakers that GM crops are safe and must be an essential part of the world’s strategy for coping with anticipated food shortages.

However, as the debate over genetically modified organisms (GMOs) has evolved, opponents of the practice point out that none of the genetic engineering has ever produced any increase in the intrinsic yields of the crops, and they have raised at least some ecosystem concerns that are not so easily dismissed. The opponents argue that the insertion of foreign genes into another genome is, in fact, different from selective breeding because it disrupts the normal pattern of the organism’s genetic code and can cause unpredictable mutations.

The first genetically engineered crop to be commercialized was a new form of tomato known as the FLAVR SAVR, which was modified to remain firm for a longer period after ripening. The tomato did not succeed commercially, however, because of high production costs, and consumer resistance to tomato paste made from these tomatoes (which was clearly labeled as a GM product) made that product a failure as well.

Selective breeding was used to make an earlier change in the traits of commercial tomatoes in order to produce a flatter, less rounded bottom to accommodate the introduction of automation in the harvesting process. The new variety stayed on the conveyor belts without rolling off, was easier to pack into crates, and its tougher skin prevented the machines from crushing the tomatoes. They are sometimes called “square tomatoes,” though they are not really square.

An even earlier modification of tomatoes, made in 1930 through selective breeding, resulted in what most tomato lovers regard as a catastrophic loss of flavor in modern tomatoes. The change was intended to aid the mass marketing and distribution of tomatoes by ensuring that they ripened uniformly and were “all red,” without the green “shoulders” that consumers sometimes viewed as a sign that they were not yet ripe. Researchers working with the newly sequenced tomato genome discovered in 2012 that eliminating the gene associated with green shoulders also eliminated the plant’s ability to produce most of the sugars that used to give tomatoes their delicious taste.

In spite of experiences such as these, which illustrate how changes made for the convenience and profitability of large corporations sometimes end up triggering other genetic changes that most people hate, farmers around the world—other than in the European Union—have adopted GM crops at an accelerating rate. Almost 11 percent of all the world’s farmland was planted in GM crops in 2011, according to an international organization that promotes GMOs, the International Service for the Acquisition of Agri-biotech Applications. Since GM crops were first commercialized in the mid-1990s, the acreage planted in them has increased almost 100-fold, and the almost 400 million acres planted in 2011 represented an increase of 8 percent over the previous year.

Although the United States is by far the largest grower of GM crops, Brazil and Argentina are also heavily committed to the technology. Brazil, in particular, has adopted a fast-track approval system for GMOs and is pursuing a highly focused strategy for maximizing the use of biotechnology in agriculture. In developing countries overall, the adoption of modified crops is growing twice as fast as in mature economies. An estimated 90 percent of the 16.7 million farmers growing genetically engineered crops in almost thirty countries were small farmers in developing markets.

Genetically modified soybeans, engineered to tolerate Monsanto’s Roundup herbicide, are the largest GM crop globally. Corn is the second most widely planted GM crop, although it is the most planted in the U.S. (“Maize” is the term used for what is called corn in the U.S.; the word “corn” is often used outside the U.S. to refer to any cereal crop.) In the U.S., 95 percent of soybeans planted and 80 percent of corn are grown from patented seeds that farmers must purchase from Monsanto or one of its licensees. Cotton is the third most planted GM crop globally, and canola (known as “rapeseed” outside the United States) is the other large GM crop in the world.

Although the science of genetically engineered plants is advancing quickly, the vast majority of GM crops grown today are still from the first of three generations, or waves, of the technology. This first wave, in turn, includes GM crops that fall into three different categories:

    •  The introduction of genes that give corn and cotton the ability to produce their own insecticide inside the plants;

    •  Genes introduced into corn, cotton, canola, and soybeans that make the plants tolerant of two chemicals contained in widely used weed killers that are produced by the same company—Monsanto—that controls the GM seeds; and

    •  The introduction of genes designed to enhance the survivability of crops during droughts.

In general, farmers using the first wave of GM crops report initial reductions in their cost of production—partly due to temporarily lower use of insecticide—and temporarily lower losses to insects or weeds. The bulk of the economic benefits thus far have gone to cotton farmers using a strain engineered to produce its own insecticide (Bacillus thuringiensis, better known as Bt). In India, Bt cotton helped make the nation a net exporter, rather than an importer, of cotton and was a factor in an initial doubling of cotton yields because of temporarily lower losses to insect pests. However, many Indian cotton farmers have begun to protest the high cost of the GM seeds they must purchase anew each year and of the herbicides they must apply in greater volumes as more weeds develop resistance. A parliamentary panel in India issued a controversial 2012 report asserting that “there is a connection between Bt cotton and farmers’ suicides” and recommending that field trials of GM crops “under any garb should be discontinued forthwith.”

New scientific studies—including a comprehensive report by the U.S. National Research Council in 2009—support the criticism by opponents of GM crops that the intrinsic yields of the crops themselves are not increased at all. To the contrary, some farmers have experienced slightly lower intrinsic yields because of unexpected collateral changes in the plants’ genetic code. Selective breeding, on the other hand, was responsible for the impressive and life-saving yield increases of the Green Revolution. New research by an Israeli company, Kaiima, into a non-GMO technology known as “enhanced ploidy” (the inducement, selective breeding, and natural enhancement of a trait that confers more than two sets of chromosomes in each cell nucleus) is producing both greater yields and greater resistance to the effects of drought in a variety of food and other crops. Recent field trials run by Kaiima show more than 20 percent yield enhancement in corn and more than 40 percent enhancement in wheat.

The genetic modification of crops, by contrast, has not yet produced meaningful enhancements of survivability during drought. While some GM experimental strains do, in theory, offer the promise of increased yields during dry periods, these strains have not yet been introduced on a commercial scale, and test plots have demonstrated only slight yield improvements thus far, and only during mild drought conditions. Because of the growing prevalence of drought due to global warming, there is tremendous interest in drought-resistant strains, especially for maize, wheat, and other crops in developing countries. Unfortunately, however, drought resistance is turning out to be an extremely complex challenge for plant geneticists, involving a combination of many genes working together in complicated ways that are not yet well understood.

After an extensive analysis of the progress in genetically engineering drought-resistant crops, the Union of Concerned Scientists found “little evidence of progress in making crops more water efficient. We also found that the overall prospects for genetic engineering to significantly address agriculture’s drought and water-use challenges are limited at best.”

The second wave of GM crops involves the introduction of genes that enhance the nutritional value of the plants. It includes the engineering of higher protein content in corn (maize) used primarily for livestock feed, and the engineering of a new strain of rice that produces beta-carotene, a precursor of vitamin A, as part of a strategy to combat the vitamin A deficiency that now affects approximately 250 million children around the world. This second wave also involves the introduction of genes designed to enhance the resistance of plants to particular fungi and viruses.

The third wave of GM crops, which is just beginning to be commercialized, involves the modification of plants through the introduction of genes that program the production of substances with commercial value as inputs to other processes, including pharmaceutical ingredients and biopolymers for the production of bioplastics that are biodegradable and easily recyclable. This third wave also includes efforts to introduce genes that alter the cellulose and lignin in plants so they are easier to process into cellulosic ethanol. These so-called green plastics hold exciting promise, but as with crops devoted to the production of biofuels, they raise questions about how much arable land can safely or wisely be diverted from the production of food in a world with growing population and food consumption, and shrinking assets of topsoil and water for agriculture.

Over the next two decades, seed scientists believe they may be able to launch a fourth wave of GM crops by inserting genes from corn and other so-called C4 plants, which photosynthesize light into energy more efficiently, into C3 plants such as wheat and rice. If they succeed—which is far from certain because of the unprecedented complexity of the challenge—this technique could indeed bring about significant increases in intrinsic yields. For the time being, however, the overall net benefits from genetically engineered crops have been limited to a temporary reduction in losses to pests and a temporary decrease in expenditures for insecticides.

In 2012, the Obama administration launched its National Bioeconomy Blueprint, specifically designed to stimulate the production—and procurement by the government—of such bio-based products. The European Commission adopted a similar strategy two months earlier. Some environmental groups have criticized both plans because of growing concern about diverting cropland away from food production and about the destruction of tropical forests to make way for more cropland.

The opponents of genetically modified crops argue that not only have these genetic technologies failed thus far to increase intrinsic yields, but also that the weeds and insects the GM crops are designed to control are quickly mutating to make themselves impervious to the herbicides and insecticides in question. In particular, the crops that are engineered to produce their own insecticide (Bacillus thuringiensis) are now so common that the constant diet of Bt being served to pests in large monocultured fields is doing the same thing to insects that the massive and constant use of antibiotics is doing to germs in the guts of livestock: it is forcing the mutation of new strains of pests that are highly resistant to the insecticide.

The same thing also appears to be happening to weeds that are constantly sprayed with herbicides to protect crops genetically engineered to survive the application of those herbicides (principally Monsanto’s Roundup, whose active ingredient, glyphosate, once killed virtually any green plant). Already, ten species of harmful weeds have evolved resistance to these herbicides, requiring farmers to turn to other, more toxic herbicides. Some opponents of GM crops have marshaled evidence tending to show that over time, as resistance spreads among weeds and insects, the overall use of both herbicides and insecticides actually increases, though advocates of GM crops dispute their analysis.

Because so many weeds have now developed resistance to glyphosate, the active ingredient in Roundup, there is a renewed market demand for more powerful—and more dangerous—herbicides. There are certainly plenty to choose from. The overall market for pesticides in the world represents approximately $40 billion in sales annually, with herbicides aimed at weeds representing $17.5 billion and insecticides and fungicides representing about $10.5 billion each.

Dow AgroSciences has applied for regulatory approval to launch a new genetically engineered form of corn that tolerates the application of an herbicide known as 2,4-D, which was a key ingredient in Agent Orange—the deadly defoliant used by the U.S. Air Force to clear jungles and forest cover during the Vietnam War—and which has been implicated in numerous health problems suffered by both Americans and Vietnamese who were exposed to it. Health experts from more than 140 NGOs have opposed the approval of what they call “Agent Orange corn,” citing links between exposure to 2,4-D and “major health problems such as cancer, lowered sperm counts, liver toxicity and Parkinson’s disease. Lab studies show that 2,4-D causes endocrine disruption, reproductive problems, neurotoxicity, and immunosuppression.”

Pesticides sprayed on crops have also been implicated in damage to beneficial insects and other animals. The milkweed plants on which monarch butterflies almost exclusively depend have declined in the U.S. farm belt by almost 60 percent over the last decade, principally because of herbicide use on the expanding acreage planted with crop varieties engineered to tolerate Roundup. There have also been studies showing that Bt crops (the ones that produce their own insecticide) have had a direct harmful impact on at least one subspecies of monarchs, as well as on lacewings (considered highly beneficial insects), ladybird beetles, and beneficial biota in the soil. Although proponents of GM crops have minimized the importance of these effects, they deserve close scrutiny as GM crops continue to expand their role in the world’s food production.

Most recently, scientists have linked the disturbing and previously mysterious sudden collapses of bee colonies to a relatively new group of pesticides known as neonicotinoids. Colony collapse disorder (CCD) has caused deep concern among beekeepers and others since the affliction first appeared in 2006. Although numerous theories about the cause of CCD were put forward, it was not until the spring of 2012 that several studies pointed to neonicotinoids as a principal cause.

The neonicotinoids, which are neurotoxins similar in their makeup to nicotine, are widely used on corn seed, and the chemicals are then pulled from the seed into the corn plants as they grow. Commercial beekeepers, in turn, have long fed corn syrup to their bees. According to the U.S. Department of Agriculture’s Agricultural Research Service, “Bee pollination is responsible for $15 billion in added crop value, particularly for specialty crops such as almonds and other nuts, berries, fruits, and vegetables. About one mouthful in three in the diet directly or indirectly benefits from honey bee pollination.”

Bees, of course, play no role in the propagation of GM crops, because the engineered seeds must be purchased anew by farmers each year, and the bees’ pesky habit of cross-pollinating plants can introduce genes that do not fit the seed company’s design. According to The Wall Street Journal, the growers of a modified seedless mandarin threatened to sue beekeepers working with neighboring farms for allowing their bees to “trespass” into the orchards where the seedless mandarins were growing, out of worry that the mandarins would be cross-pollinated with pollen from citrus varieties that have seeds. Understandably, the beekeepers protested that they could not control where their bees fly.

The global spread of industrial agriculture techniques has resulted in the increased reliance on monoculture, which has, in turn, accelerated the spread of resistance to herbicides and pesticides in weeds, insects, and plant diseases. In many countries, including the United States, all of the major commodity crops—corn, soybeans, cotton, and wheat—are grown from a small handful of genetic varieties. As a result, in most fields, virtually all of the plants are genetically identical. Some experts have long expressed concern that the reliance on monocultures makes agriculture highly vulnerable to pests and plant diseases that have too many opportunities to develop mutations that enable them to become more efficient at attacking the particular genetic variety that is planted in such abundance.

MUTATING PLANT DISEASES

In any case, new versions of plant diseases are causing problems for farmers all over the world. In 1999, a newly mutated variety of an old fungal disease known as stem rust began attacking wheat fields in Uganda. Spores from the African fields were carried on the wind first to neighboring Kenya, then across the Red Sea to Yemen and the Arabian Peninsula, and from there to Iran. Plant scientists are concerned that it will continue spreading in Africa, Asia, and perhaps beyond. Two scientific experts on the disease, Peter Njao and Ruth Wanyera, warned in 2012 that it could potentially destroy 80 percent of all known wheat varieties. Although this wheat rust was believed to have been reduced to a minor threat a half century ago, the new mutation has made it deadlier than ever.

Similarly, cassava (also known as tapioca, manioc, and yuca), the third-largest plant-based source of calories for people (after rice and wheat), is consumed mostly in Africa, South America, and Asia. A newly mutated virus attacking cassava emerged in East Africa in 2005, and since then, according to Claude Fauquet, director of cassava research at the Donald Danforth Plant Science Center in St. Louis, “There has been explosive, pandemic-style spread.… The speed is just unprecedented, and the farmers are really desperate.” Some experts have compared this outbreak to the potato blight in Ireland in the 1840s, which was linked in part to Ireland’s heavy reliance on a single monocultured potato strain from the Andes.

In 1970, a new variety of Southern corn leaf blight destroyed roughly 15 percent of the U.S. corn crop, and more than half of the crop in some hard-hit southern states, demonstrating clearly, in the words of the Union of Concerned Scientists, “that a genetically uniform crop base is a disaster waiting to happen.” The UCS notes that “U.S. agriculture rests on a narrow genetic base. At the beginning of the 1990s, only six varieties of corn accounted for 46 percent of the crop, nine varieties of wheat made up half of the wheat crop, and two types of peas made up 96 percent of the pea crop.” Reflecting the global success of fast food in the age of Earth Inc., more than half the world’s potato acreage is now planted with one variety of potato: the Russet Burbank favored by McDonald’s.

Although most of the debate over genetically modified plants has focused on crops for food and animal feed, there has been surprisingly little discussion about the robust global work under way to genetically modify trees, including poplar and eucalyptus. Some scientists have expressed concern that the greater height of trees means that the genetically modified varieties will send their pollen into a much wider surrounding area than plants like soybeans, corn, and cotton.

China is already growing thousands of hectares of poplar trees genetically modified to produce the Bt toxin in their leaves in order to protect them against insect infestations. Biotech companies are also trying to introduce modified eucalyptus trees in the U.S. and Brazil. Scientists argue that in addition to conferring pest resistance, such modifications might help trees survive droughts and could alter the nature of the wood in ways that facilitate the production of biofuel.

In addition to plants and trees, genetically modified animals intended for the production of food for humans have also generated considerable controversy. Since the discovery in 1981 of a new technique that allows the transfer of genes from one species into the genome of another species, scientists have genetically engineered several forms of livestock, including cattle, pigs, chickens, sheep, goats, and rabbits. Although earlier experiments that reduced susceptibility to disease in mice generated a great deal of optimism, so far only one of the efforts to reduce livestock susceptibility has succeeded.

However, the ongoing efforts to produce GM animals have already yielded, among other results, spider silk from goats (described above) and increased milk production in dairy cattle through a synthetic growth hormone produced by genetically engineered bacteria. Recombinant bovine growth hormone (rBGH), which is injected into dairy cows, has been extremely controversial. Critics do not typically argue that rBGH is directly harmful to human health, but rather that evidence suggests it raises levels of a second hormone, insulin-like growth factor (IGF), which is found in milk from treated cows at levels up to ten times those found in other milk.

Studies have shown a connection between elevated levels of IGF and a significantly higher risk of prostate cancer and some forms of breast cancer. Although other factors obviously are involved in the development of these cancers, and even though IGF is a natural substance in the human body, the concerns of opponents have been translated into a successful consumer campaign for the labeling of milk with bovine growth hormone, which has significantly decreased its use.

Chinese geneticists have introduced human genes associated with human milk proteins into the embryos of dairy cows, then implanted the embryos into surrogate cows that gave birth to the calves. When these animals began producing milk, it contained many proteins and antibodies that are found in human milk but not in milk from normal cows. Moreover, the genetically engineered animals are capable of reproducing themselves with the introduced genetic traits passed on. At present, there is a herd of 300 such animals at the State Key Laboratory of Agrobiotechnology of the China Agricultural University, producing milk that is much closer to human breast milk than cow milk. Scientists in Argentina, at the National Institute of Agribusiness Technology in Buenos Aires, claim to have improved on this process.

Scientists in the U.S. applied for regulatory approval in 2012 to introduce the first genetically engineered animal intended for direct consumption by human beings: a salmon modified with an extra growth hormone gene and a genetic switch that keeps the gene active even when the water temperature falls below the threshold at which growth hormone is normally produced. The result is a fish that grows twice as fast as a normal salmon and reaches market size in only sixteen months, compared to the normal thirty.

Opponents of the “super salmon” have expressed concern about the possibility of increased levels of insulin-like growth factor—the same issue they raise about milk produced from cattle injected with bovine growth hormone. They also worry that the modified salmon could escape from their pens and breed with wild salmon, changing the species in an unintended way—much as opponents of GM crops worry about the cross-pollination of non-GM crops. Moreover, as noted in Chapter 4, farmed fish are fed fishmeal made from ocean fish in a pattern that typically requires three pounds of wild fish for each pound of farmed fish.

Scientists at the University of Guelph in Canada attempted to commercialize genetically engineered pigs with a segment of mouse DNA introduced into their genome in order to reduce the amount of phosphorus in their feces. They called their creation the Enviropig, because phosphorus runoff feeds algae blooms in rivers and creates dead zones where the rivers flow into the sea. The scientists later abandoned the project and euthanized the pigs, partly because of opposition to what some critics have taken to calling “Frankenfood”—that is, food from genetically modified animals—but also because scientists elsewhere engineered an enzyme, phytase, which, when added to pig feed, accomplishes much the same result hoped for from the ill-fated Enviropig.

In addition to the efforts to modify livestock and fish, there have also been initiatives over the last fifteen years to genetically engineer insects, including bollworms and mosquitoes. Most recently, a British biotechnology company, Oxford Insect Technologies (or Oxitec), has launched a project to modify the principal (though not the only) species of mosquito that carries dengue fever, creating mutant males engineered to produce offspring that cannot survive without the antibiotic tetracycline.

The larvae, having no access to tetracycline, die before they can take flight. The idea is that the male mosquitoes, which, unlike females, do not bite, will monopolize the females and impregnate them with doomed embryos, thereby sharply reducing the overall population. Although field trials in the Cayman Islands, Malaysia, and Juazeiro, Brazil, produced impressive results, there was vigorous public opposition when Oxitec proposed the release of large numbers of their mosquitoes in Key West, Florida, after an outbreak of dengue fever there in 2010.

Opponents of this project have expressed concern that the transgenic mosquitoes may have unpredictable and potentially disruptive effects on the ecosystem into which they are released. They argue that since laboratory tests have already shown that a small number of the offspring do in fact survive, there is an obvious potential for those that survive in the wild to spread their adaptation to the rest of the mosquito population over time.
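
For readers who want to see the population arithmetic behind this approach, the short sketch below, written in Python purely for illustration and not drawn from Oxitec’s own models, simulates the suppression logic under simplified assumptions: discrete generations, engineered males that compete equally with wild males for mates, a fixed number of males released each generation, and invented parameter values, including a small “leak” of surviving offspring that echoes the critics’ concern.

    # Illustrative sketch only: a toy model of the release strategy described
    # above. All parameter values are hypothetical, not field data.

    def simulate(generations=10, wild_females=10_000, wild_males=10_000,
                 release=50_000, offspring_per_female=5, leak_survival=0.03):
        """Return the wild female population after each generation."""
        females, males = wild_females, wild_males
        history = []
        for _ in range(generations):
            # Chance that a wild female mates with an engineered male rather
            # than a wild one (assumes equal mating competitiveness).
            p_engineered = release / (release + males)
            # Offspring of engineered matings mostly die without tetracycline;
            # a small fraction (leak_survival) survives anyway.
            surviving = females * offspring_per_female * (
                (1 - p_engineered) + p_engineered * leak_survival)
            # Assume half of the surviving offspring are female, half male.
            females, males = surviving / 2, surviving / 2
            history.append(round(females))
        return history

    print(simulate())  # counts shrink when releases greatly outnumber wild males

In this toy version, the wild population declines only while releases continue and remain large relative to the number of wild males, and a nonzero leak slows the decline, mirroring in miniature both the promise and the concern described above.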

Further studies may show that this project is a useful and worthwhile strategy for limiting the spread of dengue fever, but the focus on genetically modifying the principal mosquito that carries the disease stands in sharp contrast to the near-total lack of focus on the principal cause of the disease’s rapid spread: the disruption of the Earth’s climate balance and the consequent increase in average global temperatures, which are making parts of the world that used to be inhospitable to the mosquitoes carrying dengue part of their expanding range.

According to a 2012 Texas Tech University research study of dengue’s spread, “Shifts in temperature and precipitation patterns caused by global climate change may have profound impacts on the ecology of certain infectious diseases.” Noting that dengue is one of those diseases, the researchers projected that even though Mexico has been the main location of dengue fever in North America, with only occasional small outbreaks in South Texas and South Florida, it is spreading northward because of global warming.

Dengue, which now afflicts up to 100 million people each year and causes thousands of fatalities, is also known as “breakbone fever” because of the extreme joint pain that is one of its worst symptoms. Simultaneous outbreaks emerged in Asia, the Americas, and Africa in the eighteenth century, but the disease was largely contained until World War II; scientists believe it was inadvertently spread to other continents by people during and after the war. In 2012, there were an estimated 37 million cases in India alone.

After it was spread by humans to the Americas, dengue’s range was still limited to tropical and subtropical regions. But now, as its habitat expands, researchers predict that dengue is likely to spread throughout the Southern United States and that even northern areas of the U.S. are likely to experience outbreaks during summer months.

THIS CHAPTER BEGAN with a discussion of how we are, for the first time, changing the “being” in human being. We are also changing the other beings to which we are ecologically connected. When we disrupt the ecological system in which we have evolved and radically change the climate and environmental balance to which our civilization has been carefully configured, we should expect biological consequences larger than what we can fix with technologies like genetic engineering.

After all, human encroachment into wild areas is responsible for 40 percent of the new emerging infectious diseases that endanger humans, including HIV/AIDS, the bird flu, and the Ebola virus, all of which originated in wild animals that were forced out of their natural habitat or brought into close proximity with livestock when farming expanded into previously wild regions. Veterinary epidemiologist Jonathan Epstein said recently, “When you disrupt the balance, you are precipitating the spillover of pathogens from wildlife to livestock to humans.” Overall, 60 percent of the new infectious diseases endangering humans came originally from animals.

THE MICROBIOME

We also risk disrupting the ecological system within our bodies. New research shows the key role played by microbial communities within (and on) every person. Indeed, all of us have a microbiome of bacteria (and a much smaller number of viruses, yeasts, and amoebas) that outnumber the cells of our bodies by a ratio of ten to one. In other words, every individual shares his or her body with approximately 100 trillion microbes that carry 3 million nonhuman genes. They live and work synergistically with our bodies in an adaptive community of which we are part.

Early in 2012, the 200 scientists who make up the Human Microbiome Project published the genetic sequences of this community of bacteria and found that there are three basic enterotypes—much like blood types—that exist in all races and ethnicities and are distributed in all populations without any link to gender, age, body mass, or any other discernible markers. All told, the team identified eight million protein-coding genes in the organisms and said that the function of roughly half of them is still not understood.

One of the functions performed by this microbiome is the “tutoring” of the acquired immune system, particularly during infancy and childhood. According to Gary Huffnagle, of the University of Michigan, “The microbial gut flora is an arm of the immune system.” Many scientists have long suspected that repeated heavy use of antibiotics interferes with this tutoring and damages the process by which the adaptive immune system learns to discriminate precisely between invaders and healthy cells. What all autoimmune diseases have in common is the inappropriate attack of healthy cells by the immune system, which must learn to distinguish invaders from the body’s own cells; “autoimmune” means immunity against oneself.

There is mounting evidence that inappropriate and repeated use of antibiotics in young children may be impairing the development and “learning” of their immune systems—thereby contributing to the apparent rapid rise of numerous diseases of the immune system, such as type 1 diabetes, multiple sclerosis, Crohn’s disease, and ulcerative colitis.

The human immune system is not fully developed at birth. Like the human brain, it develops and matures after passage through the birth canal. (Humans have the longest period of infancy and helplessness of any animal, allowing for rapid growth and development of the brain following birth—with the majority of the development and learning taking place in interaction with the environment.) The immune system has an innate ability at birth to activate white blood cells to destroy invading viruses or bacteria, but it also has an acquired—or adaptive—immune system that learns to remember invaders in order to fight them more effectively if they return. This acquired immune system produces antibodies that attach themselves to the invaders so that specific kinds of white blood cells can recognize the invaders and destroy them.

The essence of the problem is that antibiotics themselves do not discriminate between harmful bacteria and beneficial bacteria. By using antibiotics to wage war on disease, we are inadvertently destroying bacteria that we need in order to remain in a healthy balance. “I would like to lose the language of warfare. It does a disservice to all the bacteria that have co-evolved with us and are maintaining the health of our bodies,” said Julie Segre, a senior investigator at the National Human Genome Research Institute.

One important bacterium in the human microbiome, Helicobacter pylori (or H. pylori), helps regulate two key hormones in the human stomach that are involved in energy balance and appetite. According to genetic studies, H. pylori has lived inside us in large numbers for 58,000 years, and as recently as a century ago it was the single most common microbe in the stomachs of most human beings. However, as Martin Blaser, professor of microbiology and chairman of the Department of Medicine at NYU School of Medicine, reported in an important 2011 essay in Nature, studies have found that “fewer than 6 percent of children in the United States, Sweden and Germany were carrying the organism. Other factors may be at play in this disappearance, but antibiotics may be a culprit. A single course of amoxicillin or a macrolide antibiotic, most commonly used to treat middle-ear or respiratory infections in children, may also eradicate H. pylori in 20–50% of cases.”

It is important to note that H. pylori has been found to play a role in both gastritis and ulcers; Dr. Barry Marshall, the Australian physician who shared the 2005 Nobel Prize in Medicine for discovering H. pylori, noted, “People have been killed who didn’t get antibiotics to get rid of it.” Still, several studies have found strong evidence that people who lack H. pylori “are more likely to develop asthma, hay fever or skin allergies in childhood.” Its absence is also associated with increased acid reflux and esophageal cancer. Scientists in Germany and Switzerland have found that introducing H. pylori into the guts of mice protects them against asthma. Among people, for reasons that are not yet fully understood, asthma has increased by approximately 160 percent throughout the world in the last two decades.

Ghrelin, one of the hormones regulated by H. pylori, is a key driver of appetite. Normally, ghrelin levels fall significantly after a meal, signaling to the brain that it is time to stop eating. In people whose guts lack H. pylori, however, ghrelin levels do not fall after a meal—so the signal to stop eating is never sent. In the laboratory run by Martin Blaser, mice given enough antibiotics to kill their H. pylori gained significant body fat on an unchanged diet. Interestingly, while scientists have long said they cannot explain why subtherapeutic doses of antibiotics in livestock feed increase the animals’ weight gain, there is now new evidence that the effect may be due to changes in the animals’ microbiome.

The replacement of beneficial bacteria wiped out by antibiotics has been shown to be an effective treatment for some diseases and conditions caused by harmful microbes that beneficial ones normally keep in check. Probiotics, as these treatments are called, are not new, but some doctors are now treating patients infected with the harmful bacterium Clostridium difficile by administering a suppository to accomplish a “fecal transplant.”

Although the very idea triggers a feeling of repugnance in many, the procedure has been found to be both safe and extremely effective. Scientists at the University of Alberta, after reviewing 124 fecal transplants, found that 83 percent of patients experienced immediate improvement when the balance of their internal microbiome was restored. Other scientists are now hard at work developing probiotic remedies designed to restore specific beneficial bacteria when they are missing from a patient’s microbiome.

Just as we are connected to and depend upon the 100 trillion microbes that live in and on each one of us from birth to death, we are also connected to and depend upon the life-forms all around us that live on and in the Earth itself. They provide life-giving services to us just as the microbes in and on our bodies do. Just as the artificial disruption of the microbial communities inside us can create an imbalance in the ecology of the microbiome that directly harms our health, the disruption of the Earth’s ecological system—which we live inside—can also create an imbalance that threatens us.

The consequences for human beings of the large-scale disruption of the Earth’s ecological system—and what we can do to prevent it—is the subject of the next chapter.


* Olaf Sporns, a professor of computational cognitive neuroscience at Indiana University, coined the word “connectome.” The National Institutes of Health now have a Human Connectome Project.

† The first effective polio vaccine was developed by Jonas Salk in 1952 and licensed for the public in 1955. A team led by Albert Sabin developed the first oral polio vaccine, which was licensed for the public in 1962.

‡ Historians date the introduction of Hammurabi’s Code to around 1780 BC.

§ Other scientists have mimicked the molecular design of spider silk by synthesizing their own from a commercially available substance (polyurethane elastomer) treated with clay platelets only one nanometer (a billionth of a meter) thick and only 25 nanometers across, then carefully processing the mixture. This work has been funded by the Institute for Soldier Nanotechnologies at MIT because the military applications are considered of such high importance.