7
IN THE SPRING OF 2014, roughly a year before my first Davos meeting, I got a taste of what would soon become an international effort to shape the future of CRISPR.
It had been less than two years since the publication of our paper in Science describing how CRISPR could be harnessed for gene editing, but news of this technology had already spread through the scientific community—and beyond. Popular excitement about CRISPR had begun to grow, thanks to enthusiastic descriptions of gene-editing research in the mainstream media. As research using the CRISPR technology continued to accelerate, many scientists tried to keep their focus in the laboratory—advancing gene-editing methods themselves and using these methods in new ways—without getting swept into a more public discussion.
Like these colleagues, I had continued to explore and develop CRISPR, working with my academic lab at Berkeley while also devoting an increasing amount of time to better understanding the challenges of using gene editing for human therapeutics. Such work was ongoing in numerous academic labs and was also getting under way at several startup biotechnology companies. It was exhilarating to be part of what felt like a massive collective effort to uncover the workings of the CRISPR technology and unlock its vast potential to manipulate genetic information inside cells. Mostly I felt excited and hopeful that our efforts would bring about positive developments in fields ranging from agriculture to medicine. But occasionally, I found myself lying awake in the wee hours of the night wondering about people outside of academia who were also taking a keen interest in this burgeoning field, and not always for the best reasons.
It was around this time that my coauthor, Sam Sternberg, then a PhD student in my lab, received an e-mail from an entrepreneur whom I’ll call Christina. She wanted to know if Sam would be interested in being a part of her new company, which somehow involved CRISPR, and she asked to meet him so she could pitch her business idea.
On the face of it, Christina’s note wasn’t surprising. Given the pace at which CRISPR had been developed and disseminated, and given its increasingly obvious potential to disrupt so many sectors of the biotechnology market, every week seemed to bring word of yet another new company, product, or licensing deal related to gene editing. But as Sam would soon discover, Christina’s venture was different—very different.
Sam didn’t really know what to expect when he met Christina for dinner at an upscale Mexican restaurant near campus, but he was nonetheless caught off guard by their conversation. Her e-mail had been vague, but in person Christina spoke freely about what she hoped to do with the technology Sam was helping to develop.
Speaking passionately over cocktails, Christina told Sam that she hoped to offer some lucky couple the first healthy “CRISPR baby.” The child, she explained, would be produced in the lab using in vitro fertilization, but it would have special features: customized DNA mutations, installed via CRISPR, to eliminate any possibility of genetic disease. While trying to entice him to come on board as a scientist, Christina assured him that her company planned to introduce only prophylactic genetic modifications in human embryos; if he wanted to be involved, he needn’t worry about making any mutations that weren’t necessary to ensure the health of the unborn child.
Christina didn’t have to explain to Sam how the procedure would work or how easy it would be. To edit the human genome in the way she was suggesting, a clinician would need only techniques that were already well understood by this time: generation of an embryo in vitro from the would-be parents’ egg and sperm cells, injection of preprogrammed CRISPR molecules to edit the embryo’s genome, and implantation of the edited embryo into the mother’s uterus. Nature would take care of the rest.
Sam excused himself before dessert; he’d heard enough. Despite Christina’s assurances, he came away from their conversation rattled. She was, Sam sensed, obsessed with the power and possibilities of CRISPR. As he told me later, he’d perceived a Promethean glint in her eyes and suspected she had in mind other, bolder genetic enhancements in addition to the well-intentioned genetic changes she’d described.
Had this conversation occurred just a few years earlier, Sam and I would have dismissed Christina’s proposal as pure fantasy. Sure, genetically modified humans made for great science fiction, and they were a fertile subject for philosophical and ethical musings on the possibility of human “self-evolution.” But unless the Homo sapiens genome suddenly became as easy to manipulate as the genome of a laboratory bacterium like E. coli, there was little chance of anyone pursuing such Frankenstein schemes anytime soon.
Now, we could no longer laugh off this kind of speculation. Making the human genome as easily manipulable as that of a bacterium was, after all, precisely what CRISPR had accomplished. Just a month before Sam’s meeting with Christina, in fact, the first monkeys were born with genomes that had been rewritten through precision gene editing, bringing the steady march of CRISPR research right to Homo sapiens’ evolutionary front door. In light of this development with primates and the number of CRISPR-modified animal species—from worms to goats—that had preceded them, it seemed only a matter of time before humans were added to the growing list of creatures whose genomes had been reworked.
I felt keenly aware of, and apprehensive about, this possibility. While I couldn’t gainsay the many overwhelmingly positive effects that gene editing would have on our world—allowing us to better understand human genetics, produce food more sustainably, and treat victims of devastating genetic diseases—I was growing anxious about other uses to which CRISPR could conceivably be put. Had our discovery made gene editing too easy? Were scientists rushing too haphazardly into new areas of research without stopping to think about whether their experiments were justified or what their effects might be? Could CRISPR be misused or abused, particularly where the human genome was concerned?
I was especially beginning to worry that, someday soon, scientists would attempt to alter the human genome in a heritable way, not to treat a disease in a living patient, but to eliminate the prospect of disease in a child who hadn’t yet been born or even conceived. This was, after all, exactly what Christina had proposed to Sam. Even if she didn’t accomplish it, who was to say that someone else wouldn’t?
This possibility gnawed at me. Humans had never before had a tool like CRISPR, and it had the potential to turn not only living people’s genomes but also all future genomes into a collective palimpsest upon which any bit of genetic code could be erased and overwritten depending on the whims of the generation doing the editing. What’s more, encounters such as Sam’s meeting with Christina were forcing me to acknowledge that not everyone shared my trepidation about the prospect of scientists rewriting the DNA of future human beings without fully appreciating the consequences. Somebody was inevitably going to use CRISPR in a human embryo—whether to eradicate the sickle cell trait in a single family’s germline or to make nonmedical enhancements—and it might well change the course of our species’ history in the long run, in ways that were impossible to foretell.
The question, I was beginning to realize, was not if gene editing would be used to alter DNA in human germ cells but rather when, and how. It was also becoming clear to me that, if I wanted to have a say in when and how CRISPR would be used to change the genetic makeup of future humans, I would first have to understand exactly how much of a break from previous scientific accomplishments germline editing would be. What sorts of interventions in the human germline had previously been achieved, and how had they been tolerated? What were the goals of these earlier interventions? And what had my predecessors—in particular, the scientific luminaries of generations past—had to say about human germline manipulation, the prospect of which alarmed me so much?
It’s not as if the debate over modifying the human germline began only when CRISPR came along. Far from it. Back when the earliest hints of gene editing were emerging, physicians in reproductive medicine were already selecting certain embryos over others to establish pregnancies, thus making choices about which genes would be passed on to subsequent generations. And for even longer, practitioners and observers of science have been disturbed by the notion that humans might someday be the primary authors of their own genetic constitutions.
Once the role of DNA in encoding genetic information had been proved, researchers began to appreciate the power of rationally manipulating genetic code, even though the tools for doing so didn’t yet exist. Marshall Nirenberg, one of the biologists responsible for cracking the genetic code in the 1960s (a feat for which he was awarded the Nobel Prize in Physiology or Medicine), wrote in 1967 of man’s “power to shape his own biologic destiny. Such power,” he observed, “can be used wisely or unwisely, for the betterment or detriment of mankind.” Aware that such a capability should not lie in the hands of scientists alone, Nirenberg continued, “Decisions concerning the application of this knowledge must ultimately be made by society, and only an informed society can make such decisions wisely.”
Not all scientists were so restrained. Writing in the American Scientist just a few years later, Robert Sinsheimer, then a professor of biophysics at Caltech, described human genetic modification as “potentially one of the most important concepts to arise in the history of mankind . . . For the first time in all time a living creature understands its origin and can undertake to design its future.” Sinsheimer scoffed at critics who argued that genetic engineering was simply a modern version of the timeless but futile dream to perfect mankind: “Man is all too clearly an imperfect, a flawed creature. Considering his evolution, it is hardly likely that he could be otherwise . . . We now glimpse another route—the chance to ease the internal strains and heal the internal flaws directly—to carry on and consciously to perfect, far beyond our present vision, this remarkable product of two billion years of evolution.”
Within two decades of the publication of Sinsheimer’s essay, scientists were rapidly mapping the route to perfection he had been able only to glimpse in the late 1960s. By the beginning of the 1990s, gene therapy trials were under way with human patients—and while it was clear that accurately manipulating the human germline wouldn’t be feasible even with this relatively advanced technology, that didn’t stop researchers from wringing their hands about the possibility. French Anderson, the scientist who led those first clinical trials, was outspoken about the hazards and ethical arguments against using gene therapy for enhancement purposes, whether in somatic cells or in the germline. Above all, he questioned whether any scientist could wield this newfound power responsibly or if instead the scientist “might be like the young boy who loves to take things apart. He is bright enough to disassemble a watch, and maybe even bright enough to get it back together again so that it works. But what if he tries to ‘improve’ it? Maybe put on bigger hands so that the time can be read more easily. But if the hands are too heavy for the mechanism, the watch will run slowly, erratically, or not at all . . . Attempts on his part to improve the watch will probably only harm it.”
Despite the warnings of leading scientists like Anderson, the idea of altering or refining our genetic makeup continued to galvanize some biologists throughout the last decade of the twentieth century. The excitement of these women and men was stoked by ongoing research and development into human gene therapy as well as by seminal advances in three major areas: fertility research, animal studies, and human genetics.
Back then, any scientist dreaming of someday “improving” on the genetic makeup of the human race needed to look no further for inspiration than recent advances in the treatment of infertility. The birth of Louise Brown in 1978, the world’s first “test-tube baby,” was a watershed moment for reproductive biology, proving that human procreation could be reduced to simple laboratory procedures: the mixing of purified eggs and sperm in a petri dish, the fostering of a zygote as it grew into a multicellular embryo, and the implantation of that embryo in the mother’s uterus. In vitro fertilization, or IVF, enabled parents with various forms of infertility to produce genetically related children while also opening the door to other manipulations that could eventually be performed on the early-stage embryo during its growth in the laboratory. After all, if a human life could be created in a petri dish, the same type of sterile environment where gene-editing technologies were being developed, it was conceivable that the two methods would someday converge. Research aimed at circumventing infertility had inadvertently refined a procedure that would become integral to future discussions of germline manipulation.
Animal research, too, encouraged scientists who thought that human germline editing was almost within reach. Over the last few decades of the twentieth century, scientists had devised more and more ingenious ways of engineering animal genomes, from cloning to virus-based gene addition to the earliest uses of precision gene editing. By the 1990s, it had become fairly routine to generate mouse models of human disease by modifying specific genes in the mouse germline; although the exact procedure couldn’t be used on humans, it set the stage for inventions like ZFNs and CRISPR, which transformed the formerly crude method of germline gene editing in mice into a streamlined, exact, and highly optimized technique that was much better suited to human subjects. The decade also witnessed the first successful cloning of a mammal with the famous birth of Dolly the sheep in 1996. By transferring the nucleus (with all its DNA) of a somatic cell taken from an adult sheep into a recipient egg cell whose nucleus had been removed, stimulating the hybrid cell to begin dividing, and then implanting the resulting embryo in a surrogate mother, Ian Wilmut and his colleagues in Scotland produced a ewe whose genome was a perfect copy of the donor’s.
IVF and cloning were huge technical breakthroughs that helped lay the groundwork for germline modification. Not only did they show that scientists could generate a viable embryo in the lab by mixing egg and sperm, but they also revealed that embryos could be created using genetic information from a single animal. The latter feat sent regulators worldwide scrambling to enact legislation that would prohibit the reproductive cloning of humans. As it turned out, cloning mammals proved so technically difficult that few laboratories in the world were capable of attempting it. Thus, unlike CRISPR, the technology of somatic cell nuclear transfer was effectively self-limiting due to the extensive expertise it required.
Finally, enthusiasm for making changes to the DNA of future humans was a natural outgrowth of breakthroughs in human genetics, especially the sequencing of the human genome. This incredible development made many people think that geneticists would soon be able to find the root causes of once-mysterious diseases as well as the genetic basis of a much broader range of human phenotypes, from physical traits to cognitive ones. Once we fully understood the genetic factors that determine human health and performance, we might be able to select for—or perhaps even engineer—embryos with a genetic composition different from that of their parents. Better than that of their parents.
Or so some scientists hoped. I, for one, was skeptical about what I saw as blind optimism in this pre-CRISPR era, with some researchers gushing about the possibilities of reshaping the germline without pausing to consider the consequences. Would this sort of procedure really be able to safely rid all of a person’s descendants of a genetic illness, or would it have side effects that we could not foresee? It seemed impossible to ever conduct experiments that would provide answers to such questions. And even if it could be done safely, would doctors and their patients really restrict it to medical applications, or would they cross the line by making nonessential modifications? Although at the time I hadn’t given all that much thought to these questions, they nonetheless nagged at me whenever the subject of the germline came up.
In 1998, the growing excitement—or unease, depending on where on the continuum you found yourself—over germline modification prompted two scientists, John Campbell and Gregory Stock, to organize one of the first symposia on the topic at the University of California, Los Angeles. Called Engineering the Human Germline, the meeting featured talks from some of the foremost researchers in the field, including French Anderson, the gene therapy pioneer; Mario Capecchi, one of the fathers of early gene editing; and James Watson, co-discoverer of the structure of DNA. Although I wasn’t in attendance—back then, my research was still focused on questions like how tiny RNA molecules folded into elaborate three-dimensional structures—records of the conference helped reassure me years later that I wasn’t alone in worrying about tampering with the human germline and that my concerns were by no means new.
At the time of the UCLA conference, its participants were wrestling with many of the same concerns about germline modification that have resurfaced in recent years with the advent of CRISPR, issues such as consent, inequality, access, and unintended consequences for future generations. Like many concerned scientists today, these researchers grappled with the thorny question of whether scientists would be transgressing natural or divine laws by changing the human germline and whether such efforts would constitute eugenics, the early-twentieth-century set of beliefs and practices that has since been thoroughly repudiated by mainstream science. But in addition to, or perhaps in spite of, these weighty ethical considerations, the participants in the 1998 symposium were clearly buoyed by extreme optimism at the possibility of using the latest scientific breakthroughs to improve humanity. Panel discussions focused on topics such as eradicating disease, avoiding serious genetic defects, and generally improving on the natural course of evolution—which, attendees argued, could be so cruel as to justify some sort of intervention.
A report on human inheritable genetic modification, authored a few years later by the American Association for the Advancement of Science, was considerably more restrained. The working group concluded that germline interventions couldn’t (yet) be carried out safely or responsibly, that the ethical concerns were serious, and that the risks of germline modification being used for enhancement purposes were especially problematic. Not long afterward, the Genetics and Public Policy Center reached similar conclusions, while also acknowledging that consumer demand for certain uses was likely to evolve if scientists developed viable procedures.
In addition to these conferences and reports, another event foreshadowed some of the goals—and the controversies—of germline modification that would gain new urgency with the birth of CRISPR. This was the advent of a medical procedure that allowed parents to choose, albeit in a limited way, the genetic material that their children would inherit.
Once the technique of in vitro fertilization transformed the act of conception into a rather simple laboratory procedure, it became feasible to subject early-stage human embryos to DNA sequence analysis just like any other biological sample. Since each parent passes down only 50 percent of his or her DNA to offspring, the particular constellation of chromosomes and genes that a child inherits is essentially random. But the ability to generate multiple embryos in the laboratory using multiple eggs and sperm changed all that. Instead of implanting embryos chosen at random into the mother, IVF doctors could first analyze the DNA of candidate embryos to make sure they were selecting ones with the healthiest genomes—a practice that’s come to be known as preimplantation genetic diagnosis, or PGD.
Of course, prenatal genetic testing is also available for pregnancies conceived without IVF, and it is increasingly practiced today. Amniocentesis or a simple blood sample taken from the mother (which harbors trace amounts of the fetus’s DNA) can reveal chromosomal abnormalities like Down syndrome and even specific disease-causing gene mutations. But there are still ethical issues to consider. After all, if prenatal testing indicates that a fetus suffers from damaging genetic defects, there are typically only two options: proceed with the pregnancy or terminate it. Unsurprisingly, given the controversy surrounding selective abortion, the use of this sort of testing has been the source of vigorous debate.
Preimplantation genetic diagnosis avoids difficult issues like these by making embryo selection possible before a pregnancy is even established (though it also requires fertilization to take place in vitro, which is costly and involves invasive egg retrieval from the mother). PGD still suffers from technical challenges, but on the whole, it has been effective in preventing the birth of children with certain kinds of genetic conditions, and it has become an attractive option for parents who are already considering IVF because of fertility problems. Yet while this technique avoids the ethical conundrum of abortion, it has some heavy philosophical baggage of its own.
In its earliest implementations, PGD was used for gender selection, albeit for medical reasons; diseases linked to mutations on the X chromosome, known as X-linked diseases, could be specifically avoided if female embryos were selected. But despite scientists’ good intentions, many observers and regulators simply couldn’t abide the idea that PGD allowed parents to decide whether to have a boy or a girl—especially since, in many countries, female children are considered less desirable than males. Today, the use of preimplantation genetic diagnosis for gender selection is illegal in many countries (including India and China) or permitted only to avoid X-linked diseases (as in Great Britain). But it’s legal in the United States, where many fertility clinics offer it as a reproductive option to parents without requiring any urgent medical reason.
PGD has also been used for other controversial purposes, such as the birth of so-called savior siblings, destined from the moment of implantation not only to live their own lives, but also to serve as organ or cell donors for a sibling. And in the future, parents may be offered the option of selecting for traits that go beyond disease susceptibility and gender and cross into areas like behavior, physical appearance, or even intelligence. The list of known associations between certain gene variants and a diverse list of traits continues to grow, and as the PGD technology improves further, what’s to stop fertility clinics from consulting this genetic information so they can offer their consumers even more choices when it comes to selecting the most desirable or “best” embryos?
The implications of this kind of genetic testing are extreme, yet it’s not even the latest or most advanced technology associated with assisted reproduction. That distinction goes to mitochondrial replacement therapy, colloquially known as three-parent IVF. As the nickname suggests, any baby born from this procedure contains DNA from not two but three parents: one father and two mothers. This therapy—which involves transferring the nucleus of one egg cell into another egg cell whose nucleus has been removed—aims to save unborn children from an otherwise unavoidable class of genetic conditions called mitochondrial diseases. The second egg cell contains no nucleus but does have mitochondria, which house a small portion of the human genome, so this procedure creates an individual who is genetically related to three parents: the mother who contributed the nuclear genome (and who is likely to raise the child); the mother who contributed the enucleated egg cell and its mitochondrial genome (a small but essential collection of genes); and the father who contributed the sperm and the second copy of the nuclear genome.
Mitochondrial replacement therapy has been shown to work with mice and nonhuman primates, and has already been performed on human eggs. There’s still controversy about its safety, but clinical applications are on the horizon. The advisory committee that oversees fertility research and treatments in the United Kingdom endorsed mitochondrial replacement therapy in a 2014 report, and after parliamentary approval in 2015, the UK became the first country in the world to approve regulations permitting its clinical use. The United States may not be far behind; in early 2016, the National Academies of Sciences, Engineering, and Medicine similarly recommended that the Food and Drug Administration approve future trials of three-parent IVF.
Procedures like PGD and three-parent IVF demonstrate that the scientific and medical communities are willing to push the ethical envelope in order to enable parents to have healthy children. Even three-parent IVF, which is technically very similar to reproductive cloning in some regards, has come under relatively little philosophical or regulatory scrutiny compared to that other, much more controversial technique. And three-parent IVF would permanently alter the human genome, changing the germline in ways that would be passed on to future generations in perpetuity. Regulators have nevertheless greenlighted this reproductive therapy.
Reading about these cases, I had to ask myself: Would regulators and researchers be just as comfortable using CRISPR to make heritable changes to the human genome, given that its power is so much greater than these earlier technologies? When fertility doctors eventually realize that they have the ability to enhance embryos’ genomes with many, many more gene variants than could be provided by any given set of parents, will they really pause to reflect on the possible consequences? Or will they rush to make use of this newfound power, blindly grasping a genetic tool that, wielded in the dark, cannot be fully controlled?
I wasn’t used to asking myself these sorts of questions in my day-to-day life as a professor and biochemical researcher. Although I recall writing on my application to graduate school that I was interested in scientific communication, in truth I much preferred working in the lab and trying new experiments to thinking about the theoretical, long-term implications of my research and trying to explain them to nonscientists. And as I got more deeply involved in my field, I spent increasing amounts of time talking with specialists and less time talking to people outside my immediate circle of experts. In this way, I fell into a common trap; scientists, like anyone else, feel most comfortable when surrounded by others like themselves, people who speak the same language and worry about the same issues, big and small.
Two years after my colleagues and I published the article that described CRISPR as a new gene-editing platform, though, I was finding it impossible to ignore these big-picture questions and stay inside my familiar scientific bubble. As scientists used CRISPR to edit the genes of more and more animals, and as they continued to expand the tool’s capabilities, I realized it would not be long before researchers somewhere tested CRISPR on human eggs, sperm, or embryos with the goal of permanently rewriting the genome of future individuals. But incredibly, no one was discussing this possibility. Instead, the gene-editing revolution was unfolding behind the backs of the people whom it would affect. Even as the CRISPR field was exploding, no one outside of my circle of colleagues seemed to know about it or understand what was coming. Eventually, the disconnect this created between my professional life and my personal life became profound. By day I compared notes with specialists, by night I dined with neighbors and chatted with PTA parents, and all the while I marveled over how little the denizens of these two worlds seemed to know about each other. Thus, while the UK authorities were openly deliberating over mitochondrial replacement therapy, I was privately struggling over whether I could avoid the ethical storm brewing around this technology I had helped create.
It’s not that I was categorically opposed to the idea of scientists and physicians using gene editing to introduce heritable changes into the human genome. To be sure, there were numerous philosophical, practical, and safety issues—many of which I’ll cover in the next chapter—that deserved in-depth discussion and vigorous debate, but none of these constituted a reason to absolutely forbid this use of the technology. I was far more concerned about two other, more concrete hazards: first, that through a series of reckless, poorly conceived experiments, scientists would prematurely implement CRISPR without proper oversight or consideration of the risks, and second, that by virtue of being so effective and easy to use, CRISPR might be abused or employed for nefarious purposes.
It was hard to know what such misuses might be and who might be committing them. Even in the spring of 2014, before I had a chance to consider these issues deeply, my subconscious was offering up answers in the form of nightmares—one of which I alluded to in the opening pages of this book.
In this particular dream, a colleague approached me and asked if I would be willing to teach somebody how the gene-editing technology worked. I followed my colleague into a room to meet this person and was shocked to see Adolf Hitler, in the flesh, seated in front of me. He had a pig face (perhaps because I had spent so much time thinking about the humanized pig genome that was being rewritten with CRISPR around this time), and he was meticulously prepared for our meeting with pen and paper, ready to take notes. Fixing his eyes on me with keen interest, he said, “I want to understand the uses and implications of this amazing technology you’ve developed.”
His terrifying appearance and sinister request were enough to jolt me awake. As I lay in the dark, my heart racing, I couldn’t escape the awful premonition with which the dream had left me. The ability to refashion the human genome was a truly incredible power, one that could be devastating if it fell into the wrong hands. The thought frightened me even more because, by this point, CRISPR had been widely disseminated to users around the globe. Tens of thousands of CRISPR-related tools had already been shipped to dozens of countries, and the knowledge and protocols needed to create designer mutations in mammals—at least in mice and monkeys—had been described in great detail in numerous published articles. To make matters worse, CRISPR wasn’t employed only by the hundreds of academic and commercial research labs worldwide; it was also sold online to any consumer with a hundred dollars. Sure, these DIY CRISPR kits were designed to modify only bacterial and yeast genes, but the technique was simple enough, and academic experiments with animal genomes had become so routine, that it wasn’t hard to imagine biohackers messing with more complex genetic systems—up to and including our own.
What had we done? Emmanuelle and I, and our collaborators, had imagined that CRISPR technology could save lives by helping to cure genetic disease. Yet as I thought about it now, I could scarcely begin to conceive of all of the ways in which our hard work might be perverted. Overwhelmed by how fast everything was moving and by how quickly it seemed it could all go wrong, I began to feel a bit like Dr. Frankenstein. Had I created a monster?
As if my mind weren’t occupied enough with these unsettling thoughts, I found myself worrying about yet another possibility: that scientists wouldn’t conduct their research transparently. Science, after all, does not happen in a vacuum. This is especially true for the applied sciences, in which breakthroughs often have a direct impact on society. I strongly believe that scientists working in this field have a responsibility to conduct their research openly, to educate the public about their work, and to engage in collective discussions about the possible risks, benefits, and ramifications of their experiments before conducting any that might cross the Rubicon, so to speak.
In the case of CRISPR, it seemed clear that public discussion was falling far behind the breakneck pace of scientific research. I wondered if there might be a backlash if experiments on humans were attempted before we could have an open deliberation about gene editing. And it seemed possible that such a backlash could damage or delay more urgent and uncontroversial therapeutic applications of CRISPR, such as the treatment of genetic diseases in adult patients. Increasingly concerned by these prospects, I fumbled for clues about how to proceed.
It was around this time that I found myself thinking about analogies to nuclear weapons, a field in which science advanced in secrecy and without adequate discussions about how researchers’ findings should be used. This was particularly true during World War II. J. Robert Oppenheimer, former Berkeley professor of physics and one of the fathers of the atomic bomb, made precisely this point in a series of security hearings following the war, after his outspoken calls for an end to the nuclear arms race (not to mention his Communist ties) had drawn the ire of politicians. Commenting on the American reaction to the Soviet Union’s first tests of an atomic bomb and on the ensuing debate over whether to pursue even more explosive hydrogen bombs, Oppenheimer said: “It is my judgment in these things that when you see something that is technically sweet, you go ahead and do it and you argue about what to do about it only after you have had your technical success. That is the way it was with the atomic bomb. I do not think anybody opposed making it; there were some debates about what to do with it after it was made.”
Oppenheimer’s words only pricked my conscience more. Perhaps one day we would be saying the same thing about CRISPR and genetically modified humans. While human gene editing would almost assuredly never have the same catastrophic consequences as the detonation of a nuclear weapon, it seemed likely that rushing ahead with the research could still cause harm—by undermining society’s trust in this new form of biotechnology, if nothing else. Indeed, given the widespread uneasiness about, and even antipathy toward, certain forms of genetic engineering in agriculture, I was becoming especially concerned that a lack of information—or the spread of misinformation—about germline editing could stymie our attempts to use CRISPR in far safer and more essential ways.
As my mind churned through these scenarios, I began to wonder how I could get out in front of the problem. I wanted to find a way to take preemptive action and initiate an honest and open public discourse about this technology I’d helped to create. Could I and other concerned scientists save CRISPR from itself—not after the fact, as had happened with nuclear weapons, but before a cataclysm occurred?
I sought answers in another pivotal moment in the history of biotechnology, an episode when voices of caution had resounded throughout the scientific community and beyond. Then, as now, the cause of concern was a breakthrough in genetic engineering. In this earlier instance, it was the birth of recombinant DNA. And in this case, scientists had moved proactively—and, ultimately, successfully—to prevent their work from inadvertently causing harm.
In the early 1970s, scientists made major advances in the nascent art of gene splicing—chemically fusing, or recombining, purified bits of genetic material from different organisms to create never-before-seen synthetic DNA molecules. Paul Berg, a Stanford biochemist and eventual winner of the Nobel Prize, was the first to achieve this feat, and he did so by combining DNA from three sources: a bacterial virus known as lambda phage, the bacterium E. coli, and a monkey virus known as simian virus 40, or SV40. Once he’d combined the viral and bacterial DNA, Berg planned to introduce these hybrid mini-chromosomes into cells so that he could study the functions of individual genes when expressed outside of their normal environment.
But at the time, Berg and other scientists recognized that experimenting with modified genetic material could have myriad, unpredictable, and potentially dangerous consequences. Perhaps most troubling was the thought of what might happen if the synthetic DNA wasn’t properly contained and somehow got out of the laboratory. Berg’s initial plan had been to transfer the genetic material into lab strains of E. coli bacteria, but since the human digestive system naturally harbors billions of innocuous E. coli, it seemed plausible that genetically modified E. coli might infect and harm humans. Moreover, because the SV40 virus was known to cause tumors in mice, there was a chance that the fragment of SV40 DNA could create a novel carcinogenic pathogen that, if released into the environment, might wreak havoc by spreading cancer-causing genes or antibiotic resistance to humans or some other species.
Because of these concerns, Berg and his team of researchers held off on attempting the experiment. Instead, Berg called for the first of what would eventually become two meetings held at the picturesque Asilomar Conference Grounds, nestled in Pacific Grove, California, on the western tip of the Monterey Peninsula. Before his research went any further, he wanted to enlist his fellow scientists in a thorough cost-benefit analysis.
The meeting in 1973—eventually known as Asilomar I—focused on the DNA of cancer viruses and the risks they posed; it did not directly address the new recombinant DNA experiments Berg was considering. That same year, however, scientists held a second conference focused specifically on gene splicing. The concerns raised at this meeting led scientists to request that the National Academy of Sciences establish a committee to formally investigate the new technology. Berg would serve as the chairman of this group, the Committee on Recombinant DNA Molecules, which met at MIT in 1974. Soon after the meeting, the committee released a notable report titled “Potential Biohazards of Recombinant DNA Molecules.”
The “Berg letter,” as it’s often called, issued an unprecedented call for a worldwide moratorium on experiments the committee deemed most hazardous—those aimed at creating antibiotic resistance in new bacterial strains and those aimed at creating DNA hybrids with cancer-causing animal viruses. It was one of the first times that scientists had voluntarily refrained from conducting a whole class of experiments in the absence of any regulatory or governmental mandate.
The Berg letter also included three other recommendations: first, that scientists adopt a cautious approach to any experiments designed to fuse animal and bacterial DNA; second, that the National Institutes of Health establish an advisory committee to oversee future issues surrounding recombinant DNA; and third, that an international meeting be convened so that scientists from around the world could review recent progress in the field and compare notes on how to deal with potential hazards. This last recommendation would result in the International Conference on Recombinant DNA Molecules, held back at Asilomar in February 1975.
Much has been written about Asilomar II. Roughly a hundred and fifty people attended, mostly scientists but also lawyers, government officials, and members of the media. The debate was heated at times, with even the biology experts disagreeing with one another on the relative hazards of experiments involving recombinant DNA. Some argued against prematurely ending the moratorium, feeling that certain experiments should continue to be prohibited until much more was known about their risks; others felt the risks were likely nonexistent or at least minimal and certainly nothing that the proper safety measures couldn’t protect against. Ultimately, Berg and his colleagues decided that most experiments should proceed but with appropriate safeguards; namely, biological and physical barriers to contain genetically modified organisms.
While such resolutions were certainly important, Asilomar II was just as consequential for the link it forged between scientists and the public. The members of the media who attended the meeting informed their audiences about the scientists’ discussions. Instead of leading to an uproar and crippling restrictions, as some scientists had feared, this transparency ultimately gave rise to a consensus that allowed research to proceed with popular support.
Asilomar II was not without its critics, though. The conference was invitation-only, and with just a handful of nonscientists in attendance, some argued that the meeting failed to cast a wide enough net outside the scientific community. Others took issue with the omission of topics like biosecurity and ethics from the meeting’s agenda. Perhaps the most criticism was reserved for the notion that experts could best assess and address the risks, benefits, and ethical challenges surrounding a new technology, and therefore experts should be the ones to define the terms of the debate. As Benjamin Hurlbut, a science historian at Arizona State University, put it: “This approach gets democracy wrong. It is our technologies that should be subject to democratically articulated imaginations of the futures we want, not the opposite. Science and technology often claim to be servants of society; they should take that promise seriously. Imagining what is right and appropriate for our world—and what threatens its moral foundations—is a task for democracy, not for science.”
I absolutely agree that society as a whole—rather than scientists individually or even as a group—should decide how any given technology is used. But there’s a wrinkle here, which is that society cannot make decisions about technologies it doesn’t understand, and certainly not about those it knows nothing about. It’s up to scientists to bring these breakthroughs to the public’s attention, as Berg and his colleagues did, to introduce and demystify their technical accomplishments so the public can understand their implications and decide how to use them. When gene splicing was first developed, after all, most biologists weren’t even aware of it; the discussion necessarily had to begin within the community of experts who understood what the technology was and what experiments it made possible. By publicizing these discussions and inviting the media to further expound on the technology in terms that laypeople could understand, Berg and colleagues helped break down the wall between scientists and the public and pave the way for the creation of a governmental authority known as the Recombinant DNA Advisory Committee, which became heavily involved in overseeing subsequent research and clinical applications of recombinant DNA.
Some forty years later, in the early part of 2014, I decided that we needed to take a similar approach—not just with CRISPR but with the general practice of gene editing. The technology had already spread like wildfire through the global scientific community; in its brief history, precision gene editing had been used on a diverse and growing menagerie of animals, and all indications were that therapeutic applications in somatic human cells were not far off. But scientists and the public seemed to be ignoring the very real possibility that this same technology would soon be used on human embryos, and they were apparently oblivious to the significance of this sort of germline editing.
An open and frank discussion about germline editing clearly had to begin without delay, and I felt I needed to help initiate the discussion. Much as Berg and his colleagues had sounded the alarm when the risks of their work with recombinant DNA became clear, I would need to leave the comfort of my lab and help spread the word about the implications of our research. Only that way could CRISPR be fully understood by the people whose lives it would soon affect. Only that way, I hoped, could its worst excesses be averted.
It’s one thing for a scientist like me to organize an academic meeting on subjects that are firmly within my wheelhouse. It’s quite another to take the reins of a conversation about the broader implications of my research, a discussion dealing not with the usual questions of reaction kinetics, biophysical mechanism, and structure-function relationships but with questions of policy, ethics, and regulation. I had never before played that kind of role, and at first I found it extremely intimidating.
Luckily, I didn’t need to go it alone. I’d recently co-founded an institute in the Bay Area called the Innovative Genomics Institute (IGI) with the goal of advancing gene-editing technologies, and I realized that the IGI was perfectly positioned to host a meeting like the ones Berg had convened at Asilomar. But I knew we’d have to let the conversation evolve organically, not try to go from zero to sixty by holding a lengthy conference right away. I decided we should organize a small, one-day forum and invite around twenty people. The immediate goal, as I saw it, was to produce a white paper—a report proposing a path forward for the field and calling for more stakeholders to weigh in on the issue of gene editing. Much like Berg’s 1974 meeting at MIT, this first meeting—which we ultimately called the IGI Forum on Bioethics—would, I hoped, be a prelude to a much larger, more inclusive conference.
We set the meeting date for January 2015 and selected the Carneros Inn in Napa Valley, the renowned wine-growing region just an hour or so north of Berkeley, as the venue. Helping to organize the forum were Jonathan Weissman, a close colleague at the University of California, San Francisco, and codirector of the IGI; Mike Botchan, a Berkeley colleague and IGI administrative director; Jacob Corn, scientific director of the IGI; and Ed Penhoet, professor emeritus at Berkeley and co-founder of the biotechnology firm Chiron. One of the first invitations went to Paul Berg himself (who is professor emeritus at Stanford), and I was thrilled when he accepted. Also on the guest list was David Baltimore, a Nobel Prize–winning biologist at Caltech and a colleague of Berg’s; Baltimore had not only attended the MIT meeting in 1974 but also coauthored the resulting paper that called for a moratorium on recombinant DNA research, and he had played a pivotal role in the discussions at Asilomar II. Paul’s and David’s attendance meant that our meeting would have a direct link to the proceedings that had served as my inspiration. More important, their expertise would undoubtedly help us navigate the difficult terrain ahead.
Also confirmed were Alta Charo, professor of law and bioethics at the University of Wisconsin at Madison; Dana Carroll, one of the gene-editing pioneers in the pre-CRISPR days; George Daley, a stem cell expert from Children’s Hospital in Boston; Marsha Fenner, program director of the IGI; Hank Greely, director of the Center for Law and the Biosciences at Stanford University; Steven Martin, professor emeritus and former biological sciences dean at UC Berkeley; Jennifer Puck, professor of pediatrics at UC San Francisco; John Rubin, a film producer and director; Sam Sternberg, my coauthor and PhD student at the time; and Keith Yamamoto, professor at UC San Francisco and administrative director for the IGI. A few other scientists were invited but declined to attend. (George Church and Martin Jinek, two scientists who were not in attendance, ultimately signed the article that was published after the meeting.)
The meeting, which we held on January 24, 2015, featured spirited discussions on a wide range of topics. The attendees, seventeen in all, gave formal presentations on gene therapy and germline enhancement, on existing regulations that governed genetically modified products, and on the nitty-gritty details of CRISPR. Even more interesting than these presentations, in my opinion, were the group’s open-table deliberations about the future of gene editing. These conversations were enthusiastic and creative, covering topics I had previously grappled with only on my own.
As we began discussing authorship of a white paper summarizing our conclusions, we debated who our target audience should be and what kind of outcome we were hoping to achieve. Should we be dealing with all the repercussions of using CRISPR—including new kinds of GMOs and even designer organisms—not just its potential role in germline editing? Had CRISPR actually raised new issues about germline modification or were the differences between it and prior technologies only a matter of degree? And would our little group come out strongly against germline editing or would we leave open the possibility for its eventual use?
Over the course of these conversations, a consensus slowly took shape. We decided that the use of gene editing specifically in the human germline should be the focus of our white paper. Gene therapy had been applied to patients’ somatic cells for well over two decades, and early gene-editing technologies had also already been used on human somatic cells in clinical trials. It was clear that germline editing was the one area where few had ventured and where public discussion was most urgent. This was largely because CRISPR, we agreed, had lowered the technical barriers that had once made human germline editing difficult, if not impossible, to accomplish. Despite the many volumes previously written on germline modification, and despite the 1998 UCLA conference and the doomsday scenarios explored by science fiction authors over the years, it quite simply hadn’t been feasible to edit the human germline with any degree of precision before CRISPR. Now, of course, things were very different—a point driven home by one of the forum’s participants, who told us that a scientific manuscript describing experiments in which human embryos were edited with CRISPR was already circulating among major journals. This research, if real, would represent the first time that scientists had knowingly tweaked specific DNA sequences in the genome of a potential future human.
If ever there was a time to get the word out, it was now. But what would our position be? Many of us were unsure if it would ever be safe to make heritable changes to the human genome, given that any mistakes could be disastrous for the individual and for future generations. Whether such changes could be ethically justified was another issue entirely. As our conversation stretched into the afternoon, we deliberated over questions of social justice and procreative liberty and openly discussed fears about eugenics. Some participants were wary of science moving in this direction while others admitted that they had no problem with germline editing, at least not in theory. As long as it could be proven safe and effective, this cohort argued, and as long as its benefits clearly outweighed its risks, how could we hold this mode of therapy to a higher standard than any other medical treatment?
Ultimately, though, we realized that this wasn’t our decision to make. It was not up to us, the seventeen people in the room, to determine what the public should think about germline editing. We felt that our responsibility was twofold. First, we had to make the public aware that germline editing was an emerging societal issue that should be confronted, studied, discussed, and debated. Second, we had to urge the scientific community—those individuals who were familiar with the technology, and who were aggressively pushing it in new directions—to hold off on exploring this one avenue of research. We felt it was critical to discourage our peers from rushing headlong into any research efforts, let alone any clinical applications of gene editing, that involved altering the human germline. Essentially, we wanted the scientific community to hit the pause button until the societal, ethical, and philosophical implications of germline editing could be properly and thoroughly discussed—ideally at a global level.
We pondered how best to achieve these objectives. Should we submit an editorial to a major newspaper? Hold a press conference? Author a perspective—essentially, an academic op-ed—in a scientific journal? After some back-and-forth, we settled on the last option, reasoning that this would likely get the most exposure among active researchers and would probably be picked up by the popular media, as often happened with high-profile articles in major journals. And because our meeting centered on one of the hottest topics in all of biology, we knew this paper would cause a splash.
We concluded the meeting by outlining the paper, which we planned to submit to the journal Science. Its goal, we agreed, would be to draw attention to the issue without getting too deep into the weeds. There would, of course, be many highly contentious issues to eventually discuss, but this initial perspective didn’t seem like the right place to get into them. We wanted to simply get the ball rolling, and we decided to leave further discussion for a subsequent meeting when more people would be able to attend and participate.
Finally, our energy spent, my colleagues and I adjourned to Angèle, a French restaurant perched above the Napa River. Seated outside around a long oval table, a cool breeze blowing in from the nearby hills, we sipped local wine and snacked on appetizers while enjoying lighthearted conversations about work, family, and travel. We were glad to set aside the heavy topics that had occupied us all morning and afternoon. Yet, privately, my mind was still racing.
Had I really made the right move by entering this new arena? The idea of taking a public stand on a scientific issue, no matter how important, felt foreign to me, almost transgressive. It was unclear whether our perspective would make a lasting impact and whether it would be received as we intended. Even if it went over well, it might be too little, too late. The manuscript our colleague had described, the one that was currently being considered for publication by major science journals, was haunting me. Other such experiments might be in progress at that very moment or be attempted in the near future. Would they be published before we had a chance to announce our conclusions?
I was sure of one thing: now that I had committed to this path, I would move quickly. By the time I made it back home to Berkeley that night, I had already begun organizing my notes and putting together a rough outline. The actual article turned out to be a challenge to write, but within a couple of weeks, I’d sent the first draft to the other Napa forum participants, and we began the round-robin process of editing it. On March 19, 2015, the article was published online with the title “A Prudent Path Forward for Genomic Engineering and Germline Gene Modification.”
The article, which ran just a few pages, explained the technology and stated our concerns about it. After introducing CRISPR, the concept of gene editing, and the applications that were currently being pursued, we turned to the topic of germline editing. On that subject, we put forth four specific recommendations. We asked experts from the scientific and bioethics communities to create forums that would allow interested members of the public to access reliable information about new gene-editing technologies, their potential risks and rewards, and their associated ethical, social, and legal implications. We called on researchers to continue testing and developing the CRISPR technology in cultured human cells and in nonhuman animal models so that its safety profile could be better understood in advance of any clinical applications. We called for an international meeting to ensure that all the relevant safety and ethical implications could be openly and transparently discussed—not just among scientists and bioethicists, but also among the many diverse stakeholders who would surely want to weigh in: religious leaders, patient- and disability-rights advocates, social scientists, regulatory and governmental agencies, and other interest groups.
Last, and perhaps most significant, we asked scientists to refrain from attempting to make heritable changes to the human genome. Even in countries with lax regulations, we wanted researchers to hold off until governments and societies around the world had a chance to consider the issue. Although we ultimately avoided using the word ban or moratorium, the message was clear: for the time being, such clinical applications should be off-limits.
Any fears I’d had about the reception and immediate impact of our article vanished as soon as the perspective was published. In the days that followed, colleagues reached out to thank us for bringing up this issue and to inquire about the meeting to come. Would it be hosted by professional societies or national academies? How did we plan to include countries other than the United States? Would we return to Asilomar for another historic conference or pick another venue? Messages also poured in from journalists and members of the public, thanks in large part to the press our article attracted. The New York Times ran a front-page story that generated hundreds of reader comments, and our perspective was also picked up by media outlets from National Public Radio and the Boston Globe to numerous blogs and websites. It certainly helped that a team writing in the journal Nature had called for a ban on germline editing just days before we did and also that the MIT Technology Review had recently published a riveting piece on germline editing.
The topic, it seemed, had suddenly entered the mainstream. In the blink of an eye, CRISPR had morphed from a revolutionary but relatively esoteric technology into a household word. Now that the technology’s extraordinary implications for the future of humanity were out in the open, I allowed myself to hope that we could have a broad, frank conversation about germline editing: when, if ever, we would sanction its use, how we would regulate it, and what repercussions we were and were not prepared to tolerate. It was exhilarating to have finally begun the process of public discussions about CRISPR—but the path ahead would be long.