PHAETHON PAID a visit one day to his father, Phoebus, the sun god. He went to demand Greek mythology’s equivalent of a paternity test.
Rumors were swirling that Phoebus was not his father, and Phaethon wanted them put to rest. “Give me proof that all may know I am thy son indeed,” he said.
Stepping down from his throne, Phoebus embraced Phaethon. He swore to do anything to prove his fatherhood. Phaethon asked him for the one thing that Phoebus wished he could deny: to ride the chariot of the sun across the sky.
Phoebus begged Phaethon to ask for something else—anything else. The horses were too strong for Phaethon to master, the course too hard. But Phaethon, supremely confident in his own skill and strength, refused to change his mind. Phoebus realized he was trapped by his own promise and led his son to his gold-wheeled chariot.
When Phaethon climbed aboard, the horses suddenly carried the chariot high above the Earth. Phaethon went blind with fear. Simply being the son of a god did not mean that Phaethon inherited his father’s mastery. The horses galloped off course, dragging the sun down toward the Earth and far away again. Where they came too close to the land, it was scorched to desert. Where they rose too high, they left frozen wastelands behind.
Phaethon’s wild ride did more than permanently alter the landscape. It also left its mark on humanity. When the lurching chariot passed over Africa, the sun dropped so close to the ground that it scorched the people living there. Their blood rose to their skin, turning it black. Their children would inherit their dark skin, as would all future generations.
Before long, the Earth cried out to Zeus for help, and he responded by hurling a bolt of lightning at the chariot. The sun god’s son tumbled to Earth, blazing down like a shooting star. Nymphs buried his smoldering body and put a stone over his tomb. “Here Phaethon lies, his father’s charioteer,” it read. “Great was his fall, yet did he greatly dare.”
The story of Phaethon, which survives today mostly through Ovid’s telling in Metamorphoses, is many different stories bundled together. Among those stories is a tale about heredity. Phaethon’s wild ride was an explanation for an inherited difference between people. Ancient philosophers and poets offered many such explanations for why children resembled their parents and why some diseases were inherited. Yet there’s also a telling absence in their writings. As far as we know, Aristotle and other ancient scholars never offered instructions for how to alter heredity—how to extirpate inherited diseases or how to improve the animals and plants their lives depended on. Perhaps those ancient scholars thought humans could no more alter heredity than they could alter the course of the sun. And perhaps they thought that anyone who dared seize such power would be overwhelmed and die.
But there’s yet another story hidden in Phaethon’s tale. It’s odd, when you think of it, that a god like Phoebus would have to use horses to pull his heavenly chariot. Certainly they must have been remarkable horses that could gallop across the sky, but they were horses nonetheless, complete with hooves, tails, and manes—the same animals that pulled the chariots of earthly Greeks in races and battles.
And yet the ancient Greeks and their fellow humans transformed their horses in a godlike way: They altered the DNA of the animals, steering them away from the genes of their wild ancestors and toward new domesticated sequences. They reared the horses, raising foals to replace their parents in the traces. Each new generation of horses inherited traits from their parents that made them well adapted to this work: powerful hearts, strong bones in their legs, and a willingness to take commands from two-legged apes.
This particular combination of traits seems to have first come together about 5,500 years ago, when nomads in central Asia began to domesticate wild horses. They unknowingly picked out certain variants of certain genes for breeding. Domesticated horses then spread across much of Asia, Europe, and northern Africa in the millennia that followed. The horses of ancient Greece were thus the product of some three thousand years of modification, and in later years, people continued transforming them into new breeds. Big workhorses like Clydesdales hauled heavy loads, while Thoroughbreds galloped swiftly around racetracks. Every breed of horse inherited a particular combination of variants that altered everything about them—their size, their shape, and even their gait.
The Greeks and other ancient peoples had more control over heredity than Phaethon had had over his father’s chariot, in other words. But they had little idea of what they were doing. They could not directly rewrite the genes of horses to precisely meet their needs, creating permanent changes that would be passed down to future generations. They could only choose which horses to breed. The desirable genetic variants they blindly selected sat on stretches of DNA alongside harmful ones. Modern horses pay the price for this blind selection, inheriting genetic variants that make them worse at healing wounds than their ancestors, raise their risk of seizures, and create other vulnerabilities.
In the 1800s, a growing number of scientists tried to master heredity’s chariot. They ran experiments to find its rules. And yet even in the early 1900s, controlling heredity still seemed like magic—in both the wondrous and dangerous senses of the word. It was no accident that Luther Burbank earned the nickname “the Wizard of Santa Rosa.”
When the plant scientist George Shull spent time with Burbank, he realized that the wizard had no magic beyond a good eye for interesting flowers and fruits. And it was Shull, not Burbank, who would become the true pioneer of modern plant breeding. Back at Cold Spring Harbor, Shull ran an experiment on some Indian corn he had rescued from the lab’s horse feed. He planted the kernels and then carefully pollinated each plant with its own pollen. In time, he created purebred lines of corn.
The two copies of every gene in each purebred plant were identical. Shull would pick out one line with a quality he liked, such as extra rows of kernels, and then breed it to another desirable line. Their hybrid offspring inherited a copy of each gene from each parent. Remarkably, the hybrid corn would show many of the traits that Shull selected in the inbred lines, while also growing bigger, healthier ears than their parents.
Shull painstakingly improved his inbred lines and found that when he crossed them, they produced even better hybrids. Scientists still argue about why his method worked. It may be that he could eliminate harmful recessive mutations without losing the traits he desired. It may also be that corn and other plants do better when they can use two versions of certain proteins rather than just one. What was immediately clear when Shull began publishing his experiments was that his method would allow farmers to get more food from their crops—what Burbank had originally claimed was his own life’s mission.
By the 1920s, many plant scientists were following Shull’s example, and before long farmers across the Midwest were filling their fields with hybrid corn. Not only did it produce more bushels per acre, but it also withstood the Dust Bowl droughts better than earlier strains. By the end of the twentieth century, plant breeders using Shull’s methods had quintupled their yields. Yet enough genetic variation still remained in the corn plants to ensure that they could breed even better hybrid corn for many years to come.
Understanding Mendel’s Law made Shull’s hybrid corn possible. And yet Shull still worked for the most part in ignorance. He had no idea which genes he was selecting or how they made his corn better. He simply mixed the existing variations together. It was his combinations that were new.
Over the next century, scientists would gradually gain more control over heredity. Some dragged X-ray machines into cornfields and fired beams at the tassels. The radiation triggered new mutations that altered the descendants of the corn. Plant mutagenesis, as this method came to be known, turned out new varieties of pears, peppermint, sunflowers, rice, cotton, and wheat. Bombarding barley gave rise to new kinds of beer and whiskey. Scientists also hurled X-rays at mold, creating strains that could make superior penicillin.
Even these successes still depended on a lot of blind luck, though. Heredity remained a slot machine, and plant mutagenesis just gave scientists an extra bucket of coins to play it. More pulls of the arm raised their odds enough that, at some point, the reels would turn up three bars. It wasn’t until the 1960s that microbiologists would discover molecular tools that gave them more precise control over heredity.
Many species of bacteria make proteins called restriction enzymes that recognize a short sequence of DNA and cut the molecule wherever that sequence appears. These microbes use their restriction enzymes to defend themselves against attack—specifically, by destroying the DNA of invading viruses. Tinkering with these proteins, scientists found that they could also use them to cut other pieces of DNA, even genes inside human cells. By loading such a gene onto a plasmid—a ringlet of DNA—researchers could then move it into a microbe.
By the end of the 1970s, researchers had created strains of bacteria that carried the gene for human insulin. Growing in fermentation tanks, the bacteria became living factories from which scientists could harvest the insulin. Other researchers went on to use similar methods to do everything from giving crops resistance to viruses to giving mice humanlike hereditary diseases.
Behind these successes, however, were long stretches of effort and failure. It could take years for scientists to discover a gene worth moving from one species to another, and then years more to load it onto a vehicle that could carry it across the species boundary. And learning how to make that transfer to one species didn’t help researchers with another. The tools that made it possible to import genes from jellyfish into rats were useless for moving daffodil genes into rice.
And even if scientists succeeded in getting genes into a species, they might still fail. The scientists had little control over where a gene would get inserted in an organism’s DNA. It might end up in a spot where it could operate smoothly, or it might drop into the middle of other genes, disrupting them and killing its new host. None of these challenges spelled doom for genetic engineering, but they did keep it expensive and limited to labs of scientists with hard-fought wisdom.
It wouldn’t be until 2013, over a century after Shull discovered hybrid corn, that scientists would report their discovery of a versatile, cheap way to control the heredity of just about any species. They hadn’t thought it up. Just like restriction enzymes before it, it was a system of molecules that bacteria had been using for billions of years to alter their own heredity.
In 2006, Jennifer Doudna was sitting in her office at the University of California, Berkeley, when she got a phone call out of the blue. A Berkeley microbiologist named Jill Banfield wanted to talk to her about something that sounded like crisper.
Doudna didn’t understand what Banfield was talking about, or why she’d want to call her. But Banfield, who searched for new species of bacteria on mountaintops and ocean floors, seemed like a scientist worth talking to. At the time, Doudna studied the RNA molecules made by bacteria, humans, and other species. Most of her work took place in the quiet confines of a test tube. Banfield could enlighten her about the world beyond the tube.
The following week, Doudna and Banfield met at a café. Banfield introduced Doudna to CRISPR, at least as it was understood in 2006. She drew a diagram for Doudna in a notebook, showing the repeating sequences of DNA that some species of bacteria carried, with different bits of DNA wedged between them.
Banfield at the time was discovering CRISPR regions in the DNA of one species after another. And she could see that some of these bits of DNA had come from viruses. Other scientists had begun to explore the possibility that CRISPR was some sort of defense system that bacteria could use to fight viruses, a system they could pass down to their descendants. But nobody knew how it worked. One possibility was that bacteria made RNA molecules to seek out the viruses. Since Doudna was an expert on RNA in bacteria, Banfield wondered if she’d be willing to help find out.
Doudna took Banfield up on the offer. She hired a postdoctoral researcher named Blake Wiedenheft to work exclusively on CRISPR, and then gradually the rest of her lab switched over to studying it. A few other labs were also investigating CRISPR at the time. In 2011, Doudna joined forces with a French biologist named Emmanuelle Charpentier, and together they figured out that CRISPR, like restriction enzymes, destroys viral DNA.
But there was a profound difference between these two lines of defense. A restriction enzyme’s shape allowed it to recognize only a single short stretch of DNA, which could appear in many places in a genome. Microbes protected their own DNA from attack by methylating those sequences. Viruses, unable to methylate their genes, were left vulnerable.
The Cas9 enzymes produced by the CRISPR system were far more sophisticated. Bacteria produced RNA guides that could lead the enzymes to one—and only one—stretch of DNA. By storing different RNA guides in their DNA, bacteria could precisely recognize several different strains of viruses.
Like any molecular biologist, Doudna was well aware of how restriction enzymes had helped create the biotechnology industry. She wondered if CRISPR might have a similar power. If it could recognize any stretch of DNA in a virus, perhaps Doudna and her colleagues could create RNA guides that would lead the enzymes to a particular spot in the DNA of a cucumber. Or a starfish. Or a human.
To test this idea, Doudna and her colleagues tried to cut a piece of DNA out of a jellyfish gene. (The gene is a common tool for molecular biologists, because it makes a glowing protein that can light up a cell like a microscopic jack-o’-lantern.) They picked a twenty-letter stretch for their target. After synthesizing RNA molecules that matched the target, they mixed all the molecules together in a test tube. The RNA guides and Cas9 enzymes combined and sought out the jellyfish genes. When the researchers looked at the DNA afterward, they discovered it was now cut into precisely the fragments they had hoped to create. Four more trials, using RNA guides aimed at different targets in the gene, worked just as well.
“We had built the means to rewrite the code of life,” Doudna later recalled.
After Doudna and her colleagues published the details of the experiment in 2012, a CRISPR scramble began. Her team, as well as others, tried to get the CRISPR molecules into living cells. Researchers learned not only how to cut pieces of DNA out of those cells but also how to repair the cuts.
In one of these experiments, Feng Zhang and his colleagues at the Broad Institute in Cambridge, Massachusetts, delivered a pair of CRISPR systems into human cells. The molecules landed on two neighboring targets within a single gene and snipped the DNA at both sites, cutting out the short stretch in between. The cell’s own repair enzymes then grabbed the two sliced ends and stitched them back together. The procedure, in other words, surgically removed a piece of DNA, leaving no scar behind. And when the cell divided, its descendants inherited that deletion.
Before long, scientists were starting to use CRISPR to replace stretches of genes with new sequences. Along with the Cas9 enzymes and RNA guides, the researchers would deliver small pieces of DNA to cells. After the enzymes cut out a section of DNA, the cells would patch the new pieces into the gap.
CRISPR was a drastic improvement on both X-ray mutagenesis and restriction enzymes. CRISPR did not introduce random mutations like mutagenesis. Nor was it limited to inserting an existing gene from one species into another. Since researchers were now able to synthesize short pieces of DNA from scratch, CRISPR could potentially let them make any sort of change they wanted to any species’ own genes.
In the 1970s, Rudolf Jaenisch, a biologist at MIT, had created the first genetically engineered mice. With the advent of CRISPR, he wondered if he could create new lines of mice with that tool as well. Collaborating with Feng Zhang, he and his graduate students and postdoctoral researchers played around with CRISPR until they found a chemical recipe they could use to slip the molecules into a fertilized mouse egg. They were able to alter as many as five different genes at once by delivering five different RNA guides. Jaenisch and his colleagues then implanted these altered eggs in female mice, where they developed into healthy pups. Eighty percent of the time, Jaenisch’s team engineered precisely the changes they desired.
A new generation of graduate students silently thanked Jaenisch every day for making their lives easier. Many PhD projects had to start with the creation of a mouse model to study a gene or a disease. It typically took eighteen months to create a line of mice, and often it took more than one try to get the mouse right. Now, with CRISPR, Jaenisch needed only five months to get the job done.
I was working as a reporter during those frenzied years, and I did my best to keep up with CRISPR’s advances. But very soon the parade of CRISPR animals became a stampede. Scientists were altering the DNA of zebrafish and butterflies, of beagles and pigs. By 2014, it dawned on me that I was witnessing the beginning of something enormous. Biologists began speaking about their lives before and after CRISPR. But I didn’t truly appreciate what CRISPR meant to scientists until I returned to Cold Spring Harbor one early spring day to spend an afternoon in a giant greenhouse with cathedral-like ceilings made of glass.
A plant scientist named Zachary Lippman led me down narrow aisles past rows of pots, each with a plant climbing a tall stake. Although he was still young, Lippman had a pair of gray patches in the dark beard framing his chin. I wondered if the six children he and his wife were raising might have something to do with them. “They say I do genetics at the lab and then do genetics at home,” Lippman said.
Lippman has a long history of showing off his plants. Growing up in Connecticut, he worked on a farm where he learned how to grow giant pumpkins. At the peak of the growing season, they put on ten or fifteen pounds a day. “To me the interest was how the hell does this thing get so big, and how can I get it bigger?” Lippman said.
Heading to Cornell for college, Lippman majored in plant breeding and genetics. There he discovered that scientists had long been asking his boyhood question—not just about pumpkins but about other fruits and vegetables. One of the key changes to crops during the Agricultural Revolution was making them bigger—turning the stubby fruits of teosinte into long ears of corn, swelling the thin pale roots of carrots into stout orange taproots.
Using traditional methods for studying genes, the scientists had found some of the mutations that made these changes possible. They had done a lot of this work on tomatoes, Lippman discovered, because their biology lends itself well to genetic experiments. Lippman followed in their scientific footsteps by studying tomatoes, too.
“Look at these tiny little berries,” Lippman said to me. He had stopped at a tomato plant towering over us. Grabbing a stem, he cradled its fruit. “These plants here are the closest that we know to the first domesticated forms of tomato,” he said.
The domestication of tomatoes by the indigenous farmers of Peru turned blueberry-sized fruits into the larger kind we find in supermarkets and at farm stands. Lippman’s research has helped reveal how those earlier breeders made tomatoes big. It turns out that they had to change the shape of tomato flowers.
When a bud on a tomato plant begins developing into a flower, it first divides up into wedges, called locules. From those locules will develop the petals of the tomato flower. And at the center of the flower, those same locules will give rise to the sections of a tomato. One gene controls how many locules form on a tomato plant. Mutate the gene, and the plant makes more locules. And more locules develop into a bigger tomato.
This locule-controlling gene was not the only one to mutate during the domestication of tomatoes. Lippman’s research has also revealed that the crops adapted to the length of the day as they were moved to different parts of the world.
Lippman and his colleagues found that wild tomatoes, which grow at the equator in South America, are adapted to getting twelve hours of sunlight every day throughout the year. When they brought wild tomato plants from the Galápagos Islands north to Cold Spring Harbor, they discovered that the plants fared poorly, thanks to the long New York summer days. The plants responded to the extra sunlight with flower-suppressing proteins, delaying the growth of their fruits until the end of the season. But domesticated tomatoes that grow in Europe and North America have acquired mutations that cause the plants to make fewer flower-suppressing proteins in the summer.
In 2013, Lippman learned that scientists had figured out how to use CRISPR to edit genes in a plant for the first time. He got hold of the molecules and tested them out on tomatoes to see how well they worked. No genetic tool he had used before came close. “It was black and white,” Lippman said. “We just sat down and had brainstorming sessions, just saying, ‘What can we do?’”
One of the first items on their list was to get tomatoes to stop making flower-suppressing proteins altogether in response to long summer days. They used CRISPR to cut out the genetic switch for this activity in domesticated tomatoes. When Lippman and his colleagues planted the seeds of these altered plants, they grew their flowers—and their tomatoes—two weeks ahead of schedule. They might thrive in places with much shorter summers. “Now you can start to think about growing some of your best tomato varieties in even more northern latitudes, like in Canada,” Lippman said.
Lippman had, in effect, created a new crop variety in one step. He did not need the eye of Luther Burbank, scanning thousands of plants for a single promising mutant each year. Nor did he need to transfer a gene from some other species to create a genetically modified crop. He directly altered the tomato’s own genes, using the knowledge he had gained about how tomatoes work.
This success made Lippman’s brainstorming more ambitious. He wanted to turn a wild plant into a domesticated crop. And for his new experiments, he chose ground-cherries.
I couldn’t really appreciate what he was doing, Lippman assured me, unless I ate some ground-cherries first. He brought me a plastic box filled with golden fruits the size and shape of marbles. When I bit into a ground-cherry, I tasted a rich flavor that hovered somewhere between pineapple and orange. The fruits were so delicious and so distinctive that I wondered why I hadn’t had one before. The reason, Lippman explained to me, is that they’re wild.
Ground-cherries (known scientifically as Physalis) live across much of North and South America. They grow into bushes and develop their fruit inside a lantern-shaped husk. Native Americans gathered ground-cherries to make sauces, and European settlers followed suit. Some collected the seeds and planted them in their gardens. Today you can buy a packet of ground-cherry seeds, and sometimes you can find the fruits for sale at a farmers’ market or a gourmet store. But because they’re wild, ground-cherries remain an oddity rather than a crop. The fruits ripen one by one through a long season, and gardeners have to wait for them to drop to the ground before collecting them—hence their name.
Lippman has long had a scientific curiosity about ground-cherries, because they belong to the same family as tomatoes. Their close evolutionary relationship means that they have a lot of biology in common. Both ground-cherries and tomatoes form their flowers and fruits from locules, for example, and they use related versions of the same genes to build them. It’s intriguing to Lippman that tomatoes were domesticated but their cousins, the ground-cherries, never were.
One explanation for this difference may be that ground-cherry DNA doesn’t lend itself to easy domestication. Tomatoes, like humans, have two copies of each chromosome. But ground-cherries have four. To breed ground-cherries for some particular trait, farmers need to find plants that inherited the same mutation on all four copies of one of their genes. It occurred to Lippman that he could use CRISPR to edit mutations directly into ground-cherries instead.
Lippman scooted down an aisle, the leaves brushing against his shoulders. He found a ground-cherry bush that he had edited with CRISPR. It had flowered a few days before, and by now the petals had fallen off. The sepals—the leaflike flaps that surround the flower—had expanded to form the papery lanterns inside which the fruit would now develop.
On an ordinary ground-cherry plant, these lanterns would have five sepals. Lippman peeled off the sepals on his edited plant, counting them as he went: “One, two, three, four, five, six, seven.”
Once he had pulled away all the sepals, Lippman revealed a tiny young ground-cherry fruit inside. It had seven locules now instead of the normal five.
“We could never do this with traditional breeding,” Lippman said. “And we got this”—he snapped his fingers—“in one generation. All four copies of the gene mutated.”
Lippman was soon going to test other edits. He would edit genes that controlled when the fruits fell from the bushes, so that farmers wouldn’t have to rummage on the ground for them. He would make a change to get the plants to ripen their fruits in batches rather than a few at a time. He would adjust the plants’ response to sunlight so they would start producing fruit early in the growing season. And he would get the plants to grow to a fixed height, so that farmers could use machines to gather the fruit.
Lippman planned on starting by editing plants for one trait at a time. If he succeeded, he would then create RNA guides that could alter all the traits at one shot, in one plant. When that ground-cherry plant reproduced, its offspring could inherit all the genetic machinery required to be a domesticated plant instead of a wild one.
“I know this sounds a little ridiculous,” Lippman confessed, “but I think this will be the next berry crop.” Having listened to his plan, I didn’t think it was ridiculous at all. I thought Lippman was being too modest. He was trying to replay the Agricultural Revolution on fast-forward. Instead of a thousand years, he might only need a single growing season.
To CRISPR, ground-cherries and humans proved pretty much the same: Their DNA was equally easy to cut.
Scientists quickly began using CRISPR to edit the genes of human cells, to answer questions about ourselves that once seemed unanswerable. We each carry about twenty thousand protein-coding genes, along with thousands more genes that encode important RNA molecules. But how many of those do we really need? When mutations shut down certain genes, the result is a lethal hereditary disease. Yet many of us walk about in good health despite inheriting some broken genes. Scientists had long wondered just how many genes in the human genome are absolutely essential to our survival, but compiling that catalog had always seemed impossible.
CRISPR made it possible. In 2015, three separate teams of scientists used CRISPR to shut down all the protein-coding genes in human cells, one at a time, to see if the cells could survive without them. They ended up with lists that were pretty much identical. About two thousand genes, only about 10 percent of all the protein-coding genes in the human genome, proved to be essential. The experiments showed that many genes were expendable because they had backup. If they failed, other genes could take over their jobs.
Other scientists began experimenting with CRISPR on human cells with a different goal: to invent new forms of medicine. In December 2013, a team of Dutch researchers demonstrated how CRISPR medicine might work. They took samples of cells from people with cystic fibrosis and raised colonies of them in dishes. The cells all shared the same disease-causing mutation in a gene called CFTR. The scientists fashioned CRISPR molecules that could chop out the mutation, and then they stitched a working version of that stretch of DNA in its place.
It soon became clear that CRISPR’s power was not limited to altering somatic cells. It could change the DNA in germ-line cells as well. In December 2013, a group of scientists at the Shanghai Institutes for Biological Sciences in China reported the results of an experiment on mice that suffered from hereditary cataracts. The scientists injected CRISPR molecules into mouse zygotes, where they repaired the mutant gene. The altered mice grew up to be fertile adults, and their descendants gazed through clear eyes.
Jennifer Doudna’s delight now began getting undercut by worry. CRISPR was turning out to be far more powerful than she had expected. Xingxu Huang, a geneticist at the Model Animal Research Center of Nanjing University in China, and his colleagues used CRISPR to alter three genes in monkey embryos. They implanted the embryos in a female monkey, and she later gave birth to a pair of healthy twins. If the monkeys had offspring of their own, they’d inherit the CRISPR-altered genes, too.
A reporter sent Doudna an advance copy of the monkey paper in January 2014 for her comment. After she read it, she couldn’t stop wondering when the first experiments on human embryos would take place. And it was about then that the nightmares started.
Sometimes Doudna dreamed she was back in Hawaii, where she had grown up, standing alone on a beach. She saw a low wave in the distance coming toward her, and after a while she realized that it was actually a tsunami. At first she was terrified, but then she found a surfboard and swam straight at the wave.
In another recurrent dream, a fellow scientist asked her to meet with someone very powerful. She went into a room. The powerful person turned out to be Hitler. In Doudna’s dream, he had the face of a pig. He kept his back turned to Doudna as he jotted down notes.
“I want to understand the uses and implications of this amazing technology,” the pig-faced Hitler told her.
Doudna woke up with her heart pattering. What, she asked herself, have we done?
Doudna was hardly the only person getting visits from Hitler. In 2015, a reporter asked the inventor Elon Musk if he was considering getting into the business of reprogramming DNA. Musk is the sort of entrepreneur who blithely sets out to replace the world’s fleet of gas-powered cars with electric ones while simultaneously building the first recyclable rockets. But gene editing gave him pause.
“How do you avoid the Hitler Problem?” Musk asked in reply to the reporter’s question. “I don’t know.”
We must never forget Hitler’s genocidal ideology. But we need to remember it for what it was, rather than wrap it around scientific advances that took place seventy years after he died. Hitler wanted Germany’s scientists to conquer the future—to build the world’s first atomic weapons, to create the first computers. But when it came to biology, he wanted Nazi scientists to revive a mythical past. He had no need for new genes, because Aryans already had all the genetic superiority they could hope for.
The genetic nostalgia of the Nazis was so powerful that it even extended to other species. Hermann Göring, Hitler’s most powerful deputy, became a patron of a project to restore the wild ancestors of cattle. Known as aurochs, these giant animals had gone extinct in the 1600s. Under Göring’s direction, zoologists searched Nazi-held countries for cows that seemed to retain a few vestigial features of aurochs. They bred the cattle, looking among the calves for the ones that appeared to step even further back in time.
Göring’s goal was to release the restored aurochs in Poland, where they would roam one of the last primeval European forests. He pictured himself as a modern Siegfried from Wagner’s Ring of the Nibelung, hunting the same noble beasts as his Aryan ancestors. To clear the path for his romantic vision, Göring emptied the Polish forests of Jews, Polish resistance fighters, and Soviet partisans.
The Nazi plans for humanity followed the same lines. The Aryan bloodline needed to be protected, revived, and purified. Systematic murder would protect future generations of Aryans from inferior heredity. And planned pregnancies would concentrate more Aryan blood in future generations, just as breeding would turn cows back into their aurochs ancestors. The Nazis even forced blond, blue-eyed people to join an association known as Lebensborn, designed to produce children to restore the Aryan race.
After Hitler’s defeat, Nazism and other forms of white supremacy did not disappear. As science advanced, latter-day Nazis kept distorting it to feed their genetic nostalgia. They took genetic ancestry tests in the hopes of demonstrating that they were indeed white. “Pretty pure damn blood,” one member of a computer forum called Stormfront crowed when he got his results. The myth of white purity has endured even after the study of ancient DNA has proven that Europeans have inherited genes from wave after wave of migrants—people with ancestries separated by tens of thousands of years of history, mixing their genes together. A fair number of other Nazis discovered to their horror that they had some Jewish or African ancestry. They coped with the news either by dismissing the results as statistical noise or by arguing that all you need to do to know your past is look in the mirror—a kind of personal bald eagle test.
It’s also a mistake to use Hitler as a label for all of eugenics. World War II and the horror of the Holocaust brought Hitler’s particular version of eugenics to a halt. It also forced conservative forms of eugenics in places like the United States and Great Britain into retreat. But ever since Francis Galton coined the term, eugenics has taken on many different forms, each shaped by the politics and cultures of the people espousing it. And after World War II, a progressive form of eugenics survived. It even rose to prominence. The leading voice for this so-called reform eugenics was a protégé of Thomas Hunt Morgan, the American-born biologist Hermann Muller.
After Muller learned how to breed Drosophila flies in Morgan’s lab at Columbia, he went to the University of Texas, where he used X-rays in the 1920s to create new mutations in the insects. He also came under the FBI’s surveillance for advising a left-wing student newspaper that espoused suspicious goals such as social security for retired people, equal opportunity for women, and civil rights for African Americans. Muller grew disgusted by the American eugenics movement in the 1920s—its shoddy science, its push to sterilize the weak and ostracize immigrants—and became one of its most outspoken opponents.
In 1932, Muller was invited to speak at the Third International Eugenics Congress at the American Museum of Natural History in New York City. Charles Davenport and the other organizers of the meeting may have assumed he would limit his talk to his work on mutations in flies. To their horror, Muller planned instead to use his speech to burn the American eugenics movement to the ground. Davenport tried to get Muller to cut his talk, originally slated for an hour, down to fifteen minutes. Then he demanded it be cut to ten. Muller pushed back, accusing Davenport of trying to stifle dissent, and delivered his full address.
On August 23, Muller shocked his audience by condemning the idea that poverty and crime in the United States were due to heredity. Only in a society where people’s needs were met—where children could grow up in the same environments—could eugenicists ever hope to improve humanity. In a country as rife with inequality as the United States, eugenics could do nothing of the sort. Instead, Muller said, it simply fostered “the naive doctrine that the economically dominant classes, races and individuals are genetically superior.”
Muller grew so disenchanted with conditions in the United States that he accepted an invitation to do research in Germany. But once he got there, he realized he had made a very bad choice. When Hitler became chancellor, Nazis raided the institution where Muller worked, and he worried that his socialist tendencies and Jewish roots could put his life at risk. Another invitation seemed to offer him a new refuge: Muller left for the Soviet Union, where he was asked to establish a genetics lab in Leningrad.
At first, Muller was happy there, working with Soviet students on groundbreaking research. But his timing turned out once again to be disastrous. A plant scientist named Trofim Lysenko rose to prominence by arguing that genetics was a fraud, and that heredity was as malleable as clay. Muller took on Lysenko in a public debate, but the audience of three thousand scientists and farmers shouted him down. When Stalin began arresting and executing scientists, Muller fled the Soviet Union.
He went to Spain to serve as a doctor in the civil war and then traveled to Scotland to teach at the University of Edinburgh. Finally, in 1940, Muller returned to the United States. There he found stability at last, becoming a professor at Indiana University. His work on mutations had proven to be some of the most important research in modern biology, and in 1946 it earned him the Nobel Prize. Not long after, he was elected the first president of the newly created American Society of Human Genetics. Now one of the most prominent scientists in the postwar United States, Muller made the most of his fame by promoting his own vision for social progress.
With the fall of Nazi Germany, Muller believed, the fallacies of its eugenics were exposed. But, he warned his fellow scientists, “it is by no means all dead and buried yet, but represents a continuing peril, to be vigilantly guarded against by all serious students of human genetics.” Muller urged a fight against American eugenicists—“racist propagandists,” as Muller called them—who would try to smuggle their old ideology into postwar genetics.
But Muller also used his newfound megaphone to call for a different kind of eugenics. “Eugenics, in the better sense of the term, ‘the social direction of human evolution,’ is a most profound and important subject,” he said.
In his research on mutations, Muller had made an important discovery: From one generation to the next, a species can become burdened with a growing load of mutations. Every new offspring may spontaneously gain new mutations, most of which will be fairly harmless. But added together, they could create diseases and lower fertility. In the wild, natural selection eliminated many of these new mutations. In our own species, Muller worried that our mutation load would become dangerously large. Thanks to medicine and other advances, natural selection had grown weak in humans, unable to strip out many harmful mutations from the gene pool.
It was naive to deny the existence of humanity’s mutational load, Muller argued, but it was even more naive to try to blame it on some despised race, or on people with some form of intellectual disability. “None of us can cast stones,” Muller said, “for we are all fellow mutants together.”
Still, something had to be done—something beyond “making the best of human nature as it is, the while allowing it to slide genetically downhill, at an almost imperceptible pace in terms of our mortal time scale, hoping trustfully for some miracle in the future,” Muller said.
Muller had a plan. He called it “Germinal Choice.”
Traditionally, children could inherit genes only from people who had sex. But in the mid-1900s, sexless reproduction slowly began to emerge. Animal breeders had led the way, perfecting the art of artificial insemination. A prize bull could father countless calves without ever leaving his stanchion. Once breeders figured out how to safely freeze semen, bulls began fathering calves long after their deaths.
Doctors quietly started imitating veterinarians, using donated sperm to help couples when the husband was infertile. When the practice came to light, it was roundly condemned. The pope declared donor insemination adultery. In a 1954 divorce case, an Illinois judge ruled that a child produced by donated sperm was born out of wedlock. But the practice grew more common, and less controversial. By 1960, about fifty thousand children in the United States had been conceived with donated sperm.
Muller’s Germinal Choice plan would turn artificial insemination into a national—perhaps even global—campaign against the mutation load. The sperm from the finest specimens of manhood would be collected and stored away in subterranean freezers, to protect their DNA from radiation and cosmic rays. The sperm from a single man could theoretically produce hundreds, perhaps thousands, of children. In the 1950s, eggs were proving harder for scientists to handle than sperm, but Muller was optimistic that, at some point, the underground germ-cell bunkers could store samples from superior women as well.
The public would then be educated about the coming mutational catastrophe and be invited to use the superior eggs and sperm to build their own families. Forward-minded couples would appreciate the scale of the threat and be the first to step forward. To avoid any awkward encounters with the biological parents, Muller would offer gametes only from people who had been dead for twenty years.
The volunteers would have to be ready to withstand the ignorant mockery of others. But as they raised their obviously glorious children—with the “innate quality of such men as Lenin, Newton, Beethoven, and Marx,” Muller promised—other parents would follow in their path. “They will form a growing vanguard that will increasingly feel more than repaid by the day-by-day manifestations of their solid achievements, as well as by the profound realization of the value of the service they are rendering,” Muller predicted.
Muller’s Germinal Choice plan was met with nods and curiosity. Leading scientific journals asked him to write about it. Conferences invited him to speak. Newspapers and magazines ran interviews with him. Muller’s fellow Nobelists considered Germinal Choice a step in the right direction.
For all its science-fiction luster, however, Muller’s Germinal Choice was still a fairly traditional form of eugenics. It was what Francis Galton had called for in the nineteenth century, an imitation of what animal and plant breeders had been doing for centuries: combining existing genetic variations into better arrangements that could be inherited by descendants. Germinal Choice didn’t require rewriting genes. Apparently, that was too much even for Hermann Muller to imagine.
While Muller’s vision of a public Germinal Choice program never came to pass, a private version did. Sperm banks emerged from the shadows, taking on not just married heterosexual couples but single women and lesbian couples as well. By the early 2000s, over a million children had been born with donated sperm in the United States alone. While sperm banks tended to keep their donors anonymous, they allowed customers to pick men based on certain traits—traits, it must be said, that probably don’t get transmitted in the chemistry packed inside a sperm cell.
The Fairfax Cryobank, located in Virginia, lets customers search by a donor’s astrological sign, favorite subject in school (arts, history, languages, mathematics, natural science), religion, favorite pet (bird, cat, dog, fish, reptile, small animal), personal goals (community service, fame, financial security, further education, God/religion), and hobbies (musical, team sports, individual sports, culinary, craftsman). Looking at that list, I picture parents putting their child in a wood shop and waiting for freshly carved coffee tables to pile up.
In order to become a sperm donor, men have to get through a screening for sexually transmitted diseases, as required by the FDA. But there aren’t any regulations about checking DNA, and so genetic screening varies from clinic to clinic. Many clinics look at a prospective donor’s family history for signs of a hereditary disease. Many also carry out a few genetic tests to see if donors are carriers of diseases such as cystic fibrosis. Sometimes a dangerous variant can slip through the screening. In 2009, for example, a Minnesota cardiologist discovered that a young patient with a hereditary heart condition had been conceived with donated sperm. He tracked down the donor and found that the man carried a dangerous variant. Of the twenty-two children the donor had fathered through sperm donation, nine inherited his faulty gene. One of them died of a heart attack at age two.
The falling price of DNA sequencing may eliminate most of these disasters. Rather than look for a handful of common disease variants, clinics can now scan every protein-coding gene in a potential sperm donor. Men who carry a dominant disease-causing mutation could be barred from donating sperm altogether. To avoid recessive diseases, doctors could match sperm and eggs to make sure that two copies of the same dangerous mutation never came together.
Muller was right to expect eggs to be harder to use for Germinal Choice. In the 1930s, scientists were able to fertilize rabbit eggs in a dish with rabbit sperm and coax the embryos to start dividing. But it wasn’t until the 1960s that two scientists—a University of Cambridge physiologist named Robert Edwards and a gynecologist named Patrick Steptoe—figured out how to get viable eggs from women. Their next challenge was to find a chemical cocktail—the “magic fluid,” as Edwards called it—that could keep the eggs alive in a dish long enough to be fertilized. In 1970, Edwards and Steptoe announced that they had succeeded at last. After they fertilized human eggs, they managed to keep them alive for two days, during which the embryos divided into as many as sixteen cells.
Edwards spoke about this milestone at a meeting in Washington, DC, in 1971. One of his fellow panelists was a professor of religion named Paul Ramsey. After Edwards finished speaking, Ramsey declared the procedure an abomination that must be banned. He believed it would bring the world closer to “the introduction of unlimited genetic changes into human germinal material.” Heredity, in other words, was a sanctuary that humans dare not enter.
Edwards and Steptoe weren’t scared away by Ramsey’s warnings. Instead, they invited couples having trouble getting pregnant to come to them for help. In 1978, they had their first success: a healthy girl named Louise Joy Brown. Louise’s birth led people to see in vitro fertilization not as the first step to human genetic engineering but as a treatment for infertility. In the 1980s, in vitro fertilization clinics opened up across the world, capitalizing on the pent-up demand of struggling couples. The procedure remained hit-or-miss, though, with many embryos failing to implant. To improve their odds, fertility doctors would produce batches of embryos and then pick out the healthiest among them.
In time, it became possible to inspect an embryo’s DNA, too. Scientists would remove a single cell from an embryo in its first few days of life and analyze its genes. (If a cell is removed at that stage, the remaining ones can still proliferate into a healthy fetus.) Fertility doctors could use this method to reduce the odds that parents would pass down a genetic disorder to their children.
In one early experiment, a team of British doctors treated women who carried disease-causing mutations on one of their X chromosomes. The women themselves were healthy, thanks to their second X chromosome, which lacked the mutation, but any sons they might have would run a 50 percent chance of developing the disorder.
There was a straightforward way to avoid this suffering: make sure the women had only daughters. In 1990, the British doctors screened the embryos of two women who carried X-linked mutations. One carried a variant causing an intellectual disability, the other a potentially devastating nerve disorder. The women both underwent in vitro fertilization, and then the doctors plucked a cell from each embryo to examine. They used molecular probes that could detect a distinctive segment of DNA repeated many times over on the Y chromosome, and only the Y chromosome. The doctors set aside any embryo that tested positive and used the rest for implantation. Nine months later, each mother gave birth to twin girls. Because the babies carried a normal X chromosome from their fathers, all of them turned out healthy.
By the early 2000s, it became possible to test embryos for mutations on other chromosomes, too. Karen Mulchinock, a woman living in the English city of Derby, had grown up knowing that Huntington’s disease ran in her family. Her grandmother had died of it, and she had watched her father decline through his fifties, dying at age sixty-six. Mulchinock got a test at age twenty-two and found she carried a faulty copy of the HTT gene as well. She and her husband decided to use in vitro fertilization to prevent the next generation of her family from inheriting it. In 2006, she had her eggs harvested, and her doctors tested the embryos for the Huntington’s mutation. Over the course of five rounds of IVF, she gave birth to two children, neither of whom had to worry about the disease. “The curse is finally broken,” she said.
Couples who worried about other inherited disorders began coming to fertility doctors to ask for the same tests, tailored instead to their own mutations. When an English couple had their first child, they were surprised to learn that she had PKU. Like Pearl and Lossing Buck, neither parent had known that they carried a faulty copy of the PAH gene. When they decided to have more children, they used preimplantation genetic diagnosis to keep those children from inheriting the disease.
After fertility doctors produced a set of embryos from the parents, they tested the PAH gene in each one. They detected the mutations in some of the embryos and implanted the mutation-free ones in the mother. In 2013, they reported that she had given birth to a healthy boy. He was free not only of PKU but of the worry of passing down a faulty PAH gene to his own children.
Preimplantation genetic diagnosis has grown more popular as the years have passed, not just in Europe and the United States but also in developing economies such as China. But the procedure is still rare. Despite the gripping stories of parents like Karen Mulchinock, only a tiny fraction of the roughly 200,000 people worldwide affected by Huntington’s disease use the procedure. The cost puts it out of reach of many. Even in Europe, where the procedure is covered by government-provided health insurance, few people with Huntington’s disease follow Mulchinock’s example. Between 2002 and 2012, only an estimated one in one thousand cases of Huntington’s disease was prevented.
Many children of people with Huntington’s disease don’t get tested themselves, since there’s no treatment they could get if the result came back positive. Because Huntington’s disease usually doesn’t strike until midlife, they often start their families long before they find out whether they inherited the allele. If they have a 50 percent chance of developing Huntington’s disease, the odds for their children are one in four. And they may be so busy caring for a sick parent that they don’t want to bother with the time, money, and frustration that in vitro fertilization demands.
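That one-in-four figure is simple Mendelian arithmetic—a back-of-the-envelope calculation, assuming the at-risk parent has not been tested. The parent has a one-in-two chance of carrying the dominant Huntington’s allele, and, if they do carry it, a one-in-two chance of passing it to any given child:

\[
P(\text{child inherits the allele}) = \underbrace{\tfrac{1}{2}}_{\text{parent carries it}} \times \underbrace{\tfrac{1}{2}}_{\text{parent passes it on}} = \tfrac{1}{4}
\]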
In other words, we are not living in Muller’s eugenic utopia. Nor are we living in a nightmare of the sort Aldous Huxley imagined in his 1932 novel Brave New World. Of the small fraction of people who are using in vitro fertilization, an even smaller fraction is using it to control the inheritance of their children. We have the technology right now to effectively eradicate Huntington’s disease from the planet, along with many other genetic disorders. But the messy realities of human existence—of economics, emotions, politics, and the rest—override the technological possibilities.
In April 1963, a microbiologist named Rollin Hotchkiss traveled from New York City to the town of Delaware, Ohio. He had been invited to take part in a meeting that must have felt at the time like a feverish dream. As Hotchkiss later put it, he and nine other biologists spent a day together at Ohio Wesleyan University “to consider whether man can and will change his own inheritance.”
One of Hotchkiss’s fellow speakers was Hermann Muller. Muller described to the audience his plan for sperm banks and Germinal Choice. To Hotchkiss and the other scientists, there probably wasn’t much surprising in what Muller had to say, given that he had been talking about his reform eugenics for more than thirty years. But when it was Hotchkiss’s turn to talk, he described something that was fundamentally different from Muller’s Germinal Choice—or from any of the eugenic breeding schemes bruited over the previous century.
Hotchkiss raised the possibility of directly altering human DNA. He used a new term to describe what he and some of the other scientists at the meeting envisioned: genetic engineering.
It might seem odd for a microbiologist to talk about changing humanity’s inheritance. But in 1963, Hotchkiss had come as close as anyone on Earth to carrying out genetic engineering. In the 1950s, he had begun working with Oswald Avery, following up on the original experiments on the “transforming principle” that turned harmless bacteria into killers. Hotchkiss and his colleagues performed more sophisticated versions of Avery’s original experiments, proving beyond a shadow of a doubt that the transforming principle was DNA. By moving this DNA into bacteria, Hotchkiss was effectively engineering their genes. In later years, Hotchkiss discovered how to transform bacteria in other ways—moving genes into microbes to make them resistant to penicillin, for example.
At the Delaware meeting, Hotchkiss predicted that the same procedure would be used on humans. “I believe it surely will be done, or attempted,” he said.
After all, Hotchkiss pointed out, our species was always searching for improvements. We started by finding better food and shelter, and over time that search had evolved into modern medicine. When Hotchkiss spoke in 1963, doctors were celebrating their recent victory over PKU, thanks to their ability to identify babies with the hereditary condition and treat them with a brain-protecting diet. “We cannot resist interfering with the heritable traits of the phenylketonuric infant by feeding him tyrosine at the right time to form a normal nervous system,” Hotchkiss said. If scientists learned how to rewrite the faulty genes that caused PKU, Hotchkiss predicted, it would be hard to resist the temptation to do it in people. “We are going to yield when the opportunity presents itself,” he said.
Hotchkiss left the Delaware meeting convinced that the world had to get ready for that opportunity. Humanity had to think ahead to all the benefits and risks that the opportunity might bring. Hotchkiss gave lectures and wrote scientific prophecies. Genetic engineering would not follow the traditional eugenic game plans, he said, driven by government decree. It would instead be driven by consumers. People would be persuaded by seductive ads for the latest “gene replacements” to alter their own DNA.
At first, Hotchkiss predicted, doctors would use genetic engineering to cure hereditary diseases like PKU in both adults and children, changing their genes as Hotchkiss changed genes inside bacteria. “One would presumably want to act at the earliest possible time in the development of the organism,” Hotchkiss said. “Even in utero.”
Hotchkiss could see the appeal of genetically engineering embryos. Doctors might be able to fix a genetic defect across much of the body if they could manipulate just a tiny clump of cells. But these doctors might accidentally alter germ cells, too. If those unborn patients grew up and had children, they might very well inherit the gene replacement. And they would pass the gene replacement down to their own children.
“Now one is meddling with the gene pool of the entire race,” Hotchkiss warned.
If a gene replacement turned out to be harmful, it would be bad enough to make one patient suffer. But if that gene was replaced in the germ line, future generations could inherit the suffering as well. To Hotchkiss, the decision to alter the genes of people yet to be conceived was an assault on liberty. No man should ever be given total discretion to determine his brother’s fate. The same held true for his great-grandchild.
Hotchkiss turned out to be a pretty good prophet. To look into the future of genetic engineering from the early 1960s, he could rely only on what little hard evidence existed at the time—much of it coming from his own crude experiments on bacteria. Yet within a decade, pieces of his vision were already falling into place. Rudolf Jaenisch was breeding mice with DNA he had pasted into their genomes. Robert Edwards and Patrick Steptoe were growing human embryos in petri dishes. And by the mid-1970s, some scientists were even trying to cure hereditary diseases in people with the gene replacements Hotchkiss had foreseen. They called it gene therapy.
One of the pioneers of gene therapy was a hematologist at UCLA named Martin Cline. He developed a method for getting genes into mouse cells, jolting the cells with a pulse of electricity to open temporary pores in their membranes. As a hematologist, he turned his mind immediately to blood disorders. Blood cells came from lineages of stem cells nestled in bone marrow. If Cline could slip a working version of a broken gene into a patient’s stem cells, their daughter cells would inherit it. He could thus create a lineage of healthy blood.
Cline tested this idea first on mice. After injecting the engineered stem cells back into the bones of the animals, he waited two months to let them multiply. Cline then drew blood circulating through the mice and inspected the cells. Half of the cells had inherited the gene he had added.
That was good enough for Cline. He immediately set out to use gene therapy to cure people. For his target he chose beta-thalassemia, a hereditary disorder that disables a gene called HBB, leaving blood cells unable to build hemoglobin. People with beta-thalassemia can die because their blood can’t deliver enough oxygen around their bodies. To Cline, the need for a cure justified trying out his new gene therapy techniques on people. But when he submitted his proposal to UCLA, the university judged it reckless and turned him down. That didn’t stop Cline. He went abroad, recruiting a patient in Israel and another in Italy.
In 1980, Cline performed gene therapy on both patients. He extracted cells from their bone marrow and added working HBB genes to them. Then Cline injected the altered cells back into the patients’ bones, where they could proliferate. The new bone marrow cells would inherit the HBB genes and produce working blood cells.
At least, that was the plan. After the procedure, Cline’s patients didn’t experience any improvement in their symptoms. And when word got out of the experiment, the scientific community roundly condemned Cline. Not only had he leaped far beyond his studies on mice, but he had changed his protocol midway through the human experiment without telling anyone—not his colleagues, not the committees overseeing his research, not even his two patients.
In the wake of the scandal, the National Institutes of Health took away Cline’s grants, and UCLA forced him to resign as chairman of his department. In an editorial headlined “The Crime of Scientific Zeal,” the New York Times even singled him out for condemnation. “He was rightly punished,” the paper declared.
With all the news about Cline’s reckless experiments, test-tube babies, and human-bacteria chimeras, a general alarm about genetic engineering began to blare. In 1980, President Jimmy Carter dispatched a commission to explore the ethical landscape of human genetic engineering, and soon after, Congress asked the Office of Technology Assessment to look into the matter as well. Both followed the lead Hotchkiss had set fifteen years earlier. To dissect the ethics of genetic engineering, they split it into August Weismann’s two fundamental categories: somatic and germ line.
Somatic genetic engineering—otherwise known as gene therapy—got a green light. Politicians and scientists alike agreed that it might eventually cure thousands of hereditary diseases. As long as the research went carefully, as long as the treatments were safe, no one saw any serious ethical concerns.
The green light in the 1980s prompted a number of scientists to begin work on gene therapy. They needed first of all to find a new way to deliver genes to cells. Cline’s method worked only for types of cells that could be removed from a patient, altered in a petri dish, and then put back in. If someone needed gene therapy for a brain disease, no one would try pulling chunks of gray matter out of their head.
Viruses offered a promising solution. Scientists figured out how to paste human genes into viruses, which could then infect cells to deliver their payload. By the 1990s, scientists were getting promising results from viruses in experiments on mice, and they had even been encouraged by a few human trials. But it turned out the viruses were not as safe as once thought. An ill-fated trial for a metabolic disorder in 1999 brought gene therapy research to a halt for several years. One of the volunteers, an eighteen-year-old man named Jesse Gelsinger, had an intense immune response to the viruses. He developed so much inflammation that he died in a matter of days.
After Gelsinger’s death, the clinical trials of gene therapy stopped. The few researchers who remained in the field took a step back, searching for safer viruses. Within a few years, new clinical trials started, and after a few more years they began to deliver promising results. Philippe Leboulch of the Paris Descartes University and his colleagues tackled beta-thalassemia, the disease that had bested Martin Cline thirty years earlier. The French researchers extracted bone marrow cells from a boy, infected them with a virus carrying HBB, and then injected the cells back into his bones. In 2010, they reported that his cells started to make normal hemoglobin again, and he stopped needing a monthly transfusion of blood to stay alive.
People who suffer from other diseases, such as muscular dystrophy and hemophilia, are waiting in the hope that gene therapy will help them as well. And many people with PKU, weary of their difficult diet, look to gene therapy as a true cure.
In the debates over genetic engineering in the 1980s, almost everyone agreed that somatic cells were a promising target, but germ cells had to stay out of bounds. “Deciding whether to engineer a profound change in an expected or newborn child is difficult enough,” President Carter’s commission concluded in its 1982 report. “If the change is inheritable, the burden of responsibility could be truly awesome.”
When the Office of Technology Assessment looked into the matter, they agreed. There were too many uncertainties—both medical and ethical—to even investigate a technology that might someday alter heredity. “The question of whether and when to begin germ line gene therapy must therefore be decided in public debate,” they concluded. In 1986, the US Recombinant DNA Advisory Committee, which determined what sort of research in genetic engineering could be funded, cut off the money. They flatly declared that they “will not at present entertain proposals for germ line alteration.”
And that, for the next three decades, was pretty much that. From time to time, a few scientists would rattle the regulatory cage, arguing that germ line engineering would be a boon to humanity, not a threat. In 1997, the American Association for the Advancement of Science revisited the matter with a forum about germ line intervention. The assembled scientists and philosophers granted that tinkering with the germ line might bring good changes. But they weren’t willing to give a full-throated endorsement for the idea. They warned that genetic engineering “might some day offer us the power to shape our children and generations beyond in ways not now possible, giving us extraordinary control over biological and behavioral features that contribute to our humanness.”
And yet, even as the forum tried to peer into the future from 1997, that day had already come. A few doctors had gone ahead and tampered with human heredity in a way no one had ever imagined, without asking anyone’s permission.
In 1996, a woman named Maureen Ott went to Saint Barnabas Medical Center in Livingston, New Jersey, in the hopes of having a baby. Seven years of in vitro fertilization had failed. Her eggs seemed healthy, but whenever her doctors implanted embryos in her uterus, they stopped dividing. Now, at age thirty-nine, Ott was watching her biological clock wind down. She came to Livingston because she had heard that doctors there had found a way to rejuvenate eggs.
The Saint Barnabas team, led by a French-born doctor named Jacques Cohen, had run some encouraging trials on mice. In the experiments, they drew off a little of an egg’s jellylike filling—its ooplasm—and injected it into a second, defective egg. This microscopic injection raised the odds that the defective egg would develop into a healthy mouse embryo. The researchers speculated that the procedure worked because molecules from the donor egg undid some unknown damage.
If the procedure worked on mice, it might also work on humans. Cohen and his colleagues recruited healthy young women to donate eggs for a study. The doctors would draw off ooplasm from the donated eggs and inject it into the eggs of women struggling to have children—women like Ott.
Ott’s doctors warned her that the procedure was far from a sure thing. Ooplasm contains many different kinds of molecules. Some of those might rejuvenate an embryo, but others might cause harm. It was even possible that the doctors might inject some mitochondria from the donor eggs into the eggs of the patients. If that happened, any children born through the procedure would inherit the mitochondrial DNA from the donor. Their genetic inheritance would come from three people instead of two.
To Ott, the prospect of her child inheriting someone else’s genes wasn’t a reason to stop. Mitochondria were merely responsible for making fuel and for other basic tasks in the cell. They didn’t create the traits that defined a person. “If I was doing something like, say, I only wanted a blond-haired girl, I would feel that was unethical,” Ott later explained to a reporter.
Cohen’s team injected ooplasm into fourteen of Ott’s eggs. They fertilized the treated eggs with her husband’s sperm, and nine of the embryos began to divide. Nine months later, in May 1997, Ott gave birth to a healthy baby girl, Emma. A cursory look at Emma’s cells revealed no sign of the donor’s mitochondria.
Two months after Emma’s birth, Cohen’s team published an account of Ott’s unprecedented experience in the Lancet. Newspapers reported in awe about the revival of Ott’s flagging eggs. Other couples struggling to get pregnant besieged Cohen’s team. Fertility specialists in the United States and abroad used the Lancet paper as a cookbook recipe for performing ooplasm transfers of their own.
But it didn’t take long for the enthusiasm to curdle into suspicion. In June 1998, a Sunday Times journalist named Lois Rogers reported how doctors in California were offering ooplasm transfer to their own patients. Rogers didn’t portray the effort as a way to help would-be parents. In her articles, it turned into a dangerous experiment in heredity.
Rogers said that embryologists and politicians were fretting “that the treatment is being given without a full debate over the biological and ethical implications of a child inheriting genes from two mothers.” What the doctors were actually doing, Rogers wrote, was creating “the three parent child.”
That turn of phrase lodged in the public consciousness and proved impossible to shake off. A Canadian columnist named Naomi Lakritz railed against the quest for “three-parent children,” attacking doctors for concerning themselves only with the science involved. “Never mind science,” Lakritz declared. “What about the ethical implications of cooking up a human omelet which results in the hapless child carrying the genetic material of two mothers?”
In 2001, the fear of three-parent babies flared even higher when Cohen and his colleagues published a new paper on their work. They closely examined the DNA of some of the babies that had developed from their rejuvenated eggs. Cohen’s team found two children with a mix of mitochondria from their mother and the ooplasm donor.
“This report is the first case of human germline genetic modification resulting in normal healthy children,” the researchers announced.
By then, dozens of children had been born through ooplasm transfers. Some of them were probably genetically modified as well. Despite two decades of government-sanctioned hand-wringing over genetic engineering, fertility specialists had waltzed right over Weismann’s barrier. The rules that had been put in place had applied only to research funded with government grants. Cohen and his colleagues, working at a clinic, had done their work in private.
That freedom didn’t last much longer. A month after Cohen and his colleagues published their report, they received a letter from the FDA. So did all the other American fertility clinics that were performing ooplasm transfers. The FDA pointed out that it had jurisdiction over the procedure. From now on, it would give ooplasm transfer an official status as an Investigational New Drug. That designation meant that anyone who wanted to try to carry it out would first have to fill out a mountain of paperwork and follow a lengthy set of procedures to make sure it was safe. Big pharmaceutical companies could handle those demands, but not small fertility clinics. Cohen and the other ooplasm transfer practitioners in the United States gave up.
But two American doctors, Jamie Grifo and John Zhang of New York University School of Medicine, refused to stop. When the FDA sent out its letters in 2001, Grifo and Zhang had been working on an advanced version of ooplasm transfer. Instead of using a bit of ooplasm from a donor egg, they wanted to try using all of it. They would transfer the nucleus of a patient’s egg into a donor egg that had had its own nucleus removed.
Now unable to carry out their work in the United States, Grifo and Zhang went to China. There they collaborated with doctors at Sun Yat-sen University, looking for struggling parents who would volunteer for a study.
The doctors carefully removed the entire nucleus from each donated egg. They replaced the original nucleus with a nucleus from one of the patient’s unfertilized eggs. After they fertilized the egg with the partner’s sperm, the new zygote began to divide in a normal fashion.
Chinese doctors at a local hospital implanted embryos from this procedure into a thirty-year-old woman. Three of the embryos went on to develop regular heartbeats. Grifo, Zhang, and their colleagues urged the woman and her partner to come to the United States to get better medical care, but the couple decided to stay at the local hospital. About a month later, their doctors decided to remove one of the embryos to improve the odds of survival for the other two, despite protests from the Sun Yat-sen team. The remaining twins developed for another four months. Then one fetus’s amniotic sac burst, and the fetus died soon after an early delivery. The woman developed an infection—possibly due to the delivery of the first baby—and the second baby died as well.
Although the experiment ended in heartbreak for the would-be parents, Grifo and Zhang saw it as a step forward for the procedure. With a fresh supply of ooplasm, the woman’s embryos had developed normally, and they might have even survived if she had gotten better prenatal care. In 2003, the scientists decided to present their results at a conference and share the news with reporters from the Wall Street Journal.
Interviewed thirteen years later by the Independent, Zhang said he regretted that decision. “I think some of my team members were so eager to be famous,” he said. “They wanted to let the whole world know.”
The world greeted the news not with the celebration they had hoped for but with dismay. Critics said that Zhang and his colleagues were veering recklessly toward the manufacture of human clones. The Chinese government responded by banning the procedure, effectively ending all research on ooplasm transfer. The experience was such a catastrophe that Zhang and his colleagues didn’t even publish details of the case until 2016. “There was too much heat,” Zhang said.
This line of research might have ground to a complete halt if not for a few scientists in England and the United States who quietly continued running experiments on mice. They did not want to invent a way to help infertile parents have children, however. They wanted to stop mitochondrial diseases.
The discovery of mutations that caused mitochondrial diseases made it possible to diagnose mysterious illnesses and make sense of their strange heredity. But it didn’t immediately point to any straightforward cure. In 1997, a British biologist named Leslie Orgel proposed a different line of attack. Rather than curing the disease, doctors could block its inheritance. In the journal Chemistry & Biology, Orgel published a diagram showing how to move the nucleus of a fertilized egg into another egg that had its nucleus removed. No longer burdened with defective mitochondria, the egg could give rise to a healthy child. Orgel gave his speculative procedure a name: mitochondrial replacement.
By the mid-2000s, scientists had learned enough about mitochondria and manipulating cells to try to turn Orgel’s idea into real medicine. In the United States, Shoukhrat Mitalipov launched a series of experiments at Oregon Health & Science University. He found that mitochondrial replacement therapy could cure sick mice. Then he successfully carried out the same procedure on monkeys. The monkeys grew to adulthood without any sign of harm. Meanwhile, at Newcastle University in England, Doug Turnbull led an effort on a different method that also yielded promising results. Both teams then experimented on human embryonic cells, and found that mitochondrial replacement therapy might work in our species, too. With these results in hand, Mitalipov and Turnbull went to their respective governments to ask for permission to push mitochondrial replacement therapy toward clinical trials.
Their requests opened up a fresh debate about the wisdom of human genetic engineering. Much of the debate revolved around strictly medical questions: Would mitochondrial replacement therapy be safe and effective?
Thanks to the ooplasm transfers of the 1990s, it was possible to get a few clues. By the early 2010s, the children born through the procedures had become genetically modified teenagers. Cohen and his colleagues tracked down fourteen of the children and found that they were going to school, trying out for cheerleading squads, getting braces, taking piano lessons, and doing all the things that teenagers typically do. Some of them had medical conditions, ranging from obesity to allergies. But these were nothing beyond what you’d expect from a group of ordinary teenagers. Still, promising as the survey might be, it was too small to put concerns about safety to rest. And there was no telling whether the teenagers might develop medical problems later in life.
Some critics also raised doubts about how effective mitochondrial replacement therapy could ever be. When researchers hoisted a nucleus out of a woman’s egg, it didn’t slip out cleanly. Sometimes mitochondria remained stuck to its sides. After the researchers plunged the nucleus into a donated egg, the embryo that resulted would sometimes end up with a mixture of old and new mitochondria. Even if 99 percent of the mitochondria in the embryo came from the donor egg, the 1 percent from the mother might still put a child at risk. Making matters worse, that dangerous 1 percent might increase as the cells in the embryo divided.
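To get a feel for how that remnant can grow, it helps to think of each cell division as a random draw from the parent cell’s pool of mitochondria. The short sketch below is only my own illustration of that idea, not anything from the researchers’ work: the function name, the number of mitochondria per cell, and the number of divisions are all assumptions chosen for convenience, and the model is a deliberately crude, Wright-Fisher-style simplification. It shows that a lineage starting with 1 percent maternal mitochondria usually stays low, but occasionally drifts much higher.

```python
# A crude sketch (illustration only) of mitochondrial carryover drifting upward.
# Each cell division is modeled as a binomial draw of the daughter cell's
# mitochondria from the parent's pool; all numbers here are assumed.
import random

def simulate_lineage(start_fraction=0.01, mitos_per_cell=500, divisions=40, seed=None):
    """Return the mutant-mitochondria fraction in one cell lineage after each division."""
    rng = random.Random(seed)
    fraction = start_fraction
    history = [fraction]
    for _ in range(divisions):
        # The daughter cell samples its mitochondria at random from the parent.
        mutant = sum(1 for _ in range(mitos_per_cell) if rng.random() < fraction)
        fraction = mutant / mitos_per_cell
        history.append(fraction)
    return history

# Most lineages stay near 1 percent (or lose the mutant mitochondria entirely),
# but a handful drift well above the starting level.
finals = [simulate_lineage(seed=i)[-1] for i in range(1000)]
print(f"average final fraction: {sum(finals) / len(finals):.3f}")
print(f"lineages above 5 percent mutant mitochondria: {sum(f > 0.05 for f in finals)}")
```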
Even if doctors could scrub all the old mitochondria from a nucleus, there might still be risks from mitochondrial replacement therapy. Many of the proteins that generate fuel inside mitochondria are encoded in genes in the nucleus. Once the cell makes those proteins, it has to ferry them into the mitochondria, where they have to cooperate with the mitochondria’s own proteins. Some researchers wondered if mitochondrial replacement therapy would create a mismatch between the proteins, causing the mitochondria to malfunction.
To test for this mismatch, scientists carried out mitochondrial replacement therapy on mice. They gave some of their animals mitochondria from a genetically identical donor, while others got genetically distinct ones. In some trials, genetic mismatches led to troubling problems. Some mice became obese. Others had trouble learning. Some differences in the mice—such as the amount of fat in their heart and liver—appeared only late in life. Since mice have such short lives, the researchers had to wait only a matter of months to see the symptoms emerge. In humans, it might take decades to discover such side effects. Some researchers urged that only genetically similar donors provide their mitochondria.
But a lot of the debates over mitochondrial replacement therapy were driven not by concerns over safety but by deeper passions. After all, people were talking about making three-parent babies. To tamper with heredity was deeply frightening to many people. In a 2014 congressional hearing, Jeff Fortenberry, a representative from Nebraska, condemned mitochondrial replacement therapy as “the development and promotion of genetically modified human beings with the potential for unknown, unintended, and permanent consequences for future generations of Americans.” If it sounded like Fortenberry was describing a living nightmare, he didn’t mind. “These scenarios scare people,” he said, “and I would be very worried if it didn’t scare people.”
Fortenberry cast genetic engineering as a kind of hereditary plague. Once a tampered gene was introduced into one child’s DNA, it would sweep through a country like a vicious strain of influenza. But that’s not how heredity works. It’s been estimated that only 12,423 women in the United States are at risk of passing down mitochondrial diseases to their children. If every last one of those women underwent mitochondrial replacement therapy before having children, the result would be barely detectable. The mitochondrial DNA from the donors would become slightly more common among people in the United States. As the next generation had children of their own, that donated DNA would not become any more common. The gene pool of the United States (and the rest of the world, for that matter) would barely ripple.
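A rough back-of-the-envelope calculation shows just how small the ripple would be. In the sketch below, the population figure and the assumption of two children per treated woman are mine, chosen only to put a number on the claim; the count of at-risk women is the estimate cited above.

```python
# Back-of-envelope arithmetic on how far donor mitochondrial DNA could spread.
# Only the count of at-risk women comes from the estimate in the text;
# the population size and family size are assumptions for illustration.
at_risk_women = 12_423            # estimated US women at risk of passing on mitochondrial disease
us_population = 330_000_000       # assumed, order of magnitude
children_per_woman = 2            # assumed for illustration

carriers = at_risk_women * children_per_woman
share = carriers / us_population
print(f"people carrying donor mitochondrial DNA: {carriers:,}")
print(f"share of the US population: {share:.4%}")  # roughly 0.0075 percent
```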
To opponents like Fortenberry, mitochondrial replacement therapy not only threatened the future but desecrated the past. Every child born through the procedure had inherited their genes in a manner unlike anyone born before 1997. And any person who was a source of genes became, by definition, a parent. The “three-parent” label, first stamped on ooplasm transfer, was now attached to mitochondrial replacement therapy. “The creation of three-parent embryos is not an innocuous medical treatment,” Fortenberry warned. “It is a macabre form of eugenic human cloning.”
There’s nothing macabre about the teenagers who inherited mitochondria through ooplasm transfer, and there’s nothing about the egg donors that merits the word parent, no matter how often congressmen and headline writers use it. We don’t hand out such an important title so carelessly. When women get eggs from donors—complete with nuclear DNA from another woman—they still get to be called their children’s mothers.
While opponents of mitochondrial replacement therapy have resorted to fearmongering at times, its supporters have sometimes lapsed into faulty logic of their own. One common strategy is to play down the importance of mitochondrial DNA. In 1997, Maureen Ott convinced herself that she wasn’t crossing a moral line because she wasn’t picking out meaningful traits, like the color of her daughter’s hair. Seventeen years later, the United Kingdom’s Department of Health made much the same argument in a favorable report on mitochondrial replacement therapy they published in 2014.
“Mitochondrial donation techniques do not alter personal characteristics and traits of the person,” the report declared. “Genetically, the child will, indeed, have DNA from three individuals, but all available scientific evidence indicates that the genes contributing to personal characteristics and traits come solely from the nuclear DNA, which will only come from the proposed child’s mother and father.”
When I read the report, I scoured it for a definition of personal characteristics and traits. I found nothing. As best I could tell, the authors dismissed mitochondria as doing nothing more than producing fuel. Meanwhile, I could only conclude, the genes inside the nucleus handled the really important stuff, like coloring our hair.
Ranking genes this way is absurd. The entire point of mitochondrial replacement therapy is to change something profound about a person: to take away a mitochondrial disease. People who inherit mitochondrial mutations may end up with traits that deeply influence their lives and their identity, from short stature to weakened muscles to blindness. Indeed, the huge range of symptoms that mitochondrial diseases can cause reveals just how many parts of our lives are influenced by the way we produce fuel for our cells.
And there’s no organ where fuel production matters more than in the brain, where neurons have to burn a lot of it to fire signals. Some mitochondrial mutations affect the way certain parts of the brain function. Others slow down the migration of neurons through the brain during its development, so that they fail to reach their destinations. If altering the brain can’t affect “personal characteristics and traits,” I can’t imagine what could.
Mitochondria have also turned out to be important for other functions besides generating fuel. Some proteins that mitochondria produce make their way into the nucleus, where they relay signals to thousands of genes there. Genetic variations in mitochondria can thus do much more than cause rare hereditary diseases. They can influence how long we live, how fast we run, how well we handle breathing at high altitudes. Genetic variants in mitochondrial DNA can influence our ability to remember things. Some mutations have been implicated in psychiatric disorders such as schizophrenia.
The UK report helped persuade Parliament to take up the matter of mitochondrial replacement therapy. The health minister, Jane Ellison, reassured MPs that the procedure merely replaced a cell’s battery packs. In 2015, Parliament voted to approve mitochondrial replacement therapy, and in March 2017, a fertility clinic in Newcastle won the first license to perform the procedure.
The deliberations in the United States veered onto a different path. In one survey, people with mitochondrial mutations overwhelmingly said that research on mitochondrial replacement therapy was worthwhile. The National Academy of Sciences examined the evidence and came out in 2016 with a cautious endorsement. It might be wise to start using the procedure only on sons, they said, since sons would not pass on their altered mitochondria to their own children. Scientists would also need to keep careful tabs on the children born to women who underwent mitochondrial replacement therapy, to make sure they didn’t suffer unexpected harm years later.
But all these discussions ended up being moot. Somebody in Congress—no one has ever figured out who exactly—slipped a ten-line provision into an enormous 2016 spending bill that blocked the FDA from evaluating mitochondrial replacement therapy. Without any debate in Congress, the ban went into effect.
In that same year, however, John Zhang—who had gone to China back in 2003 to continue his research—announced that he and his colleagues had taken another trip outside of the United States to perform the first human mitochondrial replacement therapy.
Now working at the New Hope Fertility Center in New York City, Zhang had been contacted by a couple in Jordan for help. Their two children had developed Leigh syndrome, a rare mitochondrial disease that weakens the muscles, damages the brain, and usually leads to death in childhood. The couple’s first child died at age six, their second at just eight months.
Before having children, the mother had no idea that she carried the mutation for Leigh syndrome. In her own cells, only about a quarter of her mitochondria carried it, while the rest functioned normally. In both her children, the mutant mitochondria became more common and crossed the deadly threshold.
The couple came to Zhang in the hopes of having another child, one without Leigh syndrome. He knew he couldn’t use mitochondrial replacement therapy in the United States. But he also knew that Mexico had no regulations in place that would block him. He traveled there with the couple, carrying out the procedure at a branch of his clinic. Zhang’s team transferred five nuclei from the woman’s eggs to donor eggs, which they then fertilized. When they implanted one of the embryos in the woman’s uterus, it developed normally, and in April 2016 she gave birth to a boy.
When Zhang and his colleagues examined the boy, his health seemed good. But the doctors found that his mother’s faulty mitochondria had not been entirely replaced. Two percent of the mitochondria in cells they sampled from his urine came from his mother. In cells from his foreskin, that fraction jumped to 9 percent. No one could say for sure what the levels were in his heart or his brain. And it was doubtful that anyone would ever find out. Barring some unforeseen medical emergency, his parents refused any further testing. The boy slipped away from science’s gaze, another genetically engineered child brought into this world.