Transfusions Help Win the War

Blood Plasma

The Allies were bleeding out. Just months into World War II, British and other Allied troops on the battlefield and British civilians suffering under the Nazi bombing “blitz” of their cities were perilously low on blood for life-giving transfusions. Blood donations simply could not keep pace.

Desperate, Dr. John Beattie, the English surgeon in charge of the United Kingdom’s blood supply, looked to America for help. But how?

Contrary to popular belief, blood—to this day—can be stored for only about six weeks before it starts to degrade. Blood plasma, the yellow liquid that constitutes most of blood, could substitute for whole blood in keeping wounded soldiers alive. Predominantly water, plasma also contains elements vital to the human body: electrolytes to prevent dehydration and stimulate neurons and muscle tissue; immunoglobulins to bolster the immune system; hormones, proteins, and minerals; and key clotting factors that stanch bleeding and speed recovery from burn wounds.

Yet plasma was even more perishable than whole blood, surviving just five days without refrigeration. This was not even enough time for ships to carry it across the U-boat-infested waters of the North Atlantic. Dr. Beattie nonetheless telegraphed an urgent plea to America: “Secure 5,000 ampules of dried plasma for transfusion”—that is, more than the total existing supply of plasma in the world at the time.

The request went to Dr. John Scudder, a Columbia professor and scion of a family with a long and selfless history of medical missionary work in India. Dr. Scudder immediately recruited the best man he knew for the job, someone Dr. Beattie might have remembered as a bright young student he had taught at McGill University’s medical school a few years before: Dr. Charles Drew.

Drew hailed from the poor, black section of Foggy Bottom in the nation’s capital. One of five children born to a carpet layer and a schoolteacher, he excelled in high school and was able to win a sports scholarship to Amherst, thanks to his skill in swimming, track, and football. After graduating from Amherst, though, Drew discovered that fellowship money for a talented young black man was hard to come by in Depression America, and he worked for two years teaching biology and coaching sports teams at all-black Morgan College in Baltimore. Finally enrolling at McGill, in Montreal, he finished second in his medical school class of 127 and won both his MD and a master’s in surgery.

After teaching at Howard University and working at the Freedmen’s Hospital back in Washington, Drew earned a Rockefeller fellowship to Columbia University. There he became the first African American ever to achieve a PhD at Columbia, with a thesis on “banked blood.”

It was a propitious moment for such work. Drawing on the work of several leading scientists in the field as well as his own, Dr. Drew developed a process for reducing blood plasma to a powder form and then reconstituting it with distilled water. In this form it could last much longer—long enough to get across the Atlantic and make it to the battlefields of Europe. Once in the field, medics toted cans carrying two 400 cc bottles: one with the powdered plasma, one with the water. When they were combined, the reconstituted plasma could last for up to four hours.

Drew’s organizational genius matched his scientific know-how. Almost overnight, he transformed what had been little more than a series of lab experiments into an industrial-size program to collect, process, and ship out blood in all its forms. Using New York–area hospitals and the donations of some one hundred thousand American servicemen, “Blood for Britain” and “Plasma for Britain” ended up delivering some 14,500 pints of plasma to the Allies, saving countless lives.

As the United States prepared for war in late 1941, Dr. Drew was recruited by the Red Cross to manage a similar blood drive for America. But he was outraged when, a few months into his tenure, the Red Cross bowed to popular ignorance and announced that it would segregate blood by race—a distinction based on nothing but bigotry and superstition. Drew resigned and devoted the rest of his life to teaching at Howard and serving as chief of surgery at the Freedmen’s Hospital.

Tragically, one morning in 1950, after spending all night in the operating theater at his hospital, Dr. Drew fell asleep at the wheel while driving with some colleagues to the Tuskegee Institute’s annual free clinic; the car crashed, and he died of his injuries. He was just forty-five, but he had already left a legacy of service that few will ever match.

It Started with Frankenstein

The Artificial Pacemaker

A modern pacemaker, to be implanted in a patient’s chest between the skin and the ribs. The wires are attached to the heart muscle.

The dream of some artificial means to keep our hearts beating is an old one, and doctors and engineers around the world have made important contributions to developing a pacemaker. But it was a couple of inventive American engineers, Earl Bakken and Wilson Greatbatch, who turned the pacemaker into a practical reality in our time.

Challenged by Lord Byron to write a ghost story during the chilly, bleak summer of 1816, eighteen-year-old Mary Shelley came up with a novel she titled Frankenstein; or, The Modern Prometheus. Her theme reflected a Europe that was obsessed with the reanimative powers of electricity and had been ever since that colorful American genius Benjamin Franklin—nicknamed, not so coincidentally, “the Modern Prometheus”—conducted his experiments with lightning back in the 1750s. The Italian scientist Luigi Galvani—as in “galvanize”—even posited that all creatures possessed a natural “animal electricity.” His nephew, Giovanni Aldini, attempted to prove this by hooking up a primitive battery to the face and rectum of an executed English murderer, George Forster, and letting fly. Forster’s jaw dropped and one eye flew open; he writhed about, kicked his foot, and even punched a fist into the air. But he did not, thankfully, come back to life.

Galvani’s theories were soon discredited by another Italian scientist, Alessandro Volta—as in “volt”—but the idea of stimulating matter with electricity proved to have as much life as Mrs. Shelley’s novel. By the late nineteenth century, a young British physiologist named John Alexander MacWilliam was speculating that electrical impulses could control the beating of the heart.

An early external pacemaker was invented in Australia. But the term artificial pacemaker was coined by a New York heart specialist named Albert Hyman, who, with the help of his brother, Charlie, developed a spring-wound, hand-cranked motor designed to revive stopped hearts—more of a mechanical defibrillator than what we think of today as a pacemaker. (Pacemakers regulate the heart, adjusting its rate of beating if it is too fast or too slow. Defibrillators, or ICDs when implantable, shock a heart that is quivering chaotically back into a normal rhythm.) Hyman did not patent or publicize his invention, as such efforts were then characterized as “playing God” by the newspapers.

By the end of World War II, enough hearts had been stopped for a little God-playing to be judged in order. Work on transcutaneous (through the skin) pacemakers accelerated around the world, but most designs relied on vacuum tubes, large and immobile machines, and wall-socket power that carried a not insignificant risk of electrocution.

Earl Bakken of Minneapolis had a better idea. Inspired by, yes, the 1931 movie version of Frankenstein, Bakken at the age of eight devised a five-foot robot of sorts that could talk, blink its eyes, smoke a hand-rolled cigarette, and wield a knife. He also invented a sort of primitive taser, purportedly to keep bullies away. Instructed by a minister to “use science to benefit humankind, not for destructive purposes,” Bakken would later build the first wearable external pacemaker for Dr. C. Walton Lillehei in 1958, after Lillehei lost a patient thanks to a local blackout.

Dr. Lillehei began equipping his heart patients with Bakken’s artificial pacemaker almost immediately. The device, which used the new silicon transistors, was housed in a small plastic box with controls to adjust the heart rate and voltage. Connected by leads that passed through the patient’s skin to electrodes attached to the surface of the heart muscle (the myocardium), it was a great step forward. But it still meant attachments that could serve as a highway for infection from outside the body.

At nearly the same moment, Swedish doctors installed the first fully implantable pacemaker. But problems remained, such as how to recharge its batteries. (Originally an induction coil was used, which must have been little better than what was done to poor Forster.) Many individuals and companies labored to make a better battery over the next decade, but none outdid a Buffalo Sunday school teacher and member of his church choir named Wilson Greatbatch. After getting his engineering degree from Cornell on the GI Bill, Greatbatch invented first a mercury pacemaker battery, then a lithium-iodide cell battery that became the industry standard.

Neither the iodine nor the poly-2-vinylpyridine in his cathode conducted electricity. But Greatbatch found that when he heated them to 150 degrees Celsius (302 degrees Fahrenheit) they formed an electrically conductive viscous liquid, which kept its conductivity after being poured into a cell to harden with the lithium. Encased in titanium, Greatbatch’s battery proved to have the endurance (energy density), small size, low self-discharge, and reliability needed. The pacemaker it powered was usually installed under the clavicle and could be attached to one or two chambers of the heart, the atrium and/or the ventricle. Best of all, the battery’s life span was soon extended to an average of ten years.

First installed in 1960, Greatbatch’s pacemakers were the industry standard by 1971. Medical science continues to make great strides. Today’s pacemakers can be implanted with a simple operation, sometimes one that requires only local anesthesia and can be completed in as little as thirty minutes. Leads are fed into the heart via a large vein, with the use of a fluoroscope. When the batteries need to be changed, they can be replaced with another simple operation—one that does not usually need to disturb the ventricular leads. One advance expected imminently is a pacemaker the size and shape of a multivitamin that can be threaded up through a vein in the leg and implanted directly inside the heart, with no leads at all. Dr. Frankenstein would be pleased.

the genius details

Swedish surgeon Åke Senning implanted a pacemaker designed by Rune Elmquist, a Swedish inventor, into Arne Larsson in 1958. It was the first internally implanted artificial pacemaker. It failed after just three hours. Larsson would receive a total of twenty-six pacemakers over the remaining forty-three years of his life and would outlive both the inventor of his pacemaker and his surgeon.

A firewall has now been developed that prevents outside parties from reading the medical information from one’s pacemaker or altering it in any way.

The invention of the transistor at Bell Labs in 1947 (see “A Computer on a Chip: The Microprocessor”) made the modern pacemaker possible.

Internal pacemakers with mercury batteries, devised by Wilson Greatbatch, were successfully implanted in 1960 at a Veterans Administration hospital in Buffalo by doctors William Chardack and Andrew Gage.

There are about three million people worldwide who now have pacemakers, with another 600,000 receiving implants every year, including roughly 200,000 Americans.

Mapping the Body

The MRI

A modern MRI machine, the product of groundbreaking research by generations of American immigrants.

The contributions of immigrants stand out on every page of the long American record of invention, innovation, and achievement. But in no field is the input of new Americans and their children more conspicuous than the development of that miraculous lifesaving device, the magnetic resonance imaging (MRI) machine.

An MRI is not an X-ray, a CAT scan, or a photograph but a radio-created image of the human body that is translated into a picture by computer technology. The idea of magnetic resonance imaging—or “nuclear magnetic resonance” (NMR) imaging—originated with Isidor Isaac Rabi, born in a small town in Galicia and brought to the United States when he was three years old. Small and sickly, surviving—barely—in the slums of New York, he read voraciously, tearing through the shelves of the Brownsville, Brooklyn, public library, attracted especially to the writings of Jack London: “What appealed to me was the democratic idea that anybody could become anything.”

Rabi would work his way through a chemistry degree at Cornell, become the first Jewish professor in the sciences at Columbia, and build the best physics department in the world there. His great strengths lay in research and team building, and they led to a breakthrough: a method of bombarding matter with electromagnetic waves that could reveal the properties of atomic nuclei and, through them, the composition of the substance itself.

Rabi and his assistants achieved this by first exposing protons to a strong, steady magnetic field, which made them line up facing north or south, much as a needle does on a compass. Hit with electromagnetic waves of a precisely calculated frequency, the protons then flipped over. After a fraction of a second, the nuclei relaxed and flipped back, but as they did they “resonated”—that is, they sent out radio signals at the same frequency they had received.

All this would mean little—save for the fact that the protons’ relaxation period varied according to their makeup. “We Are All Radio Stations” was how a New York Post headline summed up Rabi’s work in 1939—which was essentially correct: our atoms “broadcast” back what is inside us.
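To put a number on that “precisely calculated frequency”: the resonance follows a simple relation known as the Larmor equation. As a worked illustration (the constant for hydrogen and the 1.5-tesla field strength below are standard textbook values, not figures from this chapter):

\[
f \;=\; \frac{\gamma}{2\pi}\, B_0
\]

where \(f\) is the resonant frequency, \(B_0\) is the strength of the steady magnetic field, and \(\gamma/2\pi \approx 42.58\) MHz per tesla for hydrogen nuclei. In a typical 1.5-tesla clinical magnet, then, \(f \approx 42.58 \times 1.5 \approx 64\) MHz, squarely in the radio band, which is why it is fair to say that our atoms “broadcast.”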

Rabi’s discovery would have enormous implications in any number of fields, and scientists put nuclear magnetic resonance to all sorts of uses. But no one thought to apply it to medical diagnosis until Raymond Vahan Damadian, the son of immigrants who had fled the Turkish genocide against the Armenian people. Born in Manhattan and raised in Forest Hills, young Raymond was a prodigy in several fields, studying the violin at the Juilliard School of Music by the time he was eight, earning money as a tennis pro in the Hamptons by age fourteen, and winning a special Ford Foundation college scholarship when he was just fifteen.

Playing you like a radio: a Golay coil, complete with the main magnetic coil that creates a total magnetic field around your body; the x, y, and z magnetic coils that create varying fields to “read” your cells; and the radio receiver and transmitter that receive and convey information from your body.

Damadian headed for the University of Wisconsin, then the Albert Einstein College of Medicine. He became an emergency room doctor, then a medical researcher, dreaming of curing the cancer that had killed his beloved maternal grandmother.

Damadian was studying how the kidney balances fluids and electrolytes when he audited a Harvard course in quantum physics taught by Edward Purcell, another of the Nobel-winning geniuses who had advanced NMR spectroscopy. Damadian’s supple mind grasped a greater possibility before anyone else’s did: maybe it was possible to distinguish cancer cells from normal ones by means of magnetic resonance imaging. With the human body made up mostly of water, it would be relatively easy to resonate the single proton in hydrogen atoms.

An experiment with the cancerous tumor tissue of rats soon bore him out. Exultant, Damadian proposed a giant leap forward in medical diagnosis: machines big enough to scan a living human being. The NMR machines he was using at the time were built to handle no more than a spinning test tube or a lab slide.

“What I was talking about was like going from a paper glider that you tossed across the classroom to a 747,” he admitted. But, “Once you get a strong idea, you can become its prisoner.”

Damadian would endure a stiff sentence as the prisoner of his idea, but he could not give it up. Despised as a mere MD by physicists and other PhDs, he was shunned by the cancer research community, labeled a crackpot, and refused funding after his first few years of research. His only backers were his extended family. His wife’s brother raised $10,000 on Long Island in the 1974 equivalent of a Kickstarter campaign, even passing the hat at flea markets.

Damadian had to build his own MRI machine with his two assistants, Michael Goldsmith and Lawrence Minkoff. This entailed taking a course at the RCA Institute of Electronics and borrowing software from the Brookhaven National Laboratory (an institution started by Izzy Rabi). Damadian bought thirty miles of superconducting wire, cheap, and he and his assistants spent another year winding it around homemade spools made from metal bookcases, welding an insulated container for the helium the machine would need, and building a radio coil out of discarded cardboard and copper foil tape.

The radio coil had to be wrapped around the body of a very reluctant Minkoff (an experimental mouse had accidentally been “cooked” by the device), the only person on Damadian’s team who was thin enough for it to fit. The process of scanning his chest took nearly five hours, until 4:45 on the morning of July 3, 1977, and left Minkoff freezing cold. But when the results were charted, then fed into a computer, they produced the first MRI in history.

Even then, Damadian’s troubles didn’t end. The National Cancer Institute expressed little interest in the MRI’s still relatively crude images as compared to existing CAT scan technology—a bizarre choice, since, unlike CAT scans, MRIs involved no cancer-causing radiation. A friend set up a meeting with General Electric, which Damadian attended, fearful they would steal his idea. Steal it they did, setting off twenty years of litigation.

In 2003, the Nobel committee awarded the Nobel Prize in Physiology or Medicine for developing the MRI to America’s Paul C. Lauterbur and England’s Sir Peter Mansfield, whose own MRI machines had raced ahead of Damadian’s in producing clearer images. Damadian was ignored, even though the written record of their work clearly documented that it originated from his, and even though he held the first-ever patent for an MRI.

Damadian would, in the end, win a boatload of scientific and technological prizes—and $128.7 million, when the Supreme Court ruled that GE had indeed violated his patent rights. His company would go on inventing life-saving devices, such as a mobile MRI scanner and an “open” MRI—work one cannot put a price on.

the genius details

The three gradient magnets of an MRI can produce an image of the human body from every possible angle, providing a huge advantage over all other imaging technologies.

Within a year of his first scan of Lawrence Minkoff, Damadian had reduced the scan time of a subject from nearly five hours to thirty-five minutes and demonstrated the difference between healthy and cancerous tissue.

There are now over twenty-five thousand MRI scanners worldwide.

Examined in an MRI machine near the end of his life in 1988, Dr. Rabi remarked, “I saw myself in that machine. I never thought my work would come to this.”

The Nobel Prize committee never gave a reason why Damadian was not included in the prize. Nobel Prizes can honor up to three individuals. The committee’s deliberations will not be open to the public for another thirty-eight years.

Man to Cheetah

The Modern Prosthesis

Van Phillips’s cheetah-inspired prosthetic leg. The blades are almost five times more efficient than the human ankle in retaining energy with each thrust.

Van Phillips was a twenty-one-year-old student at Arizona State in 1976 when a motorboat propeller cut off his left leg below the knee in a water-skiing accident. Back in school on crutches within a week, Phillips soon received another shock: his artificial limb, which he described as “a leg that felt like a fencepost with a bowling ball on the end of it.” For the highly athletic Phillips, who had been a pole vaulter, springboard diver, and karate enthusiast before the accident, it was a “most brutal piece of reality.”

“I couldn’t run on the beach. I couldn’t step off a curb. I would hit a pebble no bigger than a dime and fall over it,” Phillips said of his prosthesis. “I hated it. I would throw it across the room.”

For all the wars of the last century, and for all the amputations necessitated as people lived longer and became more prone to diabetes, prosthetics were much the same as they’d been since time immemorial. For legs, that meant a wooden foot on a metal leg, secured by straps to a leather knee (or hip) casing—a state of affairs summed up in the acronym SACH, for Solid Ankle Cushion Heel, about the best that anyone thought could be done for the amputee. The whole device was lifeless, a sort of stand that sent a shock back up the spine every time its wearer landed on the fake heel.

Phillips had no medical background, but he decided to build a better leg himself. That first required years of education in biomedical engineering, but he never stopped thinking about how he might regain his old, athletic lifestyle. Practicing tae kwon do, he decided that what he needed was a prosthesis that dispersed shock and gave him at least an inch of bounce; one that could bend and fit the body as it changed, without the constant adjustments prosthetics usually required whenever their users lost or gained even small amounts of weight.

Hiking for hours, dragging his unwieldy false leg along with him, Phillips looked everywhere in nature for some model. Then it came to him: the cheetah, the fastest animal on earth. The key to its speed was the long tendon of the big cat’s hind legs, which curved in the shape of a “C” from hip to foot. Every time the foot hit the ground, that tendon stretched like a catapult, storing energy and then firing it back out to launch the cheetah forward again.

Phillips began to search for the right material, working through titanium, plastics, chrome steel, rubber, and aluminum before settling on a light carbon graphite foot. To his delight, it allowed him to walk better and even run and play tennis. But he still found himself breaking a foot a week, and each time he had to mold a new one he plunged back into depression.

Then Phillips met Dale Abildskov, a specialist with an aerospace company, who moved him away from some imitation of a foot and toward making the entire prosthesis like that C-shaped tendon. Phillips knew the moment he tested it that the idea was sound, transferring more energy to and from the “blade” of the prosthesis with each step forward than the human foot could ever provide. It didn’t even need a heel.

Phillips quit his job the next day and set about perfecting the new leg. It only took . . . about two years, as he went about building—and breaking—some two hundred to three hundred more artificial “feet.” Working in his basement laboratory, he constantly risked serious injury, and he nearly died when he inhaled toxic fumes that laid him up for a week, unable to eat, speak, or move.

Van Phillips found that only carbon graphite could compete with human or animal tendons and ligaments for strength and agility. Each individual strand of carbon graphite is thinner than a human hair, but tens of thousands of these strands, bound together with a gluelike carbon epoxy matrix, can make a material that surpasses even the capacity of human tendons to store and release energy.

Abildskov not only helped Phillips to reengineer his foot but also guided him toward financing and support. The two joined with Bob Barisford (Abildskov’s old boss at Fiber Science, his former aerospace company) and another partner, Walt Jones, to form Flex-Foot in 1983. Bob Fosberg would later bring his expertise as a Harvard MBA and pharmaceutical executive to the new company.

“There’s nothing as much fun as getting four or five people together, all with the same goal,” Phillips would later tell groups of young people about their efforts. “Each of us would spur on the others, and you get this vortex of energy as the idea goes round and round.”

What they produced was a stunning breakthrough in the prosthetics world. Though Phillips sold the venture to the Icelandic firm Ossur in 2000, his work became famous as the design that allowed Irish American athlete, model, and actress Aimee Mullins, who lost both her legs below the knee before the age of one, to become a Paralympic champion. An estimated 90 percent of all Paralympic athletes today use a Phillips-designed prosthetic. His blades made it into the 2012 Olympics, and many suspect that one day the fastest athletes in the world will be amputees fitted with “superior” new legs. Some even call it part of an evolution toward a new form of human being, while others look forward to continuing advances in myoelectric prostheses that promise to let people control artificial limbs with their minds, and even regain feeling in them.

Phillips himself predicts that the ultimate solution will be “regenerated limbs,” but he’s not waiting for that. Instead, ever since selling his company, he has devoted his energies—and over a million dollars of his own money—to developing advanced prosthetics that will cost no more than ten dollars and be easily repairable for the estimated five to ten million individuals who have lost legs and feet to the land mines that now litter the developing world. Will he succeed? As Van Phillips likes to tell classes of children: “Anything you can think of, you can create.”

the genius details

Van Phillips holds sixty patents today for designing prosthetic legs that allow him to run and walk, and also ones specifically for skiing, swimming, surfing, scuba diving—and riding horseback, with his daughter.

Each prosthetic “Cheetah leg” costs between $15,000 and $18,000 today.

The Cheetah blades actually put runners at a disadvantage at the start of a race, since they provide no thrust for accelerating out of the blocks. Runners using Cheetah blades must generate more power from their gluteal and abdominal muscles.

Some studies have shown that after the runner has reached maximum speed the blades do offer a theoretical advantage, as they lose only 9.3 percent of energy while hitting the ground, while the human ankle loses 42.4 percent of energy.
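Those two percentages are also the source of the “almost five times” comparison quoted with the photo of Phillips’s leg. A quick back-of-the-envelope check (the percentages come from the studies cited above; the division is ours):

\[
\frac{42.4\%}{9.3\%} \;\approx\; 4.6
\]

In other words, the human ankle loses roughly four and a half times as much energy at each ground strike as the blade does; put the other way around, the blade returns about 90.7 percent of each strike’s energy versus about 57.6 percent for the ankle.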

Heal Thyself

Gene Therapy

The prospects were grim in 1990 for four-year-old Ashanti DeSilva, who suffered from a rare ailment known as adenosine deaminase (ADA) deficiency, which caused severe combined immunodeficiency (SCID), leaving her prey to infections or viruses that could easily kill her. The problem was a single defective gene that kept Ashanti’s cells from producing an enzyme her white blood cells needed to survive and do their work as the body’s natural protectors against the myriad diseases and viruses that routinely attack us.

But doctors hoped to help DeSilva with the new tool of gene therapy, which had been anticipated for decades. Experiments in 1952 had confirmed DNA (deoxyribonucleic acid) as the molecule of genetic information. Then in 1953 James Watson and Francis Crick discovered its double-helix structure. By 1966, the genetic code was largely deciphered, thanks to the work of Marshall Nirenberg and three of his colleagues at the National Institutes of Health (NIH), as well as other scientists around the world. Only a few years later, in 1972, Drs. Theodore Friedmann and Richard Roblin proposed that undamaged DNA could replace defective or ineffectual DNA in patients suffering from hereditary disorders, a proposal that launched what became known as gene therapy.

The basic idea of Ashanti’s gene therapy was to draw her blood, treat her defective cells with the working gene she lacked, then inject the healthy cells back into her. The result: the patient lived—but the procedure was far from a complete success. The treated cells managed to produce the enzyme, but they did not produce more healthy cells, thereby sinking hopes—for the time being—that the body could be trained to repair itself indefinitely.

DeSilva remains alive and well as of this writing, but she still periodically repeats her gene therapy treatments and takes a drug that contains the needed enzyme. Her treatment made it clear that gene therapy was more complicated than had been hoped. In 1999, the terrible death of Jesse Gelsinger, an eighteen-year-old who had volunteered to take part in gene therapy trials for an inherited liver disease that he had been able to control with drugs, sparked a crisis of doubt in (and about) the field. Gene therapy projects around the country were suspended or canceled, funding was slashed, lawsuits were launched, ethics panels were convened, and the entire treatment fell into ill repute.

The problem lay both in the system for delivering those repaired, healthy genes via “vectors”—“deactivated” or “hollowed-out” viruses—and in the sheer complexity of the human body. A previous, perhaps unsuspected, exposure to the virus could set off an extreme counterattack by the body’s immune system (possibly what proved fatal for Jesse Gelsinger). And a vector that inserted its cargo in the wrong place in a patient’s DNA could knock out healthy genes and allow cancerous tumors to arise—something that victimized other gene therapy subjects.

Yet the research effort in labs throughout America has recovered, and dozens of trials are now reaching their late stages, promising ways to deliver genes with safer and more effective viruses. There have already been some encouraging early results in combating more immunological disorders, as well as hemophilia, anemia, and muscular dystrophy; neurodegenerative diseases such as Parkinson’s and Huntington’s; viral infections such as HIV, hepatitis, and influenza; heart disease, diabetes, numerous cancers, and even blindness. Researchers have traced some four thousand diseases to defective genes, from amyotrophic lateral sclerosis (Lou Gehrig’s disease) to Alzheimer’s to arthritis. The delivery vectors, or viruses, they are working with may soon be able to repair, replace, or even “silence” genes gone awry (those that create cancers or kill brain cells), reaching every part of the body and crossing the blood-brain barrier. Eventually, “improved” genes may even be able to do things like kill your urge to smoke—or endow your offspring with advanced abilities and powers in any number of endeavors.

This potential raises obvious ethical questions about just where “gene therapy” becomes “genetic engineering,” a much more dubious concept. It has resulted in the banning of “germline,” or hereditary, gene therapy, at least until we can better understand all the consequences. For the time being, gene therapy also remains almost prohibitively expensive. But as of 2014, some two thousand gene therapy clinical trials around the world have been conducted or approved. Over six hundred such trials are under way in the United States. As our science continues to advance, and as US and foreign companies continue to pour money into gene therapy research, it seems inevitable that one day we will be able to cure some of what are now our most formidable diseases with a quick injection—and maybe raise our future progeny’s SAT scores at the same time.

the genius details

Philip Leder, a geneticist working with Nirenberg, developed a method for reading the genetic code on pieces of transfer RNA in 1964, thereby accelerating the deciphering of the genetic code.

The Human Genome Project, which has plotted the entire sequence of genes in the human body, estimates that each human being has twenty thousand to twenty-five thousand genes.

Glybera, a gene therapy product for treating a protein production deficiency, costs $1.6 million per patient and is estimated to be the world’s most expensive drug.

Somatic gene therapy—the repair, transformation, or replacement of a person’s individual genes—is allowed in the United States, as it affects just the individual patient, not his or her offspring.

Children born with donated ooplasm—the cytoplasm of the egg cell, which carries its own mitochondrial DNA—to mothers whose own ooplasm is defective have three genetic parents.