CHAPTER NINE
 
An Emerging Enemy
 
Australia, 1941
London, England, 1962–63

Although one was struck with the unusual appearance of the cataracts in the first few cases, it was only when other similar cases continued to appear that serious thought was given to their causation.

—Norman McAlister Gregg, Australian ophthalmologist, 19411

EARLY IN 1941 a tall, athletic eye surgeon with a thriving practice in Sydney, Australia, noticed an alarming uptick in the number of blind babies being sent to his office.

Norman McAlister Gregg had a mind as sharp as his impressive abilities in cricket, golf, and tennis. He had finished with first-class honors in his medical class at the University of Sydney in 1915 before departing for World War I, where in France he served as a captain in the Royal Army Medical Corps. He was decorated for “conspicuous gallantry” after searching out and tending to the wounded while under heavy enemy fire.

After the war Gregg completed a residency in ophthalmology in the United Kingdom. He then returned to Sydney, where he launched a successful private practice. He was a man with little tolerance for slackers or fools, but he was kind and compassionate to patients and had a habit of listening deeply to their stories. He was also infectiously enthusiastic, with a curious, penetrating mind.2

By early 1941 Gregg, balding and bespectacled at age forty-nine, had become the senior eye surgeon at the Royal Alexandra Hospital for Children in Sydney. In the first half of that year, the blind babies began turning up, one after another. By June he had seen thirteen of them—an unusually high number in a city of around one million residents.

All the babies had cataracts: milky white opacities in what should have been the transparent lenses of their eyes. (The lens is an elliptically shaped, bloodless, nerveless structure that measures about one centimeter in diameter in adults. It sits behind the pupil, shape-shifting to help the eye focus on near or distant objects.) The white opacities, usually in both eyes, had been present from birth, their parents said. They made what should have been black pupils appear as if they were white.

When Gregg put drops in the babies’ eyes to expand their pupils, the pupils responded weakly and sluggishly to light. The older babies—those more than three months old—also displayed coarse, jerky, purposeless eye movements. “It was a searching movement of the eyeballs and indicated the absence of any development of [focus],” Gregg would write.

But it was the extent of the cataracts themselves, which were unlike any congenital cataracts that he had seen, that caught Gregg’s attention as he examined baby after baby. The opacity was densely white in the dead center of the lens, but toward the periphery its density lessened, changing to a cloudier, smoky appearance: a “whitish haze.” Finally, there was an unaffected zone at the very edge of the lens.

Gregg knew that during embryonic development the center of the lens grows first and that its peripheral layers are laid down later in pregnancy, like the outer layers of an onion. Whatever had caused the cataracts must have done so during early embryonic life.

The babies’ eyes weren’t their only problem. They tended to be small and poorly nourished. They had difficulty breast-feeding, a problem often seen in newborns with heart defects. Gregg asked a pediatrician colleague, Margaret Harper, to examine eight of the babies. She heard a harsh murmur along the breastbone in every one of them. It would ultimately emerge that twelve of the thirteen had been born with heart anomalies.

Gregg was disturbed, and suspicious that what he was seeing wasn’t a mere coincidence. The received wisdom held that all birth defects were inherited, transmitted from parent to child in the genes. To suggest that environmental factors might play a role was considered patently unscientific. But this sudden “epidemic” of cataracts in newborns, the similarity of the babies’ unusual cataracts, and the co-occurrence of the babies’ eye and heart problems led Gregg to suspect a common, possibly environmental cause.

One day two mothers of babies with cataracts were sitting in Gregg’s waiting room talking about their babies. One mentioned to the other that she had had German measles while she was pregnant. She was worried that it had affected her baby. The other mother said that she too had had German measles while she was expecting. During her baby’s appointment, each woman mentioned this fact to Gregg and asked him if the disease might be to blame.3

Gregg had been searching for a clue, and this one all but shouted at him. He took careful histories from the two mothers and began contacting the mothers of the other eleven babies to ask if they had suffered from German measles while they were pregnant. He also contacted close colleagues—fellow Sydney-based ophthalmologists—and asked them how many cases of congenital cataracts they had seen recently. Could they ask the mothers whether they had had German measles while pregnant? He put the same questions to ophthalmologists around the rest of eastern Australia from Melbourne to Brisbane.

The medical name for German measles is rubella. It comes from the Latin and means “little red.” Rubella is generally a mild disease that is transmitted by droplets coughed, sneezed, or otherwise expelled from the mouth or nose of an infected person. It is characterized by fever, swollen glands, and a rash. It owes its popular nickname to the fact that it was first described by a German, Friedrich Hoffmann, in 1740.4 (The first English description was penned by British physician William Maton in 1815.)5

A Scotsman, Henry Veale, serving with the Royal Artillery in India, coined the term “rubella” in 1866, after carefully documenting the course of an outbreak in a Bombay boarding school. His study also distinguished the disease from classical measles, a distinctly different malady, which was present in the school at the same time.6

Before Veale’s paper was published, and for most of a century afterward, rubella was viewed by doctors as an annoyance—a sort of “bastard measles”—although thanks to the work of Veale and others, it was formally recognized as a distinct entity by an international medical congress in 1881. It was a trivial disease, they thought, an irritant that could confuse the diagnosis of other, more dangerous rash-inducing diseases, particularly classical measles and scarlet fever, which killed children with regularity.

In fact, rubella is so mild that up to two-thirds of infected people aren’t even aware they have it.7 Those who do have symptoms may experience a low-grade fever and swollen glands where the jaw meets the neck and at the hairline on the back of the neck—symptoms that set in twelve to twenty-three days after they are first infected with the virus. In some people, especially young women, rubella can cause aching joints, or even an arthritis that makes joints red, hot, stiff, and swollen and that can continue or recur for months. In about one in five thousand cases, and typically in adults rather than children, rubella causes encephalitis, an inflammation of the brain that is fatal in one in five cases.8

Rubella’s hallmark is a pink or red rash that starts about two weeks after exposure. It often begins on the face and travels down to the trunk and limbs. It’s composed of flat pink or red patches and can have small, raised bumps. Occasionally it is itchy. It lasts about three days.

Rubella is not as wildly contagious as classical measles. But it definitely spreads. While people are at their most contagious when the rash is new, they can pass on the virus for a week before and a week after the rash appears. Even people who never develop a rash or other symptoms can spread the virus.

When Gregg began seeing the blind babies, Australia had been at war since September 1939, with large numbers of young men living in closely packed military camps in preparation for shipping out to Europe and Africa. Infectious diseases circulated easily in the crowded barracks.

In 1940 Australia experienced a widespread rubella epidemic. Unusually, it was a severe rubella that knocked down fully grown adults. Many had throbbing wrists and ankles and raw, sore throats. Others were simply laid low. In the new infectious disease unit at Sydney’s Prince Henry Hospital, the average rubella patient stayed eight days.9

Medical experts have hypothesized that conditions were ripe for the Australian outbreak not only because of the crowded military camps but also because many of the recruits in wartime Australia came to large cities from rural areas where they had likely never been exposed to the disease. They therefore hadn’t developed antibodies against the virus, making them a prime breeding ground for an epidemic. As the war went on, the continuing influx of recruits to the cities provided a constantly replenished source of nonimmune soldiers.10 They then went home on leave, taking the disease back to their families, wives, and girlfriends. The situation may have been aggravated by the employment of young women in munitions factories, offices, and the armed services.

After the two mothers in his waiting room first asked if their babies might have been damaged by rubella, Gregg checked the records of rubella admissions to the infectious-disease unit at the huge Prince Henry Hospital. What he found confirmed his suspicion: the peak period of rubella admissions had occurred between mid-June and August of 1940, seven to nine months before the bulk of the unlucky babies were born in March, April, and May of 1941.

When Gregg interviewed the mothers of his eleven other infant patients with cataracts, only one of them said that she had not had German measles while she was pregnant. She also told him that she was kept so busy looking after her ten children that she couldn’t remember any details of her pregnancy, except that she was ill at about the sixth week.11

The answers that came in from his ophthalmologist colleagues in Sydney and the rest of eastern Australia were as close to definitive as Gregg could have hoped. His colleagues had diagnosed sixty-five babies with cataracts. This brought the total, including the babies that Gregg had seen, to seventy-eight. Of these, the mothers of sixty-eight reported having had German measles while pregnant. Among the remaining ten, five mothers said either that they didn’t know or that they hadn’t had rubella. In a couple of cases the ophthalmologists didn’t get around to asking the question. In another the mother reported “kidney trouble” while pregnant.

Of the sixty-eight women who were sure they’d had rubella, the vast majority had been ill during the first or second month of pregnancy. For most of these that meant July or August of 1940.

For Gregg the case was clinched. In October of 1941 he stood before the Ophthalmological Society of Australia and reported on his cases, stating plainly that rubella during pregnancy had caused not only the cataracts but also the heart defects. By that time fifteen of the babies were dead. Their autopsies had revealed a number of heart defects, most commonly the failed closure of a fetal blood vessel connecting two major arteries near the heart, a condition called “patent ductus arteriosus” (PDA).

Gregg published his findings that same year in Transactions of the Ophthalmological Society of Australia, in a now-classic paper entitled “Congenital Cataract Following German Measles in the Mother.”12

While his discovery was taken seriously and quickly followed up and confirmed in Australia, elsewhere Gregg’s findings were slow to be picked up, in part because people were distracted by the war. He also took his share of disdain for bucking the received wisdom of the day by suggesting an infectious cause for a set of congenital defects. One editorial in the British journal the Lancet in 1944 noted that the study was retrospective and had relied on women’s word-of-mouth accounts of having had rubella. Gregg, it intoned, “cannot yet be said to have proved his case.” The editorial writer went on to assail the lack of statistical rigor in a key 1943 follow-up study by other Australians, which linked maternal rubella during pregnancy to cataracts, deafness, heart disease, and microcephaly—an abnormally small head, which is frequently accompanied by intellectual disability.13 The Lancet writer concluded that, if rubella was a real problem in pregnancy, it would likely have been noticed long ago: “The lay public have always held that congenital malformations have an extrinsic explanation—from being frightened by a dog to falling down stairs—and it will be strange if the influence of a mild illness in the first months of pregnancy, accompanied by a rash, has escaped attention.”14

In 1946 an editorial in the Journal of the American Medical Association fully accepted Gregg’s findings and their serious implications but conjectured that the particular rubella virus that caused the severe epidemic in Australia in 1940 might have had unique abilities to affect the fetus and might be responsible, through travelers, for cases that had since been reported in the United States and England.15 American women were paying attention, and many decided to take no chances. One study followed 104 women in New York City who between 1949 and 1955 were diagnosed with rubella during the first three months of their pregnancies. Forty-five chose to have abortions because of their infections.16

It would take study upon study in the 1950s to win full acceptance of Gregg’s findings by the medical profession. The follow-up studies documented a broad range of damage that the virus did to fetuses in early pregnancy—any rubella virus, not just the Australian virus of 1940. They confirmed that rubella’s ruinous results included deafness, cataracts, heart defects, microcephaly, and associated intellectual disability. Later, autism would be recognized as another sign of the brain damage in congenital rubella survivors.17 Any combination of rubella-induced problems would come to be dubbed congenital rubella syndrome—CRS for short. It was at last quite plain for all to see: rubella, an exclusively human virus no more than three millionths of an inch in diameter, was a menace to life in the womb.

And there was no defense against it. The rubella virus had not been isolated in the lab. Until it was captured, there could be no vaccine.

In late September of 1962 a brief article appeared in the British Medical Journal under the title “Rubella, 1962.” The report, by three doctors in general practice in Beckenham, a London suburb, described “a widespread epidemic of rubella” between March and July that year.

The three general practitioners had seen 355 patients with rubella in that short space of time—nearly 6 percent of the patients in their practice of about 6,500. But, they wrote, this was likely an underestimate. They suspected that another 200 people had been infected but had not been seen in the office—people who had telephoned them but not come in; others who had mentioned having the disease after the fact; and still others who had likely been infected but had not noticed or reported the disease.

Children aged five to ten were by far the most often affected, the trio reported. Fully 25 percent of the practice’s patients of this age had been diagnosed with rubella. But, the doctors added, “in this group it is very probable that over 50 percent were infected. Local schools have confirmed that more than half of classes were absent during this period, presumably because of rubella.”18

What the Beckenham doctors saw was repeated in untold numbers of doctors’ offices in the United Kingdom during the spring of 1962 and again in the spring of 1963. (In the United Kingdom, as in the United States, rubella infected people throughout the year, but infections peaked in the early spring.)fn1 19

Rubella seemed to be everywhere: The mayor of Rugby and his three children were down with German measles and confined to their home, the Times of London reported in mid-March 1962.20 “Between 20 and 35 Eton College boys have German measles and about 20 are in the school sanatorium. The others are recovering at their homes,” the newspaper added in a report about the famous prep school later that month.21

The star batter R. E. Marshall of the Hampshire cricket team “had contracted German measles and had taken them home with him—so his colleagues hope,” the London newspaper the Guardian reported in June 1962.22

And women who, early in their pregnancies, knew or suspected they had the disease wrestled with a terrible choice. They could carry the fetus to term, accepting the high risk that it would be damaged by the virus. Or they could seek an abortion.

One woman, identified only as “A Mother,” wrote to the British newspaper the Guardian in August 1963. After she contracted rubella very early in her pregnancy, her doctor told her there was only a one-in-three chance that the baby would be born undamaged.

“When he gave me the chance to enter the hospital immediately and have the pregnancy terminated, I felt I had no choice,” she wrote, adding that she nonetheless felt “a deep elemental repugnance for what I was doing.” She added: “Now when people say, or I hear myself saying, how lucky I was, I feel simultaneously a twist of revulsion. It is not lucky to have disposed of a life. What was lucky, however, was the chance that I was sent to a humane doctor.”23

Many pregnant women weren’t as certain as the letter writer that they had had rubella. For them, decisions about whether to continue a pregnancy were perhaps still more agonizing. What would have served them—and what wasn’t available—was a definitive lab test to identify rubella infection.

The first vital step toward developing such a test came in October 1962, when two groups of American researchers published papers in the journal Proceedings of the Society for Experimental Biology and Medicine. More than twenty years after Norman Gregg recognized the link between rubella and fetal damage, the virus had finally been isolated in the lab.

One pair of physicians, Thomas Weller and Franklin Neva at the Harvard School of Public Health, had cultured the virus by inoculating plates of human amnion cells with the urine of Weller’s ten-year-old son, who was ill with rubella. But their method of identifying the virus in a lab dish took so long—from two and a half to four months—that it was useless for worried pregnant women.24

The more practical achievement came from Paul Parkman, the mild-mannered son of a post office clerk and a homemaker from the tiny town of Weedsport, New York. Parkman was a young physician and virologist at the Walter Reed Army Institute of Research in Washington, DC. With his colleague Malcolm Artenstein and his boss, Edward Buescher—and with throat washings from a score of young military recruits who were hospitalized with rubella at Fort Dix, New Jersey, in February and March of 1961—Parkman had devised a way to get around a vexing property of the rubella virus.

Unlike a virus such as polio, which tears through cells in culture, exploding them and leaving chaos and debris in its wake, rubella was seemingly indolent in a culture dish, leaving no clear signs that it had infected cells. Parkman’s group came up with an indirect test to prove that the virus was present in culture. They first inoculated throat washings from the Fort Dix recruits onto African green monkey kidney cells, keeping uninoculated monkey cell cultures as controls. Seven to fourteen days later they added another virus, a gut virus called ECHO-11, to all of the cultures. ECHO-11 destroyed the uninfected control cells in two or three days. But rubella virus blocked ECHO-11’s effects, leaving the kidney cells intact.25 It was a cumbersome way to identify a virus, but it was certainly quicker than Weller and Neva’s method. And in the face of an epidemic, it was a welcome development.

In London that autumn of 1962, a pediatric resident at the Hospital for Sick Children on Great Ormond Street—nicknamed “GOSH,” for “Great Ormond Street Hospital”—read the new papers on the isolation of the rubella virus with keen interest.

Stanley Plotkin, who had helped test Koprowski’s polio vaccine while at the Wistar Institute in Philadelphia, had paused his research career to complete the training in patient care that would qualify him as a pediatrician. That summer, just after his thirtieth birthday, he had finished a first year as a pediatric resident at the Children’s Hospital of Philadelphia. This second year of residency, which he was spending at GOSH, would see him through to full qualification as a pediatrician. After that, he had reassured Koprowski, he intended to return to the Wistar.

Plotkin was born in 1932 in the Bronx, the son of a telegrapher and a bookkeeper—first-generation U.S. immigrants whose own parents had fled the hostile climate for Jews in eastern Poland. A slight, bookish, precociously intelligent child, he was nearly felled by pneumococcal pneumonia in 1936, before antibiotics were available. He was plagued by asthma and at age nine was sent alone to the National Home for Asthmatic Children in Denver, where he contracted influenza, was hospitalized, became comatose, and again nearly died. He emerged months later weighing forty-three pounds, the left side of his face stilled by a facial nerve paralysis called Bell’s palsy.26 (The paralysis was transient in Plotkin’s case; it isn’t always.)

Plotkin was a quiet, studious boy who frequently skipped grades in school. He graduated from the Bronx High School of Science at sixteen, after working furiously to keep up with peers who were two years older. He says that to this day he has never inhabited a more competitive academic environment.27

As a teenager Plotkin read voraciously, regularly raiding the public library a few blocks from his family’s two-bedroom apartment on East 178th Street. At fifteen he stumbled on two books that changed his life. The first was Arrowsmith, a 1925 novel by Sinclair Lewis, which chronicles the career journey of a young physician who tries his hand as a small-town doctor but eventually becomes an immunologist and vaccine researcher under a larger-than-life mentor named Max Gottlieb, who is based at a thinly disguised Rockefeller Institute. The second book, Microbe Hunters, published in 1926, was a best-selling nonfiction rendering of discoveries by great biologists like Louis Pasteur. It was written by Paul de Kruif, a Rockefeller Institute microbiologist-turned-writer who had cowritten Arrowsmith, although his name wasn’t on it.

Both books dripped with a romantic view of science. Microbe Hunters declares on its first page that it is the story of “bold and persistent and curious explorers and fighters of death” and reminds readers that scientists’ achievements “are on the front pages of the newspapers.” Arrowsmith is flush with the thrill of discovery, the agony of being bested by a competitor, and the eventual rewards of long, painstaking hours in the lab. (The central character, Martin Arrowsmith, is also confronted with the corrupting temptations of wealth, fame, and the flesh.)

Inspired, Plotkin attended New York University on a fully funded state scholarship, then sat a three-day exam trying to win one of thirty-five sought-after scholarships that would pay his way through any in-state medical school. Without the scholarship, financing an MD would be impossible for his family.

Plotkin recalls that he applied to half a dozen medical schools while he was awaiting the exam results. In an era when Jews were not welcome as medical students, he heard back from none of them. Then he heard from the state. Plotkin had placed fifteenth among the test takers and won full funding at any New York medical school.

“We definitely have to accept you, since you’ve won this scholarship,” he recalls being told by an administrator at the State University of New York’s Downstate College of Medicine in Brooklyn, the only medical school that accepted him.

By the time he landed in medical school in 1952, Plotkin the undergraduate had fallen in love with Shakespeare, studied philosophy, and dissected a cat, a shark, and a fetal pig. None of which quite prepared him for Kings County Hospital—the huge, busy hospital in the Flatbush section of Brooklyn that was the teaching hospital for the Downstate College of Medicine. It was a world apart where, as Plotkin recalls, “after dark the medical student was king.”

He rotated through the specialties and soon knew that he didn’t want to be a surgeon. He would avoid holding retractors in the operating room by swapping duties with another classmate so he could instead care for patients on the ward. Yet the prospect of specializing in internal medicine didn’t thrill him either. It seemed like a road to treating unhealthy adults just to keep them in a holding pattern.

Pediatrics was different. There he might influence the whole of a life. And there vaccine research was desperately needed. That realization came home to him as he cared for children who were brain damaged and deafened by meningitis—an inflammation of the membranes that enclose the brain—caused by the bacterium Haemophilus influenzae. By the end of his third year in medical school, Plotkin knew that he would become a pediatrician. He was also determined to be a research scientist.

For his rotating internship—a mandatory yearlong boot camp for newly minted doctors—he chose to go to Cleveland Metropolitan General Hospital, because there the director of pediatrics was Frederick Robbins, who two years earlier had won a Nobel Prize. Plotkin found that he was too busy taking care of patients to do any research, but he enjoyed the proximity to the man who, with Enders and Weller, had discovered that poliovirus could be grown in nonnervous tissue, opening a whole new world to virologists.

As he finished his internship in Cleveland, Plotkin’s next step was clear. Because he was between eighteen and twenty-six years old, the Selective Service Act of 1948 required twenty-one months of military service from him. The only way to avoid this was to go to work for what was, in the letter of the law, a branch of the U.S. military: the Commissioned Corps of the U.S. Public Health Service. So he signed up with the Epidemic Intelligence Service, the “disease detective” branch of the CDC, itself part of the Public Health Service.fn2

After introductory training in Atlanta, Plotkin surprised his CDC supervisor, Alexander Langmuir, by requesting assignment to an anthrax investigations unit in Philadelphia. But Plotkin, as usual, had done his homework. He had been reading groundbreaking papers by a polio vaccinologist named Koprowski, who was just taking over the Wistar Institute. And the CDC’s anthrax project was based at the Wistar. His move there, he says, was the single most important decision of his professional life because it landed him in the lab of his own “Max Gottlieb.”

Plotkin remembers vividly the moment, shortly after he arrived at the Wistar in August 1957, when he first presented himself in Koprowski’s office, hoping, in addition to his CDC work on anthrax, to talk his way into a polio research position in Koprowski’s lab. Displayed prominently on Koprowski’s desk was a cartoon depicting a particularly brutal-looking Neanderthal man. Its caption read: “We welcome your suggestions.” It set Plotkin to laughing, which in turn made him fear that the illustrious Wistar director would think he was an idiot.

Koprowski did not think so and made room for the twenty-five-year-old Plotkin in his nascent polio lab. In August 1957 this lab consisted, until the Wistar’s renovations could be completed, of a big semicircular second-floor room without air-conditioning. There Koprowski’s first hire, the young lab technician Barbara Cohen, sat measuring the amount of poliovirus in clear tubes of chimpanzee stool that Koprowski’s team had sent back from the Belgian Congo. (Feeding experimental polio vaccine to chimpanzees was Koprowski’s prelude to vaccinating hundreds of thousands of people in central Africa.)

The less-than-perfect lab space did not deter Plotkin, who was thrilled to be taken on by a man he judged to be not only highly intelligent but also highly cultured, a man with a breadth of vision and a zest for life that Plotkin simply wanted to be around.

And as the Wistar’s face-lift progressed under its new chief, so did Plotkin, eventually moving into a third-floor lab, where he worked on anthrax when not laboring in Koprowski’s second-floor polio operation. There he learned how to grow and count polioviruses, how to isolate different strains of the virus, and how to weaken them for use as vaccines. His name began appearing on Koprowski’s polio papers.

In the spring of 1959 Koprowski offered Plotkin the opportunity to go to the Belgian Congo himself, to work with Koprowski collaborators—expatriate Belgians—who were vaccinating tens of thousands of children in Léopoldville with Koprowski polio vaccine. Plotkin jumped at the chance. His CDC bosses, loath to appear to be taking sides in the vaccine race, made him take a leave of absence from his CDC duties during the two-month trip.

“The culture shock engendered by a visit to an undeveloped country was unforgettable,” Plotkin wrote later. “More importantly, it taught me that vaccine development did not end in the laboratory and that field studies were not only essential but difficult and even dangerous.”28

When he wrote “dangerous,” he meant it. At one point during the vaccination campaign, Plotkin and his Belgian colleagues were vaccinating infants in the city of Kikwit when they were surrounded by an angry crowd who believed that the researchers were desexing their children, because the blood samples used to determine the babies’ antibody status were being drawn from the femoral vein, located in the groin. In an effort to calm the onlookers, one of the expatriate Belgians drew blood from his own child’s femoral vein in full view of the crowd. This failed to quell the anger. The scientists were forced to call the local army base, which sent a unit of soldiers in trucks to escort the researchers out of the area.

Koprowski made the most of his protégé’s African adventure, inviting print, radio, and TV reporters to a buffet luncheon and press conference just after Plotkin returned to Philadelphia in June 1959. It would mark the “first announcement of effectiveness of Wistar Institute oral polio vaccine during a recent epidemic in the Belgian Congo,” the press release announced. And it would feature Plotkin, “who has just returned from making a survey of the mass inoculation.”29

Plotkin, who as a fifteen-year-old had dreamed of being a microbe hunter, had, at the tender age of twenty-seven, become a certified member of that club.

Plotkin began his one-year residency at the Great Ormond Street Hospital in central London in July of 1962. He was working at a mecca for what he and other doctors called “clinical material,” meaning people with diseases, in this case children referred from all over the southern part of England. Because GOSH was one of only two children’s hospitals in a city four times as big as Philadelphia, kids with run-of-the-mill earaches and sore throats were few and far between.

Instead children with serious childhood ailments, from congenital heart disease to cystic fibrosis, turned up at the hospital, giving an ambitious young pediatrician all the disease exposure that he could dream of. To boot, it was not he but the housemen—brand-new doctors, called “interns” in the United States—who were responsible for looking after the patients staying on the wards. That left Plotkin with duty at the outpatient clinic—and time for research.

Plotkin had chosen GOSH with a view to working with one member of Koprowski’s far-flung network of colleagues, Alastair Dudgeon, a brisk, impeccably dressed virologist with an upper-class accent and an omnipresent bowler hat, who had twice been decorated for bravery during World War II, when he commanded a company of the British army’s Seventh Battalion Rifle Brigade in North Africa. Dudgeon was interested in congenital infections—infections acquired in the womb and carried in newborn babies into the world. As Plotkin began working with Dudgeon in July, across the Atlantic Hayflick was in the process of launching WI-38. In September the report from the doctors in the London suburb of Beckenham was published, documenting a glut of patients with rubella. In October the papers from the American virologists were published, announcing that they had captured rubella in lab dishes. It was as if an unseen hand had now put in place all the elements necessary for the drama that would follow.

Plotkin read with great attention the Parkman and Weller papers reporting the lab isolation of the rubella virus. The implications were impossible to miss. If rubella could be captured in a lab bottle, it could perhaps be weakened in a lab bottle to produce a vaccine. Koprowski’s gut-wrenching loss in the race to license a live polio vaccine had been painful for Plotkin too. But the isolation of rubella opened a whole new opportunity to create a lifesaving preventive. And as the months passed in London, Plotkin saw firsthand what the absence of a vaccine meant in human pain and suffering.

Beginning in late 1962—nine months after cases of rubella began to surge in March of that year—babies with congenital rubella could be found on the wards of GOSH on practically any day. There was a two-month-old with cataracts and PDA, the heart anomaly common in congenital rubella, which can cause a baby’s heart to fail without a surgical repair. There was a five-month-old with microcephaly. There was an eleven-month-old who was deaf and blinded by cataracts, and whose heart was hobbled by a hole in the wall separating the ventricles, the two chambers that pump blood out to the lungs and the rest of the body.

There were also slightly older children, victims of the rubella that had circulated at lower levels prior to the epidemic, like the deaf and blind four-year-old who also suffered with a condition called tetralogy of Fallot. This four-part heart defect starves the blood of oxygen, sometimes turning sufferers blue.30

When he wasn’t seeing patients, Plotkin worked in Dudgeon’s lab, using Paul Parkman’s new technique to grow the virus from throat swabs of patients with active rubella. He often got positive results from swabs taken during the first week of the rash and especially during the first few days. The problem was that while a positive result could confirm a rubella infection, a negative one could not rule infection out.

The new technique for isolating the virus also allowed Plotkin to measure the change in rubella antibody levels in blood taken from patients with active disease and blood drawn from the same patients two or three weeks later, when, he found, their antibody levels had risen significantly. There was one kind of patient who desperately wanted laboratory confirmation of whether she had had the disease: the worried pregnant mother.

“The fact that it is now possible to diagnose rubella infection by virus isolation from throat swabs and by [blood tests] is of practical importance in relation to rubella in pregnancy,” Plotkin, his British boss Dudgeon, and another colleague, A. Melvin Ramsay, wrote with what can only be called understatement in the resulting paper, published in the British Medical Journal.31

Plotkin, Dudgeon, Ramsay, and another colleague, N. R. Butler, also studied what was happening in the immune systems of babies, toddlers, and children with congenital rubella syndrome—patients on the wards at GOSH or sent to them from elsewhere by other doctors. Some scientists had wondered if babies infected in the womb failed to “see” the virus as foreign and make antibodies against it—a phenomenon called immunological tolerance.

But twenty-two out of twenty-five children with congenital defects who were older than six months of age—and therefore didn’t have maternal antibodies lingering to confuse the test—had rubella antibodies in their blood, meaning that they had responded normally to the presence of the virus, “seeing” it as foreign and making antibodies against it.32

There was so much that still wasn’t known. Was the blood test not sensitive enough to pick up low levels of antibody that might nonetheless have been present in the other three children? Had the youngsters with antibodies generated these only after birth, when the damage was already done? If they had in fact produced their own antibodies while in the womb, when exactly during pregnancy had their immune responses kicked in? Clearly they couldn’t have been effective in those first vulnerable weeks of embryonic life.

Plotkin and his colleagues were mapping uncharted terrain. Today it’s known that rubella is a spherical virus made of a single strand of the genetic material RNA wrapped in a protein coat that is in turn surrounded by a fatty envelope. This envelope is studded with protein spikes of two types, labeled E1 and E2. When the body’s immune defenses react to rubella, it’s mainly these protein spikes, especially E1, that they are reacting to.

Rubella is, as viruses go, quite small. At fifty to eighty-five nanometers in diameter, it’s about half the size of HIV.33 And it’s some one thousand times smaller in diameter than the human cells it invades. Of course, size isn’t what matters. What matters is what a virus does in the body.

Rubella begins by colonizing the nose and throat, where it lives and multiplies in surface cells and local lymph nodes for several days before invading the blood. From there, and before a nonimmune pregnant woman has mounted her immune response, it travels to multiple tissues, including the placenta.

Hunkered down in the placenta, the virus evades the maternal antibodies that soon obliterate it from the mother’s blood. It can persist and replicate in the placenta for months.34

Probably because of the damage it inflicts on the placenta’s blood vessels, the virus is frequently able to infect the embryo, likely traveling in toxic clumps of cells that slough off from the inside of those vessels and enter the embryo’s circulation. And the embryo, during at least the first twelve weeks of pregnancy, doesn’t have the tools to mount its own immune response. Instead it must rely on the mother’s own antibodies for protection.35

But movement of maternal antibodies across the placenta isn’t very efficient early in a rubella-infected pregnancy. Even by the middle of pregnancy, fetal levels of the mother’s rubella antibody are only 5 percent to 10 percent of what they are in the mother’s blood. So the growing embryo, in the crucial, earliest stages of its life, is left mostly defenseless against the virus, which circulates widely in its blood and can take up residence in virtually any organ.36 Molecular tools and imaging methods available today have identified rubella virus in the livers, kidneys, lungs, hearts, spleens, lymph nodes, brains, and eyes of fetuses aborted due to maternal rubella.

Rubella is different from other agents that cause birth defects, in that it doesn’t usually affect the carving out and shaping of organs and other structures. In rubella-affected infants, you won’t find the shortened, deformed limbs that marked thalidomide babies. You won’t find cleft palates or club feet or the exposed spinal cord that marks the failure of an embryonic structure called the neural tube to close. Instead the virus homes in on newly formed structures: the long, thin fibers of the lens of the eye; the delicate inner ear, the seat of hearing; the lining of the heart; the small blood vessels that feed what should be a growing brain with oxygen and nutrients.37

Virologists have found that rubella doesn’t immediately kill the cells that it invades. Rather, it slows them down. They don’t replicate as quickly as uninfected cells; in fact, the virus prompts them to make a protein that inhibits mitosis.38 Eventually they die, sooner than they should, prompted by the virus. So it makes sense that organs of rubella-infected fetuses and infants have been found to have fewer cells than normal and that affected babies weigh on average 65 percent of normal. Why do any cells survive? Because the virus doesn’t by any means infect all the cells in an organ. As few as 1 in 100,000 cells may be invaded. These infected cells occur in patches that are scattered in affected organs.39

Very few embryos escape rubella once it strikes in early pregnancy. There’s a 90 percent risk of fetal damage with a rubella infection during the first two months of pregnancy and a 50 percent risk of such damage during the third month.40 And during particular windows the growing embryo is exquisitely vulnerable: one prospective study found that ten out of ten embryos became infected when pregnant mothers had a rubella rash between three and six weeks after their last menstrual period.fn3 41

Once a pregnancy is into its fourth month, the odds of rubella infection doing damage to the fetus diminish significantly. The reason: the increasingly active fetal immune response combines with antibodies from the mother to keep the virus in check.

After a baby with congenital rubella is born, the virus can remain living in certain tissues and continue to do damage. Long-term problems can include the persistence of virus in the cerebrospinal fluid that bathes the brain and spinal cord, leading to bouts of brain inflammation called encephalitis.42 And whether because of direct viral damage to the pancreas or because the virus triggers an autoimmune reaction, causing the body’s own antibodies to attack the pancreas’s insulin-producing cells, babies with congenital rubella grow up to get type 1 diabetes at many times the rate of the general population.43 Congenital rubella sufferers also endure eye problems that go well beyond cataracts and include glaucoma—elevated pressure in the eyeball that damages whatever limited vision a person may have—and chronic inflammation of the iris and its appendages.44 It’s not known how often the virus itself continues to live in the confined chamber of the eye, doing damage. It certainly does in some cases: in 2006 living rubella virus was captured from the eyes of a twenty-eight-year-old man who was born blind and deaf from congenital rubella after a British rubella epidemic in 1978.45

Papers documenting the long-term problems of rubella-affected children wouldn’t begin to appear until the late 1960s, when Australian virologists published a twenty-five-year follow-up on the damaged infants who were born in 1941.46 But Plotkin had plenty of other rubella-related problems occupying him as he finished his residency in London in June of 1963.

It was clear to him that rubella was going to plague generations of newborns if a vaccine was not developed. Rubella epidemics recurred cyclically, predictably, every three to five years in the United Kingdom and every six to nine years in the United States.47 The next one was only a few years down the road. Those were years, he was convinced, in which a vaccine could and should be made.

That summer Plotkin and his wife, the former Helen Ehrlich, whom he had married as he graduated from medical school, set off on a three-month tour of Europe—along with their newest family member, one-year-old Michael. They drove a blue Ford two-door and otherwise lived, as Plotkin would write a few months later, “like gypsies.”48 They traversed France, Switzerland, and Italy and ended up on the Croatian Riviera. Finally, in September, they boarded a Great Holland Line steamer for home.

Several months behind them, another traveler would cross the Atlantic, arriving in time for spring on the U.S. East Coast. It was the rubella virus.