5.
THE WONDER DRUGS

As I drove to work on a spring morning in 1980, the air in Atlanta held a chill. I had been away for more than two months working in the hot enclaves of Bangladesh and India and was relieved to be back at the Centers for Disease Control. The office held the usual welcomes, big piles of mail, and much to organize, but in the afternoon, I started to feel achy. Maybe it was jet lag; I had arrived the night before. But I felt lousy. My forehead was hot to the touch. After about an hour, I decided to go home. Maybe I had caught the flu on the airplane or on a prolonged stopover in England. I couldn’t remember the last time I felt too sick to work. Time to go to bed, and by the morning I would feel better.

But the next morning, I wasn’t better. My fever was up to 101°F. As an expert in infectious diseases at the CDC, I knew that malaria can begin like a case of the flu: fever, headache, achiness, sore muscles. Could I have picked up malaria? When travelers die from malaria, it’s because the diagnosis is missed and treatment is started too late. People think that they have the flu. With that in mind, I called one of my CDC colleagues in the Parasitic Diseases Branch, Dr. Isabel Guerrero. I wanted to get a blood smear taken to see if it was malaria.

“I’ll come right over,” she said.

In about thirty minutes, she was at my bedside at home, where she pricked my finger, put a spot of blood on a glass slide, and told me she would call with the results.

About an hour later, she did. “You don’t have malaria.”

Thus reassured, I was ready to wait out the flu. By then, I had developed a mild cough.

The next morning, Wednesday, I was still sick. I didn’t feel too bad, but I still had a fever. My wife convinced me to go see a specialist in infectious diseases, Dr. Carl Perlino. He examined me and, other than the fever that perversely vanished while I was in his office, I checked out fine. Even my screening blood tests were okay.

The next day, Thursday, I still had a fever and the mild cough. I was in bed all day, and that night I had a vivid nightmare. I don’t remember who was chasing me, but I woke in a cold sweat. The sheets were drenched. Even in my delirium, I knew instantly what the problem was: typhoid fever! Traveling in Bangladesh and India, where human waste often gets into food … symptoms that began about a week after I left … day after day of fever, now worsening—vague symptoms. That’s what it had to be.

By the next morning, I was very weak. My temperature had shot up to 104°F. I didn’t have the strength to button my own shirt or sit up in the car without leaning on the window. I knew that I had about a 10–20 percent chance of dying if I wasn’t treated with antibiotics. Achy, sweating, no strength, no intake of food in days but no appetite—I knew that I was acutely ill. As we drove on that exquisite spring day down a street filled with blooming magnolias, I thought that it would be a real shame to die at thirty-one.

When we got back to the doctor’s office, I was huddled and shivering. They had to put me in a wheelchair. My greatest fear was that Dr. Perlino would not understand how sick I was and would send me home. It was ironic; I knew that hospitals are dangerous places and should be avoided at all costs—people fall out of bed, they get the wrong medications, they acquire new infections—but I was desperate to be admitted, to start on treatment, not to go home.

Fortunately, he took one look at me and immediately admitted me to the hospital. Another irony is that my main job at CDC was as the Salmonella surveillance officer of the United States. Doctors from all over the country would call me to ask advice about patients and outbreaks of salmonella. So here, too, my doctor asked me what antibiotic I should be treated with. I knew that Salmonella typhi, the main cause of typhoid fever, could be treated with ampicillin, an advanced form of penicillin. Ampicillin was life-saving for millions of people. But there was a big problem: it had been used so much that by 1980 many strains of S. typhi had become resistant. It might be completely ineffective.

So instead I recommended a newer formulation of a sulfa drug, called co-trimoxazole. It combined two agents developed in the 1960s and was still widely effective against S. typhi (though resistance to it would later develop as well). Evidently, despite my high fever, I could still think straight. Even if I was wrong about typhoid, I was so acutely ill that the doctors had to treat me with something in case I had some other kind of bacteria spreading through my bloodstream.

Medical students came to take samples of my blood to the culture lab. If I had typhoid fever, Salmonella typhi would show up in the petri dishes. Then they hung a bag of fluid containing the co-trimoxazole and dripped it into my veins. I knew that the odds were turning in my favor. The chance of dying was getting smaller with each hour. That is the miracle of the antibacterial drugs that started being discovered in the 1930s.

I slept and slept. But the next morning, I wasn’t better. Still feeling achy and miserable, I asked the team, “What do the blood cultures show?”

“Nothing growing.”

Could my self-diagnosis be wrong? Was it not typhoid? But it was only twelve hours or so since the cultures had been taken, so maybe it was too early. In the odd position of both patient and specialist doctor, I recommended that we continue the course, and they agreed.

The next morning, the team came into my room. “The cultures are positive. You have Salmonella in your blood. The microbes are growing.”

So it was typhoid after all.

The next day there came a small surprise. It wasn’t Salmonella typhi, the usual cause of typhoid fever, but Salmonella paratyphi A, essentially the twin of S. typhi. But according to the textbooks, the cases are indistinguishable, and I could vouch for that.

With treatment, and a few complications, I turned the corner and started to get better every day. After a week, I was discharged from the hospital, and I spent another week at home before returning to work. A week sick at home, a week in the hospital, a week convalescing back at home—this was a serious illness. I shuddered to think what it would have been like without the co-trimoxazole.

A few years later, I was speaking with a colleague who had worked in Asia for many years. I told him that, as far as I knew, I had committed only one dietary indiscretion in the weeks before my illness. One hot night in Mumbai, as I was walking around, I saw a street vendor who was selling slices of watermelon. His stand didn’t look that great, so I asked him to give me a slice from an unopened melon. I thought that would protect me. That was about nine days before I started to become ill, which is almost a classic incubation period.

“Of course,” my colleague said. “Of course, it was the melon.

“You see,” he told me, “in India, they sell the watermelons by weight. So the farmers inject water into them, to make them weigh more. The water comes from the rivers and streams near their fields.”

My stomach churned at the thought. The watermelon was contaminated with human waste. You get typhoid fever from ingesting food or water contaminated with the fecal waste from a person who is a carrier of the disease. I thought of the most famous carrier, Mary Mallon, better known as Typhoid Mary, the young Irish immigrant who worked as a cook for well-to-do families near New York around 1900. After an outbreak of typhoid fever in the household where she worked, she would move on to another family. And then there would be another outbreak, and then another. It is not clear whether or not she understood that she was causing the outbreaks. There was plenty of typhoid around in those days; hospital wards were filled with suffering people, and at that time about a quarter of them died. A great medical detective named George Soper traced the outbreaks back to Mary and made her promise to stop working as a cook. She was a typhoid carrier: she felt entirely healthy, and she was entirely healthy. Carriers aren’t ill; they just shed the organism.

Mary denied that she had anything to do with the prior cases, and within a short time she absconded from her parole. Eventually there was a trail of new outbreaks. Soper found her again. Here was the dilemma: Mary was perfectly well, but she was a menace to the community, no less than if she had fired a loaded gun at random into a crowd. Typhoid was not a mild illness; people died after coming into contact with her cooking. Ultimately, a judge decided. Mary was imprisoned on North Brother Island in New York’s East River and lived the rest of her life in custody, swearing her innocence to the end. These days, we could probably cure her condition by removing her gallbladder and giving her antibiotics. And the people she infected could be treated with antibiotics, as I was.

Fast-forward twelve years from Atlanta to May 1992, when I was asked to speak at a conference highlighting advances in our understanding and treatment of infectious diseases. My topic was how our work had linked a newly discovered bacterium in the stomach, Helicobacter pylori, to stomach cancer, a common and difficult-to-treat malignancy. It was, we thought, a new pathogen, and people were curious to learn more.

The symposium was held at Yale specifically to mark the fiftieth anniversary of the first use of penicillin in the United States. The moderator began by recounting the case of Anne Miller, a thirty-three-year-old nurse who had suffered a miscarriage in 1942. She had been acutely ill for a month, with fevers up to 107°F, delirium, and signs of a streptococcal infection raging throughout her body. She had childbirth fever, or what doctors called puerperal sepsis. It was an infamous killer of young women after a miscarriage or birth of a child. Miller drifted in and out of consciousness, very near death.

By an incredible stroke of luck, her physician gained access to one of the first tiny batches of penicillin, which was not even commercially available yet. The drug was rushed via airplane and state troopers to Yale–New Haven Hospital, where it was administered to Miller on her sickbed.

Her recovery began within hours. The fever broke, the delirium ended, she could eat, and within a month she was well. It was the scientific equivalent of a miracle. What made the difference was 5.5 grams of penicillin, about a teaspoon’s worth, diluted into her intravenous solutions. Penicillin was in such short supply that her urine was saved so it could be shipped back to the Merck pharmaceutical company in New Jersey, where the excreted penicillin was purified for use in another patient.

As the moderator recounted the details and the drama of the story, the audience was transfixed. You could hear a pin drop. And then, after a short pause, he said, “Now, will the patient please stand up.”

I turned around to look. In the third row, a small-boned, elegant, elderly woman with short white hair stood up and, with bright eyes, looked out across the room. She was Anne Miller, then in her eighties, given fifty more years of life by the miracle of penicillin. I can still picture her shy smile. She lived another seven years before her death at age ninety.

When Anne Miller’s life was saved, medical science was just beginning to learn how to defeat bacterial infections. Pneumonia, meningitis, abscesses, and infections of the urinary tract, bone, sinuses, eyes, and ears—indeed, all parts of the body—were still being treated with marginal or questionable remedies from the past. When George Washington developed a throat infection, he was bled by a surgeon. Doctors had great faith in this therapy, but it probably hastened the president’s demise. Bleeding continued as a remedy into the twentieth century.

Some treatments helped slightly but none dramatically, and the side effects of many patent medicines were worse than the diseases being treated. Some contained high levels of arsenic. Even as surgical techniques improved, infection was a constant worry and could transform a successful operation into a disaster; with bad luck, removing an ingrown toenail could lead to a foot amputation. An infected heart valve was 100 percent fatal, worse than cancer.

During the American Civil War, more soldiers died from typhoid fever and dysentery than from bullets. No one was immune. Leland Stanford Jr., the son of a former governor of California and the boy for whom the university is named, died of typhoid fever in Italy. He was fifteen. During World War I, dysentery and typhus took a greater toll than combat. In 1918 and 1919, the Great Spanish Flu spread across the globe to infect 500 million people, about a quarter of the world’s population, killing between 20 million and 40 million of them, frequently from complications due to bacterial pneumonia.

Scientists worked feverishly in the late nineteenth and early twentieth centuries to combat infectious diseases. They had one light to guide them: germ theory, the concept that many diseases are caused by the presence and action of microorganisms, especially bacteria.

A handful of brilliant scientists, the giants in their field, led the way. In 1857, French chemist Louis Pasteur showed that fermentation and putrefaction are caused by invisible organisms floating in the air. He demonstrated that meat decay was caused by microbes and that disease could be explained by the multiplication of germs in the body. Following Hungarian physician Ignaz Semmelweis, who markedly reduced deaths due to childbirth fever by requiring hand washing, the British doctor Joseph Lister revolutionized surgical practice by introducing new principles of cleanliness. Inspired by Pasteur, he soaked dressings with carbolic acid (phenol, an antiseptic derived from coal tar), covered infected wounds with them, and thereby improved their healing. And Robert Koch, a German doctor, developed methods to assess whether a particular microorganism causes a specific disease; these criteria are known today as Koch’s postulates. He also developed stains for visualizing the bacteria that cause tuberculosis and cholera under the microscope.

But while germ theory led to improved sanitation and a better understanding of disease, it did not revolutionize treatment. Just because it was possible to see and grow bacteria did not mean that finding ways to kill them would be easy. Another pioneer, Paul Ehrlich, who worked in Koch’s bacteriology lab, was searching for “magic bullets”—dyes, poisons, and heavy metals—that would stain specific germs and then, in a double whammy, attach to the germs and kill them.

But no one thought to look in the natural world for living organisms that would knock back pathogens. Why would they? The astonishing diversity of the microbial world is only now becoming appreciated.

Such was the mind-set of the scientific community when Alexander Fleming, a bow-tie-clad Scotsman working in St. Mary’s Hospital in London, made a discovery that changed the world. Like many of his contemporaries, he was looking for ways to kill bacteria. In classically designed experiments, he placed a jellylike growth medium (agar and heated blood) into shallow, circular, transparent plates, called petri dishes, and then inoculated the medium with bacteria. Bacteria, which individually are too small to be seen with the naked eye, feed on the nutrients in the medium. As they fed, they divided again and again, until agglomerations of millions of bacteria formed a colony large enough to be seen by eye. After putting the plates into a warm incubator overnight, Fleming was able to grow huge, clearly visible, gold-colored colonies of Staphylococcus aureus and other bacteria that he would try to kill with enzymes derived from white blood cells and from saliva.

In August 1928, Fleming went on vacation to France. When he returned in early September, he found several petri dishes that he had neglected to throw out. They had been inoculated with Staph and then they sat on his lab bench for the month he was away. As Fleming was tossing out the now useless plates, one of them caught his eye. It was flecked with a patch of blue-green fuzz, which he recognized as the common bread mold Penicillium. He saw that the luxuriant lawn of golden Staph, the filmy layer of billions of bacterial cells growing wall-to-wall on the plate, had disappeared near the mold. There was a kind of halo around the mold delineated by something in the medium that had prevented the Staph from growing.

Fleming’s trained eye immediately recognized what had happened. The mold, a fungus that also grows on the medium, had produced a substance that diffused into the agar and killed the Staph. That substance, the first true antibiotic to be identified, dissolved bacterial cells, just as lysozyme, an enzyme he had discovered in saliva years earlier, had done. It wiped them out, scorched earth–style. Fleming thought his “mold juice” contained an enzyme (like lysozyme), although it was later learned that the substance was not an enzyme at all but a far smaller molecule, one that disrupts the ability of bacteria to build their cell walls, causing them to burst.

The miraculous mold was identified as Penicillium notatum. Actually the antibacterial effects of Penicillium molds had been known since the seventeenth century but not to Fleming or any of his contemporary physicians. Ancient Egyptians, Chinese, and Central American Indians all used molds to treat infected wounds. But it was Fleming’s training as a scientist that enabled him to move the fungus from a folk remedy into the scientific spotlight.

Over the next months, Fleming was able to grow the mold in liquid broth, pass the broth through a filter, and isolate a fluid that was rich in antibacterial activity. He called it penicillin. But there were many obstacles to obtaining enough of it. Not all strains of P. notatum made penicillin. Fleming was fortunate that the one that fell onto his petri dish did, but the yields were tiny, and the substance itself was unstable, short-lived, and slow acting. Unable to devise ways to make penicillin medically useful, Fleming gave up. After publishing his results, and trying some crude extracts on a few ill patients without any apparent effect, he concluded that his discovery had no practical importance.

But others had noticed. A few years later, Gerhard Domagk, a researcher in Germany at the giant I. G. Farben chemical company, which made aspirin and textile dyes, was looking for a dye that would inhibit the growth of bacteria. In 1932, he discovered a red dye, called Prontosil, that contained a wholly synthetic antibacterial agent, the first sulfonamide. A class of related sulfa drugs followed. These were the first agents with sustained, reproducible activity against bacteria that were not so toxic that their side effects outweighed the benefit. Over the next few years, doctors began to use sulfa drugs to treat infections. But their spectrum of activity was limited. The drugs were good but not good enough.

With the outbreak of World War II, the need for antibacterial agents was urgent. Thousands of soldiers were destined to die from battle wounds, complicating pneumonias, and abdominal, urinary, and skin infections. In 1940 a team at the Sir William Dunn School of Pathology at Oxford University, led by Howard Florey and Ernst Chain, dusted off Fleming’s penicillin and embarked on a journey to develop ways to make it in quantity. Because London was being bombed, they took their project to the Rockefeller Foundation in New York, where they were introduced to several pharmaceutical firms in the area. The companies did not welcome the scientists with open arms, for they knew that penicillin was at a very early experimental stage. Yields rarely exceeded 4 units per milliliter of culture broth—a drop in the proverbial bucket.

So the British scientists took their efforts to Peoria, Illinois, where the new Fermentation Division of the Northern Regional Research Laboratory was gearing up studies of mold metabolism (fermentation) as a source of useful new products. Its staff was experienced and had a substantial collection of molds, but few of its strains made penicillin, and none was prolific. Thus the call went out to everyone they knew: send us samples of soil, moldy grain, fruits, and vegetables. A woman was hired to scour the markets, bakeries, and cheese stores of Peoria for samples bearing blue-green mold. She did the job so well they called her Moldy Mary. But in the end, it was a housewife who brought in a moldy cantaloupe that changed the course of history. This particular mold produced 250 units of penicillin per milliliter of broth. One of its mutants churned out 50,000 units per milliliter. All of the Penicillium strains used to make penicillin today are descendants of that 1943 mold.

The scientists ultimately developed methods for making penicillin from this far more productive mold in quantity. Later the pharmaceutical firm Charles Pfizer & Company used molasses as a medium for growing the Penicillium molds in bulk. By the time of the invasion of Normandy in June 1944, 100 billion units were being produced each month.

Penicillin ushered in a golden age in medicine. Here was a drug that could, at last, treat infections caused by deadly bacteria. Because it was so astoundingly effective, it was considered to be truly “miraculous.” What could this wonder drug not do? Press reports heralded “a new era of medicine, the conquest of germs by interfering with their eating and digesting, [that] is sweeping through the military hospitals of America and England.”

In 1943 streptomycin, the first effective agent against Mycobacterium tuberculosis, the cause of tuberculosis, was isolated from soil bacteria. Then came others: tetracycline, erythromycin, chloramphenicol, and isoniazid, which together brought about the antibiotic era. At the same time, semisynthetic drugs were developed by chemically modifying natural substances, and purely synthetic, nonnatural compounds were manufactured as well. Today, for convenience, we call all of these drugs antibiotics, although strictly speaking antibiotics are substances made by one living form to fight against another.

Those original antibiotics and their descendant drugs transformed the practice of medicine and the health of the world. Formerly lethal diseases like meningitis, heart valve infection, and childbirth fever could be cured. Chronic bone infections, abscesses, and scarlet fever could be prevented and cured. Tuberculosis could be arrested and cured. Sexually transmitted diseases like syphilis and gonorrhea could be cured. Even my case of paratyphoid could be cured without months of illness and a big risk of dying. Cure also was a great form of prevention, since a cured person would not spread the pathogen to others.

Surgery got safer. Antibiotics could be given preoperatively to lower the risk of many surgical infections. If infection developed, antibiotics came to the rescue. Surgeons could attempt more sophisticated operations to correct a myriad of woes, such as removing brain tumors, correcting deformed limbs, and repairing cleft palates. It is fair to say that without antibiotics there would be no open-heart surgery, organ transplantation, or in vitro fertilization.

Similarly, the chemotherapies used to fight cancer often suppress the body’s ability to fight off microbes, opening the door to bacterial infection. Without antibiotics, leukemias and many other cancers would not be treatable. It would be too dangerous to give the massive amounts of chemotherapy required without the safeguard of antibiotics.

In the 1950s the Chinese government decided to wipe out syphilis. Tens of millions of people were treated with a long-acting form of penicillin. This massive public-health campaign worked. The age-old scourge was virtually eliminated from China. Yaws, a related ancestral disease, was successfully eradicated from vast swaths of Africa after a series of similar campaigns.

How did and how do these drugs perform their miracles? Antibiotics work in three general ways. One, as exemplified by penicillin and its descendants, is by attacking the machinery used by bacteria to create their cell walls. With defective walls, bacterial cells die. Interestingly, they often commit suicide: lack of a cell wall triggers bacterial hara-kiri. We are not certain of the biological reason for their suicide, but nature selected for fungi like the Penicillium mold that make these antibiotics and are able to exploit that weakness.

The second mechanism is to inhibit the way bacteria make the proteins that perform all of the important functions of the bacterial cell. The proteins within a cell are vital for life. Cells need proteins to digest food, build their walls, reproduce, defend against invaders and competitors, and move around. Such antibiotics directly target the machinery that makes proteins, crippling the bacteria while having minimal effect on protein production by human cells.

A third mechanism is to interfere specifically with the ability of bacteria to divide and reproduce, thereby slowing their doubling. With slower growth, they become less of a threat, and the host can more easily mount an immune response to deal with them.

If you think about it, antibiotics are natural substances made by living organisms, fungi and other bacteria, that want to throw a monkey wrench into the workings of their competitors. Their neighboring bacterial cells are little machines with many moving parts, and over the eons the antibiotic makers have found many different ways to attack them. The bacteria, in turn, have found many ways to defend themselves, and those defenses are the very basis of bacterial resistance to antibiotics. Since time immemorial, it has been an arms race. But for us humans, the development of antibiotics has been like getting the atomic bomb: it fundamentally changed the playing field. Interestingly, both came on the scene at the same time; the scientific developments of the 1920s and 1930s led to their deployment in the 1940s. As with the atomic bomb, our hope was that antibiotics would be a panacea. The threat of the bomb would be so great that we would war no more; similarly, the power of antibiotics would once and for all defeat the bacteria. Although there is some truth to both, neither has fulfilled that promise, nor could it ever. Both are just tools, and the fundamental causes of war, between men and between men and bacteria, remain.

As use of antibiotics became more and more widespread, a few side effects appeared, but most were mild—a few days of loose bowel movements, an allergic rash. In nearly all cases, these problems went away as soon as the drug was stopped. A handful of people had serious, sometimes fatal, allergies to penicillin. But the risk of dying from penicillin allergy was, and is, lower than the risk of being struck by lightning. It is a remarkably safe drug.

Other antibiotics did produce adverse effects. Some damaged the auditory nerve; others could not be used in children because they mottled their teeth. A very commonly used antibiotic in the 1950s, chloramphenicol, was found to cause a rare suppression of the bone marrow’s ability to form blood cells, which was fatal in about one of every forty thousand courses of the drug. For very serious infections, such a low risk of dying from the drug was infinitesimal compared to the risk of dying from the infection. But in some places, hundreds of thousands of healthy young children with mild sore throats were treated with chloramphenicol. For them, the risk clearly exceeded the benefit, and there were lots of alternative antibiotics. Doctors stopped using it almost entirely. Still, for years I have told my students that if I were marooned on a desert island and could have only one antibiotic with me, I would choose chloramphenicol; it is that powerful.

The idea that other potent antibiotics also could have side effects beyond those immediately apparent wasn’t part of the conventional wisdom; it was not even a consideration. If there were no allergies manifested in the days and weeks after receiving the drugs, they were considered safe.

Almost all the great advances in medicine from the second half of the twentieth century continuing through to today were catalyzed by the deployment of antibiotics. No harm could come from their use, or so it seemed. The fallout appeared only later.