The discovery of a drug combination capable of controlling the human immunodeficiency virus (HIV) was one of the great triumphs of biomedical research in the postwar era. Over the last quarter of the twentieth century, no disease wreaked greater havoc across the globe than AIDS (Acquired Immune Deficiency Syndrome), the disease HIV causes. At century’s end, more than forty million people were infected, with AIDS threatening to devastate large swaths of the developing world, especially in sub-Saharan Africa. It is a mark of progress that public health officials and activists—who fought to bring into existence the drugs that control AIDS—have turned their attention to making drugs affordable in those parts of the world that have been hardest hit by the epidemic.
Developing AIDS drugs was not an easy or inexpensive task. Every step of the process was dogged by controversy, and success often seemed an unreachable goal. But in the end, the successful campaign represented the triumph of a simple idea, one that in recent years has been overshadowed by the public’s infatuation with private sector ingenuity. Significant medical advances are almost always the product of collaborations between the public and private sectors, and in areas of the greatest public health concern, the government invariably plays the leading role.
Yet well into the 1990s, the public sector’s effort to develop treatments for AIDS seemed doomed to failure. To much of the public, not to mention an aroused and desperate patient population, the government’s effort seemed hopelessly misguided. Many Americans wrongly believed the disease was somehow caused by the hedonistic lifestyle of its victims and would remain resistant to the best efforts of medical science. Like tuberculosis in the nineteenth century or cancer for much of the twentieth century, AIDS was perceived by many as divine retribution. The metaphor insinuated itself into the scientific debate. To this day, the National Institutes of Health (NIH) feels compelled to refute that myth on its Web site by carefully documenting that AIDS is caused by a viral infection and is spread by the usual suspects for blood-borne pathogens: unprotected sex, tainted blood supplies, infected needles, and, tragically, transmission from mother to child during childbirth or breastfeeding.
The AIDS metaphor could flourish for a simple reason: The virus, discovered in 1983 and thoroughly described and categorized by 1987, proved remarkably resistant to the best efforts of modern medicine to control or eliminate it once inside its human host. But thanks to the willingness of government and industry to pour billions of dollars into researching cures and vaccines, there are now drugs available that are somewhat effective in controlling the disease. They are cumbersome to take, have debilitating side effects, and are extraordinarily expensive, at least in the advanced industrial world, where they were initially researched, patented, and produced. Research today is focused on developing vaccines to prevent the spread of the disease and on producing new medications that are less toxic, easier to take, and more effective at thwarting the wily virus’s ability to mutate, survive, and spread.
The story behind the discovery of the first generation of drugs for controlling HIV involved thousands of scientists on three continents working in hundreds of public and private institutions. Those drugs’ relatively rapid emergence came about in no small part because of the public health movement that arose among people with HIV and their advocates demanding that something be done about the epidemic. The following chapters are not an attempt to tell that entire story, but only one part of it: the emergence of a class of drugs called protease inhibitors, which, when used in combination with previously discovered drugs, showed that effective therapy was possible. The first protease inhibitor was approved by the Food and Drug Administration (FDA) in December 1995, with two more in March of the following year. These powerful new drug combinations proved capable of controlling the virus in most infected patients. Though drug resistance was and continues to be a major problem, the U.S. death toll was cut by two-thirds within two years of their introduction.
This tremendous victory could not have been achieved without the private sector, but over the course of more than fifteen years of research and development, governments in the United States, Europe, and Japan spent three times more money than private firms on the basic science, drug development, and clinical trials that led to the drugs that tamed the disease. It comes as no surprise then that the public sector’s fingerprints are all over the final products of that research. What is surprising is how private pharmaceutical firms sought to wipe them away.
It is bracing to recall how grim the prospects for such a breakthrough appeared before the emergence of protease inhibitors. Indeed, if the AIDS epidemic had a darkest hour, the summer of 1993 was surely it. The Centers for Disease Control had just announced that more than a half million Americans were infected with the virus. The death toll in the United States alone that year reached a staggering 37,267, a majority of them young homosexual men. The obituary pages of the nation’s leading newspapers read like a dirge for the worlds of high fashion, literature, and the arts, where many gay men had made their careers.
And to the consternation of many Americans, AIDS was no longer a disease limited to homosexuals. Los Angeles Lakers star Magic Johnson’s forced retirement from basketball in 1991 had brought home to the heterosexual population that it, too, was at risk, just as the 1990 death of Ryan White, the teenage Indiana hemophiliac who had contracted the virus from tainted blood products, made AIDS as all-American as apple pie. A disease that had announced its presence in 1981, when a Los Angeles physician noticed an outbreak of a rare pneumonia among five of his homosexual patients, had become by 1993 a society-wide pandemic with no effective treatment or cure.
More than fourteen thousand scientists, activists, and media people gathered in Berlin that summer for the ninth International AIDS Conference. Hoping for good news from the frontlines of medical research, they came away bitterly disappointed. The low point among the eight hundred lectures and forty-five hundred poster presentations occurred when European physicians issued the final statistics from the so-called Concorde trial, which had compared HIV-infected patients who started Burroughs Wellcome’s drug zidovudine (AZT) early with those who deferred treatment until the disease progressed. AZT had been approved by the FDA in 1987, and for many years was the only drug for HIV. But according to Concorde’s Anglo-French authors, people who started AZT early, before their immune systems began to deteriorate, fared no better in the long run than people who waited.
Researchers from NIH immediately went into damage control mode. Daniel Hoth, the director of the AIDS Clinical Trials Group (ACTG), a nationwide network of government-funded academic researchers, issued a press release claiming the Concorde study results were merely preliminary and did not contradict earlier ACTG studies. AZT, a drug that government scientists had screened, shown to be active against HIV, and cajoled Burroughs Wellcome into taking through the FDA approval process, at least delayed the onset of serious disease, Hoth insisted. “At this time, we see no basis for changing the current recommendation to initiate antiretroviral therapy for HIV-infected persons.”1 Moreover, there was good news from some patients in new trials combining AZT with other drugs that government-funded scientists had discovered and licensed to private drug firms. The two-drug combination therapy had slightly increased the number of disease-fighting white blood cells in some of the patients in those trials.
The hundreds of AIDS activists in attendance—many of whom privately referred to AZT as rat poison because of its side effects—weren’t impressed. AIDS activists had radically transformed the traditionally paternalistic relationship between doctors and patients. Angry, aware, and articulate, people with AIDS had forced two successive conservative administrations to take the plague seriously. Their noisy protests and sit-ins at NIH, at the FDA, and in the corporate suites of major drug manufacturers were largely responsible for the fact that by 1993 taxpayers were spending more than a billion dollars a year on AIDS research, dwarfing the efforts of private industry. The activists had even forced their way into scientific meetings and onto government panels, intruding on a medical world that preferred to operate in secrecy.
In Berlin, when the results of the latest studies became known, many of the AIDS activists had the sophistication to read between the lines of the government-funded studies. “Our deepest impression from the conference is that the most important and productive approach possible to saving the lives of those already infected was simply not on the table there—not among the scientists, not among the physicians, and not among the activists,” wrote John James in AIDS Treatment News, the widely read newsletter in the AIDS activist community. “The greatest need, everyone did seem to agree, is for better drugs. . . . Existing drugs are largely useless.”2
For those paying close attention at the Berlin conference, though, the news wasn’t entirely grim. At the last minute, conference organizers allowed a number of drug companies to make presentations on early-stage protease inhibitors, which they hoped would become the next generation of AIDS medications. The first AIDS drugs—three had already been approved by the FDA—were called nucleosides because they mimicked the natural building blocks of DNA; they worked by interfering with an enzyme called reverse transcriptase that HIV needs to copy itself during reproduction. HIV comes from a class of viruses known as retroviruses, whose genetic material is made up of ribonucleic acid, or RNA, and which reproduce in backward fashion. After invading a host white blood cell, the virus uses reverse transcriptase to produce a mirror-image strand of deoxyribonucleic acid, or DNA. That strand in turn inserts itself into the host cell’s DNA and spawns new RNA-based copies of the retrovirus. The evil genius of HIV is that it infects the very white blood cells needed to fight off invaders. It is the arsonist that targets firehouses, to paraphrase one memorable metaphor.3 The clinical trial results shown in Berlin proved what many AIDS patients already knew: The effects of nucleoside reverse transcriptase inhibitors were short-lived at best. The virus was mutating around them.
But scientists had long known that the HIV genome had other potential targets. Researchers by the mid-1980s knew that the retrovirus produced an enzyme called a protease, which cut the long protein chains made by a reproducing retrovirus into the smaller pieces needed to assemble new viral offspring. If scientists could find a chemical to block the action of the protease scissors, the virus would be unable to produce infectious copies of itself. Since the late 1980s private drug firms and government-funded researchers had been searching for a protease inhibitor, which they hoped would be the “magic bullet” to kill HIV. By the summer of 1993, their efforts were beginning to bear fruit. A number of firms pushed to get their initial attempts onto the Berlin agenda.
Keith Bragman, Hoffmann–La Roche’s top European clinical virologist, coordinated one of the presentations at the sparsely attended meetings on protease inhibitors. Bragman, a thin, retiring Englishman in his early forties, whose soft voice masked a fierce determination to leave a mark on AIDS research, had recruited doctors from three of Europe’s top academic research institutes to test Roche’s new protease inhibitor, known then only as Ro31-8959. It would eventually be called saquinavir. To anyone who understood the dynamics of pharmaceutical research, the Roche compound presented huge problems. The immense molecule was hard to absorb through the gut, or, to use the industrial term of art, it lacked bioavailability. Test subjects had to take a fistful of capsules before it began showing up in their bloodstreams. Although it was potent in the test tube, at the concentrations they had succeeded in getting into the blood it showed only a modest ability to suppress HIV and raise white blood cell counts. But at least it showed activity, the clinicians reported to the meeting. It was concrete evidence that a protease inhibitor actually worked in humans. All Bragman needed now was a better version of the drug.
Illinois-based Abbott Laboratories also pressed to get on the Berlin agenda, which was something of a surprise. The buttoned-down mid-western firm usually maintained a high level of secrecy about its operations, not unusual in the hush-hush world of industrial drug development. But John Leonard, who had been brought in from a small drug testing firm to run Abbott’s AIDS program a year earlier, also felt pressure to show that his protease inhibitor could work. Abbott, though a large health care firm, was a bit player in pharmaceutical research, especially when compared to industry giants like Pfizer, Merck, or Hoffmann–La Roche. It had even sought federal government help to fund its initial protease inhibitor research. But that grant had recently run out, and Leonard was under strict orders from chief executive officer Duane Burnham to avoid all further contact with government-funded clinicians. He went to Europe instead, where scientists doing initial safety tests on drugs didn’t have to divulge their experiments to the government. Leonard asked Sven Danner at the University of Amsterdam to test the company’s initial protease inhibitor.
Danner’s findings, unveiled at Berlin, attracted almost no attention and for good reason. Abbott’s protease inhibitor candidate was so unwieldy it had to be given intravenously, and even then the liver cleared it from the body almost as fast as the doctors could pump it in. It also caused blood clots. “I felt so sorry for the guy having to present that embarrassing story,” Leonard recalled.4 But like Roche’s drug, it seemed to inhibit viral replication in a handful of patients. It was the “proof of concept” that the Abbott scientists needed to convince top management to continue supporting their bare-bones AIDS drug discovery effort.
Several other companies made presentations about their preclinical protease research. Vertex Pharmaceuticals, a biotech start-up from Cambridge, Massachusetts, announced it had just come up with a protease inhibitor candidate. Though the young firm had attracted some of Merck’s top AIDS research scientists in the late 1980s, the company had initially ignored AIDS research in order to hunt for drugs that would prevent transplant rejection. But with chief executive Joshua Boger desperate to show progress to his Wall Street backers, the firm had suddenly shifted back into AIDS research and fairly quickly came up with a drug candidate its scientists thought would inhibit viral replication.5 And since the drug candidate was small, it should, theoretically at least, have good bioavailability. Merck’s joint venture with DuPont Pharmaceuticals also unveiled a potential drug. But neither Vertex’s nor Merck-DuPont’s drugs had entered human trials.
Pessimistic press accounts from the conference completely overshadowed the scanty news about protease inhibitors. Robert Yarchoan, one of the National Cancer Institute doctors who had played a key role in bringing the first AIDS drugs to market, was one of the few scientists who came away from Berlin in an upbeat mood. “People are too down about things,” he recalled thinking at the time. “For the first time, a new class of drugs was shown to have activity.”6
Three years later, those initial rays of hope would blossom into a significant medical breakthrough. In December 1995, Roche’s saquinavir would become the first protease inhibitor approved by FDA; an Abbott drug derived from the one shown at Berlin would be second a few months later, with Merck’s entry a close third. Vertex, the nimble biotech whose entrepreneurial zest was supposed to run circles around the traditional drug firms, wouldn’t get its drug to market until April 1999, three years behind the old-line pharmaceutical firms.
The studies submitted to the FDA showed that protease inhibitors were not the magic bullets that cured AIDS. But when used in combination with at least two other antiretroviral drugs, they could suppress HIV to nearly undetectable levels in most patients, prolonging life. The annual U.S. death toll, which had soared to more than forty thousand in the mid-1990s, fell below fourteen thousand within two years.
In the years since protease inhibitors came on the market, huge problems have arisen with drug combinations to control HIV. Resistance arises in anywhere from a third to half of patients, usually among those who do not closely adhere to the complicated pill-popping regimens or who had previous exposure to the individual drugs used in combination therapy. The protease inhibitors also turned out to have a host of unwanted side effects. They induced nausea, diarrhea, and fatigue. Prolonged usage also caused lipodystrophy, an unsightly condition in which fat cells from the face, arms, and legs redistribute themselves to the abdomen and the back of the neck, leaving many on combination therapy looking like concentration camp survivors.
But throughout 1996, as word of the miraculous breakthrough spread through the subcultures that had been hardest hit by the AIDS epidemic, the idea that their world had been given a biomedical reprieve acted like an intoxicant. Newsweek proclaimed, “The End of AIDS.” Time hailed David Ho, head of the Aaron Diamond AIDS Research Center in New York and a lead investigator for Abbott’s protease inhibitor, as “Man of the Year.” The New York Times Magazine carried an eight-thousand-word article entitled “When Plagues End” by former New Republic editor Andrew Sullivan, himself HIV-positive. “The power of the newest drugs, called protease inhibitors, and the even greater power of those now in the pipeline, is such that a diagnosis of HIV infection is not just different in degree today than, say, five years ago. It is different in kind. It no longer signifies death. It merely signifies illness.” Larry Kramer, the radical playwright and founder of ACT UP (AIDS Coalition to Unleash Power), the most militant of the AIDS activist groups, signaled the next phase of the anti-AIDS struggle when he penned a long article complaining about the high price of drugs and their lack of affordability in the developing world, where most AIDS sufferers lived.7
In these and subsequent accounts, the emergence of protease inhibitors and cocktail therapy was portrayed as a triumph of private enterprise. Whether the writers lamented or endorsed the high price of the drugs, their accounts provided tacit endorsement of the drug industry’s insistent claim that the high prices of AIDS medications were necessary to fund the extraordinarily expensive research and development behind them. “As the new protease inhibitors remind us, however, large corporations are in many cases the only organizations with the resources capable of providing us with the innovations we need,” Louis Galambos, a drug industry historian at Johns Hopkins University, wrote in an op-ed article for the Washington Post. “In the case of AIDS, companies like Merck, Hoffmann–La Roche and Abbott started their programs when much of the basic research on the virus and the disease still remained to be done. Only companies with significant scientific resources could afford to mount sustained research-and-development campaigns under these conditions.”8 It was a tidy story. But it was misleading about the science and flat-out wrong about the economics.
John Erickson wasn’t a typical employee at Abbott’s research labs in a far northern suburb of Chicago. In the mid-1980s, Erickson wore long hair, a shaggy beard, and sandals. Raised by a physician and a social worker in Buffalo, he was more liberal and more academically inclined than most of his industry peers. Hungry for “medical relevance,” he had started his academic career studying human viruses but switched to the biochemistry of plant pests while finishing his doctoral work at the University of Western Ontario. At the urging of his adviser, he accepted a postdoctoral fellowship with Purdue University’s Michael Rossmann, who in the early 1980s was pioneering the use of X-ray crystallography to view biochemical interactions at the molecular level. There, the young scientist found his professional calling. The Rossmann-Erickson team would eventually come in second in the race to publish an accurate portrait of the virus that causes the common cold—then considered the moon shot of the tiny X-ray crystallographic world.
With his fellowship nearing its end, he accepted a teaching post at the University of Wisconsin at Milwaukee. But in late 1985, shortly after starting his teaching assignment, Erickson heard from a former Purdue colleague that Abbott wanted to put the new technique to work in its drug discovery division. X-ray crystallography, which was developed in academic labs with federal research dollars, was quickly becoming one of the key technologies behind the newly emerging field of structure-based drug design, sometimes called (to the consternation of traditional medicinal chemists) rational drug design.
For nearly a century, medicinal chemists working in public health labs or pharmaceutical firms pursued new drugs using methods that were not much different from those pioneered by German scientist Paul Ehrlich, the father of modern drug therapy. Ehrlich had cut his teeth in the 1890s in the laboratories of Robert Koch, whose school, building on the germ theory of Louis Pasteur, had identified the microbes responsible for infectious diseases such as cholera, diphtheria, and tuberculosis. Ehrlich eventually shared the 1908 Nobel Prize for discovering how the immune system develops antibodies—“magic bullets”—to combat invading organisms. By that time he was running his own lab in Frankfurt and had begun experimenting with the dyes used to stain cell specimens, which he thought might serve as artificial magic bullets. Ehrlich surmised that since different colored dyes bound to particular cells, there must be unique receptors on cells. If he could find chemicals that bound to the receptors that played a role in a disease and at the same time blocked their action, physicians could use those chemicals as therapeutic agents. He spent the rest of his career searching for chemicals that would attack disease at the molecular level. Ehrlich would eventually become a household name and one of the early superstars of drug discovery when he and a Japanese assistant developed the first medicine for combating syphilis, a derivative of arsenic called arsphenamine.
Ehrlich’s pioneering research revolutionized the tiny, turn-of-the-century drug industry, forcing it to move away from the quackery of patent medicine into the modern world of scientific drug discovery. The move proved far from a magic bullet for the firms. Medicinal chemists who followed in Ehrlich’s footsteps would invariably screen hundreds of chemicals in their search for agents active against a disease, and their search more often than not proved fruitless. Even when they found a compound that showed some activity in a test tube, they often had to synthesize version after version (called analogues) to come up with one worthy of testing in animals, and if that proved nontoxic, in humans.
The commercialization of advanced X-ray crystallography techniques in the early 1980s promised to overthrow the screening regime by hastening the search for new drugs. Crystallographers sent high-powered X-rays through crystals of purified protein, whose atoms diffracted the beams before they struck the film. The crystallographers then used computers to convert those diffraction patterns into a three-dimensional image of the molecules. When viewed through special 3-D glasses, the long, convoluted chains that made up the proteins looked like Lego contraptions. The hope was that by peering intently at the revealed structure, biochemists could design drugs that fit precisely into a protein’s chemical folds and block its action.
Always on the lookout for ways to make their research and development departments more efficient, most major companies jumped on the X-ray crystallography bandwagon. In late 1985, Erickson joined a small department at Abbott formed to experiment with the technology. But what would their first target be? Besides X-ray crystallographic expertise, Erickson brought his background in viruses to the firm. AIDS was the hottest topic in virology and a social problem of growing proportions. Abbott was already involved, having licensed an AIDS diagnostic kit from NIH. But Abbott had tried its hand at researching drugs for other viral diseases without success. So despite the millions it was making off the AIDS diagnostic kit, Abbott wasn’t spending a dime to combat the disease. Erickson was appalled. “There was no virology,” he recalled in 2001. “It was all bacteria and fungi. I thought to myself, As a drug company, shouldn’t we have antivirals?”9
He began researching the literature. Scientific papers about the AIDS virus were already pouring out of dozens of government and academic labs and a handful of industry labs. The virus had been codiscovered in 1983 by Robert Gallo of the National Cancer Institute and Luc Montagnier of the Pasteur Institute in Paris, although their joint claim to the discovery remains a subject of heated controversy to this day. In 1987, President Ronald Reagan and Prime Minister Jacques Chirac of France signed an agreement that gave equal credit to the two scientists; they split the royalties from the AIDS diagnostic kits based on that discovery between the two governments. But that hardly put the matter to rest. A Chicago Tribune special section written by Pulitzer Prize–winning investigative reporter John Crewdson in November 1989 concluded that Gallo either stole the virus or had allowed his laboratory samples to become contaminated with the French isolate. His fifty-thousand-word article—later expanded into a book—launched a round of congressional investigations that proved inconclusive.
But from the point of view of scientific inquiry into the causes and potential cures for AIDS, the controversy was irrelevant. The virus’s discovery led to the sequencing of its genes, which in turn enabled scientists to begin tearing apart its inner workings. They quickly discovered it was made up of at least nine genes, six of which contained information necessary for the virus to reproduce itself. Their work was hastened by the knowledge that HIV was a retrovirus, a class of viruses that had been thoroughly studied during the 1970s by Gallo’s Tumor Cell Biology lab at NCI in the course of its largely fruitless hunt for viruses that caused cancer. (Some rare forms of leukemia were the exception.)
Retroviruses require several enzymes to reproduce themselves, all of which would eventually become drug targets. There was, of course, reverse transcriptase, the enzyme HIV uses to copy its genetic material. The fact that all retroviruses needed reverse transcriptase for reproduction had been discovered by Howard Temin of the University of Wisconsin and David Baltimore of the Massachusetts Institute of Technology in 1970. These NIH-funded scientists would win the Nobel Prize for their work five years later. The virus also produced an integrase enzyme, which it used to insert its genetic material into the DNA of the host white blood cell. There were genes that made the proteins for its viral offspring and genes that controlled the rate of reproduction. There were surface proteins that allowed the virus to latch onto and enter its target cell. And it produced a protease enzyme, which the virus needed in order to cut the long protein chains it made into the smaller pieces required to assemble new offspring. That enzyme has often been called a chemical scissors.
From the start, the HIV protease was a tempting target, especially for industry scientists. Drug industry researchers had spent much of the 1970s and 1980s investigating potential inhibitors of a scissorslike protease enzyme called renin, which helps regulate blood pressure. They had even synthesized many renin inhibitors. But those were more costly to make and harder to get into the blood stream than dozens of other blood pressure drugs already on the market, so it made no sense to take them into clinical trials. However, that failed program left behind a corps of industry scientists with protease knowledge and experience that could be tapped when the search for an HIV protease inhibitor came along.
A few of those industry scientists even contributed to the basic scientific understanding of HIV’s protease. But most of the basic research and key breakthroughs came from academic labs. In September 1987 Laurence Pearl and William Taylor of Cambridge University published an article in Nature, the leading British science journal, which laid bare the inner workings of the HIV protease scissors.10 They showed that its chemistry was very similar to renin’s. But the two British scientists took the research one step further. They used computer simulations to predict that the HIV protease had two matching halves, like a clam shell. The implication to anyone reading the paper was clear. If a chemical could be found that would stick inside the clamshell and jam up the mechanism, the virus would be unable to reproduce itself.
At Abbott, Erickson had already gravitated to the HIV protease as his target. He independently figured out its chemistry by consulting with Steve Oroszlan, a Hungarian-born senior scientist in Gallo’s NCI lab. Oroszlan had discovered the chemistry and structure of the protease in a leukemia retrovirus. By the time the Pearl-Taylor paper came out and confirmed his suspicions about the HIV protease, Erickson was already constructing crystallographic models on his computer. The next step was to get a chemist who could synthesize chemicals—potential drug candidates—that might stick to the innards of the protease and gum up the works. Company officials asked biochemist Dale Kempf, a Nebraska farm boy who had done his postdoctoral work at Columbia University after earning a doctorate at the University of Illinois, to help out on Erickson’s project. At thirty-one, Kempf had already logged three years in Abbott’s renin program. Perhaps he could try some of his renin inhibitors against the HIV protease. “I can design better ones,” Kempf recalled telling his superiors.
But he couldn’t do it by himself. Kempf needed meticulous bench scientists who wouldn’t bungle the multiple steps needed to synthesize the complicated compounds. Erickson needed help, too, to tweak his computer simulations. In academia, when a professor needs help in his experiments, he recruits postdoctoral researchers. But in industry, one hires help. And at tight-fisted Abbott, help was not immediately forthcoming.
Through his contacts at NCI, Erickson learned about a new government program to develop drugs to combat AIDS. The National Institute of Allergy and Infectious Diseases (NIAID), which under Anthony Fauci had become the lead agency in the fight against AIDS, had just launched the National Cooperative Drug Development Grant (NCDDG) program. Over five years in the late 1980s and early 1990s, the NCDDG would spend about $100 million at both nonprofit and private-sector labs to develop drugs to fight HIV. The model for the program was NCI, which had spent years forging ties with both nonprofit and industry scientists to come up with anticancer chemotherapy agents. The government’s managers had a clear picture in their own minds about how to foster innovation. “You fund competing groups and don’t worry about overlap,” recalled John McGowan, who was directing outside grant-making for NIAID at the time. “You try to get competition among them to get things moving faster to the market.”11
Overcoming some initial skepticism inside the firm (“They don’t give grants to industry,” one executive scoffed), Erickson applied for a million-dollar-a-year grant to fund Abbott’s protease inhibitor program over the next five years. It came through. Abbott wasn’t the only company ready to jump onto the government payroll. Hoffmann–La Roche received a grant to pursue inhibitors of one of HIV’s regulatory proteins. William Haseltine, a star researcher at Boston’s Dana-Farber Cancer Institute who would go on to form Human Genome Sciences, hooked up with SmithKline and Beecham Research Laboratories to pursue a range of anti-HIV drug targets. Former NCI scientists at the University of Miami got a grant to begin testing Upjohn’s repository of chemicals against AIDS. A number of independent academic investigators like Garland Marshall at Washington University in St. Louis received grants to develop protease inhibitor drug candidates, some of which were eventually licensed to private firms like G. D. Searle. By 1990, nearly a dozen firms had drug development programs aimed at HIV, with about half getting some form of direct government support.
When I met Kempf at Abbott’s sprawling research campus north of Chicago more than a decade later, he had risen to become head of Abbott’s antiviral research efforts and was considered one of the nation’s best medicinal chemists in the anti-AIDS fight. I asked him to recall the significance of that original grant. He pushed back his wire-rim glasses, which made him look every inch the product of his Swiss-German heritage. “It was through that NIH funding that head count opened up and I was able to hire a postdoc and associate chemist. The three of us started working full-time on HIV chemistry,” he said. “Before that, I was making HIV inhibitors on the side.”12
Parroting ideas drawn from Vannevar Bush’s 1945 report, Science, the Endless Frontier, business leaders and government officials tell a tidy story about government’s role in technological innovation. It is government’s job to fund basic research, the pure science conducted by inquisitive investigators at the nation’s universities that advances the nation’s storehouse of knowledge. Applied research—taking that science and fashioning it into products and processes for the marketplace—is industry’s job.
During the war, Bush ran the Office of Scientific Research and Development (OSRD). The executive-branch agency’s wartime mission had succeeded in tearing down the walls that separated pure science, conducted mainly in universities, and applied science, conducted mainly within private industry. It oversaw the development of a cornucopia of what science historian Daniel J. Kevles has called “military miracles”: microwave radar, proximity fuses, solid-fuel rockets, and, in the most spectacular government-funded science project of all time, the Manhattan Project, which built the world’s first atomic bomb.13
Less well known were the achievements of the OSRD’s Committee on Medical Research, which spent a mere $25 million during the war. Its federally financed breakthroughs included the mass production of penicillin and the development of blood plasma, steroids, and cortisone. The penicillin breakthrough has often been claimed by the private firms that supplied the “miracle drug” to the troops abroad, but their efforts would have been impossible without the fermentation techniques developed at a federal lab in Peoria, Illinois.14 Similarly, the federal government spent a half million dollars to turn blood plasma, which had been developed by the Rockefeller Foundation in 1938, into an industrial commodity so that it could be purchased from government contractors.15
With the end of the war in sight, Franklin Delano Roosevelt asked Bush, a Massachusetts minister’s son whose prewar career was spent on the electrical engineering faculty at MIT, to draw up a blueprint for government support of science in the postwar world. Roosevelt wanted to continue the “unique experiment of team work and cooperation” that had been developed between academia and industry during the war. His seminal report was delivered to President Harry S. Truman on July 19, 1945.
Bush turned his back on the wartime experience and came down squarely on the side of those who saw a limited role for government in controlling the direction of scientific research. The report drew immediate opposition, spearheaded by Senator Harley Kilgore, a crusty West Virginia Democrat who happily admitted his ignorance of the technical side of science and technology. During the war, Kilgore had complained that the government was being too generous in its reimbursements to universities and industry for war-related research and development. He also worried about the government giving industry the patents to federally funded inventions, which he feared would be used to monopolize the exciting new markets for technological products that were sure to open up after the war. The New Dealer was also concerned with the future direction of government-funded science. He was the first to propose a National Science Foundation, but his 1944 vision put the government—not scientists—in charge of the agency. He wanted it involved in both basic and applied research and in the training of scientific personnel. He wanted it to promote social goals, including small business promotion, pollution control, and rural electrification. And he wanted to give financial support to the soft social sciences such as sociology, economics, and political science.16
Bush, who before the war had helped turn MIT into the preeminent basic science research institution in the country, rejected such thinking out of hand. Government should use its money to support basic research alone, he wrote. Wartime breakthroughs had drawn down the capital stock of basic scientific understanding, which had to be rebuilt by funding pure science in the nation’s universities. Bush scoffed at social science as thinly disguised political propaganda. He supported the idea of a National Science Foundation, but he wanted the agency to give out peer-reviewed grants that would promote intellectual innovation in hard sciences like chemistry, physics, mathematics, geology, and biology. Applied research, the report said, should be left to private industry, which was perfectly capable of sifting through the intellectual breakthroughs generated at publicly supported universities and federal labs to pick out the nuggets that would lead to technological and commercial innovation. Years later, science policy historian Donald E. Stokes, who spent his career managing major research centers at the University of Michigan and Princeton University, decried this separation of research from its uses. But he understood the economic and professional motivations of the men who designed the system. “The task Bush and his advisers set for themselves was to find a way to continue federal support of basic science while drastically curtailing the government’s control of the performance of research,” he wrote. “It [the scientific community] wanted, in other words, to restore the autonomy of science.”17
Legislatively, Bush won. But while the creation of NIH in 1948 (before that there had been just the Public Health Service’s National Institute of Health and the National Cancer Institute) and the National Science Foundation in 1950 were premised on Bush’s vision, the rapidly escalating cold war got in the way of its faithful execution. Military spending dominated the government’s research-and-development budgets during the 1950s. The vast majority of resources went to applied research and prototype development of military hardware. Basic research received substantial funding from the Pentagon, too, but often along lines that had military application. Throughout the 1950s, the Defense Department and the Atomic Energy Commission were by far the largest bankrollers of pure and applied scientific research in the United States, and by 1956 more than half of industrial research was military related. Many of the technologies had dual uses and thus spun off huge civilian industries—computers, nuclear power, jet airplanes, for instance—but the initial thrust of the research, whether in pure or applied science, was to achieve some prespecified military mission.18
NIH rode the rising tide of federal science budgets. Immediately after the war, the tiny agency—it had just eleven hundred employees on its Bethesda campus in suburban Washington—successfully avoided takeover by the soon-to-be-created National Science Foundation. It then won control of the military’s medical research grant program. Congress, recognizing there was broad public support for medical research invariably billed as a war on disease, added institute after institute to the NIH roster. Budgets rose twenty-five-fold in the first decade after the war and tenfold again by 1967 when they topped the $1 billion mark for the first time. Most of the money, especially under James Shannon, who took over the agency in 1955, was distributed through the so-called extramural grant program, which came to represent four-fifths of the NIH budget. These grants to university and nonprofit researchers were based on peer review of proposals, just as Bush had envisioned it. Yet the institute heads at NIH, who each year trekked up to Capitol Hill to justify their rising budgets, told Congress that their research was mission oriented, which much of it was. “What emerged was a comprehensive strategy, unique in America’s experience, of research investments that . . . clearly centered on use-inspired basic science, an institutional strategy that has led at times to a kind of schizophrenia among both NIH staff and principal investigators,” Stokes wrote. In policy circles, they stressed how they were going to cure disease, while in academic circles, “where the ideal of pure inquiry still burns brightly,” they billed their research as pure science.19
Virtually every medical discipline benefited. Star academic researchers in the laboratories of the nation’s leading medical schools were able to build small empires on NIH grants. No research enterprise benefited more than the emerging field of molecular biology, which used biology, chemistry, and physics to understand life through its biochemical interactions. Decades of scientific discovery in those fields eventually gave birth to the biotechnology industry.
But long before the practical application of scientific research electrified the public (and the nation’s stock market), politicians and patient lobbyists had completely lost interest in the pure-science ideal. In 1966, a war-beleaguered President Johnson, hoping to extend his domestic legacy into a new arena, called all the NIH division heads into his office. “I think the time has come to zero in on the target—by trying to get our knowledge fully applied,” he said. “We must make sure that no lifesaving discovery is locked up in the laboratory.”20
The president had been influenced by New York philanthropist Mary Lasker, whose husband, an advertising executive, died of cancer in 1952. Using his substantial estate, she lavished support on the American Cancer Society but soon realized only the federal government had the ability and resources to coordinate a full-fledged assault on the disease. She launched the Citizen’s Committee for the Conquest of Cancer, which, though short-lived, can safely be called the most influential patient lobbying group in the nation’s history. On December 9, 1969, Lasker funded a full-page ad in the New York Times. “We are so close to a cure for cancer. We lack only the will and the kind of money . . . that went into putting a man on the moon.” Within a year, a new president, Richard Nixon, declared war on cancer, and both houses of Congress resolved to find a cure by the nation’s bicentennial. Congress passed the National Cancer Act in December 1971. NCI’s budget, $190 million in 1970, would quadruple over the next five years.
NCI used the new infusion of funds to build on its long history of applied research. The agency poured billions of dollars into a wide-ranging search for anticancer drugs. Its scientists developed assays for testing drugs and screened thousands of natural and synthetic chemicals for anticancer activity. Through its grant system, the agency set up a clinical trials network at fifteen academic research centers to test therapeutic agents in cancer patients and, when they showed promise, developed the capability of taking them all the way through the final trials on hundreds of patients that were needed to win FDA approval.
It is popular, and in some ways accurate, to brand the government-funded war on cancer a failure (for a full discussion of the government’s war on cancer, see chapter 7).21 But the system set up during the in-house NCI hunt for cancer drugs also had its unforeseen successes, most notably against AIDS.
Just as the failed hunt for cancer viruses provided the expertise for the rapid discovery and characterization of the AIDS retrovirus, the NCI system for drug discovery and applied clinical research became the model for NIH as it geared up to combat the AIDS epidemic. The Reagan administration at first did not want to respond to the outbreak. Reagan press spokesman Larry Speakes was still making jokes about gay cruising as late as mid-1983.22 Government spending on the “gay plague” was just $66 million a year in 1985. But as activists made inroads in Congress and promising research began to emerge from science laboratories, the government began taking the disease seriously. By the end of Reagan’s second term, funding for AIDS research had jumped to $500 million a year.
The person who deserves the most credit for that change of heart is Samuel Broder, who was a senior clinician at NCI when the AIDS epidemic began and who would later head the institute. Conventional wisdom—derived from years of research into retroviruses—suggested they couldn’t be stopped with conventional drug therapy. Broder set out to prove the conventional wisdom wrong.
Broder, the diligent son of Jewish Holocaust survivors, had grown up in postwar Detroit, where his parents ran a diner. He won a scholarship to attend the University of Michigan, where he developed a passion for medicine. He received his medical training at Stanford before joining NCI in 1972.
Not long after Gallo identified the retrovirus that caused AIDS, Broder joined a special task force to fight the disease, with Gallo as scientific director and himself as head clinician. Throughout 1984, he called on drug companies across the country, asking them to send potential anti-AIDS compounds to the NCI labs in Bethesda. He wanted to screen them for antiviral activity. An assay for testing drugs had already been developed in-house by Hiroaki “Mitch” Mitsuya, a Japanese postdoc who had trained at Kumamoto University on the southern Japanese island of Kyushu, where his mentor had been Takaji Miyake, Eugene Goldwasser’s collaborator in purifying erythropoietin (see chapter 1).
One of Broder’s first stops was at Burroughs Wellcome, which had its main U.S. research facilities in Research Triangle Park, North Carolina. Broder knew Burroughs Wellcome had been home to Gertrude Elion, who had retired a year earlier after a dazzling career. Elion, who would eventually win the Nobel Prize, never earned her doctorate and was a high school chemistry teacher before joining Burroughs Wellcome in 1944. Her pioneering investigations into the biochemistry of viruses had culminated in the development of acyclovir for genital herpes, one of the few successful antiviral drugs on the market. Yet in the 1980s the top managers at Burroughs Wellcome weren’t interested in drugs to fight AIDS, the most significant new viral disease to come along in years. Where was the market, they wanted to know. Broder persisted, as did AIDS researcher Dani Bolognesi at nearby Duke University. In meetings at Burroughs Wellcome’s offices, the two men argued the disease would eventually spread far beyond the few tens of thousands of cases that had appeared thus far. In early 1985, Burroughs Wellcome became one of fifty companies sending compounds to NCI for testing.
In February 1985, Mitsuya passed Burroughs Wellcome’s AZT through his assay. The chemical had been synthesized by Jerome Horwitz of the Detroit Institute for Cancer Research on an NCI grant in 1964, but it didn’t have anticancer properties and as a result was never patented. It would later become one of the many chemicals Elion licensed at Burroughs Wellcome in her search for an antiherpes drug. Mitsuya saw that it was active in the test tube against HIV. In June, the company filed for a patent on how to use the drug against AIDS and applied to the FDA to begin investigating the drug for safety on patients recruited by NCI and Duke. Traditionally, drug companies conduct all the blood tests on patients in their clinical trials, even when they are done by academic investigators. But AIDS was a different story. Fearing for the safety of its lab personnel, Burroughs Wellcome backed out of the project at the last moment, leaving it to NCI researchers to conduct the blood tests themselves.
After safety tests established the maximum dosing levels at which AZT could be tolerated by patients, Burroughs Wellcome jumped back into the game. It arranged for a dozen academic medical centers to test AZT for efficacy. When those results began looking good, the company expanded the trials and submitted an application for new drug approval to the FDA. Though FDA reviewers had serious questions about the toxicity of the drug, it was approved for general use on March 19, 1987, a scant twenty-two months after the company had asked the FDA for permission to begin testing the drug in patients. “For drug development, that is the speed of light,” Broder said.23
AIDS activists raised angry questions about AZT almost immediately. They complained about its toxicities, about its effectiveness, and, most of all, about its price, which came to the then-unheard-of total of ten thousand dollars a year to treat a single patient. Challenged in court a few years later by two generic manufacturers, Burroughs Wellcome successfully defended its use patent in a case that went all the way to the Supreme Court (in January 1996 the high court refused to hear a final appeal). NIH supported the suit against Burroughs Wellcome. In the late 1980s the company had begun claiming that it had developed the drug on its own. Responding to a letter from Burroughs Wellcome chairman T. E. Haigler Jr. to the New York Times making such a claim, Broder and four other NCI-funded scientists excoriated the firm for its audacity.
The company specifically did not develop or provide the first application of the technology for determining whether a drug like AZT can suppress live AIDS virus in human cells, nor did it develop the technology to determine at what concentration such an effect might be achieved in humans. Moreover, it was not first to administer AZT to a human being with AIDS, nor did it perform the first clinical pharmacology studies in patients. It also did not perform the immunological and virological studies necessary to infer that the drug might work, and was therefore worth pursuing in further studies. All of these were accomplished by the staff of the National Cancer Institute working with staff at Duke University. Indeed, one of the key obstacles to the development of AZT was that Burroughs Wellcome did not work with live AIDS virus nor wish to receive samples from AIDS patients.24
For government scientists, the lesson drawn from AZT was clear: They had the capacity to bring drugs to market. “The key issue in that era, really my obsession, was to find something practical that would be shown at a clinical level to work,” Broder later said. “I felt the fate of all future antiretroviral drug development programs would be linked to the success or failure of AZT. If AZT succeeded, then many other programs would be possible. If AZT failed, it would set the field back many years.”25
After AZT gained FDA approval in early 1987, NIAID, which had only recently been designated the lead agency in the AIDS fight, began laying plans for a broader pharmaceutical assault on HIV. Borrowing pages wholesale from the NCI playbook, the agency set up a nationwide network of academic physicians to test new drugs. It became known as the AIDS Clinical Trials Group, or ACTG. “The ACTG established the ground rules, funded the people who continued the field, and provided all the basic mechanisms for conducting AIDS clinical trials,” said Lawrence Corey of the University of Washington’s School of Medicine, who headed its executive committee for its first four years. “Once the basic concepts were established, the companies went it alone.”26
The agency also launched the National Cooperative Drug Development Grant (NCDDG) program, again drawing on NCI’s experience in developing anticancer drugs. The program had a lot on its plate. Its researchers were under pressure to come up with drugs to fight the numerous infections and cancers that ravaged AIDS patients once their immune systems collapsed. NIAID also launched a high-profile hunt for a vaccine, which remains a priority for government-funded research. But a key part of the program encouraged academic investigators and private drug firms to come up with drug candidates that might block the virus’s reproduction after it had entered its human host. How? By blocking the actions of its proteins, which in the previous two years had been identified in academic labs around the world.
The HIV protease was one of those proteins. Dan Hoth, who had moved from being chief of NCI’s investigational drug branch to the new division of AIDS within NIAID, recalled walking into director Margaret Johnson’s office shortly after she was appointed head of the NCDDG program. She held up an X-ray crystallographic picture of the HIV protease. “This is where I want to focus,” Hoth recalled her saying.
That’s where Merck wanted to focus, too. Whether drug development is taking place in the public or private sectors, a new therapy—at least the ones that are truly novel and represent a significant medical advance—almost always depends on the dogged determination of a committed researcher, a true believer who is willing to stake his or her career on bringing a particular drug to market because he or she fervently believes in its promise. For a brief period of time, Merck had such a person for its protease inhibitor program.
Because of its willingness to nurture such careers, Merck has had its share of breakthroughs over the years. Indeed, if corporations have personalities, Merck would have to be considered the aristocrat of the U.S. drug industry. The company traces its roots to Friedrich Jacob Merck, a seventeenth-century German apothecary, and has long professed the noblesse oblige characteristic of old money. Until its merger with Sharp and Dohme in 1952, the firm didn’t even sell drugs directly. It simply researched, developed, and manufactured them before turning them over to other firms for marketing. In the 1940s, the firm worked closely with Selman Waksman of Rutgers University to develop the antibiotic streptomycin and then donated a million dollars’ worth of the drug to researchers for clinical tests. When Waksman and university officials asked Merck to cancel its exclusive rights to the patent so it could be licensed to other firms to promote competition, chairman George W. Merck returned the patents “without demur.” In short order, several companies began producing large quantities of streptomycin, the first great breakthrough in the treatment of tuberculosis.27 The company’s commitment to scientific research encouraged Vannevar Bush to join Merck’s board of directors after he left government service. A Fortune poll in 1986 showed Merck had dethroned IBM as America’s most admired company, and a glowing profile in Business Week magazine on the eve of the 1987 stock market crash, when Merck’s 30-percent profit margins made its stock market value the seventh largest in America despite having only $5 billion in annual sales, called it “The Miracle Company.”
That same year, Merck chairman P. Roy Vagelos made a major corporate commitment to combat AIDS. Vagelos had been recruited in 1975 from Washington University in St. Louis to run Merck’s famed Research Laboratories, then headquartered in grimy Rahway, New Jersey, where Vagelos grew up. He rebuilt Merck’s drug discovery capabilities, which had suffered a dry spell in the early 1970s, and helped launch blockbuster drugs to lower cholesterol and blood pressure. Vagelos rode their success to become the company’s chief executive. In 1982, he recruited Edward M. Scolnick from NCI to run its labs. Scolnick, a Harvard-trained physician, had spent the prior decade in the government’s huge but unsuccessful effort to uncover the viral causes of cancer and from 1982 to 1985 edited the Journal of Virology, a rare honor for an industry-based scientist. It gave him a front-row seat for watching the AIDS discovery drama unfold in government and academic labs.
Even before the company had committed itself to fighting AIDS, Scolnick had begun building up Merck’s capabilities in virology and molecular genetics, his own specialties. Scolnick recruited Irving Sigal, a brilliant young scientist whose father had once run Eli Lilly’s research department, and Emilio Emini, a recent Cornell graduate with a doctorate in microbiology. By 1986, with NCI chief Broder pushing AZT through highly publicized clinical trials and the academic literature exploding with information about HIV’s viral mechanics, the firm was ready to jump into the AIDS fight. Emini was asked to spearhead the firm’s push for a vaccine, an area where the company had substantial expertise. Sigal would lead the drug discovery team. Scolnick later recalled Sigal coming into his lab to discuss a scientific paper showing how HIV had a protease very similar to renin and asking for resources to develop potential inhibitors. “Great, go do it,” Scolnick had said.28
Though Merck was initiating its antiretroviral research at about the same time NIAID was launching its drug development program, the company did not pursue government aid. Merck was a firm believer in the Vannevar Bush model. Its scientists took pride in their independent research skills, which were nurtured by the company’s surging sales and profits. In the early days of the hunt for a protease inhibitor, Nancy Kohl, a Merck scientist recruited by Sigal from MIT’s Center for Cancer Research, conducted a series of experiments proving that inhibiting the protease would block replication of the virus in a test tube. Her findings appeared in the Proceedings of the National Academy of Sciences in July 1988.
While that purely scientific work was interesting, the real work of the drug company was taking place in its chemistry labs. Sigal initiated a dual-track strategy for finding a molecule that would inhibit the HIV protease. He asked thirteen of Merck’s medicinal chemists to begin developing analogues of its renin inhibitors to see if they could come up with one that halted the protease’s action in a test tube. In the summer of 1988 they found one. But like Abbott’s drug, it was so large that it had to be administered intravenously. Needing a better molecule, Sigal, a hard-charger prone to yelling at colleagues who couldn’t meet his demanding deadlines, doubled the number of chemists on the job.29 At the same time, he pushed Merck’s X-ray crystallography department, headed by Manuel Navia, to come up with the structure of the protease so the chemists could design a better molecule. In February 1989 Navia became the first scientist to publish the HIV protease structure in the scientific literature, earning Merck’s protease inhibitor program a front-page profile in the Wall Street Journal and Navia an appearance on NBC’s Today Show.30 While the scientists carefully couched their presentations—a cure could be years away, they cautioned—the media hoopla left the impression that private industry was well on the trail to a cure for AIDS.
The media attention also provided a temporary respite from the troubles plaguing Merck’s protease inhibitor program. A month and a half earlier, Sigal had flown to London to attend a scientific meeting. He booked a flight back for December 22, but at the last minute decided to take the prior evening’s flight out of Heathrow so he could spend more time with his family. Pan Am 103 never made it. A terrorist’s bomb destroyed the plane thirty-one thousand feet over Lockerbie, Scotland, killing all 259 passengers and crew and 11 people on the ground. “His death was devastating to the organization, but it didn’t affect our protease inhibitor work at all,” Emini recalled. “We just went on. It is always possible to say what things might have been if he had been here. Would we have done it differently? Would we have done it better? It’s impossible to answer.”31
Navia, the crystallographer, thought the post-Sigal program could have been run a lot better. A few months after his brief brush with media fame, he asked Merck’s chemistry department to send down some more of their analogues so he could model them on his computers for potential antiprotease activity. The head chemist refused to cooperate. An outraged Navia protested to Scolnick, who eventually backed his media-savvy crystallographer. But the headstrong Navia quit anyway and a few months later moved on to Vertex, a biotech start-up run by another Merck refugee. He ignored entreaties to stay from both Scolnick and Vagelos.32 Navia’s potential for contributing to the hunt for a protease inhibitor disappeared with him. At Vertex, it would be two years before he returned to HIV work.
Crystallographers at other firms and in government labs, meanwhile, began puzzling over the structure that Navia had published in Nature. He hadn’t released any of the underlying data points, so no one else could use the structure to design drugs. A little more than a year later, Alex Wlodawer, an X-ray crystallographer at NCI, published a complete portrait of the protease molecule. It was more accurate than the Navia model and included data on how the protease bound to the other HIV proteins as it carried out its mission. Unlike Navia and Merck, the government was more than happy to share its information with everyone. “I spent lots of time flying in 1989 and 1990 to every pharmaceutical company in the universe talking about our structure,” Wlodawer told me in 2001. “They were trying to think how their drug development programs could benefit. I briefed their scientists. I went to Abbott. I went to Merck. They were trying to energize their own scientists.”33
However, Merck scientists turned their backs on X-ray crystallography-based rational drug design and returned to traditional screening. In the spring of 1989 its chemists finally came up with a protease inhibitor candidate. But when safety experts tested the drug in dogs, it cut off bile flow. Some company officials speculated all protease inhibitors would be toxic.34 With Sigal and Navia gone, there was no longer anyone around to champion alternative approaches. Scolnick, sensing depression setting in among his medicinal chemists, sharply cut back on the number of chemists working on protease inhibitors. He authorized Emini to begin chasing down reverse transcriptase inhibitors that might be an alternative to AZT. The focus of Merck’s AIDS research shifted.
About the same time that Merck was deemphasizing its work on protease inhibitors, Hoffmann–La Roche scientists were ready to move their candidate into clinical trials. The pharmaceutical giant, based in Basel, Switzerland, was clearly in the lead. The company traced its roots to the marriage of Fritz Hoffmann and Adele La Roche, who followed a Swiss custom of combining their names when they married. Fritz Hoffmann was born in 1868 to wealthy Basel merchants and worked for a Belgian drug company before launching his own manufacturing firm at age twenty-eight. Over the next decade, the company spread across Europe and in 1905 established U.S. operations. On the eve of the Great Depression, the company opened a huge research and manufacturing complex in Nutley, New Jersey, which to this day remains the company’s largest research and manufacturing facility worldwide.
Though proud of its research capabilities on both sides of the Atlantic, Hoffmann–La Roche has never been shy about forging collaborations with the public sector to pursue medical breakthroughs. Near the end of World War II, Elmer H. Bobst, whose legendary skill at marketing drugs to physicians made him president of Roche’s U.S. operations, was getting ready to step down. Basel-based Roche officials sought out Lawrence D. Barney, the head of the Wisconsin Alumni Research Foundation, to replace Bobst. The foundation had been established in 1925 to help the land-grant university commercialize the food and vitamin patents pouring out of its labs and became the prototype for academic-industry collaborations enabled by the 1980 Bayh-Dole Act. Barney passed muster with his corporate interviewers when he rattled off the names of twelve pharmaceutical company chief executives whom he knew personally through his work at the foundation.35 Over the ensuing decades, the firm’s European operations maintained close ties with universities, especially in Switzerland, and with the government-sponsored Medical Research Council in the United Kingdom.
The company’s willingness to engage government researchers on their turf drew it into the AIDS fight in its earliest days. In 1983, NIAID officials contacted the firm to see if its interferon, an overhyped biotech product, might prove useful as an immune system booster.36 The substance eventually proved marginally useful to AIDS patients by slowing the progression of opportunistic infections and combating hepatitis C. Another one of the company’s biotech products—interleukin 2—was tested against AIDS and shown to be useless. The company also sold an early diagnostic test for AIDS after licensing the assays from the government.
These early experiences made the company a willing partner when Broder and his colleagues at NCI finished pushing Burroughs Wellcome’s AZT through the FDA approval process in 1987. A coterie of scientists at Roche not only understood the disease but was anxious to compete for the next antiretroviral coming out of NCI’s labs. Broder and his team had not stopped at AZT. They kept looking for more effective drugs and, when they found compounds that worked in test tubes and passed animal toxicity tests, sent them out to private firms for clinical trials. Roche conducted the initial clinical testing of government-owned didanosine (ddI), which was eventually licensed to Bristol-Myers Squibb. It did the same for zalcitabine (ddC), the next reverse transcriptase inhibitor to emerge from NCI labs.
Indeed, by late 1987 there was no shortage of companies willing to jump into the AIDS market. The price tag Burroughs Wellcome set on AZT made AIDS drugs financially attractive. When ddC came along, there were fifty companies competing for the right to develop it. NCI staged a beauty contest among the four finalists, and Roche won. “NCI saw itself as the incubator to get these drugs to a certain level but didn’t have the infrastructure to move these drugs through the late clinical trials stage or commercial manufacturing,” recalled Whaijen Soo, vice president for clinical sciences at Hoffmann–La Roche.37
The Taiwanese-born scientist had earned a doctorate in retrovirology and biochemistry at Berkeley and received his medical training at the University of California at San Francisco. He had treated some of the nation’s first AIDS patients (in those days the disease was known as GRID, or Gay-Related Immune Deficiency) while in medical school and during his postdoctoral years at Harvard. By the mid-1980s, several of his mentors had become frontline physicians in the government clinical trials network. After moving to Roche, Soo gladly ran the clinical trials for ddC through the ACTG. The drug gained FDA approval in June 1992, the third nucleoside reverse transcriptase inhibitor approved to combat AIDS.
But long before that program got under way, Roche scientists were looking for ways to cooperate with the government. In early 1988, Ming-Chu Hsu, who also worked at the Nutley complex, applied to NIAID for a drug development grant to pursue inhibitors of an HIV gene (known as tatIII) whose proteins regulated replication of the virus once it was inside white blood cells. “The whole idea of attacking the regulatory proteins was still up in the air,” Soo recalled, “and [Hsu] wanted to be academically known. Scientists in drug companies also want to be considered solid scientists and a peer that’s well respected by academic scientists.” Hsu’s team eventually developed a tat inhibitor, but it foundered in early testing. While it slowed viral reproduction in a test tube, it didn’t increase the white blood cell count in patients. The project was terminated when the company’s protease inhibitor program, based at its UK facilities, began showing positive results.38
The Roche complex in Welwyn, England, was roped into the anti-AIDS fight by company scientists in the United States, who were worried about the severe side effects of ddC and ddI. They asked the Welwyn chemists to develop analogues that might be less toxic to patients. The Welwyn group immediately immersed themselves in the literature, which pushed them in a very different direction. They learned that HIV had a protease that was very similar to renin, the blood pressure protease. Like many drug companies, Roche had a renin inhibitor program, which was in Welwyn. In November 1986, the nascent AIDS team launched a protease inhibitor program and asked Noel Roberts, who had spent a dozen years in the fruitless hunt for a renin inhibitor, to run it.
It took the small Welwyn team of basic scientists nearly three years to come up with a viable drug candidate. The Pearl-Taylor paper in 1987 confirmed their suspicions that the HIV protease had a clamshell-like structure. They spent much of the next year developing assays to test drug candidates for antiviral activity. As more information about the protease appeared in the academic literature, house crystallographer Anthony Krohn began constructing a theoretical model of the protein, which, like Erickson’s theoretical model at Abbott, could be used by company chemists to design inhibitors. When Merck’s Navia published his paper in Nature in February 1989, Roberts pulled together the team to discuss the implications.39 Were they on the wrong track? Krohn kept them waiting for fifteen minutes before bursting into the room and slamming the paper on the conference table. “The bloody bugger’s got it wrong. It doesn’t agree with my model,” he exclaimed. Roberts told his team members to follow their own instincts. “It turned out [Krohn] was absolutely right. Merck had not interpreted the crystallography structure right . . . as Wlodawer later proved.”40 Three months later, Roche chemists synthesized Ro31-8959, which would become known as saquinavir.
The initial results with the drug were encouraging. It was extraordinarily potent in the test tube. But the euphoria quickly faded as company scientists faced two major problems. The first was that the drug was difficult to make, requiring twenty-three separate steps. Roberts checked with company chemists, who assured him they could make it in sufficient quantities and at a low enough cost to ensure its commercial viability. “They turned out to be completely right,” Roberts later said. “We’ve had no problem making this in bulk.”41
The second problem was more difficult to resolve. Keith Bragman, a cancer doctor in Bristol-Myers’s European operations, joined Roche that fall to run clinical trials on saquinavir. He would oversee the clinical development of the drug all the way through its European and FDA licensing in December 1995. In mid-1990, Bragman arranged with doctors in France, Italy, and the United Kingdom to begin human testing. Those tests were designed to determine how much of the drug could be administered before provoking unacceptable side effects and how much had to be administered to get enough into the bloodstream to have an antiviral effect.
The initial reports were devastating. While the drug was sufficiently nontoxic for further human trials, only 4 percent of the administered dose got into the bloodstream. Patients took eighteen hard capsules a day, yet the dose had a minimal effect against the virus. Bragman and Roberts were desperate to get a better drug. They went to top management for authorization to let company chemists pursue an analogue of saquinavir that would be more easily absorbed into the bloodstream. Or, if management balked at that new expense, they wanted to start a new clinical trial at higher doses.
By mid-1991 AIDS had already become a terrifying epidemic with global implications. Yet efforts to find treatments for HIV were no more than a blip on the radar screens of top managers at Roche. They still perceived it as an orphan disease without much economic potential. Research spending on AIDS at the closely held firm didn’t exceed 5 percent of the total research budget.42 Even the ddC program, which by 1991 was in the final stages of government-funded clinical trials and less than a year away from licensing, was perceived as “sort of a side venture,” according to Miklos Salgo, who came to Roche in 1989 to direct the ddC clinical trials. He had seen his first AIDS patients in 1982 at Bellevue Hospital while a medical student at New York University and completed his medical training at Albert Einstein Medical School at Montefiore Hospital in the Bronx. “How could you not pay attention to the biggest epidemic taking place in our times?” he recalled a female coworker telling him as they labored long into the night to treat the steady stream of AIDS victims from the South Bronx.43
That attitude wasn’t shared by the managers contemplating the Bragman and Roberts request. More than a decade later, now a top official at the firm, Salgo recalled their decision with some hesitancy. “It was only after we got Hivid [the trade name for ddC] licensed that [they realized] the extent of the epidemic and accepted that this was a field that pharmaceutical companies could make . . .” He paused and then clarified, “. . . be productive in and be worthwhile to enter.”44
Management turned down Bragman’s requests to go hunting for an analogue of saquinavir with superior bioavailability. Jürgen Drews, then president of global research for Roche, was a key member of the senior management committee that vetoed new funds to continue searching for a better protease inhibitor candidate. The committee also turned down Bragman’s request for new first-stage clinical trials at higher doses. In his 1998 book, In Quest of Tomorrow’s Medicines, Drews described how “one pharmaceutical company developing the first protease inhibitor against AIDS questioned whether an economically feasible synthesis would ever be developed.”45
Roberts and his chemists, who were certain saquinavir could be manufactured at a reasonable cost, weren’t invited to the meeting and thus couldn’t counter the argument. Neither was Bragman, who wanted to initiate new clinical trials at higher doses. “Concern about the economic viability of this medicine reached the point where a highly placed employee of the firm attempted to forbid the use of high dosages in the clinical trials, though from a scientific standpoint these were deemed absolutely essential,” Drews wrote. “He was convinced that this medicine would forever remain unprofitable. ‘How effective it is at higher dosages I have absolutely no desire to know,’ he declared.”46
A decade later, the scientists involved in developing saquinavir would look back on that decision as a dreadful mistake. But they would largely blame it on the external political environment, not top management. Roche was under intense pressure to offer ddC, then in late-stage trials, to AIDS patients under a compassionate use program. Under the rubric of compassionate use, doctors with desperately ill patients, often near death, can ask companies for drugs that are still in clinical trials, even though they have not yet been proven effective. Companies believe compassionate use programs hinder them from recruiting patients for double-blind, placebo-controlled clinical trials, which are the gold standard of the drug industry and preferred by regulators. But desperate AIDS patients rebelled against what they called “dead body” trials. One of the more famous placards held by AIDS activists as they marched outside FDA headquarters in the late 1980s read, “I died on a placebo.” Compassionate use for drugs in clinical trials, or expanded access, as the AIDS activists preferred to call it, was one of their main demands. Roche, unaccustomed to hearing from patients, much less accepting criticism from them, was unwilling to change its research procedures to accommodate their demands.
The company’s top managers were also rattled by the ongoing controversy over price, which was again in the headlines. NCI’s ddI, which the government had licensed to Bristol-Myers Squibb, had just been approved by the FDA. Its hefty price tag, coming on top of the price of AZT, had restoked the anger of the AIDS community and the Democrats who controlled Congress. Public Citizen, which had been created by Ralph Nader in 1971 to fight abusive corporate practices, filed suit to cancel Burroughs Wellcome’s AZT patent. The government intervened on Barr Laboratories’ behalf when the generic drug manufacturer sought to void Burroughs Wellcome’s exclusive manufacturing rights. Barr wanted NIH named as coinventor of the drug so that the agency could license it to generic manufacturers.
“This was a conservative Swiss company dealing with some difficult characters,” Bragman recalled a decade later. “For Hoffmann–La Roche, money [for additional research] would not have been the issue. This was an extraordinarily hot political area to work in. We had [a protease inhibitor] that was well tolerated, that had activity, that was comparable to AZT. Why don’t we just develop it and get it on the market? You can understand how the company might say let’s not make our lives more difficult than they already are.”47
Yet Drews learned a very different lesson from the experience. “As long as the search for new drugs, and above all, their development, is almost exclusively the province of profit-oriented enterprises, it will be impossible to untangle the relationship between economic calculation and the needs of medicine.”48 Bragman never got a better molecule.