8

Me Too!

President Franklin Delano Roosevelt had every reason for optimism in the winter of 1936. He had just won reelection in a landslide, and the prospects for the more far-reaching of his New Deal reforms never looked brighter. But just before Christmas, close aides brought word that his son Franklin Delano Roosevelt Jr. had a bad case of tonsillitis. With her son’s fever soaring, Eleanor Roosevelt called in the physician George Tobey Jr. He feared the worst. The infection had seeped into the blood, which in those days was a potentially fatal condition.

More out of desperation than any sense that it might help the young man, Tobey gave the president’s son a new German drug called Prontosil. When news of the drug first appeared in the medical literature a year earlier, most American doctors scoffed. How could a derivative of a chemical dye cure a bacterial infection? But to Tobey’s surprise, young Roosevelt’s fever quickly subsided. A few days later the press heralded both the medicine and the miraculous recovery in the first family. “New control for infections,” the New York Times headlined its front-page story. The era of wonder drugs was underway.

Prontosil not only heralded the modern era of drug therapy, it ushered in the modern era of drug marketing. It helped transform the Depression-era pharmaceutical industry from a sprinkling of small firms peddling a handful of cures (an early 1930s symposium listed only seven diseases amenable to drug treatment) to the modern corporations that we know today: vertically integrated giants that can develop, produce, and, most important to their bottom lines, market drugs.

Prontosil was discovered by Gerhard Domagk, a young physician on the staff of Bayer Laboratories in Elberfeld, Germany. Inspired by the pioneering work of fellow countryman Paul Ehrlich, who had discovered the first drug treatment for syphilis, Domagk spent five years screening hundreds of Bayer’s industrial dyes and their derivatives for their antibacterial properties. Five days before Christmas 1932, he discovered that one of his red dyes cured a handful of mice that he had infected with deadly streptococcus. Over the next two years, while ignoring the social upheavals around him that brought Adolf Hitler to power, Domagk and physicians on the staff of the local hospital injected dozens of patients with the new drug. It not only killed streptococci but had powerful effects on patients suffering from a host of life-threatening infections like rheumatic and scarlet fever, which had been the scourge of children for centuries.

Domagk published the first report about his miraculous cures in February 1935 in an obscure academic journal. Researchers around the world immediately began trying to replicate his results. A husband-and-wife team in France soon discovered that it wasn’t the dye that killed the streptococci, but one of its constituent chemicals, which only became active after the patient metabolized the original drug. The active ingredient in Prontosil, they discovered, was sulfanilamide, a common industrial chemical that was no longer patented and that no one had ever thought to test against bacteria.

Within months, every drug company in the world began synthesizing its own version of sulfanilamide. Bayer was left without any financial remuneration for the pioneering research of Domagk and his colleagues. Adolf Hitler’s health ministers, meanwhile, heaped scorn on his extraordinary achievement. They called the medicine quackery and in late 1939 forced Domagk to write a letter to the Karolinska Institute in Stockholm turning down his Nobel Prize.1

As war clouds gathered over Europe, dozens of companies in England, France, Germany, and the United States began peddling their own versions of the miracle sulfa drugs. These first copycat drugs, usually called me-too drugs by industry insiders, created a problem that has bedeviled the industry ever since—the propensity for some of the newer versions of a drug to be less safe than the ones that already existed. In 1937, a small Tennessee firm, the S. E. Massengill Company, started making a liquid form of the medicine because it believed southerners and children preferred it that way. Since sulfanilamide did not dissolve in water or alcohol, company chemists opted to suspend the drug in diethylene glycol, an industrial solvent used to make antifreeze. No one at the company thought to test the product for safety before it began selling the concoction. Later testimony showed that no one at the company even bothered to look up diethylene glycol in a textbook. Within weeks of the medicine’s initial marketing, more than one hundred people were dead, most of them children. When questioned by the dozens of reporters who poured into Tennessee to cover the tragedy, the company’s president refused to take responsibility. His chief chemist committed suicide.2

The incident led an outraged Congress to alter the 1906 Pure Food and Drug Act. The original Progressive Era legislation, which had been created in response to public outrage over contaminated food, had drugs in its title but did little to regulate the industry. The Massengill tragedy put an end to that. For the first time, companies were required to prove to an expanded Food and Drug Administration (FDA) that their drugs were safe for human consumption before they could put them on the market.

The advent of FDA drug regulation radically transformed the pharmaceutical marketplace. Companies began marketing their wares directly to doctors—through advertising in medical journals and through office visits (called detailing in the trade because the salesmen provided physicians with the latest details on new medicines)—rather than through the traditional channels, which to that point had been largely newspaper and magazine advertising.

The result was intense competition among many companies in the still limited marketplace for scientifically proven medicines. Detailers would crowd physicians’ offices, leaving behind free samples and various trinkets. But the companies found it very difficult to differentiate their products. Every version of the new sulfa drugs, for instance, had basically the same medical outcome. Textbook economics took over. The price of the new sulfa drugs plunged.

The pattern was repeated when the first miracle antibiotics came along in the years immediately after World War II. The government, which had developed the mass production techniques for penicillin as a wartime measure, licensed the drug to five firms. Those firms engaged in a fierce competition for sales. Between 1945 and 1950, the price of penicillin plunged from $3,955 to $282 a pound.

The pattern happened yet again with the next generation of antibiotics. In the 1940s Selman Waksman and his colleagues at Rutgers University in New Brunswick, New Jersey, developed streptomycin, an antibiotic derived from bacteria-killing microbes that he had found in soil. Waksman, a soil microbiologist, made his discovery by pursuing the reasonable assumption that soil must contain something that killed disease-causing bacteria, since they didn’t survive burial. His drug proved to be the first effective treatment for tuberculosis, earning Waksman the Nobel Prize and making him America’s most celebrated research scientist until Jonas Salk and the first polio vaccine came along in the mid-1950s. But unlike Salk, who would refuse to patent the polio vaccine (“Could you patent the sun?” Salk answered Edward R. Murrow when he was asked who owned the vaccine on See It Now), Waksman patented streptomycin and licensed it to Merck Research Laboratories in nearby Rahway, whose engineers and scientists had done much of the production work.

Waksman’s decision to seek a patent on his discovery represented a second watershed event in the evolution of the modern drug industry. For the first time, the Patent and Trademark Office (PTO) gave seventeen-year exclusivity to the chemical modifications and the processes that created a product—streptomycin—that in its raw state had been part of nature. Merck wouldn’t benefit from that decision, however. Worried about a public backlash against a private company generating massive profits from scientific research conducted at a public university, Waksman convinced Merck to return the license for streptomycin to the nonprofit Rutgers Research Foundation. The drug was then licensed broadly and sold generically. The price of the miracle drug soon fell to rock-bottom levels, a repeat of the penicillin story.3

The industry recognized it had to deal with its disastrous pricing experience with sulfa, penicillin, and streptomycin. A number of firms had already deployed chemists to develop new microbe killers using Waksman’s techniques. Three firms quickly came up with new medicines comparable to streptomycin. They patented the results despite the fact that the uses of the new drugs were virtually indistinguishable from those of their predecessors. However, without the government or Waksman to prod them, they refused to license the new medicines to other firms. Given the similarity in medical outcomes from the various antibiotics now on the market, an intense competition for market share should have broken out. But this time, just the opposite occurred. The price of the new drugs, marketed as improved versions of the generic antibiotics penicillin and streptomycin, soared.

A decade later, the Federal Trade Commission launched a massive investigation into the antibiotic cartel. It turned up overwhelming evidence that the firms refused to compete against one another on price even though every company was charging far more than the cost of production plus a reasonable return on its investment. Yet the agency refused to crack down. In essence, it accepted the industry’s argument that it was sufficient for competition to take place in arenas other than price, such as the frequency of dosage or the method of getting the drug into the body. “The producers regained this market power by differentiating their products along the lines that any other consumer good is differentiated,” economic historian Peter Temin wrote. “Since the therapeutic effects of the drugs appeared to be identical, other—more familiar—quality dimensions had to be employed. So the firms intensified their advertising, their detailing, and their reliance on company identities. The postwar pattern of integrated drug companies competing by introducing and marketing new drugs was beginning to take shape.”4

Throughout the 1950s, drug companies, often drawing on the latest research emerging from academic labs but sometimes relying on their own resources, discovered class after class of new medicines. Antidepressants, antacids, anti-inflammatory medicines, antihistamines, and new chemicals for controlling blood pressure became mainstays of the modern medicine chest. Whenever one company broke new ground, other firms in the industry would introduce copycat versions of the original molecule within a very short time. The me-too drugs almost always entered the market at the innovator’s price or within a few percentage points of it.

By the early 1960s, popular anger over the high price of drugs led Senator Estes Kefauver of Tennessee to hold a series of hearings on the drug industry’s behavior. A Yale-trained lawyer who had arrived in Washington in the late 1930s as an idealistic New Dealer, Kefauver by the early 1950s had become one of Washington’s most powerful and closely watched senators, largely because of his well-publicized attacks on organized crime. But after his support for civil rights and principled opposition to the demagogy of Senator Joseph R. McCarthy cost him a shot at the presidency, he turned his attention to abusive corporate practices, using his chairmanship of the Senate Subcommittee on Antitrust and Monopoly as a platform. “I keep feeling that mergers, consolidations, and cooperation between large blocs of economic power are on the increase, and that this is bound to lead to total abuse of our free-enterprise system, and inevitably, to total state control—in short, statism,” he told New Yorker writer Richard Harris in 1961. “That is something none of us want.”5

In a series of hearings between 1960 and 1962, Kefauver focused public attention on the drug industry’s penchant for spending much of its time and resources developing copycat drugs, which, in defiance of every economics textbook, rarely resulted in competition on price. He called numerous medical professionals and former industry executives to testify. At one point, Kefauver pressed the former head of research at E. R. Squibb to estimate how much corporate drug research was driven by the desire to come up with me-too drugs. The retired executive replied that “more than half is in that category. And I should point out that with many of these products it is clear while they are on the drawing board that they promise no utility. They promise sales.”6

Ironically, the 1962 amendments to the federal food and drug law that resulted from the hearings did little to curb the industry’s penchant for pursuing me-too drugs. They required drug companies for the first time to prove their drugs were not only safe but effective. That change was adopted largely because of the thalidomide tragedy, which came to light as the hearings were drawing to a close; a similar disaster was averted in the United States only by the stalling tactics of an eagle-eyed FDA physician.

The first great era of drug discovery, then, which stretched roughly from 1935 to the mid-1960s, could also be called the era of molecular modification. Once a researcher—often in the public sector—identified a new chemical class that was effective against a disease state, every major drug company put chemists to work coming up with their own versions that could do roughly the same thing. “The great drug therapy era was marked not only by the introduction of new drugs in great profusion and by the launching of large promotional campaigns but also by the introduction of what are known as ‘duplicative’ or ‘me-too’ products,” noted pharmacologist Milton Silverman and physician Philip R. Lee of the University of California at San Francisco. Surveying the drug scene in the early 1970s, they counted more than 200 sulfa drugs, more than 270 antibiotics, 130 antihistamines, and nearly 100 major and minor tranquilizers. Most of the new drugs “offer the physician and his patient no significant clinical advantages but are different enough to win a patent and then be marketed, usually at the identical price of the parent product, or even at a higher price.”7

The biotechnology revolution of the late 1970s and 1980s and the NIH-funded explosion of knowledge about cellular interactions set off a second wave of drug innovation. Drawing from the government’s vast investment in biomedical research since the end of World War II, medical researchers promised unique cures for the chronic diseases that had become the leading causes of death: heart disease, cancer, diabetes, and dementia. Cover stories in the nation’s popular magazines and newspapers heralded the exploits of scientific medicine, often focusing on the drug companies that were bringing the new products to market. No longer would the drug industry focus on developing and marketing me-too drugs that did little more than cloud physicians’ judgments and crowd pharmacists’ shelves. A new era of miracle drugs was at hand, the companies’ press releases suggested.

Progress on delivering on those promises was slow, however. A handful of the new biotechnology products, such as erythropoietin, human growth hormone, and blood-clotting factors, which hit the market in the first decade after biotechnology’s emergence, certainly were unique. Physicians for the first time were able to replace or enhance patients’ supplies of naturally occurring proteins by injecting artificial versions.

The handful of genetically engineered medicines that emerged in the first two decades of the biotechnology revolution were also unique in an economic sense. In 1980 the Supreme Court in Diamond v. Chakrabarty liberalized the nation’s intellectual property laws by allowing the patenting of living things. (The case involved a patent on an oil-eating bacterium.) The young start-up companies that manufactured the proteins were now able to protect themselves against me-too competition by surrounding their inventions with gene and gene-process patents. For this new class of medicines, companies no longer had to rely on marketing and cartel-like behavior to guard against a sharp decline in prices due to competition from me-too drugs. They could rely on the exclusivity granted by patent law. There would be only one genetically engineered version of a therapeutic protein.

By the mid-1970s, traditional pharmaceutical companies were also beginning to take advantage of the burst of knowledge generated by the government’s generous funding of academic research during the postwar years. Firms introduced a number of new drugs and new classes of drugs, which they advertised as clearly superior to the older drugs on pharmacists’ shelves. A 2001 survey of 225 physicians ranked the top innovations in medicine over the previous thirty years, putting several new medicines near the top of the list. A majority of doctors ranked angiotensin-converting-enzyme (ACE) inhibitors for controlling blood pressure and statins for lowering levels of artery-clogging cholesterol among the top six medical innovations. About 40 percent of the doctors thought the new antidepressants and new antacids were clearly superior to older medications aimed at the same symptoms. However, the physicians weren’t convinced that every new class of medicine represented a significant medical advance. The same survey showed fewer than 2 percent of doctors considered nonsedating antihistamines, calcium channel blockers, and erectile dysfunction drugs major innovations.8

As each new class hit the market, however, whether or not it represented a therapeutic improvement over older medicines, the leading pharmaceutical companies reverted to their now familiar pattern of introducing roughly comparable products. Their sometimes vicious marketing competition resulted in a divvying up of the market but rarely in competition on price. By the early 1990s, drug prices, like health care costs generally, were soaring at double-digit rates. When the Clinton administration put health care reform at the top of its political agenda, drug prices came under increasing public scrutiny. For the first time since Kefauver, the industry’s me-too research practices were being called into question.

But this time industry officials used a new set of arguments in their response to the charge that they wasted research dollars on copycat drugs. Many of the new me-too drugs had fewer side effects than their predecessors, industry officials claimed. They also suggested that individual patients responded differently to drugs, so me-too drugs offered an alternative for people who did not respond to other drugs in that class. “There’s no such thing as a ‘one-size-fits-all’ drug,” a typical handout from the Pharmaceutical Research and Manufacturers of America (PhRMA), the industry’s main trade association, said. “Each patient is unique and may respond to the same drug differently. What works for one person does not necessarily work for another. Physicians and patients benefit from a variety of medicines available to treat each ailment.”

The Clinton administration’s top drug officials were unimpressed by those arguments. Amid the 1993-94 health care debate, David Kessler, the activist head of the FDA, and a team of FDA drug reviewers published a scathing response to the industry’s claims for the latest generation of me-too drugs. “In today’s prescription-drug marketplace a host of similar products compete for essentially the same population of patients,” they wrote in the New England Journal of Medicine. Reviewing the 127 new drugs approved between 1989 and 1993, Kessler and his team found that “only a minority offered a clear clinical advantage over existing therapies. Many of the others are considered me-too drugs because they are so similar to brand-name drugs already on the market.”

“Pharmaceutical companies are waging aggressive campaigns to change prescribers’ habits and to distinguish their products from competing ones, even when the products are virtually indistinguishable,” the article continued. “This is occurring in many therapeutic classes—antiulcer products, angiotensin-converting-enzyme inhibitors, calcium-channel blockers, selective serotonin-reuptake-inhibitor antidepressants, and nonsteroidal anti-inflammatory drugs, to name a few. Victory in these therapeutic-class wars can mean millions of dollars for a drug company. But for patients and providers it can mean misleading promotions, conflicts of interest, increased costs for health care, and ultimately, inappropriate prescribing.”9

No drug class illustrated Kessler’s concerns better than the great stomach acid wars of the 1990s. The problems of heartburn, sour stomach, and acid indigestion are as all-American as ordering takeout pizza and beer after a long day at an aggravating job. The usual cure is a nonprescription acid neutralizer that can be purchased anywhere and in almost every form imaginable, from crunchy tablets to chalky liquids. But for some patients, the condition is chronic, often leading to stomach ulcers, gastroesophageal reflux disease (the backflow of stomach acid into the esophagus), and eventually erosive esophagitis. In those cases, doctors often prescribe one of the more powerful new medicines that arrived on the scene in the late 1970s through early 1990s, which attack the problem at its source—the production of acid—rather than relying on a neutralizer once the acid is already in the stomach. In recent decades, these prescription antacids have been among the pharmaceutical industry’s most broadly prescribed and lucrative medicines.

The first class of prescription antacids to come along targeted histamines, whose production in the stomach is triggered by the presence of food. European scientists discovered histamines in the 1930s, and over the next several decades academic scientists on both sides of the Atlantic linked various histamines with complex body processes including the regulation of blood pressure, bronchial reactions, and the production of stomach acid. In 1937, a French academic discovered the first inhibitor of histamine, and over the next decade scientists came up with a number of comparable drugs. The most famous member of the class was diphenhydramine, sold over the counter today as Benadryl. It was developed by a U.S. academic scientist and later provided the chemical basis for the wildly popular antidepressant fluoxetine, more commonly known by its trade name, Prozac.

The first industry scientist to conduct systematic studies on histamine blockers was James Black, who worked at Smith, Kline, and French in the 1960s and 1970s. Black began his career as an academic pharmacologist in Glasgow, but during the 1950s he moved to Great Britain’s ICI Pharmaceuticals (the drug wing of the mammoth Imperial Chemical Industries), where he helped develop the first drugs that could block adrenaline’s effect on the heart. While still an academic, he had shown that there were at least two cell receptors that bound to adrenaline, but only one—the beta-receptor—was present on the heart muscle. His ICI team developed the first beta-blocker, propranolol. When it was introduced in the mid-1960s, it was considered a major breakthrough in the treatment of high blood pressure and heart disease.

After moving on to Smith, Kline, and French, Black applied the same dual-receptor concept to the histamines that were unleashed by the presence of food and sent signals for the production of stomach acid. European academic researchers had shown that the first generation of antihistamines, while useful for allergic reactions, did not block the secretion of stomach acid. Positing there must be at least two receptors, Black began synthesizing analogues of a histamine blocker that might block only the histamine receptor that triggered action in the stomach. Eight years and seven hundred chemicals later, Black came up with his first compound for blocking the stomach histamine (H2) receptor. He spent several more years fiddling before coming up with one that was useful as a drug. He called it cimetidine, which is sold under the trade name Tagamet. Other companies soon jumped on the bandwagon. In 1984, scientists at Glaxo won FDA approval for ranitidine (Zantac), which was similar chemically to cimetidine but had fewer side effects. It soon became the best-selling drug in the world, surpassing SmithKline’s Tagamet and generating billions of dollars in sales for Glaxo.10

While Black and his imitators were pursuing H2 antagonists, academic scientists began looking for the engines in the stomach cells that actually produced the acid. In 1977, George Sachs, a Scottish physician who taught at the University of Alabama at Birmingham, attended a symposium in Sweden, where he presented his work on the ion-exchange mechanism in stomach cells that produced acid. He called it the proton pump. After his talk, a young scientist from Astra Pharmaceuticals approached the podium. “This Swedish person asked me a question that was intriguing,” Sachs recalled. “He had found a compound that inhibited the gastric pump in rats. They sent me a couple of compounds. My lab discovered the acid pump was the target. We also discovered that the drugs were converted into active form by the acid. In 1978, we went there, told them the mechanism, and started a tight collaboration that resulted in the synthesis of omeprazole (trade name Prilosec) as a candidate drug.”11

The drug’s development was delayed when some early safety experiments with omeprazole generated tumors in mice. Company officials feared the new drug might be a carcinogen. But low-dose experiments in monkeys and later humans dispelled those fears, and widespread clinical trials resumed. The FDA approved it for sale in 1989, with Merck acting as Astra’s marketing agent in the United States.

By the early 1990s, the companies that made competing versions of the new antacids were battling over a $7-billion-a-year market. The leading firms began pouring hundreds of millions of research dollars into clinical trials in an effort to prove that their product was better than the competition. There is little interest among elite scientists in conducting these types of studies, although many medical professionals at the nation’s academic medical centers take part in order to raise money for their labs. Many times the results aren’t even published in the literature, or when they are, they appear in second-tier journals that receive little notice from the mainstream of the profession.12

By the end of 1994, Astra, Glaxo, and SmithKline had sponsored hundreds of studies on the relative merits of Prilosec, Zantac, and Tagamet. One reviewer counted 293 clinical trials comparing the drugs. He concluded that proton-pump inhibitors were marginally more effective at healing ulcers, with cure rates at 94 percent after four weeks for Prilosec compared to 70 to 80 percent for the H2 antagonists. The cure rate for Prilosec fell to 84 percent after eight weeks, and for some types of ulcers and conditions, the cure rates were statistically indistinguishable.13 Despite the similarities between the drugs, Astra and Merck used the results to launch a massive marketing push for their proton-pump inhibitor, which soon turned Prilosec into the best-selling medicine in the world. By 2000, it was racking up nearly $5 billion a year in sales in the United States alone. TAP Pharmaceuticals’ me-too proton-pump inhibitor Prevacid, launched in 1995, was the third-best-selling medicine in the United States with more than $3 billion in sales.14

Astra’s research team wasn’t through with heartburn yet. With the company’s patent on Prilosec set to expire in 2001, company officials knew that generic manufacturers would line up to manufacture the lucrative pill. As early as 1995, Astra officials launched a massive research project to come up with a successor to their wildly popular purple pill (the color became a mainstay of its advertising campaigns). It would be best if they came up with a better drug, company scientists knew. But with an 80-percent cure rate for the existing antacids, a better mousetrap would be hard to find.

The company never considered one possible approach, which had been percolating in the world of academic medicine for more than a decade. In the years since the discovery of H2 antagonists and proton-pump inhibitors, scientifically inclined academics had moved away from interfering with the mechanisms for generating stomach acid. In 1983, Barry Marshall, then working at the Royal Perth Hospital in Australia, had isolated Helicobacter pylori, a bacterium that flourished in the stomachs of gastritis and ulcer patients. He believed it was the root cause of ulcers. After moving to the United States to take a post at the University of Virginia, he used NIH funding to establish the Center for the Study of Diseases Caused by Helicobacter pylori. Over the course of the next decade, Marshall and other scientists showed that the bacterium, which infects about half the world’s population, was the leading cause of stomach and intestinal ulcers, gastritis, and stomach cancer. The center even developed regimens of common antibiotics that could eliminate the infection.

Unfortunately, no pharmaceutical company championed the cure. They had no interest in eliminating the cause of ulcers with a short, cheap course of generic antibiotics when they could make billions of dollars treating their chronic recurrence with expensive prescription antacids. As one NIH analyst put it: “A one-time antibiotic treatment regimen to eliminate H. pylori, as opposed to long-term maintenance with H2-antagonist drugs, recurrence, and sometimes surgery as a last resort, is an obvious benefit both to the patient and to the health care insurers. However, [promoting this approach would lead to] the possible decline in sales.”15

Instead of pursuing this potential cure for ulcers, Astra scientists launched Operation Shark Fin, an effort to find a drug to replace Prilosec after it came off patent and became generically available. At first they tried drug combinations and oral suspensions, but they didn’t work any better and were less convenient. Finally, Astra scientists created a molecule that was, in essence, half of Prilosec. They dubbed it Nexium. In doing so, they used a process that by the late 1990s had become one of the drug industry’s chief strategies for extending patents, a strategy that was garnering an increasing share of industry research-and-development budgets.

The process is based on a quirk in the chemistry of organic molecules. Scientists have long known that many organic molecules come in two forms. When a carbon atom in a molecule is bonded to four different groups of atoms, those groups can arrange themselves around it in two distinct ways. The result is a mixture of two versions of the molecule, each with the same chemical formula but different in that they are mirror images of each other, much like a person’s left and right hands. Each version is called an enantiomer (the scientific literature also refers to them as isomers). Sometimes only one enantiomer is active against the disease. The other causes unwanted side effects or is inactive. Drug companies could not do much about it until the early 1990s, when chemists developed practical ways of producing or separating a single enantiomer. That deft piece of chemistry was pioneered by K. Barry Sharpless of the Scripps Research Institute in La Jolla, California, Ryoji Noyori of Nagoya University, and William S. Knowles of Monsanto Company, who shared the 2001 Nobel Prize for chemistry.

The new process succeeded in rescuing some drugs that had been sidelined for their unwanted side effects. In 1992, for instance, the FDA ordered Merrell Dow, which later became part of Aventis, to put a warning label on its allergy drug terfenadine (Seldane) after adverse reaction reports began pouring into the agency. Doctors who prescribed the nonsedating antihistamine for their allergy patients reported that many terfenadine users had suffered severe heart palpitations after taking the drug. Six years and at least eight deaths later, it was withdrawn from the market. But the drug was resuscitated when a specialty chemical company called Sepracor separated the two enantiomers of terfenadine for Aventis, which was then able to continue marketing the safe but still active half. They called it Allegra. Sepracor later performed the same trick for Johnson & Johnson after its allergy drug astemizole (Hismanal) suffered a similar fate.

Operation Shark Fin’s Nexium, marketed as the new purple pill, was nothing more than one of Prilosec’s enantiomers. But unlike the antihistamines that had to be withdrawn from the market, Prilosec had no major side effects. It was even possible that both of Prilosec’s enantiomers became active in the stomach. Getting rid of half of the drug would provide no significant clinical benefits for patients. All it provided was a new chemical entity—in reality half the old entity—that could be patented separately and submitted to the FDA for approval.

Recognizing the inadequacy of their solution, Astra scientists launched a desperate search for some way to differentiate Nexium from Prilosec. They authorized four wildly expensive studies comparing the two drugs as treatments for erosive esophagitis. If Nexium proved to be a better drug for that one indication, they would at least earn a unique label from the FDA and give company detailers some talking points when they were out visiting physicians. It wasn’t a foolproof strategy, however, since a worse outcome would have to be reported on the label. “You spend $120 million studying the thing, and it could have come out worse,” one Astra official told the Wall Street Journal. “You’re scared as hell.” The company won its bet, but by the thinnest of margins. By comparing the two drugs at equal doses, Astra discovered the more slowly metabolized Nexium healed 90 percent of patients after eight weeks compared to 87 percent for Prilosec. Two of the studies did not show Nexium to be a better drug and were never released to the public.16

Sachs, the codiscoverer of the proton-pump mechanism, who had worked closely with Astra to develop Prilosec, provided a final epitaph for the hundreds of millions of dollars that the company, now called AstraZeneca, had poured into Nexium research. “Both enantiomers in the end would appear to be equally active at the pump,” he told me in an interview. “Once they are activated, they are no longer enantiomers anyway. They are the identical molecule.”17 Though the difference was medically irrelevant, the costly research paid off for AstraZeneca. While the company deployed its patent attorneys to delay generic firms from selling Prilosec, it sought FDA approval for Nexium, which came in 2001. Once on the market, the company’s detailers, backed by a massive television advertising blitz, convinced thousands of physicians to switch their patients to the new purple pill, which, like Prilosec, sold for about four dollars a dose.18 The company then convinced the FDA to allow Prilosec onto the over-the-counter market, thus frustrating the generic manufacturers and giving Nexium free rein as the prescription—and presumed stronger—antacid.

The Prilosec-to-Nexium transition exemplified a common industry practice. Throughout the 1990s, the drug industry poured billions of research dollars into developing alternatives to drugs that were approaching the end of their patent terms. In most cases, the alternatives were little changed from the originals. The better the original sold, the more likely it was that the company would devote considerable research resources to generating a copycat version with renewed patent life.

Another example that garnered considerable public attention was Schering-Plough’s Claritin, one of the antiallergy medicines developed in the early 1980s as a nonsedating alternative to an earlier generation of antihistamines. By the late 1990s, the drug was generating over $2 billion a year in sales for Schering-Plough, a figure that was growing rapidly because of the 1997 legalization of direct-to-consumer advertising. To reach the estimated thirty-five million allergy sufferers in the United States, Schering-Plough poured hundreds of millions of dollars a year into ads for the drug. Consumers were encouraged to ask their doctors for a pricey prescription—it cost eighty dollars for a month’s supply—that, according to the original studies submitted to the FDA, worked only marginally better than a placebo.

Though you would never know it from the television advertisements featuring handsome women frolicking through flowering fields oblivious to the pollen-laden air, the FDA’s reviewer was openly skeptical about the drug’s efficacy at the low dose offered by Schering-Plough. The company, which tested the drug on thousands of patients, needed a low dose to ensure that it would be nonsedating, which was the only way the new drug would be able to gain a toehold in the already crowded antihistamine market. But at the low, nonsedating dose, clinical trials showed that only 43 to 46 percent of Claritin users gained relief of allergy symptoms, compared to a third of patients on a sugar pill. A separate study that asked doctors to assess the patients on the placebo found that 37 to 47 percent of them had a “good to excellent response to treatment,” which as a practical matter was no different from the response of those who took the real pill.19

Beyond the questions about the drug’s marginal efficacy, other reviewers at that late-1980s FDA hearing worried that Claritin, whose generic name is loratadine, might be a carcinogen. It took the company several more years of studies before it could dispel those fears. Finally, in 1993, the drug was approved. The delays actually proved fortunate for Schering-Plough. In the early 1990s, patients on Seldane and Hismanal, the first nonsedating antihistamines to hit the market, began turning up in hospital emergency rooms because of the drugs’ dangerous interactions with other medicines and the resulting life-threatening heart irregularities. By the time Claritin hit pharmacists’ shelves, there was pent-up demand for a safe alternative, and the new drug immediately jumped to number one in sales in its class.

Yet in the late 1990s, as Claritin neared the end of its patent term, Schering-Plough launched a massive lobbying campaign in Washington to get an extension on its patent. The company claimed the long delays at the FDA had robbed it of years of market exclusivity. Aware of the history, Congress rebuffed Schering-Plough’s frequent requests.

Forced to fall back on research and development, Schering-Plough scientists took apart loratadine to see what made it tick. They discovered that the active part of the drug was actually a metabolite of the whole molecule, which formed only after patients’ bodies broke down the pill. They patented this metabolite, called it desloratadine, and filed a new drug application with the FDA. It was approved in late 2001, just months before the expiration of loratadine’s patent. The company launched a massive advertising campaign that convinced millions of its customers to switch to the new, equally expensive but no more effective drug. Then, to frustrate the generic companies getting ready to sell loratadine, Schering-Plough announced it would begin selling Claritin as an over-the-counter allergy remedy.20

Public-sector science has sometimes pushed industry researchers down the road to better medicine, only for them to discover as they neared the end of their labors that they had developed yet another me-too drug. During the late 1990s, few drug classes received more media attention than a new class of pain relievers known within the medical community as Cox-2 inhibitors. The original members of this new drug class were Celebrex, made by G. D. Searle (later bought by Pharmacia), and Vioxx, made by Merck. In 2001, Pharmacia came out with a follow-up drug to Celebrex called Bextra.

One of the discoverers of the mechanism behind the new drugs was Philip Needleman, a professor of pharmacology at the Washington University School of Medicine in St. Louis who went on to become chief science officer of Pharmacia. While still an academic, Needleman, whose NIH support began in 1977 and lasted for nearly twenty years, surmised there must be a specific enzyme that caused inflammation and pain around arthritic joints and traumatic injuries. Scientists had already discovered an enzyme called cyclo-oxygenase—or Cox for short—that triggered the production of prostaglandins, which in turn caused swelling. Existing painkillers like aspirin and ibuprofen (known in the medical literature as nonsteroidal anti-inflammatory drugs, or NSAIDs) reduced the pain by blocking the action of Cox and limiting the production of prostaglandins. But scientists like Needleman hypothesized there had to be at least two versions of Cox, including one that produced prostaglandins that protected the digestive tract from stomach acid. A tiny proportion of patients who took NSAIDs, which blocked the Coxes indiscriminately, suffered from gastrointestinal bleeding and, in the worst cases, ulcers.

By the late 1980s, scientists working in industry and government labs around the United States had identified the Cox specific to swelling, which they dubbed Cox-2. They then turned to finding the gene that expressed Cox-2. If they could produce the protein through genetic engineering, they would be able to give medicinal chemists at pharmaceutical houses a powerful tool for producing large volumes of a juicy drug target. In 1992, three teams of NIH-funded scientists at the University of Rochester, Brigham Young University, and the University of California at Los Angeles, each working independently, discovered the gene. But only Donald Young at the University of Rochester thought to file for a patent on it, which was granted by the PTO in April 2000.21

Needleman, meanwhile, had moved across town to Monsanto (though he continued receiving NIH grants until 1995 as an adjunct professor at Washington University, according to NIH records). He eventually became president of G. D. Searle, the drug company Monsanto had acquired. His main focus at Searle was developing a Cox-2 inhibitor, the drug that eventually became Celebrex.

Merck’s road to a Cox-2 inhibitor also began in 1992, when Peppi Prasit, a Thai-born medicinal chemist working in the company’s Montreal laboratories, saw a scientific poster at a small medical conference. The poster reported the latest research from a Japanese company that was trying to come up with a painkiller that targeted the newly discovered Cox-2 enzyme. That summer, Prasit replicated the Japanese company’s work in his own lab. His results excited Edward Scolnick, the director of Merck’s research division, who authorized a major search for the company’s own version of the molecule. By 1994, Merck had discovered Vioxx. A dosing glitch in clinical trials slowed its race to market, which it lost to Searle by a few months.22

Though billed as super-aspirins, the Cox-2 inhibitors provided no more pain relief than over-the-counter aspirin, ibuprofen, or prescription naproxen, which were the most popular NSAIDs on the market. This inconvenient fact was overlooked by the new drugs’ marketers. In the spring of 2002, Pfizer Inc. chief executive Henry McKinnell, whose company comarketed Celebrex, awarded PhRMA’s highest research award to the four scientists who developed the drug. “Thanks to their pioneering work, millions of people throughout the world who were once crippled with arthritis can now work, walk, garden, and do all the little things that make life worthwhile,” he said, even though those people were no more able to perform those tasks than if they had popped a couple of over-the-counter ibuprofen tablets.23

The only medically legitimate selling point for the Cox-2 inhibitors was the premise that the newer drugs would eliminate the ulcers and even deaths that on rare occasions resulted from the prolonged use of generic painkillers. Yet the FDA didn’t allow them to claim that in their advertising or literature since the clinical trials failed to turn up evidence that the new drugs were safer than NSAIDs. The package insert, which goes out with every prescription, contained the same warning label as all the other NSAIDs.

Yet FDA oversight didn’t stop the companies from launching a surreptitious marketing campaign claiming otherwise. Articles about the major public health hazard posed by the traditional NSAIDs, often written by scientists who had conducted the companies’ clinical trials, flooded the medical literature.24 Relying on extrapolations from small-group studies, one physician claimed that NSAID use resulted in forty-one thousand hospitalizations and thirty-three hundred deaths a year among the elderly. Another put the death rate at five times that level. Meanwhile, other articles reported the results from small clinical trials of Cox-2 inhibitors that hinted the new drugs might prevent those side effects.

The higher estimate of deaths from NSAIDs rapidly found its way into the popular press as the drugs neared FDA approval and the companies began gearing up their marketing campaigns for their “super-aspirins.” Reporters, anxious to jump on the bandwagon of the next medical miracle, never read the fine print. “Pain-Killers Promise to Be Tummy-Friendly,” read the headline on a typical story heralding a medicine that promised “new arthritis relief.”25 A more circumspect Business Week article pointed out that “Celebrex is no more effective at relieving pain than the commonly prescribed NSAIDs” but went on to state that “it’s less likely to cause the stomach bleeding and ulcers experienced by about 30 percent of patients on the older treatments.”26 By the time Vioxx got its FDA approval, the Washington Post was reporting that NSAIDs were responsible “for 107,000 hospitalizations and the death of 16,500 people every year.”27

Sales exploded the instant the FDA gave the okay for the drugs’ makers to rev up their marketing machines. Commercials featuring frisky seniors flooded the airwaves. Detailers inundated doctors with free samples. Millions of people pestered their physicians to give them prescriptions for the new drugs, requests that fell on receptive ears. Wall Street’s stock analysts considered the rollouts of Celebrex and Vioxx the most successful drug launches in pharmaceutical industry history. Within a year of its launch, Celebrex was generating more than $2 billion a year in sales for Pharmacia and its comarketer Pfizer. Merck’s Vioxx was right behind with about $1.5 billion. Arthritis pain relief medicine that had once cost pennies a day was now costing millions of patients and their insurers nearly three dollars a pill.

Amid all the hype, two questions remained unexplored: Were the traditional NSAIDs really as dangerous as a growing volume of medical reports claimed? And did the Cox-2 inhibitors solve the problem?

Reviewers of medical studies for peer-reviewed journals sometimes apply what is called a face test to check the validity of extrapolation studies that draw broad conclusions based on the sampling of small groups. Are there any statistics out there that call into question the validity of the extrapolation study? In 1999—the year the two Cox-2 inhibitors were approved for sale to the general public—the Centers for Disease Control reported in its annual survey that fewer than six thousand Americans died the previous year from all forms of gastrointestinal bleeding disorders, including ulcers. That’s ten thousand fewer than the claims in some of the NSAID studies. It is possible that at least a few of those six thousand bleeding-ulcer deaths were from something other than NSAID use. After all, contemporary accounts of Alexander the Great’s untimely passing—he died at age thirty-two from acute abdominal pain—suggest he suffered a perforated peptic ulcer after several days of binge drinking.

Moreover, the assertion that many NSAID users suffer gastrointestinal distress from the painkillers was never proven to the FDA’s satisfaction. The government-mandated package insert for one popular prescription NSAID warns that 1 percent of users will experience gastrointestinal problems, ranging from mild to severe, within three to six months, and that 2 to 4 percent will have such problems after one year. But even that may overstate the case. A recent study in Scotland that followed more than fifty thousand people over fifty years of age for three years found that 2 percent of NSAID users were hospitalized for gastrointestinal problems after using the drugs for a prolonged period of time, compared to 1.4 percent of people who took no drugs at all.28

In a final attempt to manufacture proof that Cox-2 inhibitors were safer than traditional NSAIDs, Pharmacia and Merck launched postapproval clinical trials that compared Celebrex and Vioxx against several older prescription and over-the-counter NSAIDs. Since so few NSAID users suffered from gastrointestinal tract problems, the trials had to be enormous—more than eight thousand patients each—in order to get statistically valid results. The first published accounts of the trials seemed to justify their enormous cost. The Vioxx trial, which compared the new drug to naproxen over a period of about nine months, found that Vioxx cut the incidence of gastrointestinal bleeding and ulcers from 4.5 incidents per 100 patient-years (100 patients taking the drugs for a year) to 2.1 incidents. The Celebrex trial, which allowed patients to continue taking aspirin, published only six months of data (although the trial lasted for thirteen months) and found the incidence rate fell from 1.5 to 0.9 incidents per 100 patient-years. When the latter study appeared in the Journal of the American Medical Association in September 2000, an accompanying editorial called the new Cox-2 inhibitors “a welcome addition to the therapeutic armamentarium” that might benefit an “enormous number of individuals . . . who do not take aspirin.”29

Reviewers soon began poking holes in the industry-funded studies. Many of the patients enrolled in the trials had other risk factors for developing ulcers. They, like Alexander the Great, were drinkers, for instance. Patients without those risk factors had less than a half of 1 percent chance of developing gastrointestinal tract problems on NSAIDs. Even in the higher-risk group, the Vioxx study suggested “that forty-one patients needed to be treated for one year to prevent one such event.” The Celebrex study, meanwhile, because of its short duration, showed “no statistically significant difference between the groups.”30
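That forty-one-patient figure follows directly from the incidence rates reported above. As a back-of-the-envelope check (a sketch using the rounded published rates of 4.5 and 2.1 events per 100 patient-years, not the trial’s exact event counts), the number needed to treat is simply the reciprocal of the absolute difference in event rates:

\[
\text{NNT} = \frac{1}{\text{absolute risk reduction}} = \frac{1}{0.045 - 0.021} = \frac{1}{0.024} \approx 42
\]

That works out to roughly forty-two patient-years of Vioxx treatment to prevent a single gastrointestinal event, essentially the reviewers’ figure once rounding is taken into account.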

When the regulators got their hands on the data in the studies, things took a turn for the worse from the drugmakers’ perspective. It turned out the patients on Vioxx developed serious heart problems at three times the rate of those on naproxen, the traditional NSAID that it had been compared to in the study. Merck quickly pointed out that the overall rate of heart problems remained small, and it argued that the gap probably meant the new “super-aspirins” simply did not provide the same cardiovascular benefits as older NSAIDs like naproxen, aspirin, and ibuprofen, which reduce the blood’s tendency to clot while fighting pain and inflammation.31

The FDA was not impressed by that logic. To the regulators, the new data suggested that for every patient saved from gastrointestinal complications by taking Vioxx, two patients would develop a potentially life-threatening heart condition. In April 2002, the FDA ordered Merck to revise its Vioxx label to contain the new warning. The FDA also said the new study didn’t warrant removing the gastrointestinal complications warning that had been slapped on Vioxx’s label when it was initially approved—and whose removal was the whole purpose of the giant study.32

The Celebrex study, meanwhile, received the most damning evaluation possible. Its organizers were accused of junk science in the influential British Medical Journal. A year after the study appeared, the journal reported that Celebrex’s allegedly superior safety profile over the two NSAIDs in the company-funded study had been based on just six months of data, even though many patients had remained in the study for more than a year. When the entire data set was evaluated, the Celebrex patients developed just as many ulcers as those on the generic and over-the-counter competition. “I am furious. . . . I looked like a fool,” M. Michael Wolfe, a noted gastroenterologist at Boston University, told the Washington Post. Wolfe had written the glowing editorial in the Journal of the American Medical Association that accompanied the report on the original study.33

After a Swiss scientific team reviewed the entire study, it concluded in the British Medical Journal that the original protocols of the study “showed similar numbers of ulcer-related complications in the comparison groups and that almost all the ulcer complications that occurred in the second half of the trials were in users of celecoxib (Celebrex).” Pointing out that all the authors of the original study were industry-funded and more than thirty thousand copies of the erroneous study had been distributed to physicians around the world, the editorial charged that “publishing and distributing overoptimistic short-term data using post hoc changes to the protocol . . . is misleading. The wide dissemination of the misleading results of the trial has to be counterbalanced by the equally wide dissemination of the findings of the reanalysis according to the original protocol. If this is not done, the pharmaceutical industry will feel no need to put the record straight in this or any future instances.”34

As the twenty-first century dawned, the drug industry’s search for new drugs to replace old ones coming off patent became frenzied. There were fifty-two drugs with more than $1 billion in sales in 2000, but forty-two were slated to lose their patent protection by 2007. The drugs that accounted for fully half the industry’s sales were on the cusp of low-cost generic competition. But instead of looking for truly innovative medicines, which depend on the maturation of biological understanding and even then are difficult to find, an increasing share of the industry’s research and development budgets went to the search for replacement drugs—drugs that would provide patients with much the same medical benefits as the drugs losing their patent protection, drugs that could be positioned in the marketplace as “new and improved” medicines. This chapter has anecdotally documented some of the more broadly prescribed and financially significant examples. But as we’ll see in the next chapter, the effort was pervasive.

Before turning to the economics of me-too research and the extent to which it dominated industry’s budgets for research and development, it is worth noting that the aggressive search for me-too medicines also drove the rapid rise in marketing expenses at drug companies. If one looks only at what the industry refers to as marketing expenses—the free samples, detailing, and direct-to-consumer and professional-journal advertising—the total rose 71.4 percent between 1996 and 2000, to $15.7 billion, with direct-to-consumer ads the fastest-growing expense. If one expands the definition of marketing to include continuing medical education, physician support meetings, and postmarketing research (sometimes called fourth-phase clinical trials), which are aimed almost exclusively at expanding sales of the drugs by getting them into the hands of more doctors, then the total marketing budgets among drug industry firms may have exceeded $40 billion. Meanwhile, research-and-development budgets rose at a slower pace—52.7 percent—to $25.7 billion.35

By decade’s end, with drug costs soaring at double-digit rates, these skewed priorities—which were a major component in the rising cost of drugs—were again drawing fire from the guardians of scientific integrity. “The industry depicts these huge expenditures as serving an educational function,” the New England Journal of Medicine editorialized.

It contends that doctors and the public learn about new and useful drugs in this way. Unfortunately, many doctors do indeed rely on drug-company representatives and promotional materials to learn about new drugs, and much of the public learns from direct-to-consumer advertising. But to rely on the drug companies for unbiased evaluations of their products makes about as much sense as relying on beer companies to teach us about alcoholism. The conflict of interest is obvious. The fact is that marketing is meant to sell drugs, and the less important the drug, the more marketing it takes to sell it. Important new drugs do not need much promotion. Me-too drugs do.36