CHAPTER 7

THE COMMERCIAL TAKEOVER OF MEDICAL KNOWLEDGE

From their first day of training, medical students are taught to trust the research published in peer-reviewed medical journals. They learn to take for granted that publication of research findings in these journals ensures that the principles of rigorous science have been followed: that the research has been properly designed to answer the question in a way that can be translated into clinical practice; that the data have been analyzed fairly and completely; that the conclusions drawn are justified by the research findings; and that the scientific evidence that has been published constitutes our best medical knowledge. This medical literature then serves as the source that enables doctors to keep current with new developments in medicine.

As part of my fellowship in the early 1980s, I spent many hours with some very smart people, meticulously analyzing and critiquing scientific articles. Of course there were flaws and limitations in virtually every study, but I can’t remember a single instance when the validity of a study was called into question because of manipulation of the data or compromise of the rules of science to gain commercial advantage. That vision of the medical literature now seems as quaint as Norman Rockwell’s painting of the boy standing on a chair, bending forward slightly, about to get an injection in his backside from his trusted doctor.

It’s not news that medical research has become big business, often with billions of dollars on the line. The problem is that the search for scientific truth is, by its very nature, unpredictable, and this uncertainty is hardly optimal from a business point of view. There is far too much at stake to leave this process to the uncertainties of science. In this context, the role of the drug and medical-device companies has evolved so that their most important products are no longer the things they make. Now their most important product is “scientific evidence.” This is what drives sales. In this commercial context, the age-old standards of good science are being quietly but radically weakened, and in some cases abandoned. Here’s how it works.

THE MEDICAL INDUSTRY STARTS TO CALL THE SHOTS

Prior to 1970, medical researchers had relatively little problem obtaining funding from the National Institutes of Health, and few medical studies were sponsored solely by drug companies. An article published in the journal Science in 1982 describes medical scientists thumbing “their academic noses at industrial money” in the 1970s. But as government support for medical research started to decline, scientists and universities were forced to look for alternative sources of support for their research. The medical industry was more than willing to step in and lend a helping hand. Universities had no choice, and researchers’ attitudes about commercial funding changed. Government funding continued to decline so that by 1990 almost two-thirds of requests for research funds from the NIH were not granted. Meanwhile, between 1977 and 1990, drug company expenditures on research and development increased sixfold, and much of the money went to support university-based clinical research.

This shift in the source of funding set the stage for what was to follow. In 1991, four out of five commercially sponsored clinical drug studies were still being conducted by universities and academic medical centers. Academic researchers still played key roles in all phases of the research, from designing studies to recruiting patients to analyzing data to writing the articles and submitting them for publication. This may have been good for medical science and good for universities, but it was certainly not optimal for the drug and medical-device companies. Research done in university medical centers cost more and involved more administrative hoops and delays. Most important, the checks and balances present in an academic environment could be sidestepped if the research dollars were taken elsewhere.

As drug and biotech industries assumed an ever-larger role in funding clinical trials (reaching 80 percent by 2002), they increasingly exercised the power of their purse. Control over clinical research changed—quietly at first, but very quickly, and with profound effects on medical practice. The role of academic medical centers in clinical research diminished precipitously during the 1990s as the drug industry turned increasingly to new independent, for-profit medical research companies that emerged in response to commercial funding opportunities. These companies could gain access to patients for clinical research through community-based doctors, or play a larger role in research design, data analysis, and even writing up the findings and submitting complete articles to journals for publication. By 2000, only one-third of clinical trials were being done in universities and academic medical centers, and the rest were being done by for-profit research companies that were paid directly by the drug companies.

Increased reliance on private research companies allowed the drug industry to kill two birds with one stone: It could now call the shots on most of the studies that were evaluating its own products without having to accept input from academics who were grounded in traditional standards of medical science. And the increasing competition for commercial research dollars put academic centers under even more pressure to accept the terms offered by the commercial sponsors of research, threatening the independence and scientific integrity that had been the hallmark of the academic environment. In 1999 Dr. Drummond Rennie, deputy editor of the Journal of the American Medical Association, characterized the response of academic institutions to this changing climate: “They are seduced by industry funding, and frightened that if they don’t go along with these gag orders, the money will go to less rigorous institutions. It’s a race to the ethical bottom.”

AN ALARM IS SOUNDED

In September 2001 an unprecedented alarm was sounded. The editors of 12 of the world’s most influential medical journals, including the Journal of the American Medical Association (JAMA), the New England Journal of Medicine (NEJM), The Lancet, and the Annals of Internal Medicine, issued an extraordinary joint statement in their publications. In words that should have shaken the medical profession to its core, the statement told of “draconian” terms being imposed on medical researchers by corporate sponsors. And it warned that the “precious objectivity” of the clinical studies that were being published in their journals was being threatened by the transformation of clinical research into a commercial activity.

The editors said that the use of commercially sponsored clinical trials “primarily for marketing . . . makes a mockery of clinical investigation and is a misuse of a powerful tool.” Medical scientists working on corporate-sponsored research, the editors warned, “may have little or no input into trial design, no access to the raw data, and limited participation in data interpretation.”

Commercial influence on medical research raises two kinds of concerns: First, what is being studied? Those who pay the piper get to call the tune. The drug companies’ funding buys them the right to set the research agenda. The result of commercial sponsorship is that medical knowledge grows in the direction that maximizes corporate profits, in much the same way that plants grow toward sunlight. The questions that do get answered, and thus become our medical knowledge, are often not the ones that will contribute most to improving our health.

Second, is commercially sponsored research “disinterested,” or neutral, enough to stand as good science? There is mounting evidence that it is not. One would have expected that, after the editors’ extraordinary warning of the growing threat to the integrity of clinical research, scientific business would not just go on as usual; that this public airing of concern about the health of our medical science would have created a stir in the media and alerted doctors across the country to the commercial bias in their most trusted source of medical knowledge. But it didn’t, and most doctors still hold fast to the basic tenet of their training: that the scientific evidence reported in respected peer-reviewed medical journals is to be trusted and should serve as the basis of good medical care.

Studies repeatedly document the bias in commercially sponsored research, but the medical journals seem powerless to control the scientific integrity of their own pages. In 2003, separate studies published in JAMA and the British Medical Journal showed that the odds that a commercially sponsored study will favor the sponsor’s product are 3.6 to 4 times greater than the odds for studies without commercial funding. And in August 2003 a study published in JAMA found that among the highest-quality clinical trials, the odds that those with commercial sponsorship will recommend the new drug are 5.3 times greater than for studies funded by nonprofit organizations. The authors noted that the lopsided results of commercially sponsored research may be “due to biased interpretation of trial results.” They cautioned that readers should “carefully evaluate whether conclusions in randomized trials are supported by data.” In other words, doctors are warned that the conclusions of even the best research published in the best journals cannot be taken at face value: Caveat lector—let the reader beware. This is the sorry state of the “scientific evidence” on which medical practice is based in the United States today.

Although many doctors have a gut feeling that there is a pro-industry bias in the scientific evidence that guides their care, almost all of the information that comes their way, including the opinions of the experts they trust, reinforces the validity of this “knowledge.” Moreover, the findings are made to appear so overwhelmingly compelling, and hold out such enormous hope of providing ever more effective care to their patients, that it is hard not to be a believer. There is a magical quality to all this progress that causes us to suspend our better judgment and seduces us into believing that what we are hearing and seeing is really true.

The techniques used by world-class magicians are nearly impossible to spot, but once their methods are exposed, the magic quickly fades. The rest of this chapter explores the techniques used by the most talented masters of commercial medicine to brilliantly skew and slant their findings in the production of their “scientific” illusions.

BROADENING THE MARKET: WHO NEEDS A DEFIBRILLATOR?

After new drugs and medical devices are introduced to the market, the manufacturers go to great lengths to convince health care professionals that their products should be used for an ever-expanding range of symptoms. The case for implantable defibrillators is a perfect example. A patient of mine, Mr. Peters, is a 78-year-old easygoing retired mechanic who has been living alone since his wife passed away. A few years ago he was hospitalized for what turned out to be a small heart attack. While he was resting peacefully in his hospital bed, without warning his heart suddenly went into ventricular fibrillation (a rapidly fatal arrhythmia in which the heart’s ventricular contractions become chaotic and ineffective). The nurses saved his life by responding to the alarm set off by his heart monitor, immediately initiating cardiopulmonary resuscitation (CPR) and successfully shocking his heart back into a normal rhythm with defibrillator paddles applied to his chest.

The risk of this lethal arrhythmia recurring over the next two years was high, making Mr. Peters a perfect candidate for an implantable cardiac defibrillator. Just like U.S. Vice President Cheney, Mr. Peters had a defibrillator—an electrical device slightly smaller than a pack of cigarettes—surgically inserted underneath the skin of his chest.

About two months later, Mr. Peters was standing in his kitchen when, with absolutely no warning, he was suddenly knocked to the floor. Lying there, he realized that the jolt must have come from the cardiac defibrillator, and indeed, the recorder built into the device showed that his heart had once again gone into ventricular fibrillation. This episode probably would have been fatal without the defibrillator. When Mr. Peters told me this story, he was clearly grateful that his life had been saved by the device. And he chuckled about having been knocked to the floor.

The cost of the defibrillator is another story—about $25,000 for the device and another $5,000 to $15,000 for the doctor and hospital charges. Medicare covers the costs, and the therapy is, literally, lifesaving. But the number of people who survive ventricular fibrillation to become candidates for implantable defibrillators is limited. After its initial success with patients such as Mr. Peters, Guidant, the manufacturer, set its sights on a much larger group of patients.

Guidant turned its attention to the 400,000 Americans whose hearts are weakened each year by heart attacks, but who, unlike Mr. Peters, have not experienced life-threatening disturbances of their heart rhythm. These patients have a much higher risk of dying than do heart attack victims whose hearts remain strong: about 20 percent die in the 20 months following their heart attacks. Guidant hit a grand slam when a study was published in the NEJM showing a significant benefit of implanted defibrillators in this population. The patients who were randomly assigned to receive a defibrillator had 31 percent less risk of dying over the next 20 months than the patients in the control group. The article concluded that “prophylactic implantation of a defibrillator [in patients with hearts weakened by heart attacks] improves survival and should be considered as a recommended therapy.”

On the surface, this appears to be the best of all worlds: private enterprise motivated by the prospect of greater earnings discovering new ways to save lives. But let’s look at the results of this study from a slightly different perspective. For the first nine months of the study there was no difference between the death rate in the people who got the defibrillator and those who did not. Over the next 11 months, 5.6 percent fewer people who received defibrillators died. Based on these results, if 1000 heart attack patients with weakened hearts received defibrillators, a total of 56 would be alive at the end of 20 months who would otherwise have died. The other 944 patients would derive no benefit. In fact, there would be a downside for them: for each life saved by the defibrillator, there would be one additional hospitalization for congestive heart failure among the people who got defibrillators, compared with the control group.
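
To make the arithmetic behind these figures explicit: a 5.6 percentage-point absolute difference in deaths means that roughly 18 patients must receive a defibrillator for each life saved, which is where the figure of 56 lives per 1000 implants comes from. The sketch below simply restates that calculation; it uses only the numbers already quoted from the study and is not the study’s own analysis.

```python
# Back-of-envelope arithmetic for the defibrillator trial figures quoted above.
# The only input is the 5.6 percentage-point absolute difference in deaths over
# 20 months taken from the text; everything else follows from it.

absolute_risk_reduction = 0.056  # 5.6 fewer deaths per 100 patients over 20 months

number_needed_to_treat = 1 / absolute_risk_reduction      # patients treated per life saved
lives_saved_per_1000 = 1000 * absolute_risk_reduction     # lives saved per 1000 implants
no_benefit_per_1000 = 1000 - lives_saved_per_1000         # patients with no survival benefit

print(f"Number needed to treat: about {number_needed_to_treat:.0f}")
print(f"Lives saved per 1000 implants over 20 months: about {lives_saved_per_1000:.0f}")
print(f"Patients per 1000 deriving no survival benefit: about {no_benefit_per_1000:.0f}")
```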

The cost per year of life saved? Between $1.1 and $1.5 million, without including the cost of the additional hospitalizations required for the people who developed congestive heart failure.* Though it may sound callous, around $100,000 per year of life saved is considered the upper limit of cost-effectiveness for routine medical interventions. Although this number may be drifting upward as new, more expensive technologies are introduced, more than $1 million per year of life saved is clearly a staggering sum for any nation, even the richest in the world.
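
The cost-per-life-year figure can be reconstructed only roughly, because it depends on how many years of life the survivors actually gain within the 20-month trial window, a number the text does not give. The sketch below is purely illustrative: it uses the device and procedure costs quoted earlier and assumes, solely for the sake of the arithmetic, that each of the 56 survivors per 1000 patients gains about half a year of life during the trial period. Under those assumptions the result lands in the same general range cited above; formal cost-effectiveness analyses rest on more detailed modeling.

```python
# Illustrative cost-effectiveness arithmetic -- NOT the study's own analysis.
# Device and procedure costs come from the text; the half-year-of-life-gained
# figure is an assumption made purely to show how such estimates arise.

patients = 1000
cost_low = 25_000 + 5_000      # device plus low-end doctor and hospital charges
cost_high = 25_000 + 15_000    # device plus high-end doctor and hospital charges

lives_saved = 56                 # per 1000 implants over 20 months, from the figures above
years_gained_per_survivor = 0.5  # ASSUMPTION: average life-years gained within the trial window

life_years_gained = lives_saved * years_gained_per_survivor

for label, cost_per_patient in (("low estimate", cost_low), ("high estimate", cost_high)):
    cost_per_life_year = patients * cost_per_patient / life_years_gained
    print(f"{label}: about ${cost_per_life_year:,.0f} per year of life saved")
```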

The NEJM article failed to mention that there is good evidence that there are other, much less expensive ways to prevent many more deaths among these high-risk patients. Only three years before the NEJM study, Circulation, the journal of the American Heart Association, published an article in which a group of Italian researchers looked at the effects of exercise training on a similar group of people with weakened hearts. This study randomized patients either to exercise training three times a week for eight weeks and then twice a week for one year, or to a control group that received no exercise training. The results were dramatic: The risk of death was reduced by 63 percent in the exercise group (more than twice the benefit of the defibrillator); the risk of hospitalization for congestive heart failure went down by 71 percent (instead of up by 33 percent in the patients who received implanted defibrillators); and both exercise capacity and quality of life improved significantly in the exercise group and remained improved for the 40 months of the study (p < .001 for both).

The patients in this study were not exactly the same as those in the defibrillator study, but they did have similar mortality rates (about 20 percent per 20 months for the control groups). In absolute terms, twice as many lives were saved by exercise (22.8 percent over 40 months) as were saved by implanted defibrillators (5.6 percent over 20 months).

The study sponsored by Guidant made no mention of changes in exercise capacity and quality of life in the heart attack patients who received implanted defibrillators. Nor did it reference the Italian study in Circulation showing the dramatic benefits of exercise in a similar population of patients.

There is another effective and inexpensive tool to help these patients that was overlooked: smoking cessation. Eighty percent of the patients in the defibrillator study were either “current or former smokers.” How many of those patients were still smoking? The NEJM article does not tell us, but we do know from a review article in the Archives of Internal Medicine that smoking cessation after heart attack is associated with 1.5 to 2 times as much benefit as a defibrillator. The NEJM article reporting the benefits of implanted defibrillators did not venture beyond the interests of the study’s sponsor; there was no mention of exercise, smoking cessation, or other lifestyle changes.

While there were no technical violations in Guidant’s defibrillator study, sleight of hand was at work: the study was presented as if its purpose were to determine the best treatment for heart attack patients with weakened hearts. Closer inspection suggests that its real purpose was to create scientific evidence that would support sales of Guidant’s product. The study of defibrillators could easily have been designed to include lifestyle interventions, but, like the vast majority of commercially sponsored studies, it didn’t. Such a study might well have shown that defibrillators do play a role in the optimal treatment of some heart attack patients with weakened hearts. If so, that finding would have provided invaluable information to doctors. Instead, we are left not knowing the appropriate role of this potentially remarkable device in post–heart attack patients. Such research issues will not be addressed by the drug and medical-device companies as long as sales are rolling along. Why would the company that makes implantable defibrillators risk doing a study that might show that lifestyle changes were more effective than its expensive devices? That’s not its job.

TINKERING WITH DOSAGES

The first step in a clinical trial is to decide what the drug or medical device will be compared with. The researchers then decide upon the doses for both the new drug and the comparison drug(s). Companies can design studies in which the doses of the drugs being compared are not equivalent.

For example, Nexium, the “purple pill” for gastroesophageal reflux disease (GERD), is chemically almost identical to the acid-blocking drug Prilosec. Both are manufactured by AstraZeneca. In 2001, the patent on Prilosec was about to expire, meaning the drug’s “recipe” would enter the public domain and other companies could manufacture generic equivalents selling for a small fraction of the brand-name price. So AstraZeneca sponsored “head-to-head” studies between Prilosec and Nexium, whose patent would remain in effect for several more years. One such study, done at the Cleveland Clinic, concluded that Nexium “demonstrates significantly greater efficacy than [Prilosec] in the treatment of GERD patients with erosive esophagitis.” It sounds as though doctors should abandon Prilosec and start prescribing the newer Nexium. The catch is that the dose of Nexium used in the study was 40 mg, while the dose of Prilosec was only half that. Would 40 mg of Prilosec daily work as well as 40 mg of Nexium daily? The drug company never bothered to find out. Does 20 mg of Nexium work better than 20 mg of Prilosec? Not according to AstraZeneca’s own research. Nonetheless, Nexium 20 mg costs $4.90 per dose, while Prilosec 20 mg without a prescription costs about one-eighth as much.

COMPARING SOMETHING WITH NOTHING

One might think that a new drug earns its place among preferred therapies only after it has been shown to be superior, or at least equal, to the best available therapies. Often this is not the case. Expensive brand-name drugs are frequently tested against a placebo (meaning no therapy) even when effective alternative therapies are already in use. Evidence of being significantly more effective than no treatment is sufficient for the FDA to approve new drugs and for doctors to prescribe them in place of older, established, and usually less expensive treatments. A study of OxyContin, a long-acting form of oxycodone (the narcotic in Percocet), provides a particularly dramatic example.

The study was designed to test OxyContin’s ability to provide relief to patients who were having “moderate to very severe” pain following knee replacement surgery. Patients were randomized into two groups: those assigned to the treatment group received OxyContin twice daily as “preemptive” pain medication, in a dose equal to six Percocet tablets over each 24-hour period. The patients assigned to the control group (also having moderate to very severe pain) were given twice-daily preemptive doses of a placebo. Patients in both groups could request a single Percocet tablet every four hours if they were uncomfortable, and the preemptive doses of pain medication were adjusted based on patients’ requests for additional medication—meaning that the patients in the treatment group who were having breakthrough pain received a higher dose of OxyContin and those in the control group received a higher dose of an inert pill. Can you guess which group had more pain? A hint: the people in the OxyContin group averaged the equivalent of more than 10½ Percocet tablets per day, while the people in the control group averaged 2½ Percocet tablets per day. The results of this study, published in the Journal of Bone and Joint Surgery, concluded: “Preemptive use of controlled-release oxycodone [OxyContin] during rehabilitation following total knee arthroplasty [replacement] leads to improved pain control, more rapid functional recovery, and a reduced need for inpatient rehabilitative services.”

In other words, treatment of moderate to very severe pain after knee replacement surgery with preemptive doses of OxyContin is superior to treatment with preemptive doses of nothing. Based on this study, the drug manufacturer earned the right to claim that treating post–knee replacement surgery patients with OxyContin significantly decreases their pain, facilitates their rehabilitation, and shortens the time spent in rehabilitation centers. Does this mean that routine “preemptive” treatment with OxyContin is more effective than routine “preemptive” treatment with other shorter-acting (and much less expensive) pain medication? This study leaves that question unanswered.

STUDYING THE WRONG PATIENTS

The next step in designing a clinical trial is to determine the characteristics of the people to be included in the study. Ideally, people included in a trial reflect the population of patients to whom the results will be applied—those most likely to use the drug or device being tested. But this is not always the case, as we’ve seen with the studies supporting the use of both Vioxx and Pravachol. Many studies choose a population that is younger and fitter than the target population, and therefore less likely to show side effects. An editorial in the Canadian Medical Association Journal pointed out that only 2.1 percent of all patients in studies of anti-inflammatory drugs are over the age of 65; yet senior citizens, the editorial points out, “are among the largest users” of these drugs and are more likely to have serious complications from them. The editorial also indicts research on drugs for Alzheimer’s disease: A study to determine the effectiveness of one such drug, Aricept, restricted the range of patients to age 65 to 74 and excluded people with other medical problems besides Alzheimer’s disease, thus minimizing the likelihood of side effects. The problem is that the vast majority of patients for whom this drug will be prescribed are older than this, and therefore the results of the study do not apply to them. As the editorial pointed out, “If frail older patients are going to be targeted for dementia therapy we need to study this group in clinical trials to ensure the safe administration of the drug.”

Studies of cancer drugs are similar. Nearly two-thirds of all cancer patients are 65 or older, but only one-quarter of the people in cancer studies have reached 65. Most of these studies exclude older patients by requiring that participants be able to take care of themselves independently or be able to work. Conducting research on only the strongest subset of cancer patients is not a good way to find out how to treat most cancer patients. Perhaps older people will have more severe reactions or derive less (or possibly more) benefit from the cancer therapies being tested. In any event, the systematic inclusion of unrepresentative patients in clinical studies may be good for the profits of the commercial sponsors of the studies (at least in the short term), but it is not good for the people who will receive care based on this distorted “science.”

GETTING OUT WHILE THE GETTING IS GOOD

Even when studies are designed to enroll the right mix of patients, make fair comparisons between drugs, and measure valid end points, there is still no guarantee that they won’t be prematurely stopped. That’s what happened in a Pharmacia-sponsored study ironically titled CONVINCE. The study compared Pharmacia’s blood pressure drug Covera, a long-acting reformulation of an older calcium channel blocker, with far less expensive standard therapies, a beta blocker (atenolol) and a diuretic (hydrochlorothiazide). This huge study included 16,600 patients and was planned to continue for five years, yet it was stopped two years early. The results up to that point showed that the sponsor’s more expensive drug was slightly less effective at preventing the complications of high blood pressure than the less expensive drugs. According to an editorial in JAMA, the decision to stop the study went against the recommendation of its own data and safety-monitoring board. Ignoring its own experts, the CONVINCE study quit while it was behind. What rationale did the sponsor give for stopping the drug trial? According to the JAMA editorial, “business considerations.” What were they? We may never know, but I’d hazard a guess that it had something to do with the fact that the sponsor’s drug, costing about $1.50 per day, was proving to be no better than the generic drugs that cost as little as 15 cents per day. Nonetheless, doctors continue to prescribe calcium channel blockers for hypertension more than any other drugs, believing them to be in some way better. The marketing is evidently still “convincing,” even if the scientific evidence is not.

KEEPING THE REAL DATA HIDDEN

Often the medical researchers who carry out company-sponsored studies are not even allowed to see all of the data from the studies they are working on. These researchers are left in the position of analyzing and including in their articles only the data that the drug or device manufacturers have allowed them to see. In May 2000, Dr. Thomas Bodenheimer brought many of these issues to light in an important article in the New England Journal of Medicine titled “Uneasy Alliance: Clinical Investigators and the Pharmaceutical Industry.” One researcher quoted in the article explained that controlling access to the data allows drug companies to “provide the spin on the data that favors them.”

The September 2001 joint statement issued by the editors of major medical journals weighed in heavily on this important issue: “we strongly oppose contractual agreements that deny investigators the right to examine the data independently. . . . Such arrangements not only erode the fabric of intellectual inquiry that has fostered so much high-quality clinical research, but also make medical journals party to potential misrepresentation.” Practicing doctors count on the articles in medical journals to present and interpret the complete data, thus providing the “scientific evidence” that they trust to guide their patient care decisions. If even the researchers who write the articles have access to only the data that the corporate sponsors allow them to see, how can anyone have confidence in the “scientific evidence” published in the medical journals? And how can anyone have confidence in the medical care that is based upon results that have been censored to serve commercial interests?

The editors revised the guidelines of the International Committee of Medical Journal Editors to make explicit the recommendation that researchers have control over their data, analysis, and publication of their work. A follow-up study was done a year later to see if the new guidelines were being honored in university-based research contracts. It turns out that, despite their highly unusual public statement, the medical editors might as well have been whispering into the wind. The study found that their recommendations had not been implemented, and concluded that “academic institutions rarely ensure that their investigators have . . . unimpeded access to trial data.”

Before any medical article is accepted for publication in a respectable journal, it is peer-reviewed. Independent experts are called upon to evaluate the study’s data, and to concur (or not) with the authors’ analyses and conclusions. Most doctors believe that this peer-review process guarantees the integrity and completeness of the scientific evidence presented. But peer reviewers see only the data that have been included in the article—not all of the data the authors had access to and certainly not all of the data from the study. Readers of medical journals cannot assume that the process of peer review ensures fair and impartial presentation of research results.

USING GHOSTWRITERS AND RUBBER STAMP EXPERTS

Yet another way that drug companies can make sure that research results are written to best represent their interests is to hire ghostwriters to write the original draft of the article after a clinical trial is completed. As described by Melody Petersen in the New York Times, the ghostwriter submits his or her draft to the drug company, which then passes it along for final approval to the authors of record—often busy doctors who are happy not to have to labor over the first draft. The problem with this system is that the drug companies get to infuse their perspective into the results from the very beginning. Dr. Linda Logdberg, a former medical ghostwriter, explained that drug companies “will drop a doctor if they don’t think he will be particularly malleable.” The New York Times article says, “The result . . . is marketing masquerading as science.” According to a study published in JAMA, 11 percent of the articles published in peer-reviewed medical journals are written by “ghost authors.” (And 19 percent of the articles named “honorary authors” who had not contributed enough to the research and writing to justify being listed as authors.)

CONTROLLING THE DAMAGE

Even when studies that do not support the use of a sponsor’s product are published in medical journals, there is still a chance that well-funded marketing and public relations efforts will be able to protect drug sales. The Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attacks Trial (ALLHAT) study,* for instance, compared the effectiveness of four drugs in preventing complications from high blood pressure. The study had been designed to measure important outcomes (heart attack and the broader category of cardiovascular disease—heart disease, stroke, other vascular disease, and the need for cardiac procedures to open blocked arteries). The study was to continue for four to eight years, but a part of it was stopped prematurely because the people who had been assigned to take one of the brand-name blood pressure drugs, Cardura—manufactured by Pfizer—were developing significantly more cardiovascular complications (particularly congestive heart failure) than the people taking a diuretic. At the time the results were published in JAMA, in April 2000, about $800 million worth of Cardura was being sold worldwide each year. The diuretic that was proving more effective than Cardura at preventing the complications of high blood pressure cost about one-seventh as much.

According to a report in the British Medical Journal, as soon as Pfizer learned of the disastrous results for its drug, it hired damage-control consultants. The consultants discovered that most doctors simply weren’t aware of this research, and weren’t aware that Cardura ought not to be their first choice for the treatment of high blood pressure. So Pfizer simply kept quiet.

The American College of Cardiology (ACC), however, responded to the findings published in the JAMA article by issuing a press release, posted on its website, recommending that doctors “discontinue use” of Cardura. But within hours, the ACC downgraded its warning, recommending only that doctors “reassess” their use of Cardura. What happened? A confidential memo from Pfizer to the ACC requested a “clarification” of the ACC’s original press release. Bear in mind that Pfizer contributes more than $500,000 each year to the ACC.

The next round of results from the ALLHAT study came out two years later and contained more bad news for the manufacturers of brand-name blood pressure medicines. Again the low-cost diuretic was shown to be equal to or better than the higher-cost drugs—this time a calcium channel blocker (Norvasc) and an angiotensin-converting enzyme (ACE) inhibitor (Zestril and Prinivil). If medical practice were truly “evidence-based,” these results would have been a major problem for the manufacturers of the far more expensive but not as effective brand-name drugs. But not such a problem if the game is really hardball dressed up as evidence-based medicine. A strategic marketing consultant for the pharmaceutical industry was quoted in the British Medical Journal as saying, “So you’ve got one study that says yes, you should [use a diuretic], then starting the day after, you’ve got a $10 billion industry . . . and 55 promotional events . . . for an ACE inhibitor coming back in and saying ‘Here’s why my ACE inhibitor is safe and here’s why you should be using this.’ I mean, it’s promotion. Can ALLHAT stand up to that?” Almost certainly not.

Research results cannot always be hidden when studies don’t come out in the drug company’s favor, but that doesn’t mean drug companies don’t try to influence researchers to minimize the damage. Dr. William Applegate, then at the University of Tennessee, was a principal investigator in a study of a new blood pressure pill, DynaCirc, sponsored by the drug maker Sandoz (now Novartis). Not long before a dramatic meeting at which researchers were going to be shown the results of their study, Applegate was offered a $30,000-a-year consulting position with the drug company. He turned down the offer. Then, when he saw the data, he told the Baltimore Sun, “I thought the company was trying to buy my favor and my opinion.” The reason was simple: Sandoz’s new blood pressure drug had a higher rate of complications than the older drug with which it had been compared. The company twice made offers of research grants to Applegate’s research center, each time asking whether he had reconsidered his conclusions about the study.

Applegate eventually resigned from the project along with three of his colleagues. In a letter to JAMA, explaining the reason for their departure, the investigators stated: “We believed that the sponsor of the study was attempting to wield undue influence on the nature of the final paper. This effort was so oppressive that we felt it inhibited academic freedom and led to substantial differences . . . with regard to the ultimate presentation and interpretation of the results.” Dr. Applegate and his three colleagues endorsed the ultimate presentation of the study in JAMA, but most likely their willingness to resign on principle in the face of drug-company pressure played an important role in the publication of a fair report.

As the function of medical research in our society has been transformed from a fundamentally academic and scientific activity to a fundamentally commercial activity, the context in which the research is done has similarly changed: first in universities funded primarily by public sources, then in universities funded primarily by commercial sources, then by independent for-profit research organizations contracting directly with drug companies. And most recently, the three largest advertising agencies, Omnicom, Interpublic, and WPP, have bought or invested in the for-profit companies that perform clinical trials. These advertising agencies are now full-service operations, as an executive for one of the biggest health care marketing companies told the New York Times: “We provide services that go from the beginning of drug development all the way to the launch of your products.” The dialectic of the market rolls along.

There is nothing illegal or unethical about these commercial arrangements, but both the public’s interest and the commercial sponsor’s interest cannot always be served simultaneously: Either a study is designed to maximize sales or it is designed to determine the best way to prevent or treat a particular health problem. Certainly commercially sponsored research has produced important findings. But at best, the medical knowledge produced by commercial interests is restricted to the medical problems that are most profitable to study. And at worst, research is manipulated, misrepresented, or withheld, with the goal of maximizing sales. The most visible consequence of this is ignoring diseases like malaria, which causes millions of unnecessary deaths each year but has little appeal to industry because the disease occurs in the third world, where there are relatively few paying customers. Much less obvious is the extent to which it has become accepted as “normal” to sacrifice the well-known standards of medical science to achieve commercial goals.

The drug companies pour billions of dollars each year into medical research, and they need to have a number of successes in order to stay in business. Nonetheless, as Drs. Bruce Psaty and Drummond Rennie said in a JAMA editorial, “Medical research, even if it is conducted by the pharmaceutical industry, is not solely a commercial enterprise designed to maximize personal gain or company profits. The responsible conduct of medical research involves a social duty and a moral responsibility that transcends quarterly business plans or the changing of chief executive officers.”