Chapter 4
Improving R&D Output
Maybe the answer is for Pfizer not to worry so much about its own pipeline. Spending billions of dollars in R&D doesn’t change the fact that the odds of any single compound making it to market are long, indeed. Perhaps it makes more sense, over the long run, to save on R&D and wait to license or acquire drugs (or companies) once they have made it to market or shown strong enough signs of efficacy and safety to de-risk the proposition.
—Stephen D. Simpson, Investopedia, November 2, 2011
The pharmaceutical industry prides itself on its ability to discover and develop new medicines. Yet, this capability is being challenged because the industry’s productivity has not lived up to expectations in the past decade. To someone like me, who spent 30 years in pharmaceutical R&D, the thought of totally outsourcing R&D is preposterous. But I suspect that Simpson isn’t alone in his views. His challenge deserves an answer. Should research-based pharmaceutical companies forgo R&D altogether?
Before getting to the specifics of Simpson’s proposal, it is important to understand the nature of a company’s R&D budget line. First of all, everything is included in that figure: not just internal R&D costs but also things like the expenses generated in running co-development programs for clinical candidates partnered with other companies and milestone payments made to small companies whose early-stage candidates it has licensed. Second, the size of an R&D budget is largely driven by “D,” not “R.” Based on my personal experience, the research component of a pharmaceutical company’s budget tends to be only about 15% of the total, a consequence of the tremendous costs involved in developing a clinical candidate from Phase 1 through NDA approval.
In addressing Simpson’s proposition, it must be noted that companies are already outsourcing a significant part of their pipelines. Martin Mackay, President of AstraZeneca’s R&D division, recently said that the company’s goal is to have 40% of its pipeline generated from in-licensed compounds. My sense is that this is a goal shared by a majority of pharmaceutical companies, and, as will be discussed later, I support this strategy. But should a company be totally dependent on outside sources for its future?
Simpson’s comment that companies would be better off acquiring drugs that have shown enough signs of efficacy and safety to de-risk the proposition is, frankly, naïve. A compound can show promising signs of efficacy in Phase 2 studies, and you can license it at that point. However, you will still need to invest heavily in the Phase 3 programs needed to get the drug approved. Given the need these days to perform very costly outcomes and differentiation studies, the bulk of the development costs remains even under Simpson’s strategy. Furthermore, a program is never totally de-risked. There are a number of examples of drugs that proved to be commercial disappointments because of adverse events found post-launch, once the drug was being used by millions of people.
So perhaps, then, Big Pharma should forgo the licensing of compounds and simply buy companies, as Simpson suggests. History teaches that this strategy has its issues. Beyond the internal disruption to both your own organization and the acquired company, these mergers have not been viewed as financially attractive by Wall Street analysts, owing to the premium that the acquiring company must pay and the inability of the merged pipeline to sustain long-term growth targets. Besides, you cannot just assume that the smaller company will automatically roll over and let itself be purchased, nor can you assume that smaller companies will be available on demand.
While Simpson didn’t directly address the complete shutdown of a company’s early research activities, others have. I also find this problematic. As noted above, the “R” component makes up only a small percentage of the overall R&D budget. And, when internal efforts deliver new products, you own them completely—no milestone payments, no royalties, no co-marketing deals. Thus, these products are more profitable. Even Simpson acknowledged that “Pfizer has some encouraging drugs that should come out soon,” and most of these are internally derived. But it is also important to have a strong internal cadre of scientists to evaluate the strength of the supporting data for compounds being considered for in-licensing. A good research organization provides this. Many a deal has been quashed by internal scientists based on their rigorous reviews.
Any company forgoing internal R&D risks its future. It is not something I would recommend.
This is but one example of the many “solutions” proposed to fix pharmaceutical R&D. The topic draws interest from industry consultants, former R&D executives, Wall Street analysts, journalists, and even former FDA commissioners. It seems that everyone has the answer. Some of these proposals merit interest, but many have holes. This chapter reviews these ideas and critically evaluates them.
Over the past few years, a number of critics of biopharmaceutical companies have predicted the demise of the industry because of its dependence on blockbusters. A blockbuster is defined as a branded prescription drug that generates annual revenues of $1 billion or more. Discovering a blockbuster should be a good thing: it is a medicine prescribed to millions of people because of its beneficial effects on disease and suffering. However, many major blockbusters, like Zyprexa, Lipitor, and Plavix, have already lost or are about to lose their patent protection, and it is thought that there is a dearth of new compounds with blockbuster potential in the drug makers’ pipelines to take the place of these older products.
A few years ago, no less an authority than the former head of the FDA, Dr. David Kessler, slammed the blockbuster mentality, saying:
The model that we’ve based pharmaceutical development on the past ten years is simply not sustainable. The notion that there are going to be drugs that millions of people can take safely, the whole notion of the blockbuster, is what has gotten us into trouble.
Melody Petersen was even more strident in an opinion piece entitled “A Bitter Pill for Big Pharma”1:
For 25 years, the drug industry has imitated the basic business model of Hollywood. Pharmaceutical executives, like movie moguls, have focused on creating blockbusters. They introduce products that they hope will appeal to the masses, and then they promote them like mad.
It’s hard for me to envision my old boss, former Pfizer CEO Hank McKinnell, “taking a lunch” to discuss strategy with the heads of Paramount and Twentieth Century Fox.
First, it must be pointed out that a company doesn’t set its research priorities based on whether or not a program can eventually yield a blockbuster. Such predictions are difficult, if not impossible. For a new medicine to be successful, it must be safe and effective and must meet a major medical need. Assuming that, 15 years after the start of a new R&D program, the new compound finally gets approved, it then needs to get a favorable label from the FDA, reasonable pricing from those who reimburse drug costs, and acceptance by physicians and patients. A great example of the difficulty of predicting blockbusters, interestingly enough, is the biggest blockbuster in history—Lipitor. When Warner-Lambert was seeking a partner to help sell and market what proved to be the biggest selling drug of all time, the company approached Pfizer. The Pfizer marketing team’s analysis said that the peak sales potential of this medicine could be substantial. However, the actual peak in worldwide sales for Lipitor was almost $13 billion—more than anyone imagined. What happened?
The answer lies in the studies that Pfizer carried out with Lipitor AFTER it had already been approved and on the market. Pfizer invested over $800 million to show the importance of driving LDL cholesterol as low as possible. One such study was “Treating to New Targets” (also known as TNT). Conceptually, it was a simple study. Ten thousand patients with stable coronary artery disease and a baseline LDL level of 130 mg/dL (which was considered reasonable 15 years ago) were randomly assigned to receive either 10 mg or 80 mg of Lipitor and were followed for nearly 5 years. At the end of the study, those on 10 mg of Lipitor had a median LDL level of 101 mg/dL and those on 80 mg had a median LDL level of 77 mg/dL. More importantly, those on 80 mg of Lipitor had 22% fewer heart attacks and 25% fewer strokes.
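To be precise about what those percentages mean: a relative risk reduction (RRR) compares the event rates in the two arms. Reading the reported figures as relative risk reductions (the raw event counts are not reproduced here),

\[
\mathrm{RRR} = 1 - \frac{r_{80\,\mathrm{mg}}}{r_{10\,\mathrm{mg}}},
\]

so “22% fewer heart attacks” means the heart-attack rate in the 80 mg arm was about 78% of the rate in the 10 mg arm.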
This proved to be a landmark study. For the first time, it was shown that lower LDL is better and that, for people at high risk of cardiac events, driving LDL levels down can be lifesaving. Suddenly, Lipitor’s potency advantage proved to have a major clinical benefit. Pfizer also performed another major study known as CARDS (Collaborative Atorvastatin Diabetes Study), which showed for the first time that diabetics can reduce their risk of heart attacks and strokes by lowering their LDL levels with Lipitor. Similarly, in the lipid-lowering arm of the Anglo-Scandinavian Cardiac Outcomes Trial (ASCOT), lowering LDL cholesterol in patients with high blood pressure was shown to lower the risk of adverse cardiac events beyond what hypertension therapy alone achieved.
These studies and others helped to change medical practice. The importance of lowering LDL cholesterol as much as possible in patients at risk of a heart attack or stroke was unquestionable. In addition, these studies greatly expanded the patient population for those who would benefit from Lipitor therapy. With these data in hand, the sales of Lipitor soared. There is no doubt that the results from these studies proved to be crucial in recognizing the full potential of this important medicine.
That Lipitor evolved into the biggest selling drug of all time was due to a “perfect storm” of great efficacy, excellent safety, and the growing realization of the need to lower bad cholesterol (LDL) more than had been previously recognized. To base a company’s strategy on such a sequence of events is foolish.
Nevertheless, with the large number of blockbusters going off patent, stories abound on the topic of the biopharmaceutical industry’s need to change its business model. One such article had the following line regarding Pfizer:
The next step will be rebuilding the world’s biggest drugmaker into a smaller, faster moving company that focuses on development of biological drugs and specialty medicines.2
Unfortunately, specialty medicine R&D programs do not necessarily move more quickly to NDA approval. As was described earlier, the outstanding work that led to the cystic fibrosis drug, ivacaftor, began in 1989, and this drug reached the market in 2012. Furthermore, given the time it takes to go from an idea to a marketed product, any company, regardless of size, has a pipeline rooted in experimental medicines discovered five to ten years ago. Suddenly shifting one’s strategy to specialty medicines is not feasible.
In reality, biopharmaceutical companies have been exploring new medicines for diseases that, while not representing patient populations of hundreds of millions, nevertheless are major medical needs. Such drugs are now being called “niche blockbusters,” a term coined by Jonathan Rockoff in the Wall Street Journal in a story about a newly approved lung cancer drug called crizotinib (sold as Xalkori).3 The article details the history behind the discovery and development of this breakthrough medicine. As was described in Chapter 3, crizotinib is an ALK-inhibitor that targets the genetic abnormality that causes about 5% of new lung cancers that are diagnosed each year. The beauty of this drug lies in the fact that a patient newly diagnosed with lung cancer can undergo a genetic test to determine if his or her lung cancer is ALK-dependent. If it is, then the physician now has a drug that is almost guaranteed to work in this patient.
The crizotinib story is not unique. Roche also has recently launched vemurafenib, a targeted drug for melanoma. We are moving away from the days when the only drugs an oncologist had with which to treat patients were general cytotoxic compounds that came with myriad toxicities. The rapid advances being made in understanding the genetic basis of disease have led to the discovery and development of new drugs like these. But pharmaceutical R&D has been moving in this direction for a decade. After all, drug discovery is not an overnight process, and the research programs that are yielding these breakthroughs were started in the 1990s.
Thus, I am surprised by statements like the following, which appeared in the media as a result of Rockoff’s article: “Can targeted drugs save Big Pharma?” “Perhaps the pharmaceutical industry has come kicking and screaming (to this).” “New cancer pill gives hope, new strategy.” “There has been a paradigm shift.” There still remains the view that pharmaceutical companies are only interested in Lipitor-like blockbusters and that smaller commercial opportunities are disdained. Of course, every company would love to have a drug with enormous sales. But very significant commercial returns can be made with crizotinib, more than enough to justify its clinical development.
A number of years ago, I was asked by an industry analyst if I felt that the inevitable fractionation of patient populations by genetic subtype would be a death knell for Big Pharma. His rationale was that diseases like lung cancer would be treated with agents specific to a particular mutation found in only a subtype of lung cancer. Because each agent would be designed for very few patients, the result would be treatments far less commercially viable than a lipid-lowering agent designed to treat millions. I explained that most drugs that are broadly prescribed do not work in a significant percentage of patients. The current paradigm is for physicians to prescribe drugs and then monitor their patients to see if the drugs are working. Often, patients come back complaining that their drug hasn’t helped, leading the physician to try something new. This overall process is costly, inefficient, and frustrating to all concerned. By having specific, targeted drugs, physicians will have the confidence that the new medicine will help their patients, the patients will have confidence that the pills they are taking will help them, and payers will have confidence that they are paying for a worthwhile treatment. In such a world, while the number of patients taking each drug is smaller, better pricing should be obtainable due to improved drug effectiveness. Basically, you might not have a few $10 billion-selling drugs, but you’re likely to have many billion-dollar products.
This scenario is being borne out by crizotinib. In his article, Rockoff reported that market analysts are predicting peak sales of crizotinib to exceed $1 billion. This would put the drug in the top third in sales for all of Pfizer’s portfolio. Some might argue that the only reason that crizotinib will be a commercial success is that, as a cancer drug, it can command premium pricing. Perhaps this is true. But hopefully, genetically targeted drugs will be developed for other polygenic diseases like depression, schizophrenia, migraine, and so on. These patient populations should be large enough to support niche-like products with a reasonable price.
Targeted drugs have been envisioned and sought for over a decade. The first wave of these is now hitting pharmacy shelves. This is great news for patients and physicians—and not too bad for the companies developing them either.
There are a few lessons in all of this. First of all, predicting commercial success for a new medicine is always difficult. The biopharmaceutical industry has always been surprised, both positively and negatively, by the performance of new drugs. That is not going to change. To say that you are going to focus only on specialty products can be a prescription for disaster. Such compounds can play a role in a company’s portfolio of products, but this shouldn’t be the main driver for companies with sales of $25–$60 billion.
Second, from a discovery research standpoint, the resources needed to come up with a new compound for clinical development differ little between a specialty product and a projected blockbuster. Admittedly, the development costs for a specialty product can be lower, particularly for a so-called orphan disease for which there is no treatment and relatively few patients worldwide. But a big pharmaceutical company’s portfolio should have only a limited number of such approaches if it is to thrive.
Finally, there is a view that there are fewer and fewer opportunities for major blockbusters. I beg to differ. A truly effective and safe drug to cause weight loss would likely have sales in excess of Lipitor’s. The challenge in this field is clearing the high regulatory hurdle that exists for such a compound. A new drug that can slow or reverse Alzheimer’s disease would also have tremendous commercial potential because the incidence of this disease will surge over the coming years with the increasing life spans of people globally. Heart disease continues to be a problem, and the obesity/diabetes epidemic will likely reverse the progress made in this arena over the past decade. Will an agent that raises the good cholesterol, HDL, be a new breakthrough for treating cardiovascular disease? If yes, major blockbuster status will be achieved here as well. There are also other major medical needs awaiting new, effective treatments.
You don’t build a business strategy on the hope of discovering $10 billion/year products. You do base it on having the most productive R&D organization possible. And this leads to my final point. Slimming down R&D isn’t the answer. Rather, focus, stability, and resources are required for an R&D organization to thrive.
Drug discovery and development is founded on the premise that one can predict whether a compound will have clinical efficacy in a disease based on in vitro experiments in cells and in vivo experiments in animals. Unfortunately, scientists have shown for years that, while these models have some value, they are not foolproof. It has been said that cancer has been cured many times in mice. The problem is that results in rodents often don’t translate to humans. A human being is more than a giant two-legged mouse. Human biology and disease pathology are quite complex and cannot be easily mimicked in artificial laboratory settings.
Thus, scientists continually strive to find ways to identify research programs with inherently greater chances of success. This is referred to as “predictive innovation,” which Anders Ekblom, head of Science and Technology Integration at AstraZeneca, described in the following way4:
How early can I know that the approach I’m taking will definitely turn into a drug that delivers exactly what I would like to see? A lot of the cost in today’s drug development is the cost of failures. We are all trying to focus our energy on how we can get different technologies to better predict outcomes.
A valuable paper by scientists at Eli Lilly5 goes into great detail on this topic with respect to target selection. They believe that “validated targets for drug discovery are now materializing rapidly,” and the attractiveness of many new targets is enhanced by the availability of biomarkers and surrogate endpoints, which enable researchers to determine very early in clinical trials whether the hypothesis behind the compound’s activity is actually relevant in humans. This is an excellent way to proceed, and I believe that all biopharmaceutical companies are seeking this path.
However, this approach to drug discovery isn’t a guarantee of success, as evidenced by the two examples given in the Lilly paper. The first target is PCSK-9, a new approach for lowering LDL that was discussed in Chapter 3. There is a great scientific rationale for why an entity that blocks PCSK-9 function should be very effective in lowering a patient’s LDL cholesterol. I would be shocked if the current clinical candidates now in early development, such as the PCSK-9 antibodies, did not have the desired effects. But early clinical data alone cannot guarantee this approach’s success. Remember that the CETP inhibitor, torcetrapib, also had excellent supporting science to justify advancing this novel HDL elevator to Phase 3 studies (Part 3). Torcetrapib’s dramatic lipoprotein remodeling was known from the very first patients dosed. Only the Phase 3 results showed that this activity was irrelevant when trying to reduce heart attacks and strokes. The PCSK-9 blocker approach is based on the well-known benefits of lowering LDL cholesterol, so in theory the clinical effects on patients should be beneficial.
The scientific precedent that justifies the study of PCSK-9 blockers is also the approach’s Achilles’ heel. First of all, long-term studies comparing a PCSK-9 blocker with a statin will be needed to convince physicians and patients that they should switch to the former compounds. But payers will be another story. Why would they want to pay for a new but expensive treatment to lower LDL cholesterol when generic statins are available? Because the early PCSK-9 inhibitors are biologics, such as antibodies, they will not come cheaply. Thus, even if a PCSK-9 blocker were to successfully navigate the entire gamut of Phase 3 studies needed to justify FDA approval, it may be limited to use only in patients whose dangerously high LDL cholesterol levels are not controlled by statins.
The second target advocated by the Lilly authors as having a higher likelihood of success is an exciting new approach to treating pain involving a voltage-gated sodium channel known as NaV1.7. There is a rare disorder found in people born with a genetic mutation that prevents them from making NaV1.7. As a consequence of this defect, these people are insensitive to pain from birth. They are otherwise normal. Based on this genetic information, in theory a chemical that blocked NaV1.7 could be a great agent to treat severe pain without other side effects. Furthermore, one could demonstrate this activity in early clinical trials in people, and so clinical proof-of-concept could be had without hundreds of millions of dollars of investment.
This sounds great, but we must remember that clinical outcomes are difficult to predict. Take, for example, the studies with another novel experimental pain treatment, tanezumab, an inhibitor of nerve growth factor (NGF), a protein that modulates pain through sensitization of neurons. Multiple studies in animal models of pain show that NGF can both cause and augment pain; furthermore, blocking NGF alleviates pain. About a decade ago, scientists at Rinat, a biotech company since acquired by Pfizer, developed tanezumab, an antibody to NGF. It worked extremely well in animal models of inflammation, and so an obvious path for clinical study was to treat painful osteoarthritis of the knee, a poorly served condition.
The initial results with tanezumab were extremely exciting. Patients for whom pain medications no longer worked, and others who had been recommended for a knee replacement, suddenly felt great. Given that the sole biological role of tanezumab was to bind to NGF and prevent its harmful effects, it was thought that this antibody, as a targeted drug, would have a great safety profile. However, some patients exhibited a worsening of their arthritis, and some scientists have theorized that these patients felt so good that their rejuvenated, active lifestyles caused the additional joint damage. It turns out that complete elimination of pain in these patients may not be such a good thing, because pain serves as a warning sign that damage is occurring. Whether or not this theory is true, the clinical trials with tanezumab and the entire class of NGF inhibitors in knee pain were halted until regulators could better understand this effect and its overall impact on the risk–benefit profile.
The story isn’t over for this compound. This is an important enough area of research that the FDA convened a panel of outside experts to review the data. This Advisory Committee recommended that studies continue with this drug with the rationale that there are some pain conditions for which there are few, if any, options for the patient. For example, the risk–benefit profile for tanezumab may be highly favorable in treating cancer pain. Thus, the FDA has removed the hold and allowed clinical trials with tanezumab and other NGF inhibitors to resume.
Perhaps the NaV1.7 program will yield a tremendous pain reliever with minimal side effects. Then again, maybe it won’t. Pharmaceutical R&D is a high-risk, high-reward enterprise. There are no easy pathways to getting a major new medicine approved. “Predictive innovation” is part of the evolution of the drug discovery–development process. It may even improve the odds of success. But it is not a panacea.
Dr. Peter Hirth, the CEO of the biotech company Plexxikon, has had a very productive career in the biotech industry. He has played a key role in the discovery of three successful drugs: Recormon for anemia (sold by Roche), Sutent for kidney cancer (sold by Pfizer), and the recently approved Zelboraf for melanoma (to be sold by Roche). His stellar track record was the subject of an article in Forbes.6
His views on the problems with industry productivity can be summarized as follows:
Hirth’s serial success stands out vividly in a pharmaceutical industry that has for years suffered from a profound innovation drought. He says that large companies should learn from what Plexxikon has done with a staff of only 43, explaining that if he were running a big business like Pfizer, he would form small units of 40 or 50 researchers and fund them sparingly but promise them royalties on any drugs that succeeded. Productivity, he says, would go way up.
There are a few issues with this statement. First of all, in big companies like Pfizer, the drug discovery project teams do, in fact, contain 40–50 scientists. This was true in my day with the teams in disease areas such as osteoporosis, atherosclerosis, ophthalmology, and urology. Yes, there were larger teams in broad disease categories such as Neurosciences, but this area was made up of teams working on depression, schizophrenia, addiction, and so on. Breaking this area down into its component teams shows that the 40–50 scientist concept still holds. During my Pfizer tenure, the one exception was in oncology, where we had a group in excess of 200. Contrary to the view that size is detrimental to productivity, this group was the most productive we had at the time, as judged by the number of clinical candidates it produced relative to the group’s size.
Joshua Boger, an excellent scientist in his own right and the former CEO of Vertex, recently said that, in his experience, the relative size of the respective discovery groups doesn’t matter—it’s having the right people. I absolutely agree with him.
But the bigger issue for me with Hirth’s statement is the fact that the overall process of going from an idea to the discovery of a clinical candidate, and then converting that candidate into an approved new medicine, does take an army. It’s amusing to read “Plexxikon created Zelboraf . . . (and) Roche helped the tiny biotech test it.” This “help” undoubtedly involved hundreds of Roche scientists, who developed a formulation that enabled Zelboraf to be tested in the clinic, synthesized Zelboraf in sufficiently large quantities for clinical testing, ran the necessary toxicology studies in animals to show that the drug was safe, and carried out the full gamut of Phase 1, 2, and 3 trials to justify FDA approval. My guess is that if you created a list of all the people who were involved in the discovery and development of Zelboraf, it would number at least 500 people.
And that brings me to my final issue with Hirth’s stance: royalties. There are often seminal discoveries made in the development of a new medicine that extend beyond the discovery laboratory. Many a drug program has been saved by a key clinical observation on a drug’s activity, a breakthrough formulation that allows the drug to be made into a pill or capsule, or a key toxicology study. For example, the first observation of the potential use of Viagra for erectile dysfunction was made by a nurse who was monitoring male volunteers in the first clinical test of this drug. Should she get a royalty? I would argue that such a royalty scheme is unworkable because, in my experience, you would have to grant royalties to at least a dozen people who made seminal contributions. Unless the royalty was minuscule, it would greatly cut into the revenues of the company.
It is my experience that scientists are a highly motivated bunch. They are driven to use their scientific talents to discover and develop something that, if successful, could benefit millions of people around the globe. I am not sure that the potential for a royalty would cause them to work harder. They are already incredibly dedicated.
The term “drug repositioning” describes efforts to find new therapeutic uses for existing drugs. This also goes by other names, such as “drug reprofiling,” “drug repurposing,” or “reusable drugs.” The concept is based on the principle that it is possible to find new uses for drugs that have already been approved. A drug approved by the FDA for a specific condition has cleared all the major safety, efficacy, and pharmacokinetic hurdles. Any new use that may be uncovered can therefore be developed relatively quickly and cheaply since, theoretically, the major remaining hurdle for the repositioned drug would be a clinical trial in the new disease indication.
It is interesting that whenever researchers discuss the value of repositioning old drugs for new uses, they always use sildenafil (tradename Viagra) as their poster child. This drug was serendipitously discovered as an agent to treat erectile dysfunction (ED), even though that wasn’t the specific use it was designed for. However, this discovery was not so accidental. Sildenafil was designed as a potent inhibitor of an enzyme known as PDE-5. The interest in PDE-5 inhibitors stemmed from the fact that inhibition of this enzyme should amplify the effects of nitric oxide (NO) in vascular tissue beds. NO is well known to be a vasodilator. Pfizer scientists hoped that by blocking PDE-5 in the heart vasculature, arteries would dilate and the result would be enhanced blood flow in patients with cardiac disease like congestive heart failure.
Sildenafil did, in fact, cause vasodilation. However, this vasodilation was first observed in the penis and not the heart. Instead of being a breakthrough medication for heart disease, sildenafil became a major treatment for ED. So, yes, this was a biological consequence that was not initially envisioned. The key in all of this is that Pfizer scientists designed and synthesized a safe and effective PDE-5 inhibitor that could be tested in clinical trials to determine what the utility of such an agent would be. Sildenafil was, in fact, designed as a vasodilator. Its effects, however, were manifest in an organ other than the heart.
When a new mechanism is found to be effective in patients, a company’s scientists often explore where else such an agent may be of use. Pfizer researchers were also interested in learning whether sildenafil would cause vasodilation in other parts of the body. One theory was that the small arteries in the lung might also be sensitive to sildenafil’s effects. Patients with primary pulmonary hypertension (PHT) suffer from constriction of these arteries, which is extremely debilitating and leaves them struggling to breathe. Clinical trials showed that sildenafil was effective in treating PHT, and it is marketed for this condition as Revatio.
Another example is Pfizer’s tofacitinib, an inhibitor of the enzyme JAK-3. This orally effective drug was initially designed to be used as an agent to prevent organ transplant rejection. However, when the impressive early clinical data first came in, researchers began to envision other uses for a drug that acted by this mechanism, including rheumatoid arthritis and psoriasis. Hopefully, tofacitinib will soon be available for rheumatoid arthritis patients.
These two examples illustrate a few important points. The first is that, while serendipity is always appreciated, for any pharmaceutical research program to be successful you need a safe compound that targets a specific biological process. Once in the clinic, you may find that the mechanism for which the drug was originally designed does not prove to be the optimum use for the new drug (there is also a downside to this: sometimes the new mechanism may have a mechanistically related side effect that turns out to kill the drug). But you don’t go blindly into clinical trials with the hope that a PDE-5 inhibitor might do something beneficial in people. Rather, you must connect the mechanism to a biological effect.
The second point is that when a mechanistically exciting drug shows beneficial effects in a disease, the news spreads rapidly throughout a research organization. Scientists will share these results and then hypothesize where else such a compound may be effective. This leads to many other experiments to explore the new exciting finding and, potentially, new uses for this drug in medicine.
Finally, as a part of any drug’s preclinical testing paradigm, it is explored in perhaps 100 different assays to confirm the compound’s selectivity. Thus, if it were to possess other, unexpected activities, these are likely to be found in advance of the first human studies.
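To make the idea concrete, here is a minimal sketch of how such a selectivity review might be summarized, with entirely hypothetical target names, IC50 values, and threshold; real panels are far larger, and the acceptable window varies by program:

```python
# Hypothetical selectivity triage: flag any off-target assay in which the
# compound is potent within 100-fold of its primary-target potency.
PRIMARY_TARGET = "PDE-5"
SELECTIVITY_WINDOW = 100  # fold-difference treated as adequately selective

ic50_nM = {
    "PDE-5": 3.5,      # primary target
    "PDE-6": 120.0,    # closely related enzyme
    "PDE-11": 4200.0,
    "hERG": 25000.0,   # safety-relevant ion channel
}

primary = ic50_nM[PRIMARY_TARGET]
for target, ic50 in ic50_nM.items():
    if target == PRIMARY_TARGET:
        continue
    fold = ic50 / primary
    if fold < SELECTIVITY_WINDOW:
        print(f"Flag: only {fold:.0f}x selective over {target}")
```

In this toy panel the compound would be flagged for its relatively modest window over PDE-6, exactly the kind of finding a selectivity screen is designed to surface before the first human studies.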
Admittedly, I am skeptical of the potential for drugs that have been discovered and developed over the past decade or so to be repositioned. However, I think there might be potential for repositioning much older drugs. A great example of this is the thalidomide story.
In the late 1950s, thalidomide was marketed in Europe. This agent was prescribed to pregnant women to help overcome morning sickness and to help them sleep. Unfortunately, insufficient toxicology testing had been done on this drug before it was approved. Thalidomide is a teratogen, a compound that interferes with the growth of the fetus and causes severe birth defects. Thousands of babies were born with severe deformities because their mothers took this drug. Fortunately, thalidomide was never approved by the FDA for this use.
However, thalidomide was first repositioned 50 years ago.7
In the 1960s, Jacob Sheskin, while working at Hadassah Hospital at Hebrew University in Jerusalem, was trying to help one of his leprosy patients sleep. He found some thalidomide and, remembering that it had helped patients with psychological problems sleep, he tried it. Surprisingly, the response to thalidomide in this patient and others was dramatic. Within days, most of the symptoms of leprosy disappeared. In what is a great example of the appropriate use of a drug once its risk–benefit profile is understood, thalidomide has become the drug of choice to treat leprosy, albeit under conditions carefully controlled by physicians.
Thalidomide was repositioned yet again a few years ago. In investigating thalidomide’s beneficial effects in treating leprosy, it was found that this drug helps to stimulate the immune system. In addition, it appears to have antiangiogenic effects on tumor cells. This led scientists to explore the use of thalidomide in multiple myeloma, and it proved to be effective. It was approved by the FDA for this condition in 2006.
Efforts to reposition older drugs are flourishing in academic laboratories. Dr. Atul Butte and colleagues at Stanford have developed technology that allows them to rapidly screen genomic databases and identify cases in which a drug produces a change in gene activity opposite to the change caused by a disease.8 Such an observation with an older drug could point to a new use for it.
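A minimal sketch of the underlying idea, assuming (hypothetically) that disease and drug gene-expression signatures are reduced to simple up/down calls per gene; the real methodology uses ranked genome-wide profiles and statistical scoring, but the core computation is a search for opposition:

```python
# Toy signature reversal: signatures map gene names to +1 (up-regulated)
# or -1 (down-regulated). All gene names and values are hypothetical.

def reversal_score(disease_sig, drug_sig):
    """Fraction of shared genes that the drug pushes opposite to the disease."""
    shared = set(disease_sig) & set(drug_sig)
    if not shared:
        return 0.0
    opposed = sum(1 for g in shared if disease_sig[g] * drug_sig[g] < 0)
    return opposed / len(shared)

disease = {"GENE_A": +1, "GENE_B": +1, "GENE_C": -1}
old_drug = {"GENE_A": -1, "GENE_B": -1, "GENE_C": +1, "GENE_D": +1}

# A score near 1.0 marks the old drug as a repositioning candidate.
print(reversal_score(disease, old_drug))  # -> 1.0
```

A high score only generates a hypothesis; any candidate surfaced this way still needs laboratory and clinical confirmation.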
The work at Stanford is not unique. Academic researchers are exploring a wide variety of drug classes, such as phenothiazines, β-adrenergic receptor blockers, and nonsteroidal anti-inflammatory drugs, across a wide swath of diseases.9 Even the NIH has gotten into the act with a deal involving the exploration of new uses for dozens of compounds from major pharmaceutical companies. I firmly support these efforts to find new uses for old or discarded experimental medicines. Hopefully, this work will lead to a major breakthrough. But I am not convinced that this will be a major source of new products. Casting a broad net with the hope of finding something new will be very challenging. The thalidomide example is more the exception than the rule.
It is no surprise that pharmaceutical companies make use of consultants. Often, they have skills lacking in a company. It is always a good idea to have a set of fresh eyes looking at the challenges and issues you face. Sometimes their advice is of value, sometimes not. But their views certainly can serve to broaden the internal discussion you are having and the decisions you will make.
It is surprising, however, to read the comments of people who purportedly are knowledgeable about the biopharmaceutical industry yet who are totally off base with their views. A Wall Street Journal op-ed piece10 by Dr. Scott Gottlieb is particularly striking in this regard. Dr. Gottlieb was FDA deputy commissioner from 2005 to 2009. According to this op-ed, he now consults with drug companies. Here are some of his views and my thoughts on them.
Dr. Gottlieb’s piece contains other errors about how drugs were discovered as well. This discussion is not meant to single out Dr. Gottlieb. He is not unique in making recommendations on how to improve R&D productivity without necessarily having a complete understanding of the R&D process. Jack Scannell, an analyst at Sanford C. Bernstein, attributes part of the decline in R&D productivity to the industry’s use of high-speed technologies to aid in the drug discovery process.11
The majority of first-in-class drugs (33/50) had their origins in phenotypic screening or were derived by modifying natural substances that were already known to have some kind of biological action. In other words, the processes that the industry was trying to abandon proved to be more successful at delivering major innovation than the processes that the industry was trying to industrialize and optimize.
First of all, the industry hasn’t abandoned any method for discovering new biologically active molecules. Drug industry scientists are very pragmatic. They will use any method, tool, or paradigm to discover a new medicine. However, different projects call for different solutions. A few examples will help illustrate this.
In the 1990s, Pfizer scientists looked for compounds that could act as a nicotine partial agonist, a compound that they believed would be useful in treating smoking addiction. They approached this project as Scannell described. They started with cytisine, an alkaloid found in plants and known to bind to nicotinic receptors. After a couple of years of work spent making and testing modified versions of cytisine, they found varenicline (tradename: Chantix), an important medicine to help smokers quit. Also in that decade, Pfizer scientists discovered a blocker of the enzyme phosphodiesterase-5 (PDE-5) in a similar fashion. Starting with known, nonselective PDE inhibitors, the team was successful in its search and discovered sildenafil.
In both of these cases, chemists were able to start with a known molecule with limited biological potential and, using their insight and experience, design a successful new medicine. However, oftentimes there is no compound available that can be used as a starting point. In other words, project teams often have an interesting biological target with no known small molecule that interacts with it. The lack of a starting point (called a “lead”) became particularly problematic as the functions of more and more genes were discovered in the 1990s. Work from the Human Genome Project was providing exciting new theories on how to treat various diseases; but, without a lead compound that modulated these new targets, drug discovery was impossible because there was no logical place for a chemist to begin synthetic efforts.
The solution to this problem was found in high-throughput screening (HTS), a technology developed at Pfizer by John Williams and Dennis Pereira. HTS is the type of “industrialization” that Scannell rails against. It is an empirical approach based on testing millions of different compounds in microtiter-plate format to try to find a “lead,” a compound with some small degree of activity that the team can use as a starting point. Much as cytisine was a starting point for the Chantix discovery team, any team running an HTS hopes to get a similar “lead” to enable the start of the discovery testing process.
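In spirit, the primary screen is a brute-force filter. A toy sketch, with entirely hypothetical compound IDs, activity values, and hit threshold (real campaigns add plate controls, normalization, counter-screens, and dose–response confirmation):

```python
import random

random.seed(0)

# Hypothetical primary-screen readout: compound ID -> % inhibition of the target.
library = {f"CPD-{i:06d}": random.gauss(0.0, 10.0) for i in range(100_000)}
library["CPD-042424"] = 72.0  # plant one rare active among the noise

HIT_THRESHOLD = 50.0  # % inhibition; in practice set per assay from controls

hits = {cpd: act for cpd, act in library.items() if act >= HIT_THRESHOLD}
print(f"{len(hits)} hit(s) out of {len(library):,} compounds screened")
# Each hit is only a "lead": a starting point for the chemists, not a drug.
```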
So, how successful has the discovery “industrialization” been? Well, considering that it has only been available for 20 years and that the drug discovery–development process takes 10–15 years to complete, the results have been remarkable. A terrific perspective in Nature Reviews Drug Discovery12 provides a history of the impact that HTS has had on the drug discovery process. Importantly, the article lists 11 recently approved drugs that had their origin in HTS, including Januvia, Iressa, and Tarceva. Others await approval, such as Pfizer’s tofacitinib for rheumatoid arthritis. Without the invention and application of HTS, these drugs for cancer, AIDS, diabetes, and so on, would never have been discovered.
The pipelines of pharmaceutical companies are now replete with exploratory medicines that had their origins in “industrialized” methodology. Many of these will become important medicines to treat diseases in areas of major medical need. These technologies are invaluable in discovering new medicines, but scientists don’t rely solely on “industrialized” solutions. It is a pathway used when appropriate. But to refer to this type of research as a blind alley is wrong. And to blame this methodology as a key reason for a decrease in the industry’s R&D productivity is absurd.
In 1998 I was named the global head of Discovery Research for Pfizer. It was a wonderful opportunity because I was able to interact with outstanding scientists at our laboratories in Groton, Connecticut; Sandwich, England; and Nagoya, Japan. Working with these people was very rewarding because I was able to share in their efforts in diverse research areas like oncology, arthritis, pain, AIDS, psychiatric diseases, asthma, diabetes, and atherosclerosis. But my goals included getting to know colleagues in the commercial division. Thus, periodically, I would go to the Pfizer headquarters in New York City and meet with different leaders in that organization. I got to know people who taught me a lot about the challenges they faced in commercializing a new medicine and what it took for a compound to be a success. I learned from these interactions and developed friendships over the years that have lasted to this day. But the person whose comments had the biggest impact on me was the person who, at the time, was the head of Pfizer’s business in Latin America—Ian Read, now Pfizer’s CEO.
Ian has always been noted for his candor. From my first conversations with him, he expressed strong views about early research. One of his biggest concerns was the difficulty of evaluating the output of the discovery organization. He correctly opined that you can’t really appreciate a given year’s achievements in discovery until 10–15 years later. A new drug candidate that hits a particularly interesting experimental target to treat cancer may be discovered, only to be shown inactive in a clinical proof-of-concept study conducted 5 years later. Thus, the oncology team could be rewarded for discovering a first-in-class experimental drug that eventually proved to be a bust. This experience was completely foreign to people like Ian and those in his commercial organization. They had hard annual targets to achieve that were clearly measurable every December. If they achieved these concrete goals, they were appropriately rewarded. If not, they paid the consequences.
This point of view had never previously been put forward so forcefully to me. I always knew that the organization depended on discovery research: If we didn’t produce, everyone suffered. The discovery output was taken on faith by Pfizer. The company put its trust in us to produce a portfolio of experimental medicines that would serve the corporation’s growth objectives for the foreseeable future. Pfizer’s R&D head at the time, John Niblack, often emphasized to me the importance of “getting discovery right.” The views of Read and Niblack made a big impression on me and provided the foundation for many of my beliefs about R&D, beliefs that are outlined below.
It is not unusual to hear a statement like: “If you do great research, you’ll get new drugs.” On the surface, this seems like a pretty reasonable comment. But, unfortunately, it is not true. A scientist can spend years doing excellent laboratory experiments without having any of them lead to a potential medicine. The work done in pharmaceutical labs is applied research designed to take observations made in research institutes, government labs, or academia and convert these into exciting compounds for clinical testing. Focus on getting a drug candidate is essential. A focus only on interesting science may lead to interesting publications, but not necessarily a drug.
Also problematic is that there are no sure things in discovery research. There are a variety of examples in this book where exciting new approaches to treating a host of diseases, despite great scientific rationale and genetic support, turned out not to work when tested in clinical trials. Thus, we had at Pfizer what we called a “shots on goal” philosophy.
“Shots on goal” is a term obviously derived from sports. In theory, you have a better chance to score goals in hockey by taking 20 shots as opposed to taking only 10. Of course, they need to be good shots and not just wild attempts taken blindly. Each shot is taken not just to get it on goal, but rather to get it in goal. Thus, it would be a bit strange to hear a hockey coach differentiate between “shots in goal” and “shots on goal.” Every hockey player takes each shot fully intending to score.
The “shots on goal” philosophy in drug discovery emerged from the realization that, no matter how good your research tools are, animal models such as genetically modified mice are very limited and are not necessarily good predictors of beneficial activity in humans. It is virtually impossible to predict whether a new discovery drug candidate will be a success or failure. There are a number of reasons for this. Does the new compound have an inherent unforeseen toxicological effect? Does the new mechanism being studied have fewer beneficial effects in patients than expected? Oftentimes, the true effects of drugs are only learned when they are tested in people. The unraveling of the human genome was a great boon to discovery scientists; it yielded a wealth of hypotheses as to how to go about treating, or even curing, a variety of diseases. The problem, however, was that while many of these new targets looked very promising in the lab, it was unclear as to which would translate into effective therapeutic treatments for patients. In vitro tests and animal testing often provided tantalizing results, but no guarantees.
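The arithmetic behind the analogy is worth spelling out. As an illustrative sketch, suppose each shot (that is, each clinical candidate) succeeds independently with probability \(p\). The chance that at least one of \(n\) shots succeeds is

\[
P(\text{at least one success}) = 1 - (1 - p)^{n}.
\]

With a purely hypothetical success rate of \(p = 0.1\), ten shots give about a 65% chance of at least one success, while twenty shots raise that to about 88%. Real candidates are neither independent nor equally likely to succeed, but the direction of the effect is the point.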
The “shots on goal” philosophy was applied in the Pfizer oncology labs in the late 1990s and early 2000s as a result of the explosion of new targets that were emerging. At that time, there was a plethora of new ideas to explore in pursuit of new treatments for cancer: some were based on preventing tumor cells from metastasizing, others focused on preventing tumors from growing blood vessels so that they would starve and die, some were aimed at blocking the processes that caused tumors to grow wildly, and still others were designed to help the immune system fight off this awful disease. All of these were exciting, viable ideas. However, it was impossible to believe that any one of them would be a “magic bullet” to cure cancer. Furthermore, it was impossible to determine which approaches would be superior to others. The decision was made to find good compounds based on each idea and then take these compounds into clinical trials to see which, if any, successfully treated cancer. To be successful in the fight against cancer, a number of strategies—or “shots on goal”—were needed.
The ones taking these shots were in the Pfizer cancer discovery group, which grew to over 200 people, making it one of the largest divisions in the company. Over a 10-year period, it produced a number of clinical candidates that explored over 20 of these novel ideas to treat cancer. While many of these compounds failed to improve the survival of cancer patients, a number of them proved to be very effective. Some of these compounds, such as Tarceva, Xalkori, and Inlyta, have already reached the market. In fact, it can be argued that Pfizer has one of the strongest oncology pipelines in the industry. Yet, at the start of every one of these programs, one could have rationalized why it might not be successful—or why that “shot” shouldn’t have been taken.
One might think that the “shots on goal” method is a scattershot approach, which, at best, yields a lucky result and, at worst, bloats the industry and inflates drug prices. Actually, it is a valuable R&D philosophy that has its foundation in the belief that you can’t pick winners without clinical data and that a product doesn’t become “unstoppable” until its full efficacy and safety profile are understood. However, for this philosophy to be successful, you must be certain that the “shots” are compounds that have cleared stringent hurdles designed to rule out any with potential flaws that would later reveal themselves in the key clinical trials. Just as a hockey coach puts his team in a position to win by encouraging his players to bombard the goal with good shots, a research manager increases his team’s chances for success by exploring multiple promising drug candidates in the clinic.
The “shots on goal” philosophy is one designed for success, meant to maximize the overall productivity of an R&D organization. Every “shot on goal” is made with the full intent of it being a “shot in goal.” The business of successfully discovering and developing new medicines is an incredibly risky endeavor, one that requires a lot of attempts before a winner is found. Limiting your shots by assuming that you can predict winners may ultimately prove to be a flawed strategy.
Most people know Pfizer as the behemoth it has become. But for the first half of my 30-year tenure, it was a relatively small company in a fragmented pharmaceutical industry. People are impressed with Pfizer’s 2011 sales of $67.4 billion and its R&D spend of $9.1 billion. Few realize that these same numbers in 1992 were $6.8 billion in sales with an R&D investment of $851 million (data from Pfizer Annual Reports). It is interesting to note that Lipitor, at its peak in 2006, had sales of $12.9 billion—almost double the sales of the entire company 14 years earlier.
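It is worth doing the arithmetic on those figures. As a share of sales, the R&D investment was broadly similar in the two eras,

\[
\frac{0.851}{6.8} \approx 12.5\% \quad (1992), \qquad \frac{9.1}{67.4} \approx 13.5\% \quad (2011),
\]

so the budget’s enormous absolute growth largely tracked the growth in revenues.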
The company of the mid-1990s was quite different from what has now emerged. As an R&D organization, we were severely constrained. We worked in a limited number of therapeutic areas, and even in those we were able to explore only a few different approaches at any given time. We always tried to work in areas with a high likelihood of success; but, as was already discussed, identifying “winners” at the early discovery phase is not easy. In those days, not only were we limited in the number of projects that we worked on, but these projects were also thinly staffed, particularly in comparison with our larger competitors. I would often hear our scientists lament that on projects where we had six chemists, Merck had 20. My response was always the same: “Our six are better than their 20!” However, it would be disingenuous for me to say that I didn’t secretly envy the effort that Merck was able to put into its programs. To use a baseball analogy, the Tampa Bay Rays can compete with the New York Yankees and Boston Red Sox with half the budget of those richer organizations. But I would guess that Tampa Bay would like to compete with a similar budget. I felt the same way.
This all changed in the late 1990s. Thanks to the introduction of high-value medicines such as Zithromax, Norvasc, Zoloft, Diflucan, Cardura, and Viagra, Pfizer sales grew dramatically, and the R&D budget grew in parallel with this explosion in revenues. The $851 million R&D budget of 1992 had grown to over $4 billion by 1999 (Pfizer 2000 Annual Report). Suddenly, we were able to run multiple programs in a given therapeutic area, each with a healthy number of scientists. This was a terrific opportunity. Unfortunately, as we saw in Chapter 2, the rules of the R&D game changed dramatically at about the same time. R&D has become more difficult and expensive to carry out in the twenty-first century. This has contributed to the view that size is detrimental to productivity. But I believe that size is not necessarily the culprit; success is related to how one utilizes the resources one is allocated. As was already shown, the Pfizer discovery cancer team had a great deal of success, and this success continues. This group was also the biggest we had at that time. Its success was not just due to size. We had a talented and competitive group that worked across three research sites: Ann Arbor, Groton, and La Jolla. But size wasn’t a detriment; it was a great asset.
While this group was large in Pfizer’s R&D setting, some would consider that it was not big enough. Colin Goddard, the former CEO of OSI Pharmaceuticals (originally Oncogene Science), a biotech company focused on cancer drugs, once told me that to go after such an important disease as cancer, a team of at least 400 scientists was needed. Clearly, one’s view of size is dependent on one’s perspective.
There is a standard litany of the negative aspects of size: Decision making is slow; there is an aversion to risk; true innovation is snuffed out by layers of management. There is an assumption that small start-up companies are creative and big firms are not. Actually, my observations don’t support this. A small start-up company is very tightly managed by investors, especially venture capital firms. In these situations, every experiment and every dollar spent by the start-up has to be justified. As a result, some “off-the-wall” experiments may not even be tried. In a big R&D organization, there are enough projects going on that you can take some shots that are high in risk. “Skunk works” projects can easily thrive.
An essay on the Schumpeter blog in The Economist entitled “Big and Clever: Why Large Firms Are Often More Inventive than Small Ones”13 makes a very good case for size, giving three arguments, drawn from Michael Mandel of the Progressive Policy Institute, for the benefits of size in today’s economy.
The comments of Dr. Joshua Boger, alluded to earlier, resonate strongly with me. Dr. Boger has lived on both sides of the fence. His early career was spent in R&D at Merck. He left Merck to head up Vertex, where he grew this start-up into a much-admired biotech company. Here is what he had to say in an interview with Xconomy.14
I am a devout believer that size per se has nothing to do with innovation or its absence. Bell Labs was highly innovative for decades as part of the world’s largest company. Merck was highly innovative from about 1950 to 1990. The key is culture, and you can throw away an innovative culture when you are small or you can drive it away when big. There are certain well-greased decline paths that larger companies often take that lead to stifling innovation, but these are not inevitable. . . . Once low emotional intelligence takes hold in the executive suite, value destruction follows. Too often shareholders and advisors ignore or deprioritize the kind of value-based culture in which innovation thrives. They over-control and over-measure and reward the wrong behaviors in favor of short-term objectives. This is the cause of innovation decline, not bigness.
My own beliefs are centered on building as strong a team of scientists as possible and setting them loose to build an optimal portfolio of projects. The size of the team really depends on the company’s goals and objectives, along with the state of scientific knowledge in the field and whether it is ripe for investment. But whether you build a team of 200 in cancer or 40 in diabetes research, the principles are the same. Each team is empowered to create programs that will lead to exciting new candidates to meet a major medical need. Empowerment, however, doesn’t mean abandonment. Periodically, it is important to review the progress the teams are making and, if necessary, offer advice and adjustments. It is also important to be clear that the organization expects productivity. Projects that don’t advance are closely scrutinized and possibly discontinued in favor of more promising ones. But if you have great scientists and you give them freedom and responsibility, they will be innovative regardless of how big the company is.
Properly used, size can be a great advantage. I would take that option any day.
To outsource or not to outsource? This question was posed by Sten Stovall of the Wall Street Journal.15 Pharma companies are outsourcing more and more of the work they once did exclusively in-house. At the start of this chapter, it was argued that relying totally on external sources to build one’s pipeline is not a sound business proposition. But does it make sense to outsource certain types of work? If so, where can such a strategy best be leveraged?
Outsourcing can serve three different purposes: sparing the budget, generating capacity, and building the pipeline.
The budget-sparing purpose is best exemplified by a conversation I had a number of years ago with the then head of our R&D IT group, who came in one day and said that he could outsource a large part of our IT work to a company in India at 10% of the cost of doing it in the United States. He said that the quality of the work in India was excellent and that it could be done in a timely fashion. How could any research head reject such an opportunity? Making this move would let us reduce the IT budget and apply the savings to additional discovery or clinical studies.
The IT example is not unique. There are a number of processes in the drug discovery and development continuum that, while important, do not require in-house resources. Things as diverse as routine toxicology studies, preparation of synthetic intermediates, and early-stage clinical studies can be done anywhere in the world. Such a strategy helps stretch the R&D budget as far as possible. One might ask whether it makes sense to build your own capabilities to do this work in places like China and India. I would argue against this. The low-cost locations of the future will not necessarily be these countries but others. Maintaining flexibility in siting this type of work is key to maximizing your investment.
Historically, the pipeline of every Big Pharma company goes through times of plenty and times of relative drought. For this reason, you need to be able to outsource to generate enough capacity to move compounds rapidly through the different phases of development. It doesn’t make sense to maintain internal capacity sized for the plentiful times. Rather, you should build an internal organization of modest size and complement it with a network of external partners that you can tap during periods of peak productivity. This capacity-generating outsourcing also helps keep internal costs down.
But perhaps the most important use of an outsourcing strategy is to build a company’s pipeline. My view is that a pharmaceutical company should generate about 33% of its pipeline from outside sources (in-licensing). I didn’t pull this number out of thin air; it derives from observations and beliefs accumulated over the past 30 years.
Why in-license anything at all?
This all sounds pretty good. Why not in-license 50–60% or more of your pipeline?
One might argue with the 33% figure. Maybe it should be 30%. Maybe it should be 45%. The bottom line is that it should be a significant part of one’s strategy. But it shouldn’t be the majority. The pipeline is a company’s lifeblood, and internal R&D must drive it.
Big Pharma is also making intense efforts to bolster its early discovery work by tapping into academic institutions to help convert early scientific findings into potential new medicines. This concept is not entirely new. In the past, companies often collaborated with individual professors to gain access to a specific area of research. One of my former mentors, Professor Edward C. Taylor at Princeton, collaborated with Eli Lilly for many years, and the non-small cell lung cancer drug Alimta came from this collaboration. While I was at Pfizer, we set up a 5-year collaboration with the Scripps Research Institute in which Pfizer scientists openly collaborated with Scripps scientists on cutting-edge science programs.
However, companies are now building formal relationships with major institutions in the hope of broadening their research depth. This is yet another type of outsourcing, one intended to bring in new ideas and technologies at the earliest stage. Pfizer has set the pace in this area with its Centers for Therapeutic Innovation. The purpose of these units, according to Pfizer Senior Vice-President Jose Carlos Gutierrez-Ramos, is “to bridge the gap between scientific discovery and the delivery of promising compounds to the pipeline.” Pfizer has established these centers with the University of California at San Francisco and the University of California at San Diego, as well as with major academic centers in Boston and New York. These are substantial relationships; the UCSF interaction alone, according to the Pfizer press release, could be worth up to $85 million to the university.
Pfizer is not alone. Merck, Sanofi, and other Big Pharma companies are investing in unique models designed to accelerate the translation of novel biomedical research into medicines for major medical needs. All of these initiatives are very exciting. The company–academic interactions between the scientists in these joint programs can be stimulating and fulfilling, and it is entirely likely that some great breakthroughs will result from these investments. There is only one caveat: the work being done is at the earliest stages of the R&D process, so it will be a decade, at least, before any new product emerges. Still, this is a great step in improving R&D capabilities for both the companies and the institutions.
As was previously discussed, it is not unusual for Big Pharma companies to form partnerships on late-stage development compounds when co-development makes strategic sense for both parties and the collaboration can maximize the value of the emerging new drug. These types of deals have been done for decades. A new trend, however, has emerged recently: companies pooling their resources to explore new research platforms or technologies in order to determine whether the underlying science has value.
This type of interaction would have been shunned 5–10 years ago, because companies then sought any competitive edge in the discovery and development of new drug candidates. Any breakthrough was treated as a trade secret, and new methods were protected either by patents or by “radio silence”: the scientists simply didn’t publish or talk about the work. High-throughput screening (HTS) is an example. It was over a decade before Pfizer scientists published a seminal history of this work, and by then HTS was a common tool across the entire biopharmaceutical industry.
But despite Pfizer’s silence on the discovery of HTS, other companies realized that there was great value in being able to rapidly screen tens of thousands of compounds, and it wasn’t long before the technology was broadly available. This shouldn’t be surprising. The industry is full of bright, diligent, creative scientists who are always looking to make technology leaps that enhance the R&D process. The notion that you will have years of exclusive access to a new internally developed technique may well be illusory.
In fact, new science that promises to improve the R&D process has emerged over the past decade across the entire drug R&D continuum: nanotechnology, biomarkers, imaging techniques, drug delivery technology, IT, and so on. The opportunities are such that it is impossible for one company to devote enough resources to explore all of these in real depth. As a result, companies are forming collaborations to explore such technologies and making any advances from this co-sponsored work available to all. This is a new attitude for Big Pharma companies, but it makes sense.
This is the rationale behind the creation of Enlight Biosciences, a company formed by PureTech Ventures, where I am a Senior Partner. Enlight was created to meet “an unmet critical need in translating and funding the next generation of platform technology tools for drug discovery and development.” The partners invested in Enlight are Merck, Lilly, Pfizer, AstraZeneca, Johnson & Johnson, Abbott, and Novo Nordisk. These companies select the areas of research, actively contribute to the technology development, and fund individual portfolio companies. A typical example is Entrega, formed to address the delivery of potential drugs that are not absorbed in the intestine. The oral delivery of peptides, proteins, and even certain small molecules has been a major challenge for the biopharmaceutical industry for many years. Entrega has developed proprietary technology that may solve this problem; if it succeeds, the pharma partners will have access to a revolutionary drug delivery methodology.
The work being done by Entrega would have been difficult to do in a single pharma company, because it would have diverted resources from ongoing internal programs. Furthermore, this is just one of multiple companies started by Enlight to explore novel science. No single company could pursue all of these in depth. However, through this consortium, the Enlight members can cast a wide net for the next technology breakthroughs. This is a terrific example of how precompetitive collaborations are helping to maximize the funds an R&D organization has to discover new drugs.
If you began reading this chapter hoping to find the answers to improving R&D productivity in the pharmaceutical industry, you have been sorely disappointed. There are no easy solutions. R&D is tougher now than ever before. New products need to be safer and more effective than existing medicines, and proving this takes a great deal of time and money. Furthermore, the R&D process is still lengthy, even for so-called niche products that require small clinical trials.
Critics sometimes focus on the number of compounds approved as a way of measuring the industry’s output. I am not sure that overall numbers are the issue. At the peak of industry productivity in the mid-1990s, the environment was very different. It was possible to have multiple viable products in a single disease category; statins are a great example. For reasons already described, that is no longer the case. In terms of overall benefit to patients, this is a good thing. We don’t necessarily need six or seven entries in a single disease category; we would be better off with two or three drugs spread across different disease indications. Thus, we all benefit from this evolution in R&D thinking.
The bigger issue in terms of productivity is the cost now required to develop a new medicine. The hurdles set by payers and regulatory agencies, as well as the expectations of patients and physicians, demand far more extensive data than ever before in the history of R&D. This is not going to change. New biomarkers are not going to eliminate the need for outcome studies of drugs to be taken chronically for the rest of a patient’s life. Every drug, no matter how well researched, will have side effects. The question is what advantage the new drug offers in relation to its risk.
The biggest issue facing R&D organizations is identifying the fatal flaws of compounds before they enter the expensive late-stage clinical trial phase. One can be encouraged that new technologies, already in hand or being developed, will greatly help in this endeavor. However, measuring their impact on overall productivity takes time. Just as it took over a decade to see the benefits of high-speed technologies in discovery research, it will take years to gauge the impact of newer approaches, such as the use of genetic information to improve clinical trial design or imaging technology to measure a drug’s effect on disease progression, that are already being used to predict the potential success of an experimental drug in late-stage studies. It’s hard to believe that these technologies won’t have a beneficial impact on R&D output.
But there is another issue impacting R&D productivity that tends to be minimized. The cost-cutting culture of the past few years has had a tremendously unsettling effect on people in R&D organizations. It seems that everyone wants to shake things up. New R&D models are routinely imposed on organizations. The threats of site closures and mergers have begun to have a numbing effect on researchers. In some cases, changes are being made before the previous round of changes has had a chance to show a measurable impact.
The former head of Amgen’s R&D organization, Dr. Roger Perlmutter, had some important insights on this. After his 11-year tenure as Amgen’s R&D leader, he was interviewed about his thoughts on the state of R&D in pharma.16
My view of [R&D] is that yes, it’s more expensive, and yes, it’s harder than ever before, and the only way to succeed is to eschew distraction. It’s so hard to do, to focus only on things that can make a real difference. . . . The challenge is to identify the critical important things that warrant your attention and throw yourself 100% into those things.
He is absolutely correct. To paraphrase a line made popular during the Clinton Administration about the economy: “It’s the compounds, stupid!” This must be kept in mind in any endeavor in R&D. Every investment, every organizational change, and every strategic shift should be made only after answering one question: Will this help deliver important new compounds to patients?
Researchers need not be coddled. In my experience, researchers will rise to the occasion to meet challenges the corporation faces. But CEOs need to realize that they can easily disrupt their company’s engine. They need to help researchers focus on the task at hand—bringing new medicines to the world.
References
1. Petersen, M. (2008) A bitter pill for Big Pharma. Los Angeles Times, January 27.
2. Armstrong, D. (2011) Pfizer after Lipitor slims down to push mini-blockbusters. Bloomberg, November 30.
3. Rockoff, J. (2011) Pfizer’s future: A niche blockbuster. Wall Street Journal, August 30.
4. Ledford, H. (2012) Success through cooperation. Nature News, February 1.
5. Paul, S. M., Mytelka, D. S., Dunwiddie, C. T., Persinger, C. C., Munos, B. H., Lindborg, S. R., Schacht, A. L. (2010) How to improve R&D productivity: The pharmaceutical industry’s grand challenge. Nature Reviews Drug Discovery, 9, 203–214.
6. Herper, M. (2011) Serial lifesaver. Forbes, September 26.
7. Barber, S. (2007) Celgene: The pharmaceutical Phoenix. Chemical Heritage Newsmagazine, 24, 1–2.
8. Marcus, A. D. (2011) Researchers show gains in finding reusable drugs. Wall Street Journal, August 18.
9. Oprea, T. I., Bauman, J. E., Bologa, C. G., Buranda, T., Chigaev, A., Edwards, B. S., Jarvik, J. W., Gresham, H. D., Haynes, M. K., Hjelle, B., Hromas, R., Hudson, L., Mackenzie, D. A., Muller, C. Y., Reed, J. C., Simons, P. C., Smagley, Y., Strouse, J., Surviladze, Z., Thompson, T., Ursu, O., Waller, A., Wandinger-Ness, A., Winter, S. S., Wu, Y., Young, S. M., Larson, R. S., Willman, C., Sklar, L. A. (2011) Drug repurposing from an academic perspective. Drug Discovery Today: Therapeutic Strategies, 8, 61–69.
10. Gottlieb, S. (2011) Big Pharma’s new business model. Drug makers aren’t chasing blockbusters like Lipitor anymore, or uncovering compounds in the same way. Wall Street Journal, December 27.
11. Scannell, J. W., Blanckley, A., Boldon, H., Warrington, B. (2012) Diagnosing the decline in pharmaceutical R&D efficiency. Nature Reviews Drug Discovery, 11, 191–200.
12. McCarron, R., Banks, M. N., Bojanic, D., Burns, D. J., Cirovic, D. A., Garyantes, T., Green, D. V. S., Hertzberg, R. P., Janzen, W. P., Paslay, J. W., Schopfer, U., Sittampalam, G. S. (2011) Impact of high-throughput screening in biomedical research. Nature Reviews Drug Discovery, 10, 188–195.
13. Schumpeter (2011) Big and clever: Why large firms are often more inventive than small ones. The Economist, December 17.
14. Timmerman, L. (2011) The fall of Pfizer: How big is too big for Pharma innovation? Xconomy, August 29.
15. Stovall, S. (2012) To outsource or not to outsource? That’s the Pharma R&D question. Wall Street Journal, The Source Blog, February 7.
16. Timmerman, L. (2012) Xconomist of the week: Roger Perlmutter’s parting thoughts on Amgen. Xconomy, March 1.