The Crisis of Contemporary Psychiatry
CONDITIONS SUCH AS SCHIZOPHRENIA and bipolar disorder are engraved on our collective consciousness. For a century and more, psychiatry has insisted on their reality, and the misery and suffering these terms have encapsulated are real and often terrifying. The prospect of abandoning them would surely shake the foundations of psychiatry. Yet prominent voices within the profession are beginning to express doubts about what most would assume to be “real” diseases. Robin Murray, former dean of the London Institute of Psychiatry, a Fellow of the Royal Society, and one of the most widely cited researchers in the field, recently confessed, “I expect to see the end of the concept of schizophrenia soon. Already the evidence that it is a discrete entity rather than just the severe end of psychosis has been fatally undermined. Furthermore, the syndrome is already beginning to break down.… Presumably this process will accelerate, and the term schizophrenia will be confined to history, like ‘dropsy.’ ”1 The symptoms and the suffering will not disappear, even if the diagnostic label does.
The problem that flows from this, of course, is that we can scarcely hope to find “the cause” of something if that something simply does not exist.2 Kenneth Kendler, who has performed important studies on the genetics of schizophrenia, reluctantly concluded that “individual psychiatric disorder[s] are clinical-historical constructs not pathophysiological entities.” This was not easy to accept. “The historical traditions on which we rely,” he bravely acknowledged, “give no guarantee about biological coherence—that underlying the clinical syndrome is a single definable pathophysiology.”3
There is a paradox here, because today, few would argue that syndromes such as schizophrenia and depression are single, homogeneous diseases. And yet, when it comes to clinical research, including clinical trials, both are still almost always treated as such. For example, studies continue to be published on the genetics of both of these syndromes despite the fact that there will never be a robust genetics of either condition, as the nature and severity of specific symptoms are too heterogeneous across individuals to have any consistent genetic correlations.4
Perhaps this helps explain the dismal return the National Institute of Mental Health (NIMH) has enjoyed on all the money it has poured into research on the neuroscience and genetics of mental disorder. In 2017, no longer directing the institute and dispensing billions of dollars for research, Thomas Insel insouciantly summed up what all these dollars had purchased. “I spent 13 years at NIMH really pushing on the neuroscience and genetics of mental disorders, and when I look back on that I realize that while I think I succeeded in getting lots of really cool papers published by cool scientists at fairly large cost—I think $20 billion—I don’t think we moved the needle in reducing suicide, reducing hospitalizations, [or] improving recovery for the tens of millions of people who have mental illness.”5
IF PSYCHIATRY’S CLAIMS to diagnostic competence seem increasingly threadbare, what of the weapons it possesses to treat the various forms of mental distress? Here the picture is mixed. It would be a serious mistake not to acknowledge that for some patients, antipsychotics and antidepressants provide real relief from some of the symptoms that cause so much distress and suffering. Additionally, as I shall discuss shortly, a not-inconsiderable number of patients improve when treated with cognitive-behavioral therapy (CBT) or interpersonal therapy, though usually these patients are not those with the most serious and debilitating symptoms.
Yet it must also be pointed out that, for many patients, the therapeutic interventions the profession relies on have only limited efficacy. Accumulating evidence published during the last quarter century indicates that large fractions of those diagnosed with schizophrenia, bipolar disorders, and depression are not helped by the available medications, and it is difficult to know in advance who will respond positively to drug treatments. For many patients, a further major problem is that such relief as the pills provide must be set against the serious, debilitating, and sometimes life-threatening side effects that often accompany the ingestion of these medications. In this connection, academic psychiatry’s close ties to the pharmaceutical industry and clinicians’ overreliance on drugs as their primary treatment modality have created another profound set of concerns for the future of the profession.
THE FIRST DECADE OF THE NEW MILLENNIUM brought a sea of legal troubles for the pharmaceutical industry. Not all the problems revolved around psychopharmacology. The scandal surrounding the painkiller Vioxx is one particularly prominent example: having suppressed data that showed that its use was associated with a long-lasting heightened risk of heart attack and stroke, its manufacturer, Merck, was forced to pull the drug from the market and to pay damages totaling nearly $5 billion.6 Then there is the ongoing litigation surrounding the opioid crisis, and the alleged role played by the Sackler family and their corporation, Purdue Pharma, in creating and profiting to the tune of billions of dollars from the epidemic of opioid addiction.7 Johnson & Johnson has also been held legally liable for its activities in this area and has paid hundreds of millions of dollars in fines to date, with more to come. But many of the most highly publicized and financially costly cases involved drugs prescribed for mental illness. GlaxoSmithKline pleaded guilty to criminal charges of consumer fraud in marketing its antidepressant Paxil, paying a fine of $3 billion in 2012 (a little more than a year’s worth of sales).8 The following year, Johnson & Johnson was fined $2.2 billion for concealing the risks of weight gain, diabetes, and stroke associated with its antipsychotic drug Risperdal.9 Pfizer, for its part, paid $430 million in damages to settle a single case over its illegal promotion of its antiseizure drug Neurontin for psychiatric disorders, and further legal complaints raised the total it paid to nearly $1 billion.10 Bristol-Myers Squibb paid more than $500 million in 2007 to settle similar claims over the illegal marketing of its drug Abilify (an atypical antipsychotic).
Though these settlements could be and were seen as simply a cost of doing business (the fines constituted only a fraction of the billions these drugs had contributed to the companies’ bottom lines), they did inflict a good deal of reputational damage. Worse, they attracted political attention from Chuck Grassley, then the ranking Republican on the United States Senate’s Finance Committee, who commenced an inquiry into psychiatry’s links to the drug industry in 2007. And through the legal process of discovery, lawsuits brought to light some of the industry’s behind-the-scenes practices.
It was Senator Grassley’s investigations that unearthed Joseph Biederman’s financial ties to the pharmaceutical industry, as well as Alan Schatzberg’s concealed financial stake in the very corporation whose product he was “disinterestedly” researching with NIMH money.11 Charles Nemeroff, chair of the department of psychiatry at Emory University, was likewise found to have failed to report well over $1 million in income from consulting fees from drug makers and to have routinely violated federal rules designed to avoid conflicts of interest in conducting funded research.12
Nemeroff, it turned out, was a serial offender. On at least two previous occasions, his university had been notified of problems with his outside consulting arrangements. In 2004, an internal review found that he had committed “multiple ‘serious’ and ‘significant’ violations of regulations intended to protect patients” and had failed to disclose conflicts of interest in trials of drugs from Merck, Eli Lilly, and Johnson & Johnson.13 The report was buried. In 2006, new problems arose. Nemeroff used his position as editor of a prominent journal to tout the merits of a controversial medical device, neglecting to disclose that he had financial ties to the corporation that made it, Cyberonics. When a university dean accused him of producing “a piece of paid marketing,” Nemeroff blamed the episode on “a clerical error.”14 Earlier, when concerns had arisen about Nemeroff’s membership on a dozen corporate advisory boards, he had responded by reminding the university of the benefits it derived from the arrangement: “Surely you remember that Smith-Kline Beecham Pharmaceuticals donated an endowed chair to the department and that there is some reasonable likelihood that Janssen Pharmaceuticals will do so as well. In addition, Wyeth-Ayerst Pharmaceuticals has funded a Research Career Development Award program in the department, and I have asked both AstraZeneca Pharmaceuticals and Bristol-Meyers [sic] Squibb to do the same.”15
The cases of Biederman, Schatzberg, and Nemeroff were scarcely unique. In acting as they did, and then denying wrongdoing, they appear to have expected to suffer few consequences for their actions. And at least at first, that expectation seemed well founded. Stanford initially declared that the university saw nothing wrong with Schatzberg’s behavior. Belatedly, and only temporarily, it removed him as principal investigator on the grant that had provoked Grassley’s ire, but two years would pass before Schatzberg relinquished the chairmanship of his department. Harvard resisted taking action against Biederman and his associates. An internal inquiry found that, despite a university regulation requiring researchers to accept no more than $10,000 a year from companies whose products were being assessed, Biederman and his colleagues had received a total of $4.2 million. Harvard announced it would take “appropriate remedial actions.” Biederman and his colleagues were banned from “industry-sponsored activities” for twelve months. In addition, the university imposed some minor administrative penalties. None of these sanctions had any substantial effect on Biederman, and to this day he remains a tenured full professor.16
Nemeroff’s fate is still more revealing of university priorities. Grassley’s staff had discovered that, while overseeing a federal grant designed to evaluate a GlaxoSmithKline drug, Nemeroff had received more than $960,000 from the company, while reporting only $35,000 of that money to the university. Once again, Emory seemed disposed to ignore the infraction, until it became apparent that its access to federal-grant money might be curtailed or eliminated. Only then was Nemeroff stripped of his department chairmanship—a highly unusual action in the world of academic medicine. It was announced that he would not be allowed to apply for federal grants and had been forbidden to accept more drug-company largesse.
A belated comeuppance, it would seem. Subsequent events, however, showed that it was anything but. In short order, Nemeroff was offered and accepted the position of chair of the department of psychiatry at the University of Miami. In the negotiations, Miami promised to waive all the restrictions he had faced at Emory.17 At times like these, we seem to have entered Alice’s looking-glass world. When Donna Shalala’s administration at the University of Miami was negotiating with Nemeroff over his move from Emory, there was serious talk of his heading an institute to promote ethics in academia and industry. Nine years later, the University of Texas came calling, and Nemeroff moved to another department chairmanship, higher up in the academic pecking order, and became simultaneously the director of its Institute for Early Life Adversity. He was once again supported by grants from the National Institutes of Health and the pharmaceutical industry, and he continues to play an outsized role in the American Psychiatric Association.
In some respects, those who express surprise at this sequence of events are either naive or, like Captain Renault in Casablanca, pretending to be “shocked, shocked to find that gambling is going on here.” Research universities depend for their existence on the constant flow of research dollars into the institution. The levels of funding from other sources are grossly inadequate to sustain their operations. The Nemeroffs, the Biedermans, and the Schatzbergs are experts at securing millions of dollars to fund their research, and those grants come with 50 or 60 percent overhead charges that accrue to the university budgets. No wonder America’s research universities have learned to turn a blind eye to ethical failings if the money on offer is sufficiently tempting. As these scandals proliferate, they threaten to inflict major damage on the legitimacy of psychiatry and on the medical-industrial complex more generally. The whole value of university research and the endorsement of particular therapeutic interventions by leading academics comes from the fact that they are thought to reflect disinterested “pure” science.
THE DEEPER SCANDAL OF PSYCHIATRY’S incestuous relationship with the pharmaceutical industry lies elsewhere, however. We live in an era that purports to be governed by something called evidence-based medicine. In principle, this is a development we should applaud. The centuries when physicians relied on “clinical judgment” to assess the worth of therapeutics gave us bloodletting and purges, vomits and blisters. In the twentieth century, psychiatrists relied on clinical experience to justify a long series of destructive interventions, from surgical evisceration and insulin comas to lobotomies. Empirical evidence that refuted entrenched collective wisdom played a vital role in consigning these interventions to the scrap heap, though in many cases only after a sustained battle with practitioners wedded to what “clinical experience” had taught them.
In the second half of the twentieth century, the major mechanism for tempering therapeutic enthusiasms became the double-blind controlled trial, with patients randomly assigned to receive the active treatment under study or either a placebo or an existing approved therapy.18 Particularly in the aftermath of the thalidomide tragedy in 1962, the FDA made evidence of safety and efficacy based on such trials the linchpin of its process for approving new therapies.19 To bring a new drug to market, the FDA requires companies to produce evidence from two trials that show that it is significantly more effective than either the placebo or an existing treatment.
That is the theory, at least. In practice, other pressures often subvert the process. One of the least controversial parts of the psychiatric universe is its attempts to do something for patients with dementia or Alzheimer’s. That lack of controversy is not the product of the profession’s success in treating or alleviating these conditions. On the contrary, these are probably the most treatment-resistant of all the disorders psychiatry confronts. Even efforts to palliate the enormous problems patients and their families face over years, sometimes decades, have proved unavailing. The burden for individuals and society as a whole is massive and forecast to grow exponentially as the population ages. An estimated six million Americans now suffer from Alzheimer’s, and by 2050, that number is expected to approach thirteen million.
Few doubt that the roots of Alzheimer’s are biological, and there is general recognition of just how recalcitrant it has proven to be. Over the course of more than a century, it has defeated our best efforts to unravel its mysteries or solve the practical and therapeutic challenges it presents. Psychiatry has escaped blame for these failures precisely because everyone recognizes the enormity of the challenge Alzheimer’s and dementia represent. Yet that does not diminish the desperation either of those slipping into mental darkness or of their nearest and dearest as they struggle with the ensuing devastation.
If the travails of those suffering from other major psychiatric disorders have licensed and encouraged the employment of desperate measures to cure them, those same pressures are increasingly felt with respect to a condition even more resistant to successful interventions. Doing nothing does not seem an option. Hence a marketplace for peddlers of “alternative medicine” and other quack remedies: diet, herbs, acupuncture, light therapy—the list is long and the results are dismal. In more conventional quarters there are enormous pressures to find something, anything, that promises progress. For the pharmaceutical industry, Alzheimer’s represents a potential financial bonanza, if only its chemists can discover a pill that works.
Biogen claims that it has. On June 7, 2021, the FDA approved the first drug that purports to treat the underlying disease process of Alzheimer’s rather than merely its symptoms. It did so while admitting that the data from the two clinical trials that had been run left “residual uncertainties regarding clinical benefit.” Quite explicitly, it made an exception to the usual requirements for approving a new medication because of the utter absence of any alternatives. The decision to grant approval for the drug came under a rarely used procedure, the so-called Accelerated Approval pathway, designed to provide a way to license drugs that seem “potentially valuable” for a disease for which alternative treatments don’t exist. As the FDA put it, the agency acted because “there is an expectation of clinical benefit despite some residual uncertainty regarding that benefit.”20
Aducanumab (or to give it its trade name, Aduhelm) was nearly abandoned when two large clinical trials proved unpromising. One showed slight benefits; the other, no benefit at all. An intravenous drug, Aduhelm targets the clumps of amyloid-beta proteins that led Alois Alzheimer to identify the disease, and brain scans reveal that it does indeed shrink the clumps in question. What remains unclear is whether it actually slows the disease or improves patients’ quality of life.
The FDA’s own advisory panel concluded that the evidence of its therapeutic effectiveness was far too weak to license its release, a conclusion strengthened by the risks of the drug’s side effects. (Patients who received high doses in the trials experienced episodes of bleeding or swelling in the brain.) The FDA’s decision to overrule its own expert panel is in consequence highly controversial. Three members of the advisory group resigned in protest, and the controversy shows no sign of dying down. Adding to the concerns about whether the drug is efficacious, its enormous cost (a list price of $56,000 a year, plus the great expense of regular scans looking for brain hemorrhages), likely to be borne for the most part by the taxpayer via Medicare, has heightened opposition in many quarters to the FDA’s decision. On the other hand, desperate families and patients seem eager to try the experimental treatment, lacking anything else that works.21 Meanwhile, Biogen’s stock rose by over 38 percent in the aftermath of the announcement, adding over $16 billion in market value.22 Those with a sense of history will share the alarm of the scientists who recommended against approval. The FDA will have to hope its decision doesn’t backfire as the experimental drug enters clinical practice.
EVEN WHEN THE FDA STICKS to its usual practice and demands two controlled trials that provide empirical evidence of a new drug’s effectiveness, its procedures have often produced evidence-biased medicine rather than evidence-based medicine. Over time, most clinical trials have come to be funded and owned by the pharmaceutical houses. All sorts of unfortunate consequences have flowed from this situation. The size of patient groups recruited for trials has expanded greatly. Trials are typically run across multiple centers and very often across national boundaries. In the ordinary course of business, the data produced are owned by the company funding the research, and access to the full data set is carefully controlled and protected. Large numbers of participants make it easier to secure statistical significance for a finding, which may be far different from discovering a change that has clinical significance. (The statement that drug X is “significantly” better than a placebo means only that it produces some greater degree of improvement than can be explained by chance, not that it actually makes a useful difference to patient outcomes.) The data the company owns can be, and are, cherry-picked to find favorable results that form the basis for the studies the FDA requires. Unfavorable outcomes, including both trials that fail to show positive findings and records of serious side effects, are suppressed.23 That pattern of hiding damaging information even extends to fatal side effects. These data have generally surfaced only through lawsuits, and the associated process of discovery, which has forced the disclosure of unpublished studies and internal company deliberations that reveal the depths of the deceit. Unfortunately, recent Supreme Court decisions have imposed severe restrictions on future use of this weapon.24
A particularly revealing study is Glen Spielmans and Peter Parry’s “From Evidence-Based Medicine to Marketing-Based Medicine,” which looked at what could be learned by mining industry documents. One of the documents they reproduce is a Pfizer communication to its sales force about how to market sertraline (Zoloft), which is at once blunt and chilling. Headed “Data Ownership and Transfer,” it asserts that “Pfizer-sponsored studies belong to Pfizer, not to any individual.” As for the “science” they contain, the “purpose of data is to support, directly or indirectly, marketing of our product.… Therefore commercial marketing / medical need to be involved in all data dissemination efforts.” Among those efforts, it explicitly references the way “publications can be used to support off-label data dissemination.” These are, needless to say, standard industry practices. An Eli Lilly document lays out its plans to “mine existing data to generate and publish findings that support the reasons to believe the brand promise.” The drug referenced here is Zyprexa, an antipsychotic.25
When studies are written up for publication, professional ghostwriters are routinely employed to present findings in the most favorable possible light. Major academics are recruited to add their names as “authors”—a process designed to give an appearance of legitimacy to the papers and to secure publication in the most prestigious medical journals.26 Studies can, of course, be designed in ways that appear neutral but that are in reality biased in favor of the drug the company wishes to promote. When selectively reported, with the suppression of negative data and trials, what is meant to provide “evidence-based medicine” has allowed the marketing of useless or actively harmful interventions.
Here are just two examples, but they are substantial and revelatory ones. One concerns the creation of spurious evidence of the superiority of so-called second-generation antipsychotics over their predecessors. The other examines the data used to justify the prescription of antidepressant drugs.
Industry-sponsored trials of second-generation antipsychotics typically used haloperidol, a first-generation antipsychotic, as the comparison drug, generally administering it in high doses. Haloperidol is known to have a high likelihood of inducing movement disorders and other distressing side effects, making it likely that the newer drug would seem to have a superior side-effect profile and receive FDA approval.27 Similarly, when selective serotonin reuptake inhibitor (SSRI) antidepressants are compared to placebos, their side effects are likely to reveal to both the doctor and the patient who is receiving the active treatment and who is not, vitiating the presumed blinding of the study and rendering its results suspect to some indeterminate degree.28
Setting these concerns to one side—though they are weighty concerns that reflect poorly on the psychiatric enterprise—what can we make of the evidence we do possess about the drug treatments psychiatrists have at their disposal? Second-generation or atypical antipsychotics were heavily marketed as improvements over the drugs that were introduced in the 1950s, when the psychopharmacological revolution began. In the first decade of the new millennium, however, major doubts began to surface about such claims.
A paper by John Geddes and his colleagues published in 2000 reviewed a range of studies and concluded that the evidence for the superiority of atypical antipsychotics was largely mythical.29 But it was two subsequent studies, both government funded, which inflicted the greatest damage on the presumption that atypical antipsychotics represented a superior approach to the treatment of schizophrenia. The Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE), funded by the NIMH, was designed to compare four atypical antipsychotics (with annual sales of over $10 billion) with a single older (and far cheaper) first-generation antipsychotic. The project was led by Jeffrey Lieberman, chair of the department of psychiatry at Columbia University. The highly anticipated study attracted major attention when it appeared in 2005 in the New England Journal of Medicine.
Throughout the preceding decade and a half, the pharmaceutical houses had claimed that “the introduction of second-generation or ‘atypical’ antipsychotics promised enhanced efficacy and safety” and on that basis, such drugs had acquired “a 90 percent market share in the United States, resulting in burgeoning costs.”30 The CATIE study, however, showed unambiguously that the atypical antipsychotics offered few benefits over the older drug with which they were being compared. Subsequent experience has shown that among most atypicals, the incidence of tardive dyskinesia has dropped by 50 or 60 percent, a welcome development, though the problem remains a serious one, and “many clinicians may have developed a false sense of security when prescribing these medications.”31 On the other hand, Lieberman and his colleagues found that the second-generation medications carried the same risk of neurological symptoms such as stiffness or tremors and had also brought other potentially life-threatening complications in their wake.32 Their use was associated with major weight gain and metabolic disorders. Those taking olanzapine averaged a weight gain of thirty-six pounds if they remained in treatment for the full eighteen months of the study and displayed signs of developing metabolic syndrome. These medications also led to an increased incidence of diabetes and a greater susceptibility to heart disease. A subsequent careful meta-analysis of the prevalence of antipsychotic adverse effects confirmed that “the AE [adverse effect] profiles of the newer antipsychotics are as worrying as the older equivalents for the patient’s long-term physical health.” The researchers’ review of the literature also prompted a sense of alarm about a broader issue: “Despite the frequent use—both on-license and ‘off-label’—of antipsychotics, the scientific study of their AEs has been neglected.”33
As for efficacy, Lieberman and his colleagues found that “antipsychotic drugs … have substantial limitations in their effectiveness in patients with chronic schizophrenia.” Nor were the newer compounds an improvement in this respect: with the partial exception of olanzapine, “there were no significant differences in effectiveness between perphenazine and the other second-generation drugs.” The condition of a not-insignificant proportion of those under treatment took a turn for the worse: depending on which of the drugs they were taking, between 11 and 20 percent of the study participants had to be “hospitalized for an exacerbation of schizophrenia.”34
The CATIE study reported another major finding that constituted a sobering commentary on the limits of contemporary psychopharmacology. Between two-thirds and more than 80 percent of those taking part in the study had dropped out, primarily for two reasons: the drug they were taking was not working, or the side effects of the medication were intolerable. Averaged across all five drugs, the dropout rate was 74 percent, ranging from 64 percent of those assigned to olanzapine (Zyprexa) to 82 percent of those given quetiapine (Seroquel). The authors of the CATIE report tried to normalize this rather extraordinary set of numbers by observing that these dropout rates “are generally consistent with those previously observed.”35 And in this they were correct.36 But such numbers constitute a stark reminder that many chronic schizophrenics find the therapy they are offered useless, or intolerable, or both.
Contemporaneously with the CATIE study, the British National Health Service was funding research comparing the effects of first- and second-generation antipsychotics. Its authors, aware of the growing preference among clinicians for the second-generation drugs, predicted that patients on them would enjoy an improved quality of life at a one-year interval, and they used a generally accepted and wide-ranging instrument to assess patients’ status. To their expressed surprise, their research decisively rejected their working hypothesis. If anything, patients on the first-generation medications fared better. “We emphasize,” the authors concluded, “that we do not present a null result: the hypothesis that SGAs [second-generation antipsychotics] are superior was clearly rejected.”37
TO PSYCHIATRISTS, the discovery of chlorpromazine and related compounds in the 1950s seemed to promise “both therapeutic progress and significant probes of brain function. Looking back, the picture is painfully different. The efficacy of psychotherapeutic drugs plateaued by 1955.”38 The older drugs, like their modern-day descendants, clearly altered the course of psychotic disorders, but it turns out that the improvements they provide are far more modest than the early enthusiasm for them suggested. Assessment of those effects has gradually become more systematic, often now using scales that attempt to assess the patient’s quality of life, or to measure improvements with respect to the so-called positive and negative symptoms of schizophrenia—using an instrument known as PANSS (Positive and Negative Syndrome Scale).39 “Positive” symptoms include such things as hallucinations, delusions, and racing thoughts, while “negative” symptoms include apathy, blunting of affect, disorganized thoughts, poverty of speech and thought, and poor or nonexistent social functioning. These rating scales include many unrelated items, and it is easy for scores to improve without the patient experiencing any functional improvement. PANSS, for example, contains a number of measures of tension, excitement, and anxiety—all likely to be present when someone is in an acute psychotic state. The sedating effects of antipsychotics are likely to produce a calmer patient, who will then be counted as improved, and yet there has been no change in the crucial psychotic symptoms, and the disorder remains untouched.
The largest effects are seen in patients experiencing their first psychotic episode. Unfortunately, though, “about 80% of patients relapse within five years.”40 Looking at results as a whole, antipsychotics unambiguously produce improvements in PANSS scores when compared to placebos. But most of this improvement is so limited as to verge on the clinically insignificant—what the authors of the largest systematic review of outcomes to date call a “minimal” response. On average, 51 percent of those given antipsychotics obtain this degree of improvement versus 30 percent of the placebo group. When it comes to a reduction in symptoms large enough to be termed a “good response” (not remission), the statistics are far less reassuring: 23 percent of the patients receiving antipsychotics obtain this result versus 14 percent of those in the placebo group.41 Schizophrenia, or the array of mental disturbances gathered under that label, clearly remains a malignant and devastating problem.
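To translate those percentages into more tangible terms, one can compute a rough number needed to treat (NNT), a back-of-the-envelope calculation based solely on the figures just quoted rather than one reported in the review itself:

\[
\mathrm{NNT}_{\mathrm{minimal}} = \frac{1}{0.51 - 0.30} \approx 5,
\qquad
\mathrm{NNT}_{\mathrm{good}} = \frac{1}{0.23 - 0.14} \approx 11.
\]

Read that way, roughly five patients must be given an antipsychotic for one additional minimal response over placebo, and roughly eleven for one additional good response.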
IF MANY WOULD NOW ARGUE that what we call schizophrenia is a label that lumps together a variety of heterogeneous phenomena, this seems even more obviously the case of what psychiatrists call the affective disorders. The criteria in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM 5) for a diagnosis of major depressive disorder require the presence of five or more of nine symptoms in a patient for two weeks or more. But one of the criteria is either insomnia or sleeping too much, so there are in effect ten criteria, and two patients can receive the identical diagnosis while sharing not a single symptom. Perhaps that is one reason why the antidepressants prescribed to deal with these disorders appear to have such limited efficacy. Yet 12 percent of the American population over the age of twelve currently uses antidepressant medications, and the percentage taking them for two years or more rose from 3 percent in 1999 to 7 percent by 2010.42
One might expect that drugs prescribed and used on such a massive scale must have obvious positive effects, and indeed it is routinely the case that studies comparing antidepressants to placebos in controlled trials show the superiority of the drugs, as measured by conventional rating scales.43 As with antipsychotics, many patients find taking these pills a blessing. That said, in the words of two Columbia University psychiatrists and a professor of psychological and brain sciences at Dartmouth College, “Even with maximal treatment, many patients will not experience sustained remission of their depression. The cumulative percentage of patients achieving remission after four sequential antidepressant trials in the National Institute of Mental Health (NIMH) sponsored Sequenced Treatment Alternatives to Relieve Depression (STAR*D) study was only 51%.”44
Antidepressants consistently outperform placebos in published clinical trials, and many patients swear by them, but the findings that these drugs work come with some serious caveats. All drug trials are now supposed to be prospectively registered with the FDA, a system that evolved to try to counter the pharmaceutical industry’s manipulation of clinical trials. That has allowed us to see that a large fraction of these trials—more than 30 percent—are never published. Almost all of those unpublished trials had produced negative assessments of the drugs under study. It has also been possible to compare the data submitted to the FDA with those that appear in published papers, and that comparison, too, raises red flags: “According to the published literature, it appeared that 94% of the trials conducted were positive. By contrast, the FDA analysis showed that 51% were positive.” The problems this situation creates for the assessment of antidepressants are obvious. “Selective publication of clinical trials, and the outcomes within those trials, lead to unrealistic estimates of drug effectiveness,” as researchers have pointed out in an article in the New England Journal of Medicine, while “the true magnitude of each drug’s superiority to placebo was less than a diligent literature review would indicate.”45 Less is not nothing, but this is a disturbing finding.
The literature reviews that have been conducted add to one’s disquiet. They routinely show that treatment with antidepressants is statistically superior to the results obtained by administering a placebo. But the observed differences are not large—on average an additional 10 or 15 percent improvement over what placebo treatment produces.
Many instances of what are now labeled as major depressive disorder—50 percent or more—seem to resolve spontaneously over time.46 Perhaps that helps explain the substantial placebo responses these clinical trials regularly report. Two recent reviews of these published results survey the degree of improvement patients registered on the widely used Hamilton Rating Scale for Depression, each independently finding that those given placebos registered an 8.3-point improvement in their symptoms. Those given antidepressants fared better, and their symptoms improved by a further 1.8 points. However, clinically speaking, an average improvement of this sort verges on the trivial.47 That situation is masked, of course, from most clinicians and patients, since what they observe is the total improvement that follows the prescription of the pills: the larger placebo response plus the smaller additional increment the drugs produce. The natural tendency under these circumstances is to embrace the post hoc ergo propter hoc fallacy and to assume that the entire improvement can be attributed to the medication.
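A rough calculation, using nothing more than the averages just cited, shows how easily that fallacy takes hold:

\[
\text{total observed improvement} \approx 8.3 + 1.8 = 10.1 \ \text{Hamilton points},
\qquad
\frac{1.8}{10.1} \approx 0.18.
\]

On these figures, less than a fifth of the improvement a prescriber actually sees is attributable to the drug itself; the remainder would, on average, have occurred had the patient been given a placebo.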
One of the most curious phenomena when one examines the research on antidepressants is that the difference between placebo and active treatment has diminished sharply over time. Ordinarily, one would expect the gap between drug and placebo to widen. Instead, a systematic review in 2012 of major studies demonstrated that “antidepressant-placebo differences have decreased alarmingly over the past three decades.” A difference of 6 points on the Hamilton scale in 1982 had fallen to less than 3 points by 2008. The authors noted that their review had only been able to examine published studies. That was a significant problem, because “negative trials (meaning lower antidepressant-placebo differences) are more likely to remain unpublished.” “We can only surmise,” they concluded, “that including these unpublished trials would further reduce the mean antidepressant-placebo difference across all trials.”48
Arif Khan and his colleagues, who conducted this study, suggest that some of this decline may reflect a difference in the kinds of patients being studied: cases of serious endogenous depression in the early studies of antidepressants versus a more heterogeneous population created by the more expansive definitions of major depression embodied in DSM III and its successors. They also point to other methodological differences that have arisen as the FDA has modified its criteria for clinical trials. Still, as Khan acknowledges in a more recent paper, looking at the data
about the relative potency of antidepressants compared to placebo, … it became evident that the magnitude of symptom reduction was about 40% with antidepressants and about 30% with placebo.… [W]here the investigators and their staff were blinded to the design and execution of the trials … the symptom reduction with each treatment was of smaller magnitude and the differences among the various treatments and controls were also smaller.… [W]hen the level of blinding was high and it was difficult for the investigators, their staff and the depressed patients to guess treatment assignment, the differences between these treatments, controls and placebo became quite small.
Though interpreting all these data is “difficult and confusing,” their best estimate is that “the effect size of current antidepressant trials that include patients with major depressive episode is approximately 0.30 (modest).”49
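For readers unfamiliar with the metric, an effect size of this kind is conventionally the standardized mean difference between the drug and placebo groups (assuming Khan and his colleagues use the standard definition):

\[
d = \frac{\bar{x}_{\mathrm{drug}} - \bar{x}_{\mathrm{placebo}}}{s_{\mathrm{pooled}}},
\]

so a value of about 0.30 means that the average drug-treated patient improves by only about three-tenths of a pooled standard deviation more than the average patient given a placebo.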
What is troubling about this situation is that antidepressants, like antipsychotics, are not benign drugs. They are associated with a whole series of very troublesome side effects that ought to weigh heavily in any cost-benefit analysis of their usefulness. Among adults, these include sexual dysfunction, weight gain, nausea, apathy, and sleep disturbance, which for many prove intolerable, and these problems may not disappear after the drugs are discontinued.50 For children, adolescents, and young adults, the heightened risks of suicidality (serious thoughts of taking one’s own life, suicide plans, and suicide attempts) and of violence prompted the FDA to issue a black-box warning about the issue in 2004—the most serious step short of withdrawing a medication from the marketplace. Published papers attesting to the safety of prescribing antidepressants to children, as the discovery process associated with lawsuits against GlaxoSmithKline showed, had been based on manipulated data that hid the issue of suicide and that transformed data that showed the drugs did not work into claims that they were safe and effective, a dramatic example of how drug-company ownership and suppression of data can produce evidence-biased medicine. The papers were ghostwritten and had concealed the fact that taking Paxil was associated with a three times greater risk of suicidal ideation.51
There is a further problem that needs to be taken into account. For a not-inconsiderable fraction of those prescribed antidepressants, discontinuing the drugs proves problematic. While for some the resulting symptoms are mild and dissipate over a period of a few weeks, for others they are serious and persistent, representing a major iatrogenic problem. In effect, these patients may find themselves trapped into an endless cycle of antidepressant usage.52
GIVEN ITS HIGHLY NEGATIVE reputation in the court of public opinion and the decreasing teaching about and use of the procedure by most psychiatrists in the 1970s, most observers at the time suspected that electroconvulsive therapy (ECT) would soon suffer the same fate as virtually all the somatic treatments introduced between 1920 and 1940: it would essentially disappear. This has not happened. While it remains unclear just how extensively it is employed in contemporary psychiatric practice, ECT has enjoyed something of a revival, particularly in the treatment of severe treatment-resistant depression and, to a lesser degree, mania. To some extent, that revival occurred as growing attention began to be paid to “the dangers … and other toxic effects of the presently available antidepressants” and “the long-term negative effects of neuroleptics.”53 Gradually, though, better-designed studies with controls and blind assessments of mental status convinced many psychiatrists that ECT acted more rapidly than drugs and was at least as efficacious as other available therapies.54
There were patients and psychiatrists who still swore at ECT and insisted it had brain-disabling effects and should be banned.55 But from the 1980s onward, a growing number of patient memoirs from those who had suffered from extreme depression sang the praises of the treatment and credited ECT with rescuing them from a life of utter misery.56 Prominent entertainers like Dick Cavett and Carrie Fisher joined forces with prize-winning writers like Simon Winchester and Sherwin Nuland. All spoke of how it had helped lift their bouts of depression. Granted, there were others who lamented how badly ECT had affected their memory, but these patients, too, were often willing to endorse the treatment.57 Meanwhile, a series of somewhat better-designed studies appeared to provide stronger grounds for supporters of ECT to claim it worked in serious depression and did so more quickly and more effectively than antidepressant medications.58
Unfortunately, for many patients, the symptomatic improvement ECT produces does not last. As Ana Jelovac and her colleagues report, “Relapse rates following ECT are disappointingly high and appear to have increased over time. In patients treated with continuation pharmacotherapy … nearly 40% of ECT responders can be expected to relapse in the first six months and roughly 50% by the end of the first year.” Without ongoing treatment, either with drugs or more ECT, “relapse rates were even higher, approaching 80% at six months.”59
As for side effects, modified ECT has essentially eliminated the problem of fractures in the course of treatment. For memory and cognition, the picture is much cloudier. Lothar Kalinowsky’s chief protégé and the major advocate for ECT in the contemporary era, the New York psychiatrist Max Fink, bluntly acknowledges that the patients in the 1940s and 1950s had reason to complain about the damage ECT had done to them: “The prevalent belief that electroshock impairs memory is based on the early experiences of patients who were treated without anesthesia or oxygen [the vast majority of patients through the mid-1950s]. Such treatments were associated with severe and persistent memory losses.” But he insists that those problems are a thing of the past. In Electroshock: Restoring the Mind, first published in 1999 and reissued in several revised editions since, he points to improvements in the dosage and administration of the therapy. An unbridled enthusiast for ECT, he insists “there is no longer any validity to the fear that electroshock will erase memory or make the patient unable to recall her life’s important events or recognize family members or return to work.”60
Not everyone is as convinced that the problems with memory have disappeared. Harold Sackeim, who for decades has insisted on ECT’s usefulness, accepts that problems with side effects remain, even if fractures are no longer an issue. “Virtually all patients,” he claims, “experience some degree of persistent and, likely, permanent retrograde amnesia.” Those complaints could not be dismissed as the product of hysteria: “There is no dearth of patients who have received ECT who believe the treatment was valuable, often life-saving, who are not litigious, who return to productive activities, and yet report that a large segment of their life is lost.”61
Some ECT practitioners seek to minimize the loss of memory through the use of unilateral ECT, which is designed to affect one rather than both hemispheres of the brain. That may have helped skeptics to embrace the procedure. Such modifications were by no means universal, however, and many continued to use the traditional bilateral approach, despite accumulating evidence of greater long-term cognitive deficits in the aftermath of treatment. These deficits showed a direct relationship to the number of treatments given and the techniques used. A New York study by an enthusiast for ECT concluded that “some forms of ECT have persistent long-term effects on cognitive performance” and acknowledged that the study’s “findings do not indicate that the treatments with more benign outcomes are free of long-term effects.”62
ECT has thus had a slight renaissance, but it is hardly free of controversy. The history of its abuse in mental hospitals and the extraordinarily negative image it acquired in popular culture have proved hard to overcome. In the last quarter century, it has once more obtained a place in the psychiatric armamentarium—mysterious as the basis for its apparent therapeutic usefulness remains. How secure that place will prove to be remains uncertain. Though the American Psychiatric Association issued carefully couched endorsements of the procedure, ECT is regarded in many quarters with suspicion.63 One practical measure of its ambiguous standing is its tepid reception among those running the NIMH, the paymaster of American academic psychiatry. As of the end of the twentieth century, Edward Shorter and David Healy report, the NIMH had “supported 171 drug trials for depression, 21 trials for acupuncture, and only 4 for ECT.”64 And the consensus conference called by the NIMH to examine ECT’s place in modern psychiatry reflected that lack of enthusiasm in the very wording of its endorsement of the procedure: “ECT is demonstrably effective for a narrow range of severe psychiatric disorders in a limited number of diagnostic categories: delusional and severe endogenous depression, and manic and certain schizophrenic syndromes.”65
RECOGNIZING THAT THE PSYCHOPHARMACOLOGICAL revolution has been seriously oversold should not prompt us to despair or to dismiss drug treatment out of hand. There are substantial numbers of patients for whom drugs provide considerable relief from their suffering and permit them to resume some semblance of a normal life. Importantly, the side effects this group of patients experiences are mild or moderate, and on balance the treatment, for them, is clearly preferable to the disease.
One needs to remember that all medications carry risks. That is as true of the pills the rest of medicine prescribes as it is of the drugs psychiatrists employ to treat mental illness. In medicine, as elsewhere in life, there is no free lunch. Awareness of side effects and of the need to balance potential benefits and harms is therefore crucial across the whole of medicine, and psychiatry is no exception. Still, one could argue that, given the side-effect profiles of many psychiatric drugs and their inability to cure, caution is especially called for here.
For those patients who respond well to psychiatric drugs and who experience few negative effects, the choice to take them is clear. Symptomatic relief is often all that general medicine can offer: think, for example, of treatments for diabetes, for arthritis, for autoimmune diseases, or for AIDS. Many of those symptomatic treatments work far better than their psychiatric equivalents, to be sure, so one should not push the parallel too far. Insulin for type 1 diabetes and triple therapy for AIDS can transform and extend lives in ways psychiatric medications simply cannot. Still, in cases where psychiatric medications successfully diminish the pain and suffering of depression or anxiety, or the hallucinations and delusions that torment schizophrenics, we should acknowledge that and be grateful for the limited but important degree of progress the psychopharmacological revolution has brought in its wake. Lithium to treat mania and to reduce the chances of recurrent episodes of bipolar disorder is another advance, though curiously American psychiatrists seem to be using it less and less.66 And we have reasonably good evidence that SSRIs can alleviate the symptoms of obsessive-compulsive disorder (as can CBT).67
Unfortunately, however, many of those who come to psychiatry seeking relief from their suffering fall into one of two other camps: there are those for whom whatever degree of symptomatic relief the drugs provide is offset by the side effects produced by the medications—so troubling and unbearable that these patients very often refuse further treatment or become noncompliant and fail to take their prescriptions—and another very large group for whom the drugs simply don’t work at all. Members of the last group either drop out of treatment and remain a source of distress and disturbance for themselves and those around them, or they suffer the problems psychiatric medications bring in their wake, without obtaining any corresponding benefit.
In some sense, speaking of first- and second-generation antipsychotics is quite misleading. Both categories are highly heterogeneous. The drugs combined under these labels are pharmacologically and clinically distinct. Efficacy, side effects, and costs all vary substantially. The degree to which patients tolerate different drugs, and the balance they strike between benefits and side effects, can vary markedly from person to person, and in conditions that are so distressing and disabling, as Stefan Leucht, of the Technische Universität München and one of the leading practitioners of meta-analysis, argues, “even a small benefit could be important.”68
At present, though, psychiatrists have no way of knowing which response any given patient will have when a particular pill is prescribed. Taking antipsychotics or antidepressants is thus a game of craps, and when the gamble turns up snake eyes, patients are usually either switched to another drug or have other drugs added to the cocktail of chemicals that are prescribed for them—so-called polypharmacy.69 But the odds of either approach working diminish rapidly. Worse still, the price of polypharmacy—treating patients as neurochemistry experiments—is a greatly increased chance of suffering major side effects, with little likelihood of a positive outcome. Yet again, those who suffer are offered desperate remedies and confronted with desperately poor outcome statistics. The sobering reality is that we are very far from possessing psychiatric penicillin, and we should not be seduced into thinking, as Jeffrey Lieberman put it in Shrinks, that “the modern psychiatrist now possesses the tools to lead any person out of a world of mental chaos into a place of clarity, care, and recovery.”70 Sadly, we don’t.
ACKNOWLEDGING THE LIMITATIONS of psychopharmacology, clinical psychologists and some psychiatrists have attempted to use either CBT or interpersonal therapy, which were initially developed as treatments for the milder forms of mental distress seen in office practice, as adjunctive therapies for psychosis and graver sorts of mental illness.71 The hope was that these forms of psychotherapy could lead to better outcomes than drugs alone. Small, unblinded trials produced initially encouraging results, prompting groups like NICE (the National Institute for Health and Care Excellence)—the organization that decides which interventions the British National Health Service should provide—to endorse the use of CBT for psychosis. Unfortunately, larger and better-designed studies, particularly those where the ratings of degrees of improvement are made blindly, have suggested that we should be much less sanguine about CBT’s value in treating psychosis. Indeed, some have gone further, dubbing the “vigorous advocacy of this form of treatment … puzzling” in light of accumulating evidence of its weak or nonexistent effects on the positive and negative symptoms of schizophrenia and on the propensity to relapse.72
Reviewing the results of CBT in schizophrenia, one well-designed meta-analysis of the published research found that it produced only a small improvement in patient functioning, one that was not sustained at follow-up. As for its ability to have beneficial effects on patients’ subjective feelings of distress and their overall quality of life, its contributions were weak at best. Indeed, not a single CBT trial has ever reported a rise in the quality of life among patients diagnosed with schizophrenia.73 A Cochrane review, widely seen as authoritative, likewise found CBT had “no advantage” in treating general psychiatric symptoms, delusions, and other so-called positive symptoms of schizophrenia. Nor did CBT have positive effects on the likelihood of relapse or rehospitalization.74
The evidence supporting the use of CBT in the treatment of bipolar disorder is similarly underwhelming, if slightly more positive. In the short run, it appears to reduce the risk of relapse and the severity of mania (though not of the depressive episodes). Longer term, however, these moderately positive outcomes disappear, and CBT does not appear to have persistent positive effects.75 While CBT appears to be more effective in major depression, the positive effects that have been recorded are unfortunately quite small.76 There is, it is true, considerable heterogeneity in the reported results, possibly because of the incoherence of the category of major depressive disorder, which lumps together a congeries of conditions with only an artificial label in common.
In most respects, the overall record of CBT in the treatment of psychosis is far from encouraging. In one very important respect, however, CBT does offer some promise. Whatever the drawbacks and failures of psychoanalysis, its attempts to grapple with psychosis did force its practitioners to listen to patients and to try to make sense of their condition. The rise of the DSM and of psychopharmacology reduced the psychiatric encounter to brief consultations that revolved around the prescription of medications and management of side effects. That essentially ended any serious attempt to listen to a patient’s concerns. In contrast, by its very nature, CBT, like other forms of psychotherapy such as interpersonal therapy and psychodynamic psychotherapy, does require actually listening to patients and giving legitimate weight and attention to their thoughts, feelings, and experiences. That, assuredly, is something we ought to welcome, even while recognizing that serious forms of mental disorder remain, for many, resistant to our best efforts to treat them.