CHAPTER TWENTY

The Complexities of Psychopharmacology

THE DRUGS REVOLUTION THAT BEGAN to transform psychiatry and the experience of patients in the 1950s was an accidental revolution. Few foresaw its arrival, and the discovery of a new approach to the treatment of schizophrenia and then of manic-depressive psychosis, depression, and a host of less devastating psychiatric syndromes was the product of serendipity, not rational design. Chance observations of the impact of chlorpromazine on psychiatric patients led to its introduction into the psychiatric arena. The equally fortuitous finding that tubercular patients treated with iproniazid or isoniazid became much more cheerful, even when faced with a grim diagnosis, led to the introduction of novel therapeutics for depression. In the late 1950s, these so-called MAOIs (monoamine oxidase inhibitors) were joined by a new class of antidepressants, the tricyclics.1 Both of these types of drugs were seen as treatments for what was thought to be a relatively rare but devastating form of mental illness, melancholia or endogenous depression. The malignant character of this disorder—with its overwhelming feelings of sadness and guilt, social isolation, periodic psychotic features marked by hallucinations and delusions, characteristic anhedonia, and increased suicidality—distinguished it from other forms of depression and anxiety that were far more common but were, at the time, viewed as best treated with psychotherapy. Since melancholia was seen as a comparatively rare disorder, drug companies were slow to bring these drugs to market and were ambivalent about doing so. It would be more than two decades before a new view of depression emerged. Then its prevalence began to skyrocket, until it came to be seen as the common cold of psychiatry.

Taken collectively, the antipsychotics, the first generation of antidepressants, and the minor tranquilizers like Miltown marked a radical shift in society’s response to mental disorders, minor and profound. No one knew why these new pills worked. That pharmaceutical interventions modified mental symptoms over time led many to reembrace earlier notions that mental illnesses were rooted in the body, but it was wholly unclear why the drugs had the effects they did, or what the hypothesized biological origins of mental disturbance might be.

These puzzles would become central research questions for academic psychiatry. Both federal-grant monies and drug-company funds underwrote this new agenda, and the hiring of a new generation of research psychiatrists who focused on the brain, or on the supposed genetic roots of mental illness, created a new field of academic psychiatry that bore little resemblance to its midcentury predecessor. A large gap—a veritable chasm—separated the world of these academic researchers from the great majority of the profession, who were still facing the clinical complexities of managing mental illness. The clinicians embraced the pharmaceutical interventions that the academic-industrial alliance produced, and they borrowed from the hypotheses of the neuroscientists to lend an aura of science to their day-to-day practice. Managed care in any event left them little choice. Psychotherapy was far too time-consuming and reimbursement rates from the insurance companies far too small to make that an option, save for those whose practices permitted them to collect fees directly from their patients. Ten- to fifteen-minute consultations to prescribe and check medications soon became psychiatrists’ standard modes of practice.2

When Thorazine was introduced to the marketplace in 1954, it was thought that the cells of the brain communicated through electrical impulses.3 By the 1960s, a rival hypothesis had essentially replaced that electrical model. Neurotransmitters, chemicals that had previously been seen as important only in the peripheral nervous system, were now cast as the principal underpinnings of activity in the brain. The gaps between the 100 billion or so neurons making up the human brain were, it became increasingly clear, bridged by chemical messengers that excited or inhibited other neurons, binding to receptors that absorbed the signal they provided. Perhaps the new psychiatric drugs produced their effects by acting on neurotransmission, altering either the production of the transmitters or the number and sensitivity of the neurons’ receptors. It was a hypothesis that emphasized the potential connections between psychopharmacology and basic research on the functioning of the brain, leading to the recruitment of increasing numbers of neuroscientists into academic departments of psychiatry.4

Though there had been a long history of research on the structure of the brain and nervous system, dating back at least as far as the work of Thomas Willis—the man who coined the word “neurologie” in the seventeenth century—the term “neuroscience” was first used at the Massachusetts Institute of Technology in 1962.5 Later that decade, self-identified neuroscientists created their first professional organizations—the British Neuroscience Association in 1968 and its American counterpart, the Society for Neuroscience, in 1969. Explosive growth of the field followed. The first meeting of the Society for Neuroscience in Washington, DC, in 1971 had 1,400 attendees. Its fiftieth meeting, in 2020, was expected to attract more than 30,000 attendees before COVID restrictions forced its cancellation. Departments of psychiatry have played no small part in the expansion of the field, and the presence and prominence of neuroscientists in academic psychiatry have contributed massively to the biological turn in psychiatry over this period.

An important indirect stimulus to the neuroscientific turn was provided by the work of the Swedish neuroscientist Arvid Carlsson. In the late 1950s, Carlsson discovered that dopamine functioned as a neurotransmitter and could control movement—a discovery that won him a Nobel Prize in 2000. He suggested that defects of dopamine might cause the symptoms of Parkinson’s disease, and though that suggestion was initially greeted by skepticism, postmortem examinations of patients dying from that disease later revealed a significant depletion of dopamine in their midbrains. In short order, others experimented with levodopa (L-DOPA), a precursor of dopamine that crosses the blood-brain barrier, as a treatment for Parkinson’s disease, and by 1970, it had become the standard therapy for the disorder. It remains so today, despite its serious side effects, and notwithstanding the fact that, while initially effective, it eventually fails to control patients’ symptoms, as the destruction of the nerve terminals that produce dopamine in the brain reaches critical levels.6

If a shortage of dopamine was behind the tremors of Parkinson’s disease, and those tremors could be alleviated by L-DOPA, might whatever therapeutic benefits Thorazine and other psychoactive substances produce also take place via some hitherto undiscovered transformations of the brain’s neurotransmitters? It was a reasonable hypothesis, and that dopamine might be involved was suggested by some of the side effects the phenothiazines were known to produce, which closely resembled the motor disorders characteristic of Parkinson’s disease. Many early enthusiasts for Thorazine had welcomed the appearance of these symptoms, and, thinking that they were critical to the drug’s therapeutic effects, had deliberately increased patients’ dosage until they appeared.7 It was a connection that strengthened as research continued to show that the phenothiazines acted to block dopamine receptors in the brain.8

That connection, which in light of decades of confirmatory research seems indisputable, soon prompted a more daring hypothesis: if Parkinson’s disease was the product of a deficiency of dopamine, perhaps schizophrenia was caused by too much dopamine or an overstimulation of dopamine receptors in the brain.9 For those seeking to replace psychoanalytic speculations with biological accounts of mental disorder, this was a theory with great attractions. It promised to root the most serious forms of mental disorder in the body, simultaneously to include psychiatry in one of the most exciting emerging scientific fields, and to link it to the laboratory world of chemistry and pharmacology. Not coincidentally, it was a theory that drew a warm welcome from the pharmaceutical industry, for whose products it provided powerful marketing copy.

Attempts to validate the theory, however, have produced a litany of disappointments.10 If there was excessive dopamine in the brains of schizophrenics, then their cerebrospinal fluid should show evidence of this excess. It does not. Hormones produced by the pituitary gland are responsive to dopamine levels in the brain, but once again the great bulk of the studies that have addressed this question have been unable to demonstrate differences between schizophrenics and control subjects on that front. Nor have postmortem studies of the brains of schizophrenics and controls shown evidence of elevated levels of dopamine or its metabolites in the schizophrenic patients.

In an attempt to rescue the theory, some suggested that it was not excess dopamine that produced schizophrenia, but the proliferation of receptors, which heightened the effects of a given level of dopamine. That more receptors have been found in the brains of schizophrenics lent some credence to the theory, except that other research shows that the blockade of dopamine, which is a principal effect of treatment with many antipsychotic drugs, precisely stimulates the production of such additional receptors. In other words, rather than being evidence of something that causes psychosis, the proliferation of these receptors is more plausibly seen as an iatrogenic process—the product of drug treatment, not evidence for its necessity. Then there was the awkward finding, as Joanna Moncrieff put it in her study of the marketing of antipsychotics, “already apparent in the 1970s that some drugs, such as Clozapine and thioridazine (Mellaril), which had relatively weak dopamine-blocking properties, were as effective as other antipsychotics.”11 Finally, there have been more recent attempts to suggest that variants in genes involved in the dopamine system influence the susceptibility to schizophrenia, but the large majority of studies of this sort have produced negative results, and in the studies that buck this trend, the gene effects are acknowledged to be very small.

Yet despite a paucity of research providing support for the dopamine theory (or theories) of schizophrenia, it remains almost an article of faith among many psychiatrists.12 Like those who once clung to a geocentric view of the universe, instead of abandoning the dopamine hypothesis, psychiatrists have repeatedly modified it in an ad hoc fashion to prevent its being disconfirmed.13

The dopamine theory of schizophrenia is not the only chemical theory proffered to explain a serious mental illness. Depression, too, came to be explained in similar terms. Indeed, whereas the suggestion that schizophrenia was caused in some fashion by dopamine was primarily embraced by the psychiatric profession, the claim that depression was caused by an imbalance of the chemical soup in the brain was adopted far more broadly, and this idea continues to be regularly echoed in the media and believed by the public at large.

The hypothesis that depression is caused by a deficiency of serotonin (another of the first neurotransmitters to be identified) was mooted as an explanation for why an earlier generation of antidepressants, the MAOIs, might alleviate its symptoms. This thesis was put forward with renewed vigor from 1988 onward, when a new class of antidepressant drugs came to market. The rapid spread and acceptance of Prozac and such copycat drugs as Zoloft and Paxil owed much to the pharmaceutical industry’s ability to promote the idea that depression was a purely physical ailment. It was, so their marketing insisted, a disorder that could be eliminated by making adjustments to the biochemistry of the brain.14 The drug companies proceeded to bombard physicians and then the public at large with advertisements promoting these claims. In the words of one of the early direct-to-consumer pamphlets produced to market Prozac, “Prozac doesn’t artificially alter your mood and it is not addictive. It can only make you feel more like yourself by treating the imbalance that causes depression.”15

Patients and their families proved a particularly alluring target, and advertising directed toward them helped sales to soar. The FDA had begun allowing direct-to-consumer advertising in the mid-1980s, and then significantly loosened the rules surrounding such advertising in 1997.16 Drug companies swiftly took advantage of the new regime, and the so-called SSRI antidepressants (selective serotonin reuptake inhibitors) were sold to the public as newly discovered wonder drugs that attacked the chemical basis of unhappiness, giving the brain back its serotonin and banishing the blues. Depression, so the advertising copy made it seem, could be banished, and patients made “better than well,” by ingesting these pills. Peter Kramer’s Listening to Prozac, published in 1993, and the first to advance this claim, was a publishing sensation, soon matched by Elizabeth Wurtzel’s memoir, Prozac Nation: Young and Depressed in America. Both did much to spread the gospel of the SSRIs.17

Pfizer was somewhat restrained in its Zoloft advertisements, though most would-be consumers probably overlooked its carefully hedged conditional clause. “Scientists believe,” the marketing copy read, “that [depression] could be linked with an imbalance of a chemical in the brain called serotonin.” The manufacturer of its rival, Paxil, showed no such restraint. Consumers were assured that “with continued treatment, Paxil can help restore the balance of serotonin.”18 Such bald assertions were permitted by a complaisant FDA, though, as scientists have since made clear, “there is no such thing as a scientifically established correct ‘balance’ of serotonin.”19 We possess no way of measuring serotonin levels in people’s brains, let alone of determining what “normal” levels of that neurotransmitter might be. As with the dopamine hypothesis, we must perforce rely on indirect evidence, and that indirect evidence fails to support the drug companies’ claims.20

Once again, the claimed connection between serotonin and depression was “rescued” by a series of ad hoc modifications attempting to explain why, for example, serotonin levels rise within a day or two of ingesting the drug, while changes in mood take weeks to materialize; or why measures of serotonin metabolites in the cerebrospinal fluid of depressed patients are all over the map, half of them within “normal” limits, a quarter higher than normal, and a quarter lower; or why SSRIs perform no better in clinical trials than the older tricyclic drugs or other pills, such as Wellbutrin, that rely on a completely different mode of action.21 (Wellbutrin is a norepinephrine-dopamine reuptake inhibitor with no effects on serotonin, unlike SSRIs such as Zoloft.)

Soon enough, once the drug companies had convinced the FDA that SSRIs could be used to treat social anxiety disorder, obsessive-compulsive disorder, and premenstrual dysphoric disorder, the Zoloft and Paxil websites aimed at consumers claimed that these “diseases” too were the product of serotonin deficiency. Kramer’s Listening to Prozac asserted that we had entered a new era of “cosmetic psychopharmacology.” Adding more serotonin to our brains would increase our self-confidence, our happiness, our creativity, our energy levels, our success in life. “Shy? Forgetful? Anxious? Fearful? Obsessed?” Newsweek asked its readers in a cover story on the wonders of Prozac, only to promise the secrets of “how science will let you change your personality with a pill.”22 Vice President Al Gore’s wife, Tipper, recounting her own experience with depression, provided just one prominent example of how quickly the chemical theory of depression had spread: “It was definitely a clinical depression, one that I was going to have to have help to overcome. What I learned is that your brain needs a certain amount of serotonin and when you run out of that, it’s like running out of gas.”23

How one ubiquitous neurotransmitter acted as life’s panacea, or alternatively could cause such a variety of psychiatric problems, was left wholly unexplained by both the pharmaceutical industry and the journalists who uncritically served as its echo chambers. The claims about serotonin depended on reasoning backward from observations about the efficacy of the drugs and knowledge of some of their effects on the brain. This backward reasoning came from the pharmaceutical industry, not psychiatry, and though it was an effective marketing ploy, it is a deeply unsatisfactory argument, as some counterexamples readily show. Many people are shy and withdrawn in social situations. Alcohol often lowers their inhibitions (sometimes with disastrous consequences) and has obvious physiological effects, as well as temporarily alleviating social awkwardness. Yet no one would sensibly argue that shy and introverted people are suffering from a deficiency of alcohol in their brains. Similarly, aspirin relieves headaches, but not because the brain is suffering from a deficiency of aspirin. The mechanism by which drugs work often does not map neatly onto the roots of the underlying pathology.


THE FACT THAT THORAZINE and its rivals produced symptoms that resembled Parkinson’s disease was swiftly acknowledged by psychopharmacologists, as we have seen. The emergence of these uncontrollable tremors was in many quarters seen as inescapably linked to the drugs’ efficacy. It was a price patients would have to pay to be relieved of their delusions and hallucinations. The new drugs were also found to blunt emotions, decrease initiative, and curtail movement. Fritz Freyhan, a pioneering psychopharmacologist, acknowledged that psychiatrists actively sought “transitions from hypermotility to hypomotility, which, in a certain proportion of patients, progressed to the more pronounced degrees of Parkinsonian rigidity”—and argued that “clinical evidence indicated that the therapeutic function of chlorpromazine and reserpine could not be separated [from these effects].”24 The Parisian psychiatrist Pierre Deniker, who played a major role in the introduction of phenothiazines into psychiatry, was blunter. He spoke of patients who “look as though they have been turned to stone” and argued that clinicians must “resolutely and systematically aim to produce neurological syndromes to get better results than can be obtained when neuroleptic drugs are given at less effective doses.”25

Those “neurological syndromes,” it turned out, were far more widespread and serious than these distressing echoes of Parkinson’s disease. A 1964 National Institute of Mental Health (NIMH) study of the safety and efficacy of the new antipsychotic drugs pronounced, on the basis of a six-week trial, that their unwanted side effects were “generally mild or infrequent.”26 That was a gravely mistaken conclusion. Reports of dizziness and drowsiness might be seen as relatively minor problems. Weight gain might be (mistakenly) dismissed. But many patients given the drugs became pathologically restless and unable to keep still, pacing up and down, exhibiting symptoms of extreme anxiety, often extending to panic and even violence and thoughts of suicide. Akathisia, as this syndrome was dubbed, sometimes persisted for months after the drugs were discontinued. More serious still was a condition known as tardive dyskinesia. As its name indicates, this syndrome emerged only after some time passed and sometimes was masked as long as the patient remained on antipsychotics. But in cases of long-term treatment, it afflicts between 20 and 60 percent of patients to varying degrees, and it is often irreversible.

Tardive dyskinesia is a profoundly disturbing and stigmatizing affliction. It involves facial tics, grimacing, grunting, protrusion of the tongue, smacking of the lips, rapid jerking and spasmodic movements, or sometimes slow writhing of the limbs, torso, and fingers. Naive observers often regard these as signs of mental illness. And it appears to be worsened by the prescription of drugs to control the symptoms of parkinsonism that accompany the use of antipsychotics. Data in one careful study suggested that 26 percent of older patients taking the drugs developed the disorder within a year of beginning therapy, with the cumulative figure reaching 60 percent within three years; 23 percent were diagnosed with “severe” symptoms of a disorder for which, even now, few effective treatments exist.27

Remarkably, during the first two decades of antipsychotic prescription, these serious problems were ignored or minimized by most of the psychiatric profession. Four years after the NIMH collaborative study had dismissed the side effects of phenothiazines as trivial and rare, Nathan Kline, referred to by some as “the father of psychopharmacology” and once a serious candidate for a Nobel Prize, asserted that these movement disorders were common in schizophrenia and reiterated that tardive dyskinesia was “not of great clinical significance.”28 It was a judgment echoed five years later by the long-term editor of Archives of General Psychiatry, Daniel X. Freedman. Dismissing the importance of tardive dyskinesia, he insisted that prevalence rates were low—3 percent to 6 percent—and the affliction was the “unavoidable price to be paid for the benefits of prolonged neuroleptic therapy.”29

The drugs’ ability to damp down the florid symptomatology of schizophrenia outweighed any concern over their side effects. Not until the Maryland psychiatrist George Crane published a paper on the subject in the pages of Science in 1973 did that complacency begin to evaporate. (Two of his earlier papers in 1967 and 1968, in which he had sought to warn his colleagues of an ongoing iatrogenic disaster, had been largely ignored.)30 By 1980, the profession had moved from “curiosity and mild concern to panic” about what seemed to be an epidemic.31 The American Psychiatric Association’s task force on the problem accepted that the incidence of tardive dyskinesia was at least 20 percent in adults, and 40 percent or more in the elderly, and that in as many as two-thirds of such patients, the symptoms persisted when treatment was discontinued.32 Accordingly, it recommended careful monitoring of patients taking antipsychotics, and minimizing the doses received—recommendations that seem to have been ignored by most practitioners, judging by the growing volume of prescriptions for antipsychotics during the 1980s and the doses that patients received.33 Even clinicians operating in a university setting were shown to be singularly poor at diagnosing tardive dyskinesia and other so-called extrapyramidal side effects, including uncontrollable muscular contractions, uncontrolled restlessness, and parkinsonism.34


IT WAS IN THIS CONTEXT that the Swiss pharmaceutical company Sandoz sought to bring a very old antipsychotic drug with a different mode of action back to the marketplace. Clozapine had been synthesized in 1956 by the Swiss company Wander, one of a number of compounds that drug companies developed in this period in an effort to compete with chlorpromazine. It was brought to market in Europe in the early 1970s, after Wander had been taken over by Sandoz. The delay was partly because of concerns about Clozapine’s propensity to cause abnormally low blood pressure and seizures. Ironically, there had also been hesitation about introducing the drug because it had many fewer extrapyramidal side effects, which were then considered to be an essential precondition for efficacy. (Extrapyramidal effects are physical symptoms, including tremors, slurred speech, uncontrollable restlessness [akathisia], uncontrollable muscular contractions [dystonia], and marked slowing of the thought processes, all of which are often accompanied by considerable anxiety and distress.) Before Sandoz could release Clozapine in the United States, evidence accumulated that it could also cause an often-fatal condition called agranulocytosis—a loss of white blood cells—in a significant fraction of the patients under treatment. In 1975, it was therefore voluntarily withdrawn from sale.

By the end of the 1980s, precisely because Clozapine was less likely to produce tardive dyskinesia and other extrapyramidal side effects (probably because it had relatively weak effects on dopamine and dopamine receptors in the brain), Sandoz changed course and requested permission from the FDA to introduce its drug to the American market. To bolster its case, it proffered evidence that Clozapine appeared to have positive effects in cases of schizophrenia that had not responded to other antipsychotics.35 With the assurance that this medication would only be used when other drugs had failed, and with careful weekly monitoring of patients’ blood to head off cases of agranulocytosis, the FDA gave approval for Clozapine to come to market, and it began to be prescribed in 1990.36

Over the ensuing decade, Clozapine was joined by a succession of other new compounds that the pharmaceutical industry cleverly sold as “atypical” or “second generation” antipsychotics.37 In clinical trials, these medications were matched against Haloperidol, a first-generation antipsychotic, and the data appeared to show they were less likely to provoke tardive dyskinesia and other side effects. By now, the first-generation drugs were off-patent, while these newer medications obviously were not and were thus hugely more profitable. Psychiatrists, insurance companies, and the federal government, seduced by claims that the new pills rescued patients from the debilitating side effects of the earlier drugs, and by assertions that they were clinically superior in the bargain, migrated rapidly to the new medications, and drug-company profits soared. Abilify, Risperdal, Seroquel, Zyprexa, and others became the new coin of the realm.

Almost simultaneously, SSRIs were transforming the market for antidepressants, replacing the MAOIs and the tricyclics that had hitherto been used to treat depression. That market was in any event poised to explode for an altogether different set of reasons.


IN COMPILING THE THIRD EDITION of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III), Robert Spitzer and his colleagues had been determined to erase all traces of psychoanalysis from what became psychiatry’s operating manual. In the process, they abolished the prior distinction between endogenous and neurotic depression, substituting a new “illness” they dubbed major depressive disorder (MDD). In so doing, they created a new landscape, where depression acquired a far greater salience for psychiatrists and patients alike. The generalized anxiety and unhappiness, the tension and worries of those previously seen as suffering from neurotic depression, were subsumed under the same diagnostic umbrella as what had previously been described as melancholia or endogenous depression, a far rarer and more disabling disorder. To qualify for the new diagnosis of MDD, a patient’s symptoms had to last only two weeks and be accompanied by an unhappy or dysphoric mood.

The checklist of symptoms that justified the diagnosis made no distinction between what most outside observers would regard as major disturbances (recurrent thoughts of suicide or death; overwhelming feelings of guilt and worthlessness; a profound hopelessness that was unresponsive to external changes; retarded movement, speech, and thought; and a more or less complete inability to think or make decisions about one’s life) and such things as insomnia or excessive sleep; loss of appetite and/or weight; decreased libido; a general feeling of fatigue or loss of energy; agitation or its opposite; and a general sense of lethargy. One could now be diagnosed with major depression after two weeks of unhappiness, unaccompanied by any of the major forms of disturbance. As Allan Horwitz notes, “The ease with which people could fulfill the five-symptom two-week criteria made MDD the target of the wildly popular SSRI antidepressant drugs that entered the market in the late 1980s.”38

Though unintended, the consequence of creating such a heterogeneous diagnosis was entirely predictable: the rapid rise in the prevalence of depression and its elevation to the status of one of the most common psychiatric diagnoses. By 2017, NIMH reported that 17.3 million adults in America had experienced a major depressive episode in the preceding year, along with a further 3.2 million adolescents between the ages of twelve and seventeen. Years after the fact, one of the DSM Task Force’s most prominent and influential members, Donald Klein, lamented “the plague of affective disorders that have descended on us”—a complaint he had raised at the time, only to be dismissed by Spitzer.39

An enormous market beckoned. As Edward Shorter notes, “The incidence of major depression in the United States more than doubled in the 1990s, rising from 3.3 [percent of the adult population] in 1991–92 to 7.1 per cent in 2001–2.”40 Drug companies moved rapidly to respond to the opportunity. Drugs under patent were obviously more desirable to them, because of the enormously increased profits they generated. SSRIs thus became the focus of an intense and highly focused marketing campaign. As with the second-generation antipsychotics, the move to prescribing the newer drugs was sold to psychiatrists as a way to avoid the well-documented side effects that dogged the earlier medications. MAOIs quite commonly produce dry mouth, dizziness, insomnia, or drowsiness and are also associated with weight gain, low blood pressure, and reduced sexual desire. But much more serious are the interactions with certain foods and drinks, including beer, aged cheese, cured meats, and fermented foods, which can cause extremely high blood pressure, as well as the risks that, if taken alongside certain other drugs and herbal supplements, MAOIs might provoke life-threatening complications.41 Tricyclics, the second group of antidepressants to be introduced in the 1950s, could cause disorientation or confusion, increased or irregular heartbeat, and a heightened propensity for seizures, as well as such problems as weight gain, blurred vision, dry mouth, constipation, and low blood pressure on standing.42 SSRIs, the pharmaceutical industry claimed, avoided most of these problems and yet restored brain chemistry to its “natural” state.43

Because atypical antipsychotics and SSRIs enjoyed patent protection, drug companies could charge enormous sums for these products, provided psychiatrists could be persuaded of their merits. That partially explains the huge growth of expenditures for these medications, though it is not the whole story. Looking first at the market for antipsychotics, the shift from the first-generation to the second-generation drugs was swift. Between 1996 and 2005, the percentage of the US population taking first-generation medications fell from 0.6 percent to 0.15 percent. Simultaneously, the percentage taking second-generation pills increased from 0.15 to 1.06 percent, or nearly sevenfold.44 By 2004, sales of atypical antipsychotics in the United States had reached $8.8 billion, and by 2008, that had risen again to $14.6 billion, making them the best-selling drugs in the country by therapeutic class. Five years later, annual sales of just Abilify, then the best-selling atypical antipsychotic, had reached $7 billion.

The market for antidepressants saw a similar pattern of explosive growth, paralleling ever-increasing estimates of the prevalence of depression. At the turn of the century, 7.7 percent of Americans over the age of twelve were taking antidepressants. By 2013, this had risen to 12.7 percent, an increase of almost two-thirds. By the latter date, nearly one in five people over the age of sixty were taking them. Two-thirds of those taking these drugs had done so for two years or more, and a quarter of them had been taking the pills for a decade or more.45 NIMH data for 2017 again reported a marked difference by gender, with an incidence of major depression of 8.7 percent among women eighteen and older, and 5.5 percent among men.

Such numbers suggest how deeply the everyday practice of American psychiatry had become intertwined with psychopharmacology. But they also reflect the fact that much of the increased prescription of these powerful chemicals came from primary-care physicians, who encountered patients with a range of presenting problems and increasingly responded with doses of antidepressants or even antipsychotics.46 A substantial part of the growing market for psychiatric drugs of all sorts had been prompted by a further broadening of the kinds of disorders for which these pills were prescribed. Antipsychotics were sold as remedies for the increasingly popular and protean diagnosis of bipolar disorder, which was extended, as we shall see, to the very young. Abilify and other atypical antipsychotics were then touted as adjunctive therapy for patients suffering from depression, to be added to the doses of antidepressants that patients were already on if their symptoms failed to improve.


THE MARKET FOR ANTIDEPRESSANTS ballooned as all sorts of everyday anxieties, emotional upsets, and phobias became targets for the drug companies. Zyprexa, for example, was marketed as the solution for patients displaying “complicated mood symptoms,” such as “anxiety, irritability, disruptive sleep, and mood swings”—relatively minor issues that extended far beyond the criteria for diagnosing schizophrenia and bipolar disorder, the only “illnesses” for which FDA approval had been granted. Quite correctly, Eli Lilly saw these mild problems as a powerful way “to expand our market,” part of its drive to make Zyprexa “the world’s number one neuroscience pharmaceutical in history.”47

On occasion, these domains expanded after the pharmaceutical industry obtained FDA approval for broadened applications for their products. Very often, though, the expansion came through the subtle encouragement of “off-label” applications for drugs already on the market—applications for which no approval had been sought and for which no systematic evidence of efficacy and safety had been offered or obtained.48 Once the FDA has approved the prescribing of a drug for one set of applications, there is little to preclude doctors from prescribing it for other purposes. They simply have to be persuaded to do so. At one end of the age spectrum, difficult and demented elderly patients were an almost irresistible target for interventions of this sort. At the other, a diagnosis of childhood bipolar disorder was manufactured, soaring in popularity.49

Poor children from families on Medicaid are especially likely to receive a psychiatric diagnosis and to be placed on psychotropic medications at a very young age. A cohort study of over 35,000 children born to poor families on Medicaid found that by the age of eight, 19.7 percent of the children had received a psychiatric diagnosis, and 10.2 percent had received a psychotropic medication.50 Between 2003 and 2011, the numbers diagnosed with attention-deficit/hyperactivity disorder (ADHD) rose by 41 percent, to 11 percent of all children. A recent study found that the United States accounted for more than 92 percent of the worldwide expenditures for ADHD-treatment drugs. Nearly one in five American high school boys and one in eleven American high school girls had been diagnosed with ADHD, by far the highest incidence in the world.51

Drug companies themselves were legally barred from promoting these off-label uses (though this prohibition was frequently circumvented when the financial temptations proved overwhelming).52 As a rule they preferred to rely on prominent academic psychiatrists, so-called thought or opinion leaders, to promote new uses for their products beyond those the FDA had already approved. Such figures can prove invaluable to the marketing departments of Big Pharma. Apparently disinterested and independent, but in fact deeply indebted to the companies who fund their research, pay them consulting fees, and advance their careers, these academics can transform the range of conditions the drugs they promote are used for, all the while allowing their corporate sponsors to disclaim responsibility for the recommendations they make.

Joseph Biederman, chief of the Clinical and Research Programs in Pediatric Psychopharmacology and Adult ADHD at the Massachusetts General Hospital and professor of psychiatry at Harvard Medical School, and his associates at Harvard almost single-handedly transformed bipolar disorder from a condition seldom found among young children to a disorder of epidemic proportions.53 In 1994, the fourth edition of the DSM revised the definition of the syndrome. The most important difference was a move from seeing bipolar disorder as a single monolithic entity with only one set of diagnostic criteria to a division into two, with each having its own separate set of diagnostic criteria. A decade later, the number of young patients with this diagnosis had multiplied forty-fold.54 Biederman and his team pioneered the off-label prescribing of powerful antipsychotic and antidepressant drugs for these children (even preschool-aged children), though for many of these chemicals there had not been even a modicum of testing for safety and efficacy in this population, and for others, such as paroxetine (Paxil), the tests had shown that the side effects of the drugs outweighed their benefits.55 The Harvard academics received millions of dollars in drug-company fees and funding, while concealing their conflict of interest from the university.56

Biederman was equally aggressive in promoting the use of stimulants like Adderall, Concerta, and Ritalin for children diagnosed with ADHD—another disorder whose prevalence has skyrocketed. Data from the Centers for Disease Control and Prevention show that 15 percent of high school children are now diagnosed with the condition, and 3.5 million children now take these medications. According to the New York Times, Biederman is

well known for embracing stimulants and dismissing detractors. Findings from Dr. Biederman’s dozens of studies of the disorder and specific brands of stimulants have filled the posters and pamphlets of pharmaceutical companies that financed the work. Those findings typically delivered three messages: The disorder was underdiagnosed; stimulants were safe and effective; and unmedicated A.D.H.D. led to significant risks for academic failure, drug dependence, car accidents and brushes with the law. Drug companies used the research of Dr. Biederman and others to create compelling messages for doctors. “Adderall XR Improves Academic Performance,” an ad in a psychiatry journal declared in 2003, leveraging two Biederman studies funded by Shire [the drug’s manufacturer]. A Concerta ad barely mentioned A.D.H.D., but said the medication would “allow your patients to experience life’s successes every day.”57

Uncontroverted evidence shows that Biederman engaged in a prolonged campaign to get Johnson & Johnson (J&J) to fund a research center at the Massachusetts General Hospital, promising in documents revealed in a court filing that one of its goals would be to “move forward the commercial goals of J&J.” An internal email from one of the company’s executives explained that “the rationale for this center is to generate and disseminate data supporting the use of risperidone” for the treatment of ADHD and bipolar disorder in children and adolescents.58 The pitch was successful. In 2002 alone, J&J provided $700,000 to the center. Biederman delivered. Ahead of conducting a proposed trial in 2004 of the use of Risperdal to treat ADHD, he promised that the research “will support the safety and effectiveness of risperidone in this age group.” Equally important from J&J’s standpoint, he assured the company that the trial “will clarify the competitive advantages of risperidone vs. other neuroleptics.”59 He was clairvoyant. A year later, he produced a paper comparing risperidone to Zyprexa, a rival neuroleptic manufactured by Eli Lilly. The paper concluded that risperidone, but not Zyprexa, improved children’s depressive symptoms.60

In the words of a Columbia child psychiatrist, David Shaffer, the author of a classic textbook on child developmental psychology, “Biederman was a crook. He borrowed a disease and applied it in a chaotic fashion. He came up with ridiculous data that none of us believed”—but that was swallowed by many in the media and by desperate parents. “It brought child psychiatry into disrepute and was a terrible burden on the families of the children who got that label.”61

When these revelations surfaced, they prompted an editorial in the New York Times decrying Biederman’s “appalling conflicts of interest. [I]t is hard to know whether he has been speaking as an independent expert or a paid shill for the drug industry.”62 Nature Neuroscience echoed these concerns, speaking of an “ethical crisis” that was “particularly dangerous in child psychiatry as the potential consequences of treating the developing mind with powerful drugs are both less well understood and potentially more severe than in adults.”63 Biederman’s response was that Harvard was currently reviewing the claims of conflicts of interest “and fairness dictates withholding judgment until that process has been completed.”64 Three years later, it was. Biederman and his two associates were found to have violated Harvard’s policies. As punishment, they received what amounted to a slap on the wrist: they were required to refrain from paid industry-sponsored activities for a year and to undergo ethics training; they were also told they might suffer a delay in consideration for promotion and advancement—a meaningless sanction for Biederman, since he was already a full professor.65

Demented and disruptive elderly patients, particularly those confined in nursing homes, were likewise potentially an extremely lucrative market, but the pharmaceutical companies had a difficult time securing FDA approval for the use of antipsychotics and antidepressants in such cases. Those attempting to cope with these patients, however, were an easy mark for drug salesmen, who could tout the drugs’ presumed ability to calm and pacify their aggressive and agitated charges. A 1992 study found that 25 percent of nursing home residents were being prescribed antipsychotic drugs.66

Eli Lilly had initially hoped to obtain FDA approval for the use of Zyprexa for dementia, but by 2003, unable to muster the necessary data, it abandoned the pursuit—but not its efforts to exploit the commercial opportunity this largely captive population represented.67 Lilly was scarcely alone. Other major drug houses were equally keen to access this market. The advent of second-generation antipsychotics intensified these efforts, since this group of drugs was alleged to mitigate the risks of adverse effects when compared with the original antipsychotics. Unfortunately, as trials were conducted, they revealed a heightened mortality rate for those on the drugs, with a pattern of accelerated cognitive decline among those being medicated. Nor did the pills have any demonstrable advantage over placebos, except when used on a short-term basis to sedate angry and aggressive patients.68


FOR THE YOUNG AND OLD, and everyone in between, treatment at the hands of most psychiatrists now revolves around the prescription of a variety of drugs. A remnant of the profession still employs some forms of psychotherapy, and if they can attract a sufficiently affluent patient population, these psychiatrists will offer the kinds of psychosocial interventions that fifty years ago were the profession’s bread and butter. Mostly, though, those interventions have become the province of clinical psychologists and social workers, who accept lower pay and must strive to attract clients in an environment where the predominant message is that mental troubles are brain diseases for which drugs are the logical form of treatment.

There is, it turns out, a not-inconsiderable market for their services. Parents worried about the prescription of drugs with uncertain long-term effects on their children often turn to those employing a variety of psychotherapeutic techniques to treat their offspring. In many cases, except in the most serious forms of disturbance, the cognitive-behavioral therapy and related techniques clinical psychologists provide prove sufficiently successful in resolving the behavioral problems that bring families to the consulting room. Adults who obtain little relief from drug treatments, or find the side effects of psychopharmacology intolerable, provide still another source of clients. So a parallel market for psychological counseling persists and flourishes, and many families profess themselves satisfied.

In the first decades of the twenty-first century, psychiatrists, whether academics or clinicians, have found their fate closely bound up with psychopharmacology and the drug industry. Over the past half century, they have tied their practice to a diagnostic system that has implied a biologically reductionist approach to mental illness. It is a diagnostic system that proved highly useful to the pharmaceutical industry, cementing the notion that the various forms of mental illness were discrete diseases, each potentially treatable with its own class of chemicals. And for a time, it helped resolve an earlier crisis of legitimacy, when it seemed that psychiatrists had trouble reaching basic agreement about what was wrong with a particular patient—or whether anything was amiss at all.

Those seeming certainties are now crumbling. As we shall see, the activities and products of the pharmaceutical industry have been coming under sustained critical and legal scrutiny, and the limitations of psychopharmacology have become ever more manifest. Having underwritten psychiatric practice and academic psychiatry for several decades, Big Pharma is increasingly distancing itself from the search for improved chemical remedies for mental illness. Simultaneously, attempts to rescue and improve the DSM are foundering. Once more, psychiatry is in crisis.