8
THE STORY WE TOLD OURSELVES
How then are we to help “schizophrenics”? The answer is simple: Stop the lies!
THE STORY OF neuroleptics as drugs that induced a brain pathology, similar in kind to encephalitis lethargica and Parkinson’s disease, is one that can easily be dug out from the medical literature. It’s all there—the early comparisons to those two diseases, the biological explanation of how neuroleptics sharply impaired dopamine transmission, the importance of that dopamine activity to normal brain function, and the array of deficits, both short-term and long-term, produced by that drug-caused pathological process. Yet that story is not one that American psychiatry, once it had embraced neuroleptics in the early 1960s as safe and effective, was poised to tell, either to itself or to the American public. The country had put its faith in the drugs, and doctors were understandably intent on perceiving the drugs as effective, and at least somewhat safe. But keeping that belief aloft required mental juggling of the most agile sort, and more than a little talent for self-delusion, poor science, and—ultimately—outright deceit.
A Tale of Biology
Mad-doctors, of course, have always constructed “scientific” explanations for why their treatments worked. Benjamin Rush drained the blood from his patients and reasoned that madness was caused by too much blood flow to the brain. Henry Cotton removed his patients’ teeth and other body parts and argued that it cleansed them of madness-causing bacteria. Manfred Sakel stumbled on insulin coma as a treatment and concluded that the treatment miraculously killed the “diseased” brain cells responsible for psychosis. Lobotomy, Egas Moniz said, destroyed nerve fibers where obsessive, paranoid thoughts were stored. Once it was learned that neuroleptics blocked dopamine receptors, psychiatrists reasoned that schizophrenics likely suffered from overactive dopamine systems. The treatment begat the theory of illness, and not vice versa.
As a result of this hypothesis, by the early 1970s patients and their families were regularly hearing this spiel: “I would explain that mental illness is caused by a chemical imbalance in the brain,” recalled Susan Kemker, a staff psychiatrist at North Central Bronx Hospital in New York. “Mental illness resembles diabetes, which involves a chemical imbalance in the body, I would explain. The patient’s psychiatric disorder is chronic, I would say, and requires medication every day for the rest of the person’s life. I would then assure the patient that if he took the medication, he would probably live a more normal life.”2
Although neuroleptics clearly reduced dopamine activity in the brain to a pathological level, there was still the possibility that schizophrenics started out with hyperactive dopamine systems. Dopamine transmission in the brain works in this manner: A “presynaptic” neuron releases the neurotransmitter into the synaptic cleft (the space between neurons), and then the neurotransmitter binds with receptors on a “postsynaptic” neuron. The dopamine hypothesis suggested that either the presynaptic neurons were releasing too much of the neurotransmitter or else the postsynaptic neurons had too many receptors and thus were “hypersensitive” to dopamine.
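The logic of the hypothesis, and of what receptor-blocking drugs do to it, can be put in schematic form. The sketch below is a toy calculation only; the numbers are arbitrary placeholders, and it is in no way a physiological model. It simply captures the point that the postsynaptic “signal” depends both on how much dopamine is released and on how many receptors are available to receive it, and that a neuroleptic cuts that signal by occupying a large share of those receptors.

```python
# A purely illustrative sketch of the hypothesis's logic, not a physiological
# model. The "signal" is treated as release multiplied by the receptors
# available to bind it; all numbers are arbitrary placeholders.

def dopamine_signal(release: float, receptor_density: float,
                    fraction_blocked: float = 0.0) -> float:
    """Crude proxy for postsynaptic dopamine signaling.

    release          -- presynaptic dopamine release (arbitrary units)
    receptor_density -- postsynaptic receptor count (arbitrary units)
    fraction_blocked -- share of receptors occupied by a neuroleptic
    """
    available_receptors = receptor_density * (1.0 - fraction_blocked)
    return release * available_receptors

baseline = dopamine_signal(1.0, 1.0)

# The two variants of the hypothesis: excess release,
# or excess ("hypersensitive") receptor density.
excess_release = dopamine_signal(1.5, 1.0)
excess_receptors = dopamine_signal(1.0, 1.5)

# What a neuroleptic does in either case: occupy a large share of the
# receptors, sharply cutting the signal wherever the excess (if any) lies.
on_neuroleptic = dopamine_signal(1.0, 1.0, fraction_blocked=0.7)

print(baseline, excess_release, excess_receptors, on_neuroleptic)
```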
To explore the first possibility, investigators measured levels of dopamine metabolites (or breakdown products) in their patients’ blood, urine, and cerebrospinal fluid. (Measuring the levels of metabolites provides an indirect gauge of dopamine release in the brain.) One of the first such studies was done in 1974 by Malcolm Bowers, at Yale. He determined that levels of dopamine metabolites in unmedicated schizophrenics were quite normal. “Our findings,” he wrote, “do not furnish neurochemical evidence for an overarousal in these patients emanating from a midbrain dopamine system.” However, his published results did show one other startling truth: Dopamine turnover markedly increased after people were medicated. This was evidence, in essence, of a “normal” brain trying desperately to cope with the drug’s blocking of its dopamine signals.3
Others soon reported similar findings. In 1975, Robert Post at the NIMH found no evidence of elevated dopamine levels in twenty nonmedicated schizophrenia patients compared to healthy controls. Three different research teams determined in autopsy studies that drug-free schizophrenics apparently had normal dopamine levels. Meanwhile, pharmacologists at St. Louis University School of Medicine and elsewhere fleshed out the pathology in dopamine transmission caused by the drugs. In response to the dopamine blockade, presynaptic dopaminergic neurons apparently went into hyper gear for about three weeks, pumping out more dopamine than normal. Then the brain cells, as if they were burning out, gradually slowed down to the point where they were releasing less dopamine than normal. Some dopaminergic cells turned quiescent, and others began firing in irregular patterns.4
There was one other unsettling twist to the dopamine story: A number of research teams, including one at the NIMH, determined that dopamine turnover in some unmedicated chronic schizophrenics was abnormally low, which spurred some to characterize schizophrenia as a dopamine-deficiency disease. If so, then neuroleptics would exacerbate this problem. All of this led UCLA neuroscientist John Haracz to gently conclude in 1982: “Direct support [for the dopamine hypothesis] is either uncompelling or has not been widely replicated.”5
Having failed to find that schizophrenics had abnormally high levels of dopamine, researchers turned their attention to whether their postsynaptic neurons had too many dopamine receptors. At first blush, researchers found just that. In 1978, Philip Seeman and Tyrone Lee at the University of Toronto reported in Nature that at autopsy, the brains of schizophrenics had at least 50 percent more dopamine receptors than healthy controls. But the patients studied had been on neuroleptics, and, as Seeman and Lee readily acknowledged, it was possible that the neuroleptics had caused the abnormality. Animal studies and other postmortem studies soon revealed that was indeed the case. Investigators in the United States, England, and Germany all determined that taking neuroleptics led to an increase in brain dopamine receptors, and they found little evidence of higher-than-normal receptor levels prior to drug use. “From our data,” German investigators wrote in 1989, “we conclude that changes in [receptor density] values in schizophrenics are entirely iatrogenic [drug caused].”6
Fifteen years of investigation into dopamine function in schizophrenics had produced a rather disturbing truth. Researchers had speculated that schizophrenics naturally suffered from overactive dopamine systems but had found that this wasn’t so. As John Kane, a well-known researcher at Long Island Jewish Medical Center in New York, confessed in 1994, “a simple dopaminergic excess model of schizophrenia is no longer credible.” He noted that even Arvid “Carlsson, who first advanced the hypothesis, [has] concluded that there is ‘no good evidence for any perturbation of the dopamine function in schizophrenia.’”7 Yet investigators had found that the drugs profoundly hindered dopamine function and also caused a pathological increase in dopamine receptors in the brain, the very abnormality hypothesized to cause schizophrenia in the first place. In a sense, the drugs were agents that turned a normal brain into a schizophrenic one.
But that story was never told to the public. The public had been sold on a medical paradigm of a different sort, and on August 18, 1996, a consortium of pharmaceutical companies placed an ad in the New York Times assuring the public that scientific studies had found that neuroleptics worked just as promised:
Scientists now know that the causes of schizophrenia and psychosis are often rooted in powerful chemicals in the brain called neurotransmitters. One of these neurotransmitters is dopamine. Schizophrenia and psychosis can result when the brain has abnormal dopamine levels. Because of recent advances, drugs that are able to alter dopamine levels free many patients from the terrible effects of mental illness.8
A scientific hypothesis, genuine in kind, had finally given way to a bald-faced lie.
They Do Prevent Relapse, Don’t They?
The dopamine hypothesis was one part of the science tale constructed, from the 1960s forward, that maintained the image of neuroleptics as helpful medications. A second part of the story was that the drugs had been repeatedly proven to be effective in two ways: They knocked down acute episodes of psychosis and greatly lowered the risk that patients would relapse. In his 1983 book Surviving Schizophrenia, E. Fuller Torrey explained to families: “The fact that antipsychotic drugs work is now well established. They reduce symptoms of the disease, shorten the stay in the hospital, and reduce the chances of rehospitalization dramatically.”9
Yet, as even mainstream psychiatry began to obliquely confess in the 1990s, this claim of efficacy had been built upon a scientific house of cards.
When a new medical treatment comes along, the usual thing that researchers do is compare it to existing remedies (as well as to placebo). Before neuroleptics arrived, sedatives of various kinds had long been used in asylum settings to curb acute psychotic episodes and were regularly said to be fairly effective. In the 1800s and early 1900s, numerous articles appeared in medical journals touting the benefits of opium, barbiturates, and bromides. One would expect, then, that by the 1980s there would be numerous studies in the medical literature documenting the superiority of neuroleptics. Yet in 1989, when Paul Keck and other Harvard Medical School physicians scoured the literature for well-designed studies that compared the efficacy of neuroleptics to sedatives over a controlled period of time, they could find only two. And in those studies, “both treatments produced similar symptomatic improvement in the first days, and perhaps weeks, of treatment.”10 Their report was so unsettling to accepted wisdom that one physician wrote in stunned response: “Has our clinical judgment about the efficacy of antipsychotics been a fixed, encapsulated, delusional perception . . . Are we back to square one in antipsychotic psychopharmacology?”11 Forty years after neuroleptics were introduced, and still there was no convincing proof that the drugs were any better at knocking down psychosis than old-fashioned opium powder.
At first glance, the second part of the efficacy question—do neuroleptics prevent relapse?—seems to be a very confused issue. On the one hand, the studies by Bockoven, Carpenter, Rappaport, and Mosher indicate that the use of neuroleptics increases the risk of relapse. Yet at the same time, there are scores of studies in the medical literature that have seemingly reached just the opposite conclusion. In 1995, Patricia Gilbert and her colleagues at the University of California at San Diego reviewed sixty-six relapse studies, involving 4,365 patients, and summed up the collective evidence: Fifty-three percent of patients withdrawn from neuroleptics relapsed within ten months, versus 16 percent of those maintained on the drugs. “The efficacy of these medications in reducing the risk of psychotic relapse has been well documented,” they wrote.12
There is an answer to this puzzle, and it is a revealing one. Bockoven found low relapse rates in patients who had never been exposed to neuroleptics. In a similar vein, the studies by Rappaport, Mosher, and Carpenter involved patients who, at the start of the experiment, were not on neuroleptics but were then treated either with placebo or a neuroleptic. And in those studies, relapse rates were lower for the placebo group. In contrast, the sixty-six studies reviewed by Gilbert were drug-withdrawal studies. In those trials, patients stabilized on neuroleptics would be divided into two cohorts: One would keep on taking the drugs and the other would not, and the studies reliably found that people withdrawn from their neuroleptics were more likely to become sick again. Thus, together these studies suggest that relapse rates fell into three groups: lowest for those not placed on neuroleptics in the first place, higher for those who took the drugs continuously, and highest of all for those withdrawn from the drugs.
However, there’s still more to be added to this relapse picture. The studies reviewed by Gilbert were designed in ways that grossly exaggerated the difference in relapse rates between drug-maintained and drug-withdrawn patients. First, in two-thirds of the studies, the patients were abruptly withdrawn from neuroleptics, and abrupt withdrawal—as opposed to gradual withdrawal—dramatically increased the risk that patients would become sick again. In response to Gilbert’s report, Ross Baldessarini of Harvard Medical School reanalyzed the same sixty-six studies, only he divided the drug-withdrawn cohort into “abrupt-withdrawal” and “gradual-withdrawal” groups. He found that the relapse rate in the abruptly withdrawn group was three times higher than in the gradual group. In other words, the high 53-percent relapse rate reported by Gilbert for drug-withdrawn patients was, in large part, created by the design of the sixty-six studies. Indeed, in a further review of the relapse literature, Baldessarini and his Harvard colleagues found that fewer than 35 percent of schizophrenia patients gradually withdrawn from their drugs relapsed within six months and that those who reached this six-month point without becoming sick again had a good chance of remaining well indefinitely. “The later risk of relapsing [after six months] was remarkably limited,” the Harvard researchers concluded, and they also provided a biological explanation for why this might be so. After the drugs leave the system, they noted, D2 receptor densities in the brain may revert to more normal levels, and once this happens, the risk of relapse decreases, returning to a level that “may more nearly reflect the natural history of untreated schizophrenia.”13
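How much of that gap study design alone can produce is easy to see with a back-of-the-envelope sketch. It simply mixes the figures above (two-thirds of the studies withdrawing patients abruptly, and a roughly threefold difference in relapse between abrupt and gradual withdrawal) under the admittedly crude assumption that every study carries equal weight; it is an illustration of the arithmetic, not a reanalysis of the sixty-six studies.

```python
# A back-of-the-envelope sketch of how study design can inflate the pooled
# relapse figure. Assumptions (not drawn from Gilbert's or Baldessarini's
# actual analyses): every study carries equal weight, and the abrupt-withdrawal
# relapse rate is exactly three times the gradual-withdrawal rate.

POOLED_RATE = 0.53        # Gilbert's relapse rate for drug-withdrawn patients
SHARE_ABRUPT = 2.0 / 3.0  # two-thirds of the studies withdrew patients abruptly
ABRUPT_TO_GRADUAL = 3.0   # Baldessarini's roughly threefold difference

# pooled = share_abrupt * (3 * gradual) + share_gradual * gradual
# Solve for the gradual-withdrawal relapse rate implied by the pooled figure.
gradual = POOLED_RATE / (SHARE_ABRUPT * ABRUPT_TO_GRADUAL + (1 - SHARE_ABRUPT))
abrupt = ABRUPT_TO_GRADUAL * gradual

print(f"implied gradual-withdrawal relapse rate: {gradual:.0%}")  # ~23%
print(f"implied abrupt-withdrawal relapse rate:  {abrupt:.0%}")   # ~68%
```

On those assumptions, the 53 percent headline figure is consistent with a gradual-withdrawal relapse rate in the low twenties, a far less alarming number than the one the public heard.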
The second flaw in the sixty-six relapse studies was that the low relapse rate for drug-maintained patients—16 percent over a one-year period—was also an artifact of trial design. In the real world, up to 30 percent of hospitalized patients don’t respond to neuroleptics. Among those who do and are discharged, more than one-third relapse within the next twelve months and need to be rehospitalized, even though they reliably take their medications. Thus, fewer than 50 percent of people who suffer a schizophrenic break respond to standard neuroleptics and remain relapse-free for as long as a year after discharge. But the relapse studies, to a large degree, were conducted in this select cohort of good responders. It was this group of patients that would be divided into drug-maintained and drug-withdrawn cohorts, and naturally relapse rates for those who stayed on neuroleptics could be expected to be low. In 1998, Gerard Hogarty at the University of Pittsburgh pointed out just how misleading the drug-maintained relapse rates were: “A reappraisal of the literature suggests a 1-year, post-hospital, relapse rate of 40 percent on medication, and a substantially higher rate among patients who live in stressful environments, rather than earlier estimates of 16 percent.”14
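The arithmetic behind that “fewer than 50 percent” figure can be spelled out with the round numbers just cited. The sketch below uses a nominal cohort of one hundred patients; it illustrates the proportions in the paragraph above rather than any particular study’s data.

```python
# A minimal worked version of the arithmetic in the paragraph above, using
# the round figures cited there (the exact proportions vary by study).

hospitalized = 100                     # a nominal cohort of first-break patients
non_responders = 0.30 * hospitalized   # "up to 30 percent" don't respond
responders_discharged = hospitalized - non_responders

relapse_within_year = responders_discharged / 3   # "more than one-third" relapse
stay_well_one_year = responders_discharged - relapse_within_year

print(round(stay_well_one_year))  # ~47 of 100, i.e. fewer than 50 percent
```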
In sum, the sixty-six relapse studies were biased in ways that provided a totally false picture of the merits of neuroleptics. The studies only compared results for drug-treated patients (as opposed to patients never put on neuroleptics), and even within this model of care, the studies painted a false picture. The relapse rate for the drug-withdrawn group was artificially raised by taking patients abruptly off their medications, while the relapse rate for the drug-maintained group was artificially lowered by selecting patients who had already shown that they could tolerate the drugs fairly well. The utter irrelevance of the studies to real-world care shows up dramatically in rehospitalization rates. By one estimate, more than 80 percent of the 257,446 schizophrenia patients discharged from hospitals in 1986 had to be rehospitalized within two years, a rehospitalization rate much higher than for “never-exposed” patients, or—as can be seen by the data above—for those gradually withdrawn from neuroleptics.15 The 16-percent relapse rate touted in the medical journals was a helpful number for the tale of efficacy that needed to be woven in support of neuroleptics, but it was a statistic derived from science of the worst sort, and it totally misled the public about what was really happening to drug-treated patients.
See No Evil
The third cornerstone of the story we told ourselves about neuroleptics was that the drugs were relatively safe. In 1964, the NIMH specifically declared that side effects with the drugs were “mild and infrequent . . . more of a matter of patient comfort than of safety.” Torrey, in his 1983 book, even reiterated the point, assuring families that “antipsychotic drugs are among the safest group of drugs known.”16 But keeping this part of the story afloat for nearly forty years proved particularly difficult. It required that the FDA and American psychiatry turn a blind eye for as long as possible to evidence that the drugs frequently caused tardive dyskinesia and, on occasion, a fatal toxic reaction called neuroleptic malignant syndrome.
From the very beginning, there had been reason to suspect that neuroleptics would cause long-term harm. In the 1930s, first-generation phenothiazines had been used in agriculture as insecticides and to kill parasitic worms in swine. That was their pre-clinical history—as agents toxic to bugs and parasites. French chemists then developed chlorpromazine as an agent that could help numb the nervous system during surgery. And once chlorpromazine was used in mental patients, it was observed to cause symptoms similar to Parkinson’s disease and encephalitis lethargica. After Smith Kline’s success with chlorpromazine, other pharmaceutical companies brought new and more powerful neuroleptics to market by selecting compounds that reliably induced catalepsy—a lack of motor movement—in animals. The agents were neurotoxic by design. Then, in 1959, the first report appeared linking neuroleptics to irreversible motor dysfunction. This side effect was given the name tardive dyskinesia a year later, and over the next decade, nine studies found that it affected more than 10 percent of all patients, with one suggesting that it might afflict 40 percent of those who got the medications on a constant basis.17
And yet the mentally ill were not being told of this risk.
The FDA warns the public about a drug’s risks by requiring the pharmaceutical company to detail them on the drug’s label. Even a side effect that occurs in only 1 percent of patients is considered common and must be warned about. By this standard, tardive dyskinesia was a very common disorder, and yet, throughout the 1960s, the FDA did not require drugmakers to warn the public. Their drug labels typically devoted a single sentence to possible permanent neurological side effects, didn’t mention tardive dyskinesia by name, and—despite the reports in the literature concluding that it could affect up to 40 percent of patients—dismissed such problems as uncommon. In 1968, an NIMH scientist, George Crane, published a review of tardive dyskinesia in the widely read American Journal of Psychiatry, and still the FDA didn’t sound the alarm. Finally, in 1972—thirteen years after the first case report of tardive dyskinesia appeared in the literature—the FDA asked the drug companies to update their labels.
Psychiatry as a profession was proving equally reluctant to acknowledge this problem. In the early 1970s, Crane began something of a crusade to bring this problem to the fore. He wrote about tardive dyskinesia on several occasions, and yet each time he did, his colleagues responded by suggesting that he was making a mountain out of a molehill. Tardive dyskinesia, wrote Nathan Kline in 1968, is a “rare side effect” that is “not of great clinical significance.” Boston psychiatrist Jonathan Cole called Crane a “Cassandra within psychiatry” who was needlessly “foreseeing doom in many aspects of our current scientific and clinical operations.” In 1973, even after the FDA had finally started to stir, Minnesota physician John Curran criticized Crane’s alarms as “not only premature but misleading” and said that even if the drugs did cause brain damage, that shouldn’t be reason for undue concern: “While it is true that any psychosis can remit spontaneously, I honestly do not see how one can withhold a treatment of proved efficacy for fear of inflicting or aggravating putative brain damage.” Others chalked up TD to brain damage from earlier therapies, particularly lobotomy and electroshock, or attributed it to the disease. It all prompted Crane to retort: “The majority of clinicians continue to ignore the existence of this complication . . . the neglect of a serious health problem for so many years has deeper roots than mere ignorance of facts.”18
The deeper roots were, of course, money.
Pharmaceutical companies had the most obvious reason for protecting the image of neuroleptics as safe. The drugs had turned into cash cows, and drug companies were not just selling them for use in the mentally ill. By 1970, more than 50 percent of mentally retarded children in America were being drugged in this way. So were a similar percentage of the elderly in nursing homes. Juvenile delinquents were given the drugs so regularly they referred to them as “zombie juice.” All told, 19 million prescriptions were being written annually.19 Public attention to the fact that they frequently caused irreversible brain damage threatened to derail this whole gravy train.
Psychiatry’s motivation for turning a blind eye to tardive dyskinesia was a bit more complex. Prescribing a medication is the ritual that defines modern medicine, and thus psychiatry, eager to see itself as a medical discipline, needed to have at its disposal a “safe and effective” drug for schizophrenia. Psychiatrists also compete with psychologists for patients, and their one competitive advantage is that because they are medical doctors, they can prescribe drugs, whereas psychologists can’t. They could hardly lay claim to superior curative prowess if their neuroleptics were not just ineffective but brain damaging. Finally, by the early 1970s, all of psychiatry was in the process of being transformed by the influence of drug money. Pill-oriented shrinks could earn much more than those who relied primarily on psychotherapy (prescribing a pill takes a lot less time than talk therapy); drug-company sales representatives who came to their offices often plied them with little gifts (dinners, tickets to entertainment, and the like); and their trade organization, the APA, had become ever more fiscally dependent on the drug companies. Thirty percent of the APA’s annual budget came from drug advertisements in its journals, and it also relied on industry “grants” to fund its educational programs. “We have evolved a somewhat casual and quite cordial relationship with the drug houses, taking their money readily,” an APA officer, Fred Gottlieb, confessed a few years later. “We persist in ignoring an inherent conflict of interest.”20
In short, the interests of the drug companies, psychiatrists, and the APA were all in synch, and paying too much attention to tardive dyskinesia could prick the whole neuroleptic balloon.
As Crane sounded the alarm, he never urged that neuroleptics be withdrawn. He simply wanted the APA to mount a massive educational campaign to inform physicians how best to manage this risk. Prescribing lower doses could greatly lessen the odds that it would develop. Early diagnosis of TD and a proper therapeutic response—withdrawal of the drugs—could also minimize the harm done. But in the absence of such education, physicians were regularly treating tardive dyskinesia by upping dosages (this would so clamp down on motor movement that the jerky motions would be somewhat stilled). Here was a clear and pressing medical need, one that could spare hundreds of thousands of Americans from drug-induced brain damage. “Mailing informative material to all physicians is essential,” Crane pleaded in 1973.21
And in response, the APA . . . dawdled. Daniel Freedman, editor of the Archives of General Psychiatry, angrily wrote that psychiatrists already had at their disposal “considerable data and guidelines to help determine sound judgments.”22 Year after year passed, and the APA made no effort to educate its members. The tally of Americans afflicted with this often-irreversible brain disorder was climbing at a rate of more than 250 people per day, and still the APA did nothing.23 Finally, in 1979, the APA issued a task-force report on the problem . . . and then it dawdled some more. Another six years went by before it sent out a warning letter to its members, and that mailing campaign was launched only after several highly publicized civil lawsuits found psychiatrists (and their institutions) negligent for failing to warn patients of this risk, with damages in one case topping $3 million. As the APA put it in its warning letter: “We are further concerned about the apparent increase in litigation over tardive dyskinesia.”24 Money, or the fear of losing it, had finally put the APA into an educational mood.
This foot-dragging obviously told of a stunning disregard for the mentally ill. But more perplexing still, even when educational efforts were mounted, they didn’t do much good. After Crane gave a talk at a well-known hospital on the need to prescribe lower dosages, he returned six months later to see whether anything had changed. Nothing had. “The impact of my educational efforts on the prescribing habits of the physician has been nil,” he bitterly reported.25 Even state laws requiring physicians to tell their patients about this risk didn’t do the trick. More than twenty-five states passed such legislation in the early 1980s, laws that implicitly condemned American psychiatry for failing to fulfill this duty on its own, yet a national survey soon found that disclosure rates were lowest in states where it was mandatory.26 In 1984, Thomas Gualtieri, a physician at the University of North Carolina, summed up the dismal history: “A review of the history of TD demonstrates nothing as clearly as this disconcerting fact: since 1957, published guidelines, scientific articles, presentations at professional meetings and draconian admonitions in the Physicians’ Desk Reference seem to have had little, if any, effect on actual physician behavior with respect to neuroleptic drugs.”27
The tragic result of this head-in-the-sand attitude has never been fully added up. Mantosh Dewan, of the State University of New York Health Science Center in Syracuse, estimated that during the 1980s, more than 90,000 Americans developed “irreversible TD each year.”28 And the blind eye toward TD was simply part of a larger blindness by American psychiatry toward all of the neurological problems that could be induced by neuroleptics. Akathisia, akinesia (extreme blunting of emotions), Parkinson’s—all of these regularly went undiagnosed. One 1987 study found that akathisia was missed by doctors 75 percent of the time. The decades-long avoidance of a side effect called neuroleptic malignant syndrome, meanwhile, led to thousands dying needlessly. This toxic reaction to neuroleptics, which typically develops within the first two weeks of exposure, was first described by French physicians in 1960. Incidence estimates range from 0.2 percent to 1.4 percent. Patients break into fevers and often become confused, agitated, and extremely rigid. Death can then come fairly quickly. Yet, in the United States, neuroleptic malignant syndrome was not given much attention until the early 1980s. The cost of this neglect shows up dramatically in associated mortality rates before and after 1980: They dropped from 22 percent to 4 percent once it became a topic of concern. Although no researcher has tallied up the needless death count, rough calculations suggest that from 1960 to 1980 perhaps 100,000 Americans died from neuroleptic malignant syndrome and that 80,000 of those patients would have lived if physicians had been advised to look for it all along.29
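Those rough calculations can be reconstructed from the figures just cited: the estimated death toll and the before-and-after mortality rates. The sketch below is an order-of-magnitude exercise built only on those numbers, not an independent epidemiological estimate.

```python
# A hedged reconstruction of the "rough calculations" cited above, using only
# the figures given in the text; an order-of-magnitude sketch, nothing more.

deaths_1960_1980 = 100_000   # estimated NMS deaths before the syndrome drew attention
mortality_before = 0.22      # ~22 percent of NMS cases were fatal pre-1980
mortality_after = 0.04       # ~4 percent once clinicians were watching for it

implied_cases = deaths_1960_1980 / mortality_before      # ~455,000 NMS episodes
deaths_if_recognized = implied_cases * mortality_after   # ~18,000
lives_lost_to_neglect = deaths_1960_1980 - deaths_if_recognized

print(round(lives_lost_to_neglect, -3))  # ~82,000, in line with the ~80,000 figure
```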
Only in America
Although neuroleptics became standard treatment in all developed countries, European physicians never embraced, at least not with the same enthusiasm, the notion that the drugs were “like insulin for diabetes.” In 1985, French pioneer Pierre Deniker, at a talk in Quebec City, summed up the view from abroad. First, he recalled, he and Delay had coined the term neuroleptics, which “horrified” the Americans, as it described a drug that clamped down, in the manner of a chemical restraint, on the central nervous system. The Americans preferred the much more benign term “tranquilizers.” But then the Americans had transformed the drugs’ image again, from tranquilizers into “antischizophrenics,” and that, Deniker said, was perhaps going “too far.” While neuroleptics might diminish certain symptoms of schizophrenia, he said, they did not “pretend” to be a treatment for a known biological illness.30
That difference in view had also been accompanied, Deniker noted, by a difference in prescribing practices. From the beginning, the Europeans—seeing the drugs as neuroleptics—had prescribed low dosages to minimize the harmful side effects. After their initial trials with chlorpromazine, Deniker and Delay had decided that 100 milligrams daily was the best dose. British psychiatrists tried a higher dose of 300 milligrams but found that it produced too many negative effects. In contrast, the first American investigators to test chlorpromazine quickly pushed dosages much higher, so much so that Baylor University’s John Vernon Kinross-Wright told colleagues in 1955 that he had successfully given his patients 4,000 milligrams per day. This high dose, he said, “saved time” in getting patients stabilized and discharged from the hospital. Other leading American psychiatrists soon echoed his beliefs. Patients on 5,000 milligrams daily were said to be “functioning perfectly.” “When in doubt with Thorazine,” one psychiatrist advised, “increase the dose rather than decrease it.” In 1960, New York’s Nathan Kline summed up his rule of thumb: “Massive doses for fairly prolonged periods are essential for successful treatment.”31
The next step up this drugging ladder came in the 1960s, when Prolixin (fluphenazine) and Haldol (haloperidol) were brought to the market. These drugs, developed by Squibb and Janssen pharmaceutical companies, were fifty times more potent than chlorpromazine. Squibb’s injectable formulation of fluphenazine shut down dopaminergic pathways so quickly that doctors dubbed it “instant Parkinson’s.” As would be expected, both of these drugs often caused severe side effects, and yet these were the drugs that American psychiatry turned to. By the 1980s, more than 85 percent of schizophrenics in the United States were on the high-potency neuroleptics.
Over this same period, American psychiatrists ratcheted up the dosage as well. Average daily doses doubled from 1973 to 1985. In the mid-1980s, patients were routinely discharged from hospitals on haloperidol or fluphenazine at daily dosages equivalent to 1,500 milligrams of chlorpromazine (five times what British doctors had initially deemed too problematic). Moreover, it was psychiatrists, rather than non-psychiatric doctors, who were the high dosers. In the 1970s, both of these physician groups prescribed neuroleptics in roughly equivalent amounts. But then, over the course of a decade in which the risk of tardive dyskinesia became well known, their prescribing practices diverged. Non-psychiatric doctors turned to lower doses, while psychiatrists upped theirs. By 1985, American psychiatrists were prescribing neuroleptics at dosages that were four times higher than those prescribed by non-psychiatrists.32 Such doses, Deniker said at the conference in Quebec City, “are enormous according to our [European] point of view.”33
The prescribing habits of American psychiatrists seem bizarre until one remembers the “scientific” story that had been told about neuroleptics. They were antischizophrenic medications that prevented relapse. High doses—as long as they weren’t withdrawn—best achieved that goal. As Torrey assured families in 1983, the more potent drugs were “better.” Indeed, investigators at the University of Pittsburgh, studying this issue, concluded that American psychiatrists often adopted such practices to avoid criticism. By prescribing a potent antischizophrenic drug at a high dose, a psychiatrist could be seen by the patient’s family as “doing everything possible” to help.34
As usual, though, it was the patients who bore the cost of this delusion. The harm from the high doses was documented in study after study. When high-dose regimens were compared to low-dose regimens, high-dose patients were found to suffer more from depression, anxiety, motor retardation, emotional withdrawal, and akathisia. The incidence of dystonia—painful, sustained muscle spasms—soared. Although high doses would forestall relapse, when patients on such regimens finally did relapse, they often became more severely ill. High doses of fluphenazine were tied to an increased risk of suicide. Even moderately high doses of haloperidol were linked to violent behavior. Van Putten determined that patients placed on a daily 20-milligram dose of Haldol, which was a standard dose in the 1980s, regularly suffered “moderate to severe” akathisia and, by the second week, “deteriorated significantly” in terms of their ability to respond emotionally to the world, and to move about it. This dosage of Haldol, Van Putten concluded, was “psychotoxic” for many patients.35 As for tardive dyskinesia, it became a common problem for American patients, whereas in Europe, Deniker noted, it “is known but does not have the same importance in severity and in quality.”36
Together, all of these historical pieces add up to a very dark truth. Neuroleptics, peddled to the public as medications that “altered” dopamine levels in ways that “freed patients from the terrible effects of mental illness,” actually induced a pathology similar to that caused by encephalitis lethargica. And American psychiatrists, for more than thirty years, prescribed such drugs at virulent doses.