9
SHAME OF A NATION
They called me mad, and I called them mad, and damn them, they outvoted me.
—Nathaniel Lee1
 
 
 
 
THE ALCHEMY THAT transformed neuroleptics into antischizophrenic medications had, in essence, set two “realities” in motion. There was the reality that the patients experienced and the one that we as a society believed in, and they were in dramatic conflict. During the 1970s, the battle over which reality was “true” spilled into the courts and deep into the hallways at the NIMH. Patients demanded the right to forgo “treatment,” and at the NIMH, the head of the schizophrenia division, Loren Mosher, put the question of whether patients might do better without neuroleptics under an experimental microscope. These two struggles marked the proverbial fork in the road, as they raised fundamental questions about the values that would, in the future, drive our care of the mentally ill. Would we be willing to listen to the mentally ill and fashion a form of care responsive to their wants and needs? Would we be willing to honestly explore alternatives to drug treatment? Or would we simply insist that our medications for schizophrenia were good, and leave it at that?
The answers to those questions can be clearly seen today.
Until patients mounted their legal protests in the 1970s, American society had always pretty much taken for granted that it had the right to forcibly treat the mentally ill. There had been a number of legal battles in the 1800s and early 1900s over society’s right to commit patients, but once patients were so committed, forced treatment seemed to follow as a matter of course. Mental patients lacked competence to consent, and—or so the argument went—the state had the right, in the absence of such competence, to act as a substitute parent and determine what was best for them. While there was an understandable rationale to that argument—how can a psychotic person evaluate a proposed treatment?—the history of mad medicine also showed that it invited abuse. Asylum patients had been strapped to tranquilizer chairs and bled, forcibly sterilized, and chased down hallways so they could be injected with metrazol or convulsed with a jolt of electricity. Freeman was so nonchalant about the practice of forced lobotomies that, in one of his books, he included a photo of a screaming, naked woman being dragged to the operating table. To patients, such treatment could be seen as an assault on who they were.
The introduction of neuroleptics into asylum medicine made for a new chapter in this long-running battle between doctor and mental patient. Very early on, hospital psychiatrists began describing how patients, much to the doctors’ displeasure, were hiding pills in their cheeks and spitting them into toilets when staff weren’t looking. Discharged patients were found to be “unwilling to purchase the drug.”2 Various studies determined that 40 percent, 50 percent, and even 60 percent of patients were trying to avoid treatment in this way, leading one psychiatrist to lament: “Drug defectors constitute a large part of the world’s psychiatric population.”3 The problem of patient resistance was so pervasive that in the early 1960s, pharmaceutical companies scrambled to develop drug-delivery methods that could circumvent it. One solution, which several firms came up with, was to replace the pills with liquid formulations that were odorless and colorless, which hospital staff could then secretly mix into the patients’ food. Ads that Smith, Kline & French ran in psychiatry journals for liquid Thorazine revealed the medical attitude behind this subterfuge:
Warning! Mental Patients Are Notorious DRUG EVADERS. Many mental patients cheek or hide their tablets and then dispose of them. Unless this practice is stopped, they deprive themselves of opportunities for improvement or remission . . . deceive their doctors into thinking that their drugs have failed . . . and impose a needless drain on their hospital’s finances. When drug evaders jeopardize the effectiveness of your treatment program, SPECIFY LIQUID CONCENTRATE THORAZINE. Liquid concentrate is the practical dosage for any patient who resists the usual forms of oral medication. It can easily be mixed with other liquids or semisolid foods to assure ingestion of the drug.4
The long-acting injectables, first introduced in 1963, were similarly hailed as a “major tactical breakthrough” that made forced treatment easier. Ads promised doctors that an injectable “puts control of the schizophrenic in your hands . . . lightens responsibilities of the hospital staff . . . saves time, reduces costs in the hospital, clinic, office.”5 After a single injection, Heinz Lehmann advised, resistant patients became “cooperative enough to take whatever drug and whatever mode of drug administration is chosen for them.” In discharged patients, he added, injections could be likened to an intrauterine device for preventing pregnancy. “Once the medication is in, the patient is safe for a certain period of time,” he said.6 The fact that such long-acting drugs caused worse side effects was seen to be of little consequence, a small price for patients to pay in return for increasing the likelihood they would remain “medication compliant.”
One patient who was so treated was David Oaks, who today is the editor of Mind Freedom, an activist newsletter for ex-patients. In 1975, he suffered a psychotic break while an undergraduate at Harvard University: “I was told that I would have to be on drugs the rest of my life, that it was like insulin for diabetes. I was held down when I tried to reject the drugging, put in solitary confinement and forcibly injected. It galvanized me to fight back against this oppression. This forced drugging is a horrible violation of core American values of freedom.”7
That argument, that forced treatment violated fundamental American values, was the basis of legal challenges by patients for the right to refuse medication. The “mad” groups saw their struggle in historical terms, as a long-overdue battle for their civil rights. The Insane Liberation Front formed in Portland; the Mental Patients’ Liberation Project in New York City; and the Network Against Psychiatric Assault in San Francisco. They held demonstrations, organized human rights conferences, and, starting in 1975, took their fight to state courts. Their lawyers argued that forced drugging, whether achieved by injection or by slipping it into the patients’ food, was a form of medical assault and battery, constituting “cruel and unusual punishment” and a violation of their constitutional rights to due process and freedom of speech. The patients’ rallying cry was: “We need love and food and understanding, not drugs.”8
That was not a message that mainstream psychiatry was eager to hear. The patients’ political activities and their lawsuits stirred no end of wrath among psychiatrists. Abram Bennett, who had helped pioneer convulsive therapies in America, told the San Diego Union that ex-mental patients, who were rising up against both forced drugging and the use of electroshock, were a “menace to society” and warned that if the public listened to them, “then insanity will rule the nation.” Alexander Rogawaski, a professor at the University of Southern California School of Medicine, publicly called them “bastards” and compared the Network Against Psychiatric Assault to “a dog that bites on your heels and hinders you in what is obviously a very important job.”9 Leaders in psychiatry spoke of how any curbing of forced treatment would pave the way for the mentally ill “to rot with their rights on” and that meddling judges could not understand that psychosis is “itself involuntary mind control” that “represents an intrusion on the integrity of a human being.” Antipsychotic medications, they told the courts, “liberate the patient from the chains of illness.”10 In ordinary times, psychiatry might have won this battle easily. But this fray erupted at the same time that Soviet dissidents were smuggling out manuscripts describing neuroleptics as the worst sort of torture, which, at the very least, presented America with the ticklish problem of explaining how a helpful medication here was a poison over there.

A Matter of Perspective

The first rumblings that the Soviets were using neuroleptics to punish dissidents surfaced in 1969 and burst into public consciousness a year later. Dissidents would be diagnosed with “sluggish schizophrenia,” their reformist ideas seen as evidence of their “delusions” and poor adjustment to Soviet society, and then sent to one of twelve special psychiatric hospitals. Although the Soviet practices were outrageous, the United States had every reason to be queasy about being too quick to throw stones over this issue. At the time, the United States shared with the Soviet Union the dubious distinction of labeling a larger percentage of its population “schizophrenic” than any other developed country. Nor was the diagnosis of schizophrenia in the United States free from political, racial, or class taint. In 1958, the first African-American to apply for admission to the University of Mississippi, Clennon King, was committed to a state mental hospital—any black man who thought he could get into Ole Miss was obviously out of touch with reality.11 Moreover, in the early 1970s, U.S. institutions were routinely using neuroleptics to quiet the mentally retarded, the elderly, and even juvenile delinquents—in such instances, the drugs were clearly being used for non-psychiatric purposes. Even so, U.S. politicians rose up to condemn the Soviets, and in 1972, the U.S. Senate formally launched an investigation into the Soviets’ “abuse of psychiatry for [purposes of] political repression.”
What the senators heard chilled them. One expert witness, Canadian psychiatrist Norman Hirt, told of a mélange of treatments used to torment the dissidents. Wet packs, insulin coma, metrazol—all familiar to students of American psychiatry—were three such methods. “The fearfulness of these experiences cannot be described adequately by any words,” Hirt said. However, written appeals from Soviet dissidents, which had been smuggled out and given to the Senate, described neuroleptics as the worst torture of all. A person who is given aminazine (a neuroleptic similar to Thorazine), wrote Vassily Chernishov,
loses his individuality, his mind is dulled, his emotions destroyed, his memory lost . . . as a result of the treatment, all the subtle distinctiveness of a person is wiped away. It is death for creativeness. Those who take aminazine cannot even read after taking it. Intellectually they become more and more uncouth and primitive. Although I am afraid of death, let them shoot me rather than this. How loathsome, how sickening is the very thought that they will defile and crush my soul!
Comparisons were drawn between such forced drug treatment and the medical experiments of Nazi doctor Josef Mengele, all of which led Florida senator Edward Gurney to conclude: “Most horrifying of all in this psychiatric chamber of horrors were the many accounts of the forcible administration by KGB psychiatrists of chemicals which convert human beings into vegetables.”12
Over the next few years, Soviet dissidents published further details of this “chamber of horrors.” Aminazine and haloperidol were the two neuroleptics most commonly used to torment them. In a samizdat manuscript titled Punitive Medicine, dissidents described the incredible pain that haloperidol could inflict:
The symptoms of extrapyramidal derangement brought on by haloperidol include muscular rigidity, paucity and slowness of body movement, physical restlessness, and constant desire to change the body’s position. In connection with the latter, there is a song among inmates of special psychiatric hospitals which begins with the words, “You can’t sit, you can’t lie, you can’t walk” . . . many complain of unimaginable anxiety, groundless fear, sleeplessness.13
Doctors used neuroleptics, the Soviet dissidents stated, “to inflict suffering on them and thus obtain their complete subjugation. Some political prisoners do recant their beliefs, acknowledge that they are mentally ill, and promise not to repeat their ‘crimes’ in return for an end to this treatment.”14 American psychiatrists also heard such testimony firsthand. On March 26, 1976, Leonid Plyushch, a thirty-nine-year-old mathematician who had spent several years in the psychoprisons before being freed, spoke at a meeting of the New York Academy of Sciences. That produced this memorable exchange:
Q. What was the most horrifying aspect of your treatment?
A. I don’t know if there are irreversible effects of psychiatric treatment, but all the inmates at Dnepropetrovsk Special Psychiatric Hospital lived in the fear that there would be such effects. They had heard stories of those driven by the treatment into permanent insanity. My treatment, in chronological order, began with haloperidol in big dosages without “correctives” that avoid side effects, essentially as a torture. The purpose was to force the patient to change his convictions. Along with me there were common criminals who simulated [mental] illness to get away from labor camps, but when they saw the side effects—twisted muscles, a disfigured face, a thrust-out tongue—they admitted what they had done and were returned to camp.15
Such descriptions stirred newspapers and television networks in the United States to condemn, with great fervor, the Soviets’ actions. Not long after Plyushch’s testimony, the New York Times ran an extensive feature on “Russia’s psychiatric jails,” in which it likened the administration of neuroleptics to people who weren’t ill to “spiritual murder” and “a variation of the gas chamber.” Dissidents, the paper explained, had been forcibly injected with Thorazine, “which makes a normal person feel sleepy and groggy, practically turning him into a human vegetable.” Neuroleptics were a form of torture that could “break your will.”16
None of this word choice—torture, Mengele, gas chambers, spiritual murder, human vegetables—could possibly have brought any cheer to Smith, Kline & French, or to other manufacturers of neuroleptics. And with the dissidents’ words as a foil, U.S. mental patients were able to make powerful cases, in legal challenges filed in Massachusetts, California, New Jersey, Ohio, and elsewhere, that forced neuroleptic treatment was a violation of their constitutional rights. Some of the details that spilled out during those trials were disturbing in the extreme. Judges heard psychiatrists testify that it was best if mental patients were not told about the drugs’ side effects, of how patients would be held down by “goon squads” and given an injection in their buttocks, and of hospitals covering up the fact that many of their mental patients suffered from tardive dyskinesia. In New Jersey, John Rennie, a former aircraft pilot who was said to be highly intelligent, was beaten with sticks by aides at Ancora Psychiatric Hospital when he wouldn’t take his drugs. The behavior that had landed him there had an obvious political edge as well—he’d threatened to kill President Gerald Ford. At Fairview State Hospital in Pennsylvania, physicians “would enter the ward with a tray of hypodermic needles filled with Prolixin, line up the whole ward or part of the ward, and administer the drug”—care that was reminiscent of the mass shocking of asylum patients.17 Yet while the newspaper reports condemned the general mistreatment of the mental patients, the drugs—in this context of American medicine, as opposed to the Soviet Union’s abuse of its dissidents—were usually presented as helpful medications. They were, the New York Times said in its report on Rennie’s lawsuit, “widely acknowledged to be effective.”18
This reporting accurately reflected how the legal struggle played out in court. Judge Joseph Tauro in Boston handed down the groundbreaking ruling on October 29, 1979: “Whatever powers the constitution has granted our government, involuntary mind control is not one of them, absent extraordinary circumstances. The fact that mind control takes place in a mental institution in the form of medically sound treatment of mental disease is not, in itself, an extraordinary circumstance warranting an unsanctioned intrusion on the integrity of a human being” (italics added).19 Judge Tauro had found a way to simultaneously condemn and embrace American practices. Forced treatment was a violation of the patient’s constitutional rights, but “mind control” with neuroleptics was a “form of medically sound treatment of mental disease.” The image of neuroleptics as good medicine for the mentally ill had been maintained, and in that sense, the patients’ victory turned out to be hollow in the extreme. In the wake of the legal rulings, hospitals could still apply to a court to sanction forced treatment of drug-resisting patients (it became a due process issue), and as researchers soon reported, the courts almost inevitably granted their approval. “Refusing patients,” noted Paul Appelbaum, a psychiatrist at the University of Massachusetts Medical School, “appear almost always to receive treatment in the end.”20 Moreover, since the drugs were still seen as efficacious, society had little reason to develop alternative forms of nondrug care and could even feel justified in requiring patients living in the community, but in need of shelter and food, to take neuroleptics as a condition of receiving such social support. “I spent a lot of years in community mental health,” said John Bola, now an assistant professor of social work at the University of Southern California, “and the clients, in order to stay in the residences, would have to agree to take medication. Even when they were having severe reactions to the medication, staff would sometimes threaten to kick them out of the facility unless they took the drugs.”21
All too often, this resulted in drug-resistant patients finding themselves with nowhere to turn, and on the run. Such was the case for Susan Fuchs. Raised by a loving Brooklyn family, she’d been a bright child and had earned a degree in mathematics from State University of New York at Binghamton. After graduating, however, she found herself caught in the throes of mental illness. She needed help desperately, but neuroleptics only deepened her despair, so much so that at one point early in her illness, she leaped into the Hudson River in a suicide attempt. “I am a vegetable on medication,” she wrote. “I can’t think. I can’t read. I can’t enjoy anything . . . I can’t live without my mind.” That day she was rescued by a bystander, but her fate was cast: She was deeply in need of help, and yet the “help” that society was poised to offer was medication she detested. For the next fifteen years, she cycled in and out of New York’s chaotic mental-health system, moving endlessly among psychiatric wards, emergency rooms, and homeless shelters, where she was sexually assaulted. Finally, shortly after midnight on July 22, 1999, a woman’s screams were heard in Central Park—the last cry of Susan Fuchs for help. Nobody called the police, and the next morning she was found murdered. Her clothes had been torn from her body, and her head had been bashed in with a rock.22

The Defeat of Moral Treatment

The other defining political battle that occurred in the 1970s came in the form of an experiment, known as the Soteria Project, led by Loren Mosher. In their protests, ex-patients had declared that they wanted “love and food and understanding, not drugs,” and the Soteria Project, in essence, was designed to compare outcomes between the two. And while love and food and understanding proved to be good medicine, the political fate of that experiment ensured that the Soteria Project would be the last of its kind and that no one would dare to investigate this question again.
Mosher, a Harvard-trained physician, was not “against” neuroleptics when he conceived Soteria. He’d prescribed them while an assistant professor at Yale University, where he’d supervised a ward at a psychiatric hospital. But by 1968, the year he was appointed director of the Center for Schizophrenia Studies at the NIMH, he’d become convinced that their benefits were overhyped. In his new position, he also perceived that NIMH research was skewed toward drug studies. There was, he said, a “clubby” relationship between the academics who sat on the NIMH grant-review committees and the pharmaceutical industry.23 He envisioned Soteria as an experiment to test a simple premise: Would treating acutely psychotic people in a humanistic way, one that emphasized empathy and caring and avoided the use of neuroleptics, be as effective as the drug treatment provided in hospitals?
Mosher’s interest in this question was prompted by a conception of schizophrenia at odds with prevailing biological beliefs. He thought that psychosis could arise in response to emotional and inner trauma, and that it could, in its own way, be a coping mechanism. The “schizophrenic” did not necessarily have a broken brain. There was the possibility that people could grapple with their delusions and hallucinations, struggle through a schizophrenic break, and regain their sanity. His was an optimistic vision of the disorder, and he believed that such healing could be fostered by a humane environment. Soteria would provide a homelike shelter for people in crisis, and it would be staffed not by mental-health professionals but simply by people who had an evident empathy for others, along with the social skills to cope with people who could be strange, annoying, and threatening. “I thought that sincere human involvement and understanding were critical to healing interactions,” he recalled. “The idea was to treat people as people, as human beings, with dignity and respect.” To give his notions a more rigorous test, he designed the experiment so that only young, unmarried acutely ill schizophrenics would be enrolled—a subgroup that was expected to have poor outcomes.
The twelve-room Soteria house, located in a working-class neighborhood of Santa Clara, California, opened in 1971. Care was provided to six “residents” at a time. When they arrived, they presented the usual problems. They told of visions of spiders and bugs coming from the walls, or of being the devil, or of how the CIA was after them. They could be loud, they could be aggressive, and certainly they could act in very crazy ways. One of the first residents was an eighteen-year-old woman so lost to the world that she would urinate on the floor. She had withered to eighty-five pounds, wouldn’t bathe or brush her teeth, and would regularly fling her clothes off and jump into the laps of male staff and say, “Let’s fuck.” However, faced with such behavior, Soteria staff never resorted to wet packs, seclusion rooms, or drugs to maintain order. And over the course of a decade, during which time more than 200 patients were treated at Soteria and at a second house that was opened, Emanon, violent residents caused fewer than ten injuries, nearly all of them minor.
The philosophy at Soteria was that staff, rather than do things “to” the residents, would “be with them.” That meant listening to their crazy stories, which often did reveal deeper stories of past trauma—difficult family relationships, abuse, and extreme social failure. Nor did they try to argue the residents out of their irrational beliefs. For instance, when one resident proclaimed that aliens from Venus had selected him for a secret mission and were going to come to a nearby park at daybreak to pick him up, a staff member took him to the park at the appointed time. When the extraterrestrial visitors didn’t arrive, the resident simply shrugged and said, “Well, I guess they aren’t going to come today after all,” and then returned to Soteria House, where he fell sound asleep.
That was a reality check that had helped psychosis loosen its grip.
Beyond that, the Soteria staff let the residents know that they expected them to behave in certain ways. The residents were expected to clean up. They were expected to help with such chores as cooking. They were expected to not be violent toward others. The staff, in essence, was holding up a mirror, much as the York Quakers had done, that reflected to the residents not an image of madness, but one of sanity. Friendships blossomed, residents and staff played cards and games together, and there were no locks on the doors. Other activities included yoga, reading to one another, and massage.
Not too surprisingly, Soteria residents often spoke fondly of this treatment. “I took it as my home,” said one, in a film made at the time. “What is best is nobody does therapy,” said another. “We ought to have a whole lot of Soterias,” said a third. One of the stars of that film was the young woman who, when she’d arrived at Soteria, had regularly invited men to have intercourse with her—she had blossomed into a striking and poised woman, on her way to marrying a local surfer and becoming a mother. When residents recovered to the point they could leave, they were said to have “graduated,” and staff and other residents would throw a small party in their honor. The message was unmistakable: They would be missed. Schizophrenics! Said one young man on the day of his graduation: “If it wasn’t for this place, I don’t know where I’d be right now. I’d have to be on the run if it wasn’t for Soteria . . . Soteria saved me from a fate worse than death. Food’s good too. And there is a whole lot of love generated around this place. More so than any other place I’ve been.”
By 1974, Mosher and his colleagues were ready to begin reporting outcomes data. As they detailed in several published papers, the Soteria patients were faring quite well. At six weeks, psychotic symptoms had abated in the Soteria patients to the same degree as in medicated patients. Even more striking, the Soteria patients were staying well longer. Relapse rates were lower for the Soteria group at both one-year and two-year follow-ups. The Soteria patients were also functioning better socially—better able to hold jobs and attend school.24
And that was the beginning of the end for Mosher and his Soteria project.
Even though Mosher was a top gun at NIMH, he’d still needed to obtain funding for Soteria from the grants committee that oversaw NIMH’s extramural research program. Known as the Clinical Projects Research Review Committee, it was composed of top academic psychiatrists, and from the beginning, when Mosher had first appeared before them in 1970, they had not been very happy about this experiment. Their resistance was easy to understand: Soteria didn’t just question the merits of neuroleptics. It raised the question of whether ordinary people could do more to help crazy people than highly educated psychiatrists. The very hypothesis was offensive. Had anyone but Mosher come forward with this proposal in 1970, the Clinical Projects Committee probably would have nixed it, but with Mosher, the group had been in a difficult political situation. Did it really dare turn down funding for an experiment proposed by the head of schizophrenia studies at the NIMH? The committee approved the project, but it knocked down Mosher’s original request for $700,000 over five years to $150,000 over two years.25
With that limited funding, Mosher had struggled to get Soteria off the ground. He also had to fight other battles with the review committee, which seemed eager to hamstring the project in whatever way it could. The committee regularly sent auditors to Soteria because it had doubts “about the scientific rigor of the research team.” It repeatedly requested that Mosher redesign the experiment in some fashion. In one review, it even complained about how he talked about schizophrenia. Mosher and his colleagues, the committee wrote, liked to espouse “slogans” such as psychosis is a “valid experience to be taken seriously.” Then, in 1973, it reduced funding for Soteria to $50,000 a year—a sum so small that it seemed certain to provide Soteria with the financial kiss of death.
At that point, Mosher ran an end run around the clinical projects group. He applied for funding from a division of the NIMH that oversaw the delivery of social services to the mentally ill (housing, and so on), and the peer-review committee overseeing grants for that purpose responded enthusiastically. It called Soteria a high-priority investigation, “sophisticated” in its scientific design, and approved a grant of $500,000 for five years for the establishment of a second Soteria house, which Mosher named Emanon.
The battle lines were now clearly drawn. Two different review committees had weighed in, one slinging arrows at Mosher as a scientist and the other praising him for running the experiment in a sophisticated manner. The stakes were high. The very credibility of academic psychiatry, along with its medical model for treating schizophrenia, was on the line. Patients were publicly complaining that neuroleptics were a form of torture, and now here was the physician who was the nation’s top official on schizophrenia, and also the editor-in-chief of Schizophrenia Bulletin (a prominent medical journal), running an experiment that could provide scientific legitimacy for their complaints. Even the NIMH grants committee that had approved funding for Emanon had acknowledged as much: Soteria, it wrote, was an attempt at a “solution” that could humanize the “schizophrenic experience . . . the need for [an] alternative and successful treatment of schizophrenia is great.”
And so when Mosher began to report good outcomes, the clinical projects committee struck back in the only way it could. “The credibility of the pilot study data is very low,” the review committee snapped. The study, it said, had “serious flaws.” Evidence of superior outcomes for the Soteria patients was “not compelling.” Then the committee hit Mosher with the lowest blow of all: It would approve further funding only if he was replaced by another investigator, who could then work with the committee to redesign the experiment. “The message was clear,” Mosher says, still bitter twenty-five years later. “If we were getting outcomes this good, then I must not be an honest scientist.”
The irony was that Mosher was not even doing the outcomes assessment. Outcomes data—for both Soteria and a comparison group of patients treated conventionally in a hospital setting with neuroleptics—were being gathered by an independent group of reviewers. Mosher well knew that experimenter bias regularly plagued drug studies, and so he’d turned to independent reviewers to rid the Soteria experiment of that problem. Even so, the project was taken away from him. A new principal investigator was recruited to lead the Soteria experiment; it limped along for a few more years, and then in 1977, the clinical projects committee voted to shut down the project. It did so even while making a final grudging admission: “This project has probably demonstrated that a flexible, community based, non-drug residential psychosocial program manned by non-professional staff can do as well as a more conventional community mental health program.”l
Soteria soon disappeared into the APA’s official dustbin, an experiment that American psychiatry was grateful to forget. However, it did inspire further investigations in several European countries. Swiss physicians replicated the experiment and determined that Soteria care produced favorable outcomes in about two-thirds of patients. “Surprisingly,” the Swiss researchers wrote in 1992, “patients who received no or very low-dosage medication demonstrated significantly better results.”26 Ten or so Soteria homes have sprung up in Sweden, and in both Sweden and Finland, researchers have reported good outcomes with psychosocial programs that involve minimal or no use of neuroleptics.
As for Mosher, his career sank along with the Soteria project. He became branded as anti-science, someone standing in the way of the progress of biological psychiatry, and by 1980 he had been pushed out of NIMH. Others who dared question the merits of neuroleptics in the 1970s also quickly discovered that it was a singularly unrewarding pursuit. Maurice Rappaport, who’d found in his study that schizophrenics treated without neuroleptics fared better, was able to get his results published only in a relatively obscure journal, International Pharmacopsychiatry, and then he let the matter drop. Crane, who’d blown the whistle on tardive dyskinesia, only to be denounced as an alarmist, left the NIMH and by 1977 was toiling in the backwaters of academic medicine, a clinical professor of psychiatry at the University of North Dakota School of Medicine. In the 1980s, Maryland psychiatrist Peter Breggin took up the cudgel as psychiatry’s most vocal critic, writing of the harm caused by neuroleptics and speaking out on television, and he quickly became a pariah, flogged by his peers as “ignorant,” an “outlaw,” and a “flat-earther.” Even the media piled on, with Time magazine comparing Breggin to a “slick lawyer” who has “an answer for every argument,” one who advances “extremely dubious propositions like the notion that drugs don’t help schizophrenics.” 27
No one could have missed the message. American psychiatry and American society had their belief system, and they were not about to suffer the fools who dared to challenge it.

Better Off in Nigeria

Mosher’s experiment and the court battles had occurred at a very particular time in American history. The Civil Rights movement, protests against the Vietnam War, and Watergate all made the early 1970s a time when disenfranchised groups had a much greater opportunity than usual to be heard. Ken Kesey’s book One Flew over the Cuckoo’s Nest suggested that even crazy people should be listened to. That was the societal context that made it possible for the clash between the two realities—the one experienced by patients and the one we as a society believed in—to momentarily become a matter of public debate. With the demise of the Soteria Project, however, the debate officially ended. The 1970s passed into the 1980s, and lingering protests by patients over their drugs were dismissed as the rantings of crazy people. As Edward Shorter declared in his 1997 book A History of Psychiatry, antipsychotic medications had initiated a “revolution” in psychiatry and made it possible for patients with schizophrenia to “lead relatively normal lives and not be confined to institutions.”m That became the agreed-upon history, and not even repeated findings by the World Health Organization that schizophrenics in developed countries fared much worse than schizophrenics in poor countries, where neuroleptics were much less frequently used, disturbed it.
The WHO first launched a study to compare outcomes in different countries in 1969, a research effort that lasted eight years. The results were mind-boggling. At both two-year and five-year follow-ups, patients in three poor countries—India, Nigeria, and Colombia—were doing dramatically better than patients in the United States and four other developed countries. They were much more likely to be fully recovered and faring well in society—“an exceptionally good social outcome characterized these patients,” the WHO researchers wrote—and only a small minority had become chronically sick. At five years, about 64 percent of the patients in the poor countries were asymptomatic and functioning well. Another 12 percent were doing okay, neither fully recovered nor chronically ill, and the final 24 percent were still doing poorly. In contrast, only 18 percent of the patients in the rich countries were asymptomatic and doing well, 17 percent were in the so-so category, and nearly 65 percent had poor outcomes.28 Madness in impoverished countries ran quite a different course than it did in rich countries, so much so that the WHO researchers concluded that living in a developed nation was a “strong predictor” that a schizophrenic patient would never fully recover.29
These findings, which were first reported in 1979, naturally stung psychiatrists in the United States and other rich countries. But Western doctors were not used to seeing their medicine produce such embarrassing results, so many just dismissed the WHO studies as flawed. The people being diagnosed as schizophrenic in the poor countries, the argument went, must not have been suffering from that devastating disorder at all but from some milder form of psychosis. With that criticism in mind, the WHO launched a follow-up study. This time it compared two-year outcomes in ten countries, and it focused primarily on first-episode schizophrenics, all diagnosed by the same criteria. The WHO investigators even divided patients into schizophrenia subtypes and compared outcomes in the subgroups. But it didn’t matter. No matter how the data were cut and sliced, outcomes in poor countries were much, much better. “The findings of a better outcome of patients in developing countries was confirmed,” the WHO investigators wrote in 1992.30 Even the statistics were much the same the second time around. In the poor countries, nearly two-thirds of schizophrenics had good outcomes. Only slightly more than one-third became chronically ill. In the rich countries, the ratio of good-to-bad outcomes was almost precisely the reverse. Barely more than one-third had good outcomes, and the remaining patients didn’t fare so well.
The sharply disparate results presented an obvious conundrum. Why should there be such a stark difference in outcomes from the same disorder? Suffer a schizophrenic break in India, Nigeria, or Colombia, and you had a good chance of recovering. Suffer the same illness in the United States, England, or Denmark, and you were likely to become chronically ill. Why was living in a developed country so toxic? The WHO investigators looked briefly at various possibilities that might explain the difference—family involvement, childhood experiences, and societal attitudes—but couldn’t come up with an answer. All they could conclude was that for unknown reasons, schizophrenics in developed countries generally failed to “attain or maintain a complete remission of symptoms.”
There was, however, a variable in the WHO’s own data that explained the difference. But it was one so threatening to Western medicine that it went unexplored.
The notion that “cultural” factors might be the reason for the difference has an obvious flaw. The poor countries in the WHO studies—India, Nigeria, and Colombia—are not at all culturally similar. They are countries with different religions, different folk beliefs, different ethnic groups, different customs, different family structures. They are wildly disparate cultures. In a similar vein, the developed countries in the study—the United States, England, Denmark, Ireland, Russia, Czechoslovakia, and Japan—do not share a common culture or ethnic makeup. The obvious place to look for a distinguishing variable, then, is in the medical care that was provided. And here there was a clear difference. Doctors in the poor countries generally did not keep their mad patients on neuroleptics, while doctors in the rich countries did. In the poor countries, only 16 percent of the patients were maintained on neuroleptics. In rich countries, 61 percent of the patients were kept on such drugs.
That is a statistically powerful correlation between drug use and outcomes. Certainly if the correlation had gone the other way, with routine drug use associated with much better outcomes, Western psychiatry would have taken a bow and given credit to its scientific potions. American psychiatry, after all, had made continuous medication the cornerstone of its care. Yet, in the WHO studies, that was the model of care that produced the worst outcomes. Indeed, the country with arguably the poorest outcomes of all was the Soviet Union, and it was also the country that led all others in keeping patients continually on neuroleptics. Eighty-eight percent of Soviet patients were maintained on the drugs, and yet fewer than 20 percent were doing well at the end of two years.31
Even before the 1992 WHO report, American researchers had reason to think that there would be such a correlation. In 1987, Courtenay Harding, a psychologist at the University of Colorado, reported on the long-term outcomes of eighty-two chronic schizophrenics discharged from Vermont State Hospital in the late 1950s. She had found that one-third of this cohort had recovered completely. And as she made clear in subsequent publications, the patients in this best-outcomes group shared one common factor: They all had successfully weaned themselves from neuroleptics. Hers was the best, most ambitious long-term study that had been conducted in the United States in recent times. The notion that schizophrenics needed to stay on medication all their lives, she’d concluded, was a “myth.”32
TABLE 9.1 Schizophrenia Outcomes: Developing vs. Developed Countries

                                                      Developing    Developed
                                                      Countries     Countries
Drug Use
  On antipsychotic medication 76% to 100% of
    follow-up period                                     15.9%          61%
Best Possible Outcomes
  Remitting course with full remission                   62.7%        36.9%
  In complete remission 76% to 100% of
    follow-up period                                     38.3%        23.3%
  Unimpaired                                             42.9%        31.6%
Worst Possible Outcomes
  Continuous episodes without complete remission         21.6%        38.3%
  In psychotic episodes for 76% to 100% of
    follow-up period                                     15.1%        20.2%
  Impaired social functioning throughout
    follow-up period                                     15.7%        41.6%

SOURCE: Psychological Medicine, supplement 20 (1992)
The correlation between poor outcomes and neuroleptics also clearly fit with all that was known about the biological effects of the drugs. They induced a pathology in dopamine transmission akin to that caused by Parkinson’s disease and encephalitis lethargica. They destabilized dopaminergic systems in ways that made patients more vulnerable to relapse. They caused tardive dyskinesia, an often irreversible form of brain damage, in a high percentage of patients. How could such drugs, when prescribed as long-term, maintenance medications, possibly help mentally fragile people function well in society and fully recover from their descent into psychosis? “You are taking people who are already broken—and by that I mean traumatized, broken by life—and then you are breaking them completely,” said David Cohen, a professor of social work at Florida International University.33
The WHO studies, however, did more than just challenge American psychiatry to rethink its devotion to neuroleptics. The studies challenged American psychiatry to rethink its whole conception of the disorder. The studies had proven that recovery from schizophrenia was not just possible, but common—at least in countries where patients were not continually kept on antipsychotic medications. The WHO studies had demonstrated that the American belief that schizophrenics necessarily suffered from a biological brain disorder, and thus needed to be on drugs for life, wasn’t true. Here was a chance for American psychiatry to learn from success in other countries and, in so doing, to readjust its message to people who had the misfortune to suffer a schizophrenic break. Recovery was possible. That was a message that would provide patients with the most therapeutic agent of all: hope. They did not need to consign themselves to a future dimmed by mind-numbing medications. And with that conception of the disorder in mind, medical care of the severely mentally ill would presumably focus on helping them live medication-free lives. Either they would never be exposed to neuroleptics in the first place, or if they were, they would be encouraged to gradually withdraw from the drugs. Freedom from neuroleptics would become the desired therapeutic goal.
But, of course, that never happened. American psychiatry, ever so wed to the story of antipsychotic medications, a bond made strong by pharmaceutical money, simply ignored the WHO studies and didn’t dig too deeply into Harding’s, either. Schizophrenics suffered from a biological brain disorder, antipsychotic medications prevented relapse, and that was that. The tale that had been crafted for the American public was not about to be disturbed. Indeed, a few years after the WHO reported its results, an NIMH-funded study determined that care in the United States was proceeding headlong down a path directly opposite to that in the poor countries: In 1998, 92 percent of all schizophrenics in America were being routinely maintained on antipsychotics.34 Even the thought of getting patients off the drugs had become lost to the medical conversation—evidence, once again, that American psychiatry was being driven by an utterly closed mind.