CHAPTER NINETEEN

Diagnosing Mental Illness

THE COLLAPSE OF INSTITUTIONAL PSYCHIATRY ought to have proved a boon to the bulk of the profession that now practiced in outpatient settings. In reality, however, the period between 1960 and 1980 saw rising challenges to psychiatry and to the Freudian elite who had dominated the profession since 1945. Some of these attacks were overt and obviously threatening. Other doubts were initially purely intraprofessional concerns debated by a minority of psychiatrists. These drew little broader notice at the time but later exploded into public view in an especially damaging fashion.

As early as 1961, the renegade psychiatrist Thomas Szasz, a professor of psychiatry at the State University of New York at Syracuse, claimed that mental illness was nothing more than a myth, an imaginary entity conjured up by his fellow professionals that had no biological reality. People had “problems in living,” but these were not illnesses. Medicalizing them was simply a way of surreptitiously exercising social control of a particularly insidious sort over troublesome people, depriving them of their rights and freedoms in the name of “helping” them. Beyond this, psychiatrists were pathologizing more and more varieties of human behavior, masking their role as enforcers of conventional morality by asserting they were acting in the name of science. Claiming to occupy the other end of the political spectrum from the right-wing Szasz, the Scottish psychiatrist R. D. Laing drew a great deal of attention a few years later for his heretical claim that schizophrenia was some sort of super-sanity, a voyage of discovery that should be indulged and encouraged. It was society, not the mental patient, that was sick.1

Sociologists chimed in to assert that mental illness was all a matter of labels, not individual pathology. It was a societal reaction to transient departures from social norms that stabilized rule-breaking behavior. Psychiatrists cemented the process by applying their scientific-sounding labels, often on the basis of the briefest and most casual of encounters.2

The contention that psychiatrists for decades had damaged and deformed those they purported to treat began to morph into a more general skepticism about the profession’s claims to expertise.3 Such contentions were taken up by members of the newly emerging mental health bar, public-interest lawyers who at the end of the 1960s began suing psychiatry on multiple fronts. The most prominent of these attorneys, Bruce Ennis of the American Civil Liberties Union, soon co-authored a long law-review article dismissing psychiatrists’ claims to be experts in the diagnosis and treatment of mental illness as scientifically indefensible and verging on the fraudulent. Psychiatrists who weighed in on questions of sanity fared no better, he charged, than a trained monkey flipping a coin.4

In the heyday of psychoanalysis, Hollywood had produced a string of movies extolling the virtues of the new psychiatry and trumpeting Freud’s insights into the human psyche. Miloš Forman’s One Flew over the Cuckoo’s Nest, adapted from Ken Kesey’s novel and released in 1975, almost single-handedly changed all that. Still widely watched nearly a half century after it was first shown, the film constituted a sustained assault on psychiatry’s competence, beginning with the failure to recognize that the roustabout Randle P. McMurphy was merely feigning insanity. As the film proceeded, a dark portrait of psychiatry emerged: it was a vicious, repressive, antitherapeutic enterprise whose “group therapy” was a form of sadism and whose physical interventions, from drugs to electroshock to lobotomy, were simply weapons employed to cow dissent and discipline the unruly.5

Critiques of psychiatry’s competence found support in internal concerns that some psychiatrists had begun to articulate about the unreliability of psychiatric diagnoses. During the 1960s, a series of sober academic studies conducted by leading lights in the profession had repeatedly documented the problem. As early as 1959, the prominent American psychiatrist Benjamin Pasamanick and his associates had drawn attention to the fact that “commonly promulgated definitions of mental illness are still so vague that they are frequently meaningless in practice. [It is] an even stronger indictment of the present state of psychiatry, that equally competent clinicians as often as not are unable to agree on the specific diagnosis of psychiatric impairment. Any number of studies have indicated that psychiatric diagnosis is at present so unreliable as to merit very serious question when classifying, treating and studying patient behavior and outcome.” Their own study of the issues demonstrated that the clinical and theoretical commitments of the treating psychiatrist were of greater importance than the symptoms presented by the patients. In particular, “the greater the commitment to an analytic orientation, the less the inclination toward diagnosing patients as schizophrenic.”6

The University of Pennsylvania psychiatrist Aaron T. Beck’s subsequent review of systematic studies of reliability, undertaken some three years later, essentially confirmed these findings. Setting aside organic cases such as delirium or dementia, where inter-rater reliability might reach 80 percent or more, in functional cases of mental disorder, psychiatrists at best agreed with one another just over 50 percent of the time, and often agreement was far less than this. Diagnosis was, he acknowledged, vital for research, treatment, and teaching, and yet the highest agreement on specific diagnoses that he found in these studies was only 42 percent. Perhaps, he suggested, though this situation was a major problem for research and epidemiological work, it mattered less in clinical settings, since most psychiatrists were “seldom bound by the actual diagnosis [and may] simply regard the clinical diagnosis as an additional bit of information (unreliable as it may be) which may support the therapeutic decisions made on the basis of other factors.”7

Psychoanalysts disdained diagnostic labels, treating them essentially as an irrelevance. Indeed, Karl Menninger, whose best-selling books had made him perhaps the most famous psychodynamic psychiatrist of the postwar era, argued in 1963 that diagnostic labels should be abandoned altogether.8 Not only was labeling a charade, since the labels had no discrete meaning, but affixing a diagnosis actively harmed patients, he contended, turning them into objects, not people, and stigmatizing them, while providing nothing useful for the clinician. Small wonder, then, as the acerbic psychopharmacologist Donald Klein put it, that “for the psychoanalysts, to be interested in descriptive diagnosis was to be superficial and a little bit stupid.”9


AMERICAN PSYCHIATRY HAD PRODUCED a Diagnostic and Statistical Manual in 1952, and a second edition followed in 1968, both testaments to the postwar dominance of psychoanalysis.10 They were slight documents, barely over a hundred pages long. The second edition cost $3.50, which was more than most psychiatrists thought it was worth. Though the broad distinction made between neuroses and psychoses was uncontroversial, the manuals’ content was otherwise lightly regarded and little consulted. For those who saw the treatment process as involving an inquiry into the precise psychodynamics of the individual case, diagnostic labels were irrelevant, artificial creations that added nothing of substance to the understanding of a patient’s problems or the treatment process. That the primary forms of psychosis, schizophrenia and manic-depressive psychosis, derived from the work of Freud’s bête noire, Emil Kraepelin, probably further alienated the analysts from the whole process.

One of the most striking demonstrations of psychiatry’s inability to agree on diagnoses appeared in 1972, when a systematic cross-national study of the diagnostic process in Britain and the United States was published by Oxford University Press. John Cooper and his colleagues presented results that laid bare just how uncertain the status of psychiatric diagnoses was, how variable and subject to the whims of local culture. They looked at the two most serious forms of mental breakdown, schizophrenia and manic-depressive illness, and measured the cross-national differences in the ways these conditions were diagnosed.

Scientific knowledge is meant to be universal and to travel easily across national boundaries. No one would expect large variations in the diagnosis of, say, tuberculosis or pneumonia. It proved to be quite otherwise in the psychiatric realm. Schizophrenia, it turned out, was diagnosed far more frequently in the United States than in Britain. Contrariwise, the diagnosis of manic-depressive illness was embraced far more frequently by British psychiatrists. The discrepancies were massive. New York psychiatrists diagnosed nearly 62 percent of their patients as schizophrenic, while in London, only 34 percent received this diagnosis. And while less than 5 percent of the New York patients were diagnosed with depressive psychoses, the corresponding figure in London was 24 percent. Detailed reexamination of the patients suggested that these diagnoses were not rooted in differences in their symptoms but were a by-product of the preferences and prejudices of each group of psychiatrists. Yet these differences had real-world consequences, producing major differences in the treatments the patients received.11

All these studies, and more, were couched in dry academic prose. They were of concern to a subset of psychiatrists who worried about their implications, but laymen did not read the psychiatric journals or monographs written for a handful of specialists and priced accordingly. Given the dismissive attitude most psychiatrists adopted toward diagnosis, few at the time expected these critiques to cause a major upheaval in the psychiatric enterprise. And then, quite suddenly, another publication attacking the profession’s diagnostic competence turned the psychiatric world upside down.

On January 19, 1973, Science (alongside Nature the most influential general science journal in the world) published an article that instantly captured major media attention. In itself, that is not unusual, for science journalists often use Science as a source of copy, but what was somewhat unusual was that the paper in question was authored by a social scientist, not someone from the biological or natural sciences. This particular paper has enjoyed an unusually long half-life. Nearly a half century after its appearance in print, it continues to attract hundreds of citations a year and to be a staple of undergraduate textbooks in both psychology and sociology.12 More remarkably still, one can plausibly argue that its findings had an extraordinary real-world impact, playing a major role in transforming a common subspecialty in medicine in ways that continue to resonate all the way down to the present.

The paper was by David Rosenhan, a Stanford professor of psychology and law, who had had a fitful academic career during the 1960s—a string of temporary teaching appointments along the East Coast, before landing a more promising job in 1968 at Swarthmore College. Two years later, he was invited to visit Stanford for a year, which led to the offer of a permanent post there.13 At Stanford, he co-authored a textbook on abnormal psychology with Martin Seligman—one that went through four editions between 1984 and 2000—and set himself up as an expert advising on jury selection. But his paper in Science was his one significant contribution to the social psychological literature, albeit one that made him famous for decades. He never revisited the topic in any academic journal or published anything of comparable impact for the rest of his career.

“On Being Sane in Insane Places” purported to report the results of an experiment involving eight subjects, one of whom was Rosenhan himself. (Rosenhan reported that there had been a ninth participant, but he had been dismissed from the study for violating the experimental protocol.) These volunteers, who had been screened to eliminate anyone with mental health issues, were instructed to show up at a variety of mental hospitals claiming that they were hearing voices and to seek admission. Those were the sole symptoms they were to report, and they were strictly enjoined to behave normally postadmission and to inform their doctors that they were no longer symptomatic. Together, the volunteers had approached a total of twelve mental hospitals (some participants engaged in the charade more than once). The hospitals, Rosenhan reported, were spread across five states and represented a wide spectrum of facilities, from isolated, run-down rural state hospitals to public facilities that were modern and relatively well staffed, as well as a single private mental hospital linked to an academic department of psychiatry. Uniformly, whatever institution they approached, the subjects of Rosenhan’s experiment were admitted as inpatients and then spent anywhere from seven to fifty-two days in the hospital before they were discharged (an average of nineteen days). The private mental hospital diagnosed its lone patient as manic-depressive, a relatively favorable diagnosis. By contrast, all of those admitted to public facilities were given the label of schizophrenia, and Rosenhan reported that, upon discharge, they were noted to be “schizophrenics in remission.”

On the basis of these results, he claimed that “we cannot distinguish the sane from the insane in psychiatric hospitals.” As he began to give presentations of his findings prior to publication, staff at a local teaching and research hospital “doubted that such an error could occur in their hospital.” Rosenhan’s response was to inform them that he would send along pseudo-patients over the following three-month period and see whether they could detect the imposters. It was a trap. “Forty-one patients were alleged, with high confidence, to be pseudo patients by at least one member of staff. Twenty-three were considered suspect by at least one psychiatrist.” Rosenhan rather gleefully reported that he had sent not a single pseudo-patient.14

A furious correspondence ensued. Psychiatrists from all over the country lined up to criticize the study and reject its findings. A number of correspondents were incensed at Rosenhan’s use of the terms “sanity” and “insanity,” objecting that these were legal, not medical terms. In reality, psychiatrists made use of “insanity” as a medical term well into the twentieth century, and terminological disputes were in any event irrelevant to the question at hand: the damage Rosenhan’s findings had done to the public’s view of the profession. What these complaints missed was that by adopting these vernacular terms, Rosenhan (doubtless deliberately) had invited greater lay attention to his findings. Rosenhan’s work had appeared in the most prestigious of places, presumably after strict peer review, so who could doubt its integrity? The implications of his findings were profound. If psychiatry could be so easily duped and would assign the most devastating of diagnoses—schizophrenia—on the basis of such superficial grounds, it was surely an emperor with no clothes.

None of the earlier critiques of psychiatry’s problems with diagnosis had attracted any attention outside the profession, and most within it had treated the problem as trivial. “On Being Sane in Insane Places” altered the landscape at once, and quite fundamentally. Rosenhan’s findings attracted massive media attention all across the country. At least seventy newspapers, both regional and national, gave prominent attention to his study. Television and radio shows interviewed Rosenhan. A major commercial publisher offered him a lucrative contract for a book based on his research, an offer Rosenhan accepted with alacrity. Harvard even sent out feelers about a possible appointment to its faculty. Rosenhan’s exposure of psychiatry’s flaws caused a sensation. No wonder so many practitioners rushed to register their objections in the pages of Science, which, quite extraordinarily, devoted nine pages of a subsequent issue to their howls of protest and to Rosenhan’s response.15 That in itself was a measure of how powerfully this exposé resonated outside the cloistered world of academia.

Thanks to some astonishing historical detective work by a New York journalist, Susannah Cahalan, we now know something remarkable. David Rosenhan, as she meticulously shows, perpetrated one of the most egregious and successful academic frauds of the twentieth century.16 It is highly probable that several of the pseudo-patients were simply figments of Rosenhan’s imagination. In any event, Cahalan provides extensive documentary evidence of falsified and distorted data, and gross departures in the conduct of the study from what the published findings claimed had happened. Rosenhan was himself a pseudo-patient in his study, and Cahalan quotes from his own medical records to document how fraudulent his published account was of what transpired when he sought admission to Haverford State Hospital in Pennsylvania. Far from confining himself to reporting three discrete aural hallucinations and otherwise behaving perfectly normally, Rosenhan (who had identified himself as David Lurie) gave ample evidence of deep intellectual and emotional disturbance. Besides grimacing and twitching during his intake interview, and the dull, halting speech pattern he exhibited, he indicated that the radio was broadcasting to him and that he could “hear” other people’s thoughts. In an attempt to quiet the voices, he had taken to wearing a copper pot over his ears. He was depressed and frightened and had been unable to work for months. Outpatient treatment with drugs had failed to improve matters. Visibly “tense and anxious,” he thought he was worthless and had contemplated “suicide as everyone would be better off if he was not around.”17 This was an infinitely more serious and extensive set of pathological symptoms than the one he recounted in his Science article.

Had Rosenhan told the truth about his presentation at the hospital, no one would have been surprised that a psychiatrist decided to admit him and to diagnose him as schizophrenic. In his Science paper, Rosenhan further claimed that, once admitted, the pseudo-patients (himself included) immediately stopped displaying symptoms and behaved normally. Again, the surviving medical records show that in his case this is quite false. In the days after his admission, two other psychiatrists examined him at some length. Both documented the depths of the pathology Lurie was complaining of.

Cahalan’s book, The Great Pretender, provides many more examples of assertions Rosenhan made that turn out to be pure fiction. Her exposure of this far-reaching scientific fraud is a remarkable accomplishment, all the more so because of how hard it was to discover the truth decades after the fact. But Rosenhan managed to take his secret to his grave, and long before the truth emerged, his study had served as the catalyst for a revolution in the orientation and practice of psychiatry, one whose effects have dominated our approach to mental illness for almost four decades now.


THE SERIOUSNESS OF THE CRISIS the profession faced as soon as Rosenhan’s paper was published was immediately recognized by psychiatry’s elite. Within weeks of the article’s appearance, on February 1, 1973, the board of trustees of the American Psychiatric Association (APA) called an emergency meeting in Atlanta. How could they respond to “the rampant criticism” that enveloped the profession, not least to the perception (or, rather, the reality) that its practitioners could not reliably make diagnoses of the mental illnesses they claimed to be expert at treating?18 Over three days they debated how to proceed and, after prolonged discussion, came to a decision: the association would set up a task force charged with evaluating and reworking the Diagnostic and Statistical Manual (DSM).

Before that task force could be established, however, another controversy arose. For decades, psychiatry had held that homosexuality was a form of mental illness—a claim with deep roots in Freudian doctrine that reinforced strongly held prejudices in the public at large. Now, prompted by the civil rights revolution, gay activists, including closeted gay psychiatrists, revolted and demanded that the profession reverse its previous position. After much internal debate and discussion, the APA resolved the issue by means of a postal ballot—an approach that solved a wrenching political issue but that invited public ridicule and provoked further commentary about the reliability and scientific standing of psychiatric diagnoses.19 The ballots showed that 5,854 psychiatrists voted to remove homosexuality from the DSM and 3,810 to retain it.

The Columbia psychiatrist Robert Spitzer had played a large role in brokering the “solution” that put the controversy over the status of homosexuality to bed. Soon thereafter, after some behind-the-scenes lobbying, a grateful association appointed him to head the task force charged with revising the DSM. Most psychoanalysts continued to regard the whole project as silly and unworthy of their time, which allowed Spitzer great leeway in determining the working group’s makeup. The one psychoanalyst in its midst, finding himself marginalized and ignored, soon ceased attending its sessions.

The transformational impact that DSM III would have was not clear at all when Spitzer obtained his appointment. Most of the profession’s elite disdained what they saw as the dull, intellectually uninteresting task of constructing a new nosology for the field. They had, as they saw it, far more interesting intellectual puzzles to pursue. Spitzer demonstrated an extraordinary far-sightedness and great political skill in putting together the membership of the task force, guiding its members toward consensus, and then persuading a skeptical profession to adopt its work product.20

In the absence of the impetus provided by Rosenhan’s study, one wonders how eager psychiatry would have been to revise its diagnostic procedures. We do not live in that counterfactual universe, however. The revision of the DSM did take place, and the publication of the third edition, 494 pages long compared with the 134 pages of its predecessor, transformed American psychiatry irrevocably. It accelerated the decline of psychoanalysis and secured its replacement by a biologically oriented psychiatry that claimed that the “diseases” the manual identified and listed were akin to those that mainstream medicine diagnosed and treated. The DSM III provided an almost mechanical approach to the diagnostic process, one that, at least in theory, sharply raised the odds that psychiatrists in Topeka or Walla Walla, or San Francisco or New York, would attach the same label when confronted by the same patient.


DURING THE 1960S AND 1970S, there was a single major exception to the psychoanalytic domination of academic departments of psychiatry—Washington University in St. Louis. The academics there remained heavily committed to the notion that mental illness was in fact physical, a pathology of the body like any other illness. For them, psychoanalysts were either charlatans or medical men who had badly lost their way, imagining that a medical disorder could be cured by talk therapy when they should have been searching for biomedical treatments that acted on the body.21 Though other university departments contained the occasional somatic psychiatrist (seen by colleagues as distinctly odd), no other department clung so stubbornly to the idea that mental illness was a brain disease. At Washington University, this was an article of faith, and Robert Spitzer, who had once flirted with psychoanalysis himself, found in its faculty, and in some of the psychiatrists it had trained, the core members of the DSM Task Force he assembled.

The anomalous commitment of Washington University’s department of psychiatry to an approach that rejected psychoanalysis was an accident—or, rather, a reflection of the rules the university imposed on the faculty of its medical school. Those who took regular academic positions there had to give up any prospect of earning private clinical income. They were to live solely on their university salaries. This was an arrangement—the so-called strict full-time system—that the Rockefeller Foundation had urged on medical schools as it underwrote the reform of medical education in the 1910s and 1920s. Unsurprisingly, it had proved unpopular among many would-be faculty and had largely been abandoned by most medical schools. Washington University was an exception, and the psychoanalysts of St. Louis were having none of it. If the university sought to deny them the rich rewards of private practice, why, then, the university could do without their services. So it did.

If psychoanalysts thought diagnostic labels a waste of time, the psychiatrists at Washington University were committed to them, provided they could be refined and made more coherent. That process, by carefully identifying distinct psychiatric disorders, would facilitate the return to biology that they were certain was the key to developing an effective response to major mental illnesses. In pursuit of this goal, they had collectively sought to develop ways of distinguishing among mental disorders, looking for an approach that might facilitate research rather than worrying about its usefulness in clinical interventions. John Feighner, then a resident, later acknowledged that his mentors had concluded that “it seemed imperative that we refine our diagnostic criteria to assist us in selecting specific treatments for specific patients and to improve communication between research centers.”22 His senior colleagues sought to define such criteria for a variety of psychiatric disorders and collectively produced a paper published in the Archives of General Psychiatry. By departmental convention, their chief resident was assigned first authorship, so the distinctions the paper laid out were known thereafter as “the Feighner criteria.”23 It became, according to Hannah Decker, “the most cited paper ever published in a psychiatric journal.”24

When Robert Spitzer was charged with rewriting the DSM, it was to this group of outsiders that he turned to compose his committee, a decision that was left up to him because the Freudians thought the whole exercise a waste of time. He later commented that, in putting together the Task Force, he had “selected a group of psychiatrists and consultant psychologists committed primarily to diagnostic research and not clinical practice. With its intellectual roots in St. Louis instead of Vienna, and with its intellectual inspiration derived from Kraepelin, not Freud, the task force was viewed from the outset as unsympathetic to the interests of those whose theory and practice derived from the psychoanalytic tradition.”25 It was an accurate perception. From the beginning, the plan was to eliminate what its members regarded as the fanciful Freudian etiologies that had been embedded in the two earlier editions of the DSM and to strip out all references to neuroses and other psychoanalytic language.


SPITZER SHARED THE ST. LOUIS GROUP’S COMMITMENT to biology and to reconnecting psychiatry to the medical mainstream, though his ambitions were greater than theirs. Rather than just creating labels that might be useful in psychiatric research, he was committed to writing a manual that would guide clinical practice. It was an ambition that at times threatened to cause rifts with many members of his Task Force, but Spitzer proved a skilled and effective political operator, and eventually he managed to secure broad acceptance within the working group for his plans.

Crucially, he understood, as the St. Louis group did not, that if he were to persuade the members of the American Psychiatric Association (most of whom were clinicians) to endorse the new DSM, he had to produce a document that found some place for the whole range of problems that brought patients to the psychiatric waiting room. Samuel Guze, the dominant figure in the St. Louis group, urged Spitzer to produce a severely truncated manual, one that included only a relative handful of well-validated conditions. Spitzer dismissed his suggestion out of hand. “If we do what you are proposing, which makes sense to us scientifically,” he countered, “we will give the insurance companies an excuse not to pay us.”26 Instead, he made sure that “if any group of clinicians had a diagnosis that they thought was very important, with a few exceptions, we would include it. That’s the only way to make it acceptable to everyone.”27 This was the “logic” that ensured the relentless growth in the number of psychiatric “illnesses” that would become a feature of each successive edition of the DSM.

One particularly striking example of how this expansion of diagnostic categories came about is the inclusion of post-traumatic stress disorder (PTSD) in the new manual. The diagnosis was in one very important respect an anomalous category in the new DSM, for as part of their attempt to break with the Freudian overtones of the two previous diagnostic manuals, Spitzer’s group jettisoned the purported psychodynamic origins of various disorders. Claims about the etiology of schizophrenia or depression were dismissed as just so much unscientific speculation. DSM III was to remain resolutely agnostic about the causes of the disorders it included. The new disorder that was PTSD, however, was explicitly tied to a particular source: trauma and its effects on the psyche. Like the undead, or so its proponents argued, memories of past horrors refused to remain buried and were so disturbing, intrusive, and disruptive that long after the event they overwhelmed an individual’s capacity to cope.

The pressure to include this new disorder in the spectrum of psychiatric illnesses initially arose from the ranks of disaffected veterans of the Vietnam War. The military brass had claimed that, unlike the earlier wars of the twentieth century, embedding psychiatrists in the combat zones had ensured that “psychiatric casualties need never again become a major cause of attrition in the United States military in a combat zone.”28 It was a stance that embittered opponents of that official narrative fiercely rejected. Joined by two sympathetic psychoanalysts, Robert Jay Lifton and Chaim Shatan, and then by others, Vietnam Veterans against the War argued that battlefield traumas had left them with lasting psychic scars that constituted still another consequence of an evil and immoral war. Theirs was a demand for official recognition and recompense for the serious psychological damage that lingered in their ranks and persisted years after the trauma that had brought it about.29

Shatan’s 1972 op-ed in the New York Times on “Post Vietnam Syndrome” was an early statement of their aims, and when Spitzer and his task force began to revamp psychiatry’s diagnostic system, they became an obvious target for those seeking recognition of this novel disorder.30 Initially, Shatan and his allies met with resistance. It was not just the evident political overtones of their arguments that provoked pushback, but the fact that their proposed diagnosis was sharply at odds with the whole approach the Task Force aimed to put in place.31 But unlike the rigid St. Louis group, Spitzer was a pragmatist, willing to satisfy any sizable constituency demanding inclusion of its preferred diagnosis if by doing so he ensured the success of his overall project. The diagnosis was legitimized, as the sociologist Wilbur Scott concluded, “because a core of psychiatrists and veterans worked consciously and deliberately for years to put it [in the manual]. They ultimately succeeded because they were better organized, more politically active, and enjoyed more lucky breaks than the opposition.”32

Faced with a sustained and highly organized pressure group, Spitzer ultimately gave way, but with a major proviso: the new diagnostic category he agreed to include in DSM III was not post-Vietnam syndrome, but a much broader and less specific stress-related disorder, post-traumatic stress disorder. A whole variety of traumas, not just those stemming from military conflict, were now recognized as possible triggers of lasting forms of mental disorder—sexual violence and assault prominent among them.

In time, the adoption of the PTSD diagnosis would open a Pandora’s box. Some enterprising psychiatrists and psychologists would soon uncover a whole host of alleged victims of trauma, those who remembered not too much, but too little. In their hands, patients whose early sexual traumas were so powerful that they had repressed all memory of them now learned to recall them in vivid detail. It was the return of the repressed with a vengeance. And vengeance it unleashed, with increasingly elaborate “recovered memories” wreaking havoc on the lives of many who stood accused of horrific crimes against their children (or even other people’s children, as in the case of the McMartin preschool scandal that rocked Southern California in the 1980s).33 For a decade and more, moral panics like these spread, ruining lives and reputations. Criminal trials and civil suits abounded, even as a growing volume of research on how memory works undermined the core tenets of the recovered-memory advocates.34 And then, in the late 1990s, almost as swiftly as the recovered-memory movement had gained public attention and credibility, it collapsed. As much as anything, the sociologist Allan Horwitz argues, it vanished when the major proponents of the syndrome lost a series of countersuits and were forced to pay staggering damages.35

If the recovered-memory movement has now largely faded from view, the same cannot be said of trauma-related diagnoses. In DSM III, the stressor was conceptualized as being a major and life-threatening event that “would evoke significant symptoms of distress in almost everyone.”36 Psychiatrists and clinical psychologists argued that in those developing PTSD, trauma produced involuntary, recurrent, and intrusive memories, hypervigilance, and persistent negative emotions. These in turn were often associated with reckless or self-destructive behavior.

In later editions of the DSM, however, the boundaries became more elastic. By 1994, the precipitating trauma did not need to be something so awful as combat exposure, seeing one’s parent or child being shot, being raped, or suffering other forms of sexual trauma. The emotional impact of hate speech, sexual harassment, witnessing a fight, indirectly learning of the death of a family member, even watching a disaster unfolding on mass media: all these came to be seen as sufficiently traumatizing to trigger PTSD.37 The upshot, unsurprisingly, has been the creation of “a largely autonomous profession that studies and treats trauma,” accompanied by a massive explosion of research on the subject, and the entrenchment of trauma counselors “in schools, hospitals, corporations, the military, the judicial system, and disaster relief organizations.”38 Post-Vietnam syndrome had expanded beyond all recognition.


RESPONDING TO THE IMPERATIVE to make sure that psychiatrists faced with the same case would attach the same diagnostic label, Spitzer and his team early on set aside any concern with validity—that is, whether their labels corresponded to divisions found in nature. As they recognized, they could not demonstrate convincing chains of causation for any major form of mental disorder, nor were there any biological tests that could be used for diagnostic purposes. Perforce, they had to rely on symptoms to distinguish among the disturbances that confronted them, a situation akin to that of eighteenth-century physicians trying to create an orderly classification from the confusing mass of clinical material before them. If psychiatrists were to be brought to agreement, the criteria for diagnosing mental illness had to be consistent and straightforward. The Task Force’s emphasis thus fell on creating lists of symptoms that allegedly characterized different species of mental disorder; using those lists then created a “tick the boxes” approach to the problem of diagnosis. That way, at least in theory, the embarrassing disagreements about diagnosis exposed by Aaron Beck and by John Cooper and his associates (let alone the nightmare of Rosenhan’s pseudo-patients) would become a thing of the past.

The members of the Task Force presented themselves as data-oriented. In reality, theirs was a thoroughly political exercise. Spitzer asserted that they were “committed to the rigorous application of the principles of testability and scientific verification.” But in fact, as Robert Morison of the Rockefeller Foundation had complained about an earlier generation of psychoanalytic leaders, matters were resolved by taking votes and manipulating verbiage to gain consensus. As the historian of psychiatry Hannah Decker’s reconstruction of the process shows, decisions repeatedly relied on political horse-trading and settling on what “felt right”—which often meant what felt right to Robert Spitzer. So it was, for example, that when deciding how many of a laundry list of symptoms made someone eligible for the label “schizophrenic”—an enormously consequential decision—Spitzer’s group settled on six out of a list of ten possibilities. Left unsaid was that this meant that two people allegedly afflicted with this condition might share only two of this long list of symptoms and yet be given the same diagnosis. As to how many psychiatric illnesses to accept, and which ones, these were again questions that aroused much debate, with the answers the subject of politicking and votes by the members of the Task Force.
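The arithmetic behind that observation is worth spelling out. If a diagnosis requires any six symptoms from a list of ten, two patients who each clear the threshold can, in the worst case, share only two symptoms:

\[
6 + 6 - 10 = 2.
\]

One patient might present the first six symptoms on the list, the other the last six, and both would receive the identical label.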

The Columbia psychiatrist Donald Klein, who was one of the most influential members of the group, did not bother to hide how the sausages were made:

We had very little in the way of data, so we were forced to rely on clinical consensus, which, admittedly, is a very poor way to do things. But it was better than anything else we had. We thrashed it out, basically. We had a three-hour argument. There would be about twelve people sitting down at the table, usually there was a chairperson and there was somebody taking notes. And at the end of each meeting there would be a distribution of events. And at the next meeting some would agree with the inclusion, and the others would continue arguing. If people were still divided, the matter would be eventually decided by a vote. [T]hat is how it went.39

Psychoanalysts, who had initially ignored what they regarded as a tedious and anti-intellectual exercise, gradually began to express some alarm about what Spitzer and his team were up to. There were complaints about the “linguistic and conceptual sterility” that marked early drafts of the revised manual. “DSM III,” it was alleged, “gets rid of the castles of neurosis and replaces it with a diagnostic Levittown.”40 In words that dripped with contempt, another psychoanalyst compared the depth and sophistication of Freudian perspectives with the jejune ideas Spitzer and his group appeared to be wedded to: “It is unreasonable to treat equally the carefully reproduced work of thousands of psychoanalysts and psychodynamic clinicians and the relatively recent learning theorists or esoteric fantasies about the etiology of psychopathology.”41

As events would show, Spitzer’s group, far from treating these two elements as equal, regarded the work of those thousands of analysts as unworthy of serious consideration. In April 1977, Otto Kernberg, a member of the executive council of the American Psychoanalytic Association, prophetically warned his colleagues that what they were inclined to dismiss as “a joke” was, on the contrary, “a straitjacket and a powerful weapon in the hands of people whose ideas are very clear, very publicly known, and the guns are pointed at us.”42 His warnings were largely ignored, though Spitzer, sensing the need to head off psychoanalytic opposition if he could, handpicked two analysts, John Frosch and his nephew William Frosch, to add to the Task Force. Within a year, facing ridicule and hostility from the original members and having no effect on the group deliberations, John Frosch gave up and resigned.


NOT UNTIL A FEW MONTHS before the American Psychiatric Association was about to vote on whether to accept the radically new DSM did analysts finally attempt to mobilize to protect their interests. They decided to launch a symbolic fight to rescue the term “neurosis” from the oblivion into which Spitzer proposed to cast it, and they insisted that the concept be included in the DSM, along with an explanation of the underlying psychic conflicts that psychoanalysis held were responsible for its existence. Without such changes, they threatened to mobilize votes and secure the rejection of the Task Force’s work. For a brief period, Spitzer worried that the psychoanalysts might succeed and that his room for maneuver was sharply limited. The St. Louis–based members of his Task Force were in no mood to allow him to compromise. They had fought to create a document that deleted all references to psychoanalytic ideas and were not prepared to see them reemerge at the last moment. Caught between these conflicting forces, the politically skillful Spitzer eventually found a diplomatic solution. After certain entries, a parenthesis would appear: “Anxiety Disorder (or Anxiety Neurosis),” for example. On May 12, 1979, by voice vote, with those modifications, the DSM III was approved as the official stance of the American Psychiatric Association. It would appear in print the following year and become an unexpected best-seller, something every mental health practitioner felt compelled to own, and a major contributor over the years to the coffers of the association.43


IT QUICKLY BECAME APPARENT that the analysts who had warned of the dangers the new manual would present for their branch of the profession had underestimated how deadly the new DSM would prove. The dominance the Freudians had exercised over American psychiatry withered so rapidly over the next few years that one might almost argue that the publication of DSM III marked the pronouncement of the last rites over what turned out to be a largely moribund enterprise. Psychoanalysts and their sympathizers were rapidly defenestrated from the elite positions they had for a quarter century occupied in the profession. Academic departments appointed biologically oriented psychiatrists as their chiefs, or imported neuroscientists into these posts. Spitzer’s ostensibly theory-neutral classification system in fact underwrote a rapid shift to a psychiatry that embraced a biologically reductionist model of mental disorder, one that had no truck with psychodynamic or psychoanalytic approaches and instead embraced psychopharmacology as the way forward.

There had been signs, even before the publication of DSM III, that the position of psychoanalysis in American psychiatry was under threat. In the mid-1960s, 50 percent of the psychiatric residents at UCLA were training in psychoanalysis. A decade later, only 27 percent sought psychoanalytic training. Over the following decade, the number of medical students opting to specialize in psychiatry again fell substantially.44 At the Menninger Clinic, once heavily committed to psychoanalytic treatment, the numbers receiving psychotherapy had fallen from 62 percent in 1945 to 23 percent in 1965, a pattern also evident at other private psychiatric hospitals that had once employed it as a first-line treatment.45

On another revealing front, besides the National Institute of Mental Health (NIMH), a main source of research support for psychiatry was the Foundations Fund for Research in Psychiatry, established in 1953 by a rich patient grateful for the psychoanalytic treatment of his depression by Lawrence Kubie. Initially its grants were, not surprisingly, given to many prominent analysts. But between 1962 and 1973, the foundation’s priorities changed drastically. By the early 1970s, most of its money was flowing toward research on somatic treatments, and only 9 of its 194 awards went to psychoanalysts. Between 1973 and 1978, that number dwindled to zero.46 Meanwhile, even analysts were losing faith in their ability to treat schizophrenics, and skepticism was increasingly being voiced about the outcomes of psychoanalytic treatment of neuroses.47 Making matters worse, university officials resented the fact that, with few exceptions, psychoanalytic institutes were organized and controlled by private practitioners and existed wholly outside the university’s orbit and control.48 That resentment, and the failure of analysts to secure research money, weakened whatever support might have been forthcoming from medical school administrators. Once the mainstream of psychiatry moved away from psychoanalysis toward a biological psychiatry that began to attract serious research support and was based in university facilities, the psychoanalytic elite found their previous dominance rapidly undermined.

The collapse of psychoanalytic supremacy was spectacularly swift. By the mid-1980s, psychoanalytic institutes were facing an extraordinary dearth of medically trained recruits to their programs. It was a situation that sharply curtailed the incomes even of the leading members of the institutes, who had always been able to rely on fees from neophytes seeking admission to the guild. Internal conflicts flared.49 In 1988, after seventy-five years of fiercely resisting the idea of training nonmedically qualified recruits, the institutes began to admit them. They were compelled to do so, to be sure, by an antitrust lawsuit launched by clinical psychologists in 1985, but that decision did provide another source of apprentices.50

Department chairs in American medical schools are extremely hard to budge, for they exercise great power over their faculty and routinely use their patronage to secure their positions against critics. But by 1990, just over ten years after the publication of DSM III, only three of the top ten departments of psychiatry were still headed by trained psychoanalysts or members of psychoanalytic organizations.51 That same year, when the American Psychoanalytic Association surveyed its members, they were seeing on average two patients a week for analysis, scarcely the basis for a secure living. Two decades on, the Journal of the American Psychoanalytic Association reported that there had been a 50 percent decline in the number of applicants for training since 1980 and “an even more precipitous decline in applications from psychiatrists.”52 Partly as a consequence, the profession was aging rapidly. In 2012, the International Psychoanalytic Association announced that 70 percent of the membership of its component societies were between fifty and seventy years of age; 50 percent were older than sixty, and as many as 20 percent of training analysts were over the age of seventy. Five years later, only 15 percent of its members were under the age of fifty.


IF THE DSM III SEEMED to miraculously create a reincarnated medical psychiatry, one of the midwives of the rebirth was the insurance industry. Private health insurers had increasingly begun to provide some degree of coverage of mental health issues during the 1960s. Federal employees in the Washington, DC, area had enjoyed particularly generous coverage, including relatively extensive coverage of psychotherapy. That proved extremely costly. The insurance companies had no easy way to limit the length of treatments and found themselves paying for therapies directed at ill-defined, amorphous pathologies about which there appeared to be little consensus, and whose efficacy was supported by little more than anecdotal evidence.53 By contrast, the new manual claimed to identify distinct diseases that could then be linked to discrete treatments.

To the extent that psychotherapy continued to be employed, insurance companies strongly preferred the cognitive-behavioral therapies that sought the rapid alleviation of symptoms to open-ended psychoanalysis—a preference reinforced by the fact that cognitive-behavioral therapy could be offered by people who were not physicians and thus could be paid much less.54 Hence the growing influence of clinical psychologists and psychiatric social workers, and hence the declining interest of the psychiatric profession in psychotherapy. In a world where the imperatives of managed care were taking hold, insurance companies proffered such low rates for treatments of this sort that fewer and fewer medically qualified personnel continued to offer such services, unless they had access to a clientele willing and able to pay privately for them. Increasingly, therefore, psychiatrists concentrated on forms of treatment for which their monopoly power was legally enforceable—and that could fit the strict requirements of a managed care regime: running through checklists, assigning a diagnostic label, and prescribing the relevant psychotropic medication or medications.

Given the complex nature of psychiatric illnesses, this silencing of patients’ voices and lack of sustained attention to their mental states was a major loss, as one of the principal architects of the new DSM later acknowledged. Rather than the manual’s categories being treated as the best approximations available, “DSM came to be given total authority in training programs and health care delivery systems. Since the publication of DSM III in 1980, there has been a steady decline in the teaching of careful clinical evaluation that is targeted to the individual person’s problems and social context and that is enriched by a good general knowledge of psychopathology. Students are taught to memorize the DSM rather than to learn complexities.” It was an approach, Nancy Andreasen ruefully concluded, that “had a dehumanizing impact on psychiatry.”55


THE SECOND MIDWIFE OF PSYCHIATRY’S REBIRTH in radically changed form was the pharmaceutical industry. For these corporations, the existence of stable diagnostic categories could play a vital role in the testing of new drugs that needed FDA approval. One of the most crucial and consequential legacies of DSM III was the creation of ever-closer links between psychiatry and pharmaceutical corporations, and the money that flowed from that connection until recently greatly improved psychiatry’s standing among medical school deans.

Psychoanalysts had greatly broadened the range of conditions they claimed to treat successfully, but their refusal to make diagnostic distinctions a priority, not to mention their resistance to any attempts to demonstrate statistically the usefulness of their interventions, had sharply curtailed the ability of the pharmaceutical industry to design the necessary trials and produce evidence that its innovative treatments worked. The new diagnostic manual might have been designed for the purpose. It incorporated categorical distinctions about different types of mental illness, and the number of such supposedly different syndromes multiplied rapidly. If a subset of patients appeared to respond to a medication under trial, soon enough a new label was attached to these patients, and a new psychiatric disease was born. Rather than diseases calling forth remedies, remedies began calling forth new “diseases.”

A final source of validation for the new DSM came from two federal agencies. The NIMH embraced the new system, seeing in its scientific-seeming diagnostic system a way to fend off political attacks on the social orientation it had adopted in previous decades.56 Equally critical, however, was the FDA’s embrace of the DSM’s assumption that mental illnesses had the same form as physical illnesses, a decision that ensured that drug companies would test and advertise their products as treatments for specific diseases.57 Those endorsements ensured the triumph of the approach Spitzer had championed for decades to come.


IN 1987, seven years after DSM III appeared, a revised edition, again under Spitzer’s leadership, was published (though it was called DSM III R and not a fourth edition). All remaining references to analytic ideas were purged. With no discernible resistance, the fig leaf Spitzer had offered in 1980, the parenthetical gesture that saw “(or neurosis)” added to some of the disorders, vanished. Seven years after that, a new edition officially labeled DSM IV appeared, edited this time by Spitzer’s protégé, Allen Frances. That edition was superseded again in 2000 by what was officially called DSM IV TR (Text Revision). All adhered to the same logic that had inspired DSM III. They relied on symptoms to divide and subdivide the world of mental pathology. If that led to much overlap, as patients qualified for more than one disorder, either they could be allocated the most serious disease they qualified for, or the whole embarrassment could be solved by calling them victims of “co-morbidity.” The Freudians had regarded symptoms as just the visible sign of underlying disorders that required treatment, and they argued that to treat symptoms alone was to play a game of whack-a-mole. In all the successive editions of the DSM, from the third edition onward, symptoms became the very markers of disease, the key to deciding what ailed the patient.

Editions of the manual grew ever larger and gave birth to an ever-longer laundry list of types of mental disorder. DSMs I and II had been modest little documents of little interest to the profession at large, spiral-bound pamphlets of 132 and 134 pages, respectively. The number of possible diagnoses grew from 128 in the first edition to 193 in the second, assuming anyone paid much attention to the labels they provided. DSM III appeared between hard covers and ran to 494 pages. It listed 228 separate diagnoses, and now every psychiatrist and clinical psychologist was forced to employ its categories if he or she wished to be reimbursed by insurance companies. DSM III R grew to 567 pages and 253 “diseases,” while DSM IV was 943 pages long and encompassed 383 officially recognized disorders. DSM II had earned a modest $1.27 million for the American Psychiatric Association during its twelve-year run. By contrast, DSM III brought in $9.33 million; DSM III R, $16.65 million; and DSM IV, an astonishing $120 million.58

Psychiatrists could now match particular medicines to particular diseases, as physicians do with other forms of illness. Regularizing the diagnosis of mental illness allowed linkages to develop to standardized modes of treatment. This meant that uncertainty could be replaced by predictability, and, at least in principle, finite limits to insurance coverage could be set. Large-scale clinical trials of psychotropic drugs became possible for the first time, and instead of being marginalized in medical schools because of their inability to generate research dollars, psychiatrists became their deans’ darlings. Antipsychotic medications and (within a few years) antidepressant pills became huge sources of profit for Big Pharma, regularly among the most lucrative products it produced. While these helped some people lead a less tortured existence, they at best provided a measure of symptomatic relief, albeit at the risk of incurring significant side effects. For the pharmaceutical industry, that outcome had its advantages. Chronic diseases are chronically profitable, for those suffering from them seldom succeed in dispensing with their medications.

So it was that psychiatry, reembracing its medical identity, recommitted itself to the study of the brain as the key to understanding the mysteries of mental disorders. Where the superego once ruled, the usurpers brandishing molecular biology, genetics, and neuroscience now exercised their power. During the Reagan presidency, funding for psychosocial research on mental illness was almost eliminated and funding for Social Security payments to the mentally disabled was cut. The NIMH research budget, however, grew 84 percent, to $484 million annually, with the bulk of that money now directed at neuroscientific work on the most serious forms of mental disorder. This reorientation grew even more pronounced during the 1990s, years that President George H. W. Bush announced were to be the decade of the brain.59

Where drug-company dollars flowed, federal dollars followed. In 2013, Barack Obama launched his own BRAIN initiative (Brain Research through Advancing Innovative Neurotechnologies), aimed at developing new ways to understand brain function and how to treat, cure, and even prevent mental illness. Long before then, in the words of Steven Sharfstein, then the president of the American Psychiatric Association, the discipline had moved from a bio-psycho-social model of mental disorder to a bio-bio-bio model.60 More accurately, it had effectively abandoned the psychosocial approach to understanding and treating mental illness in favor of a near-exclusive focus on biology. Or, as the Harvard psychiatrist Leon Eisenberg put it, the profession had traded “the one-sidedness of the ‘brainless’ psychiatry of the past for that of a ‘mindless’ psychiatry of the future.”61