CHAPTER SIXTEEN

A Fragile Hegemony

BY THE MID-1950S, the ideological domination of American psychiatry by Freudian ideas was almost complete. The recruits to the profession who were fortunate enough to secure admission to the burgeoning number of psychoanalytic training institutes and to navigate the complexities of didactic analyses became the professional elite. Though analytic training almost always took place outside university departments of psychiatry (with the Institute at Columbia University the only major exception), the vast majority of academic psychiatrists secured this credential, which rapidly became a sine qua non for professional advancement. Psychoanalysts who chose not to enter academia generally had the most lucrative clientele. Psychiatrists in private practice who could not secure admission to an institute, or failed to complete their training, nonetheless proclaimed themselves practitioners of psychodynamic psychiatry, a watered-down Freudianism that played down the psychosexual elements of Freud’s theories and his pessimism but endorsed the view that early experiences had a powerful impact on individual thought and perception. These therapists were satisfied with only one or two therapeutic sessions a week, rather than the five hours required by classical analysis, and often intervened more directly in the therapeutic process than classical analysis allowed.

Even some psychiatrists who found themselves immured in state mental hospitals paid lip service to analytic ideas. Their therapeutic interventions remained heavily somatic but were often given a psychoanalytic gloss, and if psychoanalysis was not embraced in these establishments as a therapy for major mental illnesses, its premises were invoked as the framework for understanding symptoms and their pathogenesis. Perhaps the most striking example of this phenomenon was the effort to invoke the Freudian model of the human personality to explain how lobotomy worked and why it enjoyed the success some psychiatrists claimed it did. The frontal lobes, some alleged, were where the Freudian superego lurked and did its damage, and in severing their connections with the rest of the brain, the hospital psychiatrist was freeing the patient from the psychic conflicts that had provoked their misery. A cartoon published in Life magazine in 1947 graphically informed its readers of how this surgery of the soul worked: id, ego, and superego were pictured as ruling over different regions of the brain, and the accompanying text summarized how lobotomy solved their conflicts: “[the] surgeon’s blade, slicing through the connections between the prefrontal area (the location of the superego) and the rest of the brain, frees his tortured mind from its tyrannical ruler.”1

Psychoanalysts had a dimensional view of mental illness. Their models of the mind implied that, rather than there being an almost unbridgeable chasm between the mad and the sane, mental illness and health were points along a continuum. The same unconscious drives and conflicts were present in all of us and could be invoked to explain every aspect of human behavior. In the hands of some ambitious analysts, this led to the suggestion that psychoanalysis could be employed to resolve all manner of social problems and political conflicts. Moving from its early focus on therapeutics, the Group for the Advancement of Psychiatry (GAP) formed a Committee on Social Issues, and as early as 1950, this group spoke of “a conscious and deliberate wish to foster those social developments which could promote mental health on a community-wide scale.” Psychiatrists, so it claimed, had insights that could resolve “all those problems which have to do with family welfare, child rearing, child and adult education, social and economic factors which influence the community status of individuals and families, inter-group tensions, civil rights and personal liberty.”2 Karl Menninger claimed that psychoanalysis even had a solution to a problem that was preoccupying so many in the 1960s: how to deal with the threat posed by crime and juvenile delinquency. His book The Crime of Punishment, a 1968 best-seller, argued that crime was a sickness psychoanalysis could successfully treat, and that punishment was a brutal and ineffective relic of the past.

Though GAP had by then lost some of the impetus of its early years, its leading figures continued to occupy important positions in American psychiatry. Of the seven presidents of GAP in its first twenty years of existence (1946–1966), five were subsequently elected to the presidency of the American Psychiatric Association. The original director of the National Institute of Mental Health (NIMH), Robert Felix, and his two successors, Stanley Yolles and Bertram Brown, all embraced this view of the relevance of psychiatry to a vast range of contemporary issues, and their stewardship of the agency reflected this activist bent. A huge range of research was funded between 1949 and 1980, much of it on social issues whose relationship to the central problems of mental health was marginal, at best. Such an ecumenical approach vanished only when the election of Ronald Reagan brought about a dramatic change in the political environment, and an insistence that the NIMH confine its research agenda to the biological sciences. By then, the psychoanalytic dominance of American psychiatry, which had fueled these claims, was on the brink of ending—a development that Reaganite hostility toward expansive governmental programs and social interventionism certainly helped to accelerate.


THE CONFIDENCE IN THE RELEVANCE of psychodynamic psychiatry was not confined to claims about the contributions it could make to solving a vast array of social and political problems. It extended to assertions that psychological factors loomed large in the genesis of a range of illnesses that had traditionally been seen as rooted in the body. The Rockefeller Foundation’s limited support for psychoanalysis in the 1930s was fueled in considerable measure by the appeal of Franz Alexander’s ideas about psychosomatic illnesses. Alan Gregg and his subordinates were concerned that biological reductionism invited too constrained a view of disease and its treatment. On the other side of the coin, encouraging psychoanalysts to engage with the biological was, they thought, a useful way to temper Freudian excesses and bring psychoanalysis into closer relationship with the medical mainstream.

The founding of the journal Psychosomatic Medicine in 1939 was followed three years later by the creation of the American Psychosomatic Society, and in the postwar era, such major problems as heart disease and gastrointestinal disorders were said to have significant psychological components. Asthma, back pain, and allergies were still another group of illnesses whose mysterious etiologies prompted claims for the importance of psychological factors in the genesis of bodily malfunctions. (The German-born psychoanalyst Erich Wittkower had popularized the idea of an “allergic personality” in the 1930s.) There was talk of Type A personalities being especially prone to heart attacks, and stress was routinely invoked as a causative factor in the genesis of peptic ulcers and “dyspepsia.”3 “Ulcer types,” it was said, were people whose personalities rendered them likely candidates for stomach ulcers. This was one of the first claims Franz Alexander made for psychosomatic medicine (though he was far from the only twentieth-century physician to link emotions and gastric upset).4 It was a notion that in the early 1980s would fall by the wayside when two Australian physicians, Barry Marshall and Robin Warren, demonstrated that the overwhelming majority of gastric ulcers were caused by a bacterium, Helicobacter pylori—a discovery that in 2005 won them the Nobel Prize in Physiology or Medicine.5

Asthma was another common and deeply distressing disease with an obscure etiology that psychoanalysts sought to bring within their ambit. Alexander and the American psychoanalyst Helen Flanders Dunbar, who had collaborated to found Psychosomatic Medicine before the war, were both convinced that asthma was a classic psychosomatic disorder. Alexander believed that “the asthmatic wheeze was the ‘suppressed cry’ of a patient suffocated by an over-attentive mother.”6 In her popular treatise Mind and Body, published in 1947, Dunbar went even further, asserting that most diseases could be traced back to childhood trauma, often of a sexual sort. Asthma and hay fever were paradigmatic cases:

There are certain specific emotions which seem to be linked especially to asthma and hay fever. A conflict about longing for mother love and mother care is one of them. There may be a feeling of frustration as a result of too little love or a fear of being smothered by too much. A second emotional conflict characteristic of the allergic is that which results from suppressed libidinal desire, often closely associated with longing for the mother. The steady repetition of this emotional history of “smother love” in the asthmatic is as marked as the contrasting history of hostility and unresolved emotional conflict in the sufferer from hypertension.7

If parents were to blame, and hypnosis failed, the remedy for these disorders might be a radical one—parentectomy, or the severing of relationships between parent and child. The 1950s saw the establishment of residential schools for asthmatics to put such doctrines into practice.8

Similar etiologies were propounded for diseases that fell more centrally within the psychiatric ambit. Autism as a disease of childhood had been separately identified by the Nazi collaborator Hans Asperger in Vienna and Leo Kanner at Johns Hopkins University in Baltimore. (Later generations have given primary credit to the latter, which is perhaps fortunate now that we know Asperger was complicit in the murder of autistic children.)9 Kanner’s 1943 paper “Autistic Disturbances of Affective Contact” was a summary of his clinical observations of eleven children who were highly intelligent but displayed “a powerful desire for aloneness,” coupled with “an obsessive insistence on persistent sameness.”10 It was a condition then thought to be comparatively rare, though from the mid-1990s onward, the number of children diagnosed with autism would explode.11

Kanner borrowed the term “autism” from Eugen Bleuler, famous for coining the diagnosis of “schizophrenia.” Bleuler had used autism to describe the disconnection of schizophrenics from the outside world. But it was Kanner’s paper, and his application of the concept to children, that brought autism to public attention and inspired subsequent generations of researchers.12 Such children, he argued, exhibited patterns of social withdrawal characterized by restricted social relationships, limited speech, repetitive language and behavior, and obsessions with routine.

Many parents were grateful to Kanner for providing a diagnostic label that helped give some semblance of order to the chaotic world into which their child’s social isolation and self-harming behavior had plunged them, and Kanner’s formulation encouraged others to attend to and undertake research on the condition. But by the 1950s, that sense of gratitude curdled: Kanner openly entertained the idea that the emotional frigidity of the parents, and most especially the mother, explained their children’s psychosis. In a 1949 paper on the nosology and psychodynamics of the condition, he claimed that the parents of autistic children had unaffectionate, mechanical relationships with them, prompting their neglected children to “seek comfort in solitude.”13 More vividly, and for a much larger extra-professional audience, in a 1960 interview with Time magazine he spoke of cold and distant parents “just happening to defrost long enough to produce a child.”14

It was a notion he came to repent of and recant by the late 1960s, but for a generation the idea of the “refrigerator mother” inflicted blame and misery on already-traumatized families, largely through the self-promoting efforts of another Austrian refugee, the psychoanalyst and charlatan Bruno Bettelheim, who went one step further and openly compared autistic children to inmates in a concentration camp, with the parents playing the role of sadistic SS guards. “The difference in the plight of prisoners in a concentration camp and the conditions which lead to autism and schizophrenia in children is, of course,” he opined in his best-selling The Empty Fortress, “that the child has never had a previous chance to develop much of a personality.” For anyone in doubt as to parents’ culpability, he added, “The precipitating factor in infantile autism is the parents’ wish that the child should not exist.”15 Bettelheim suggested that a crucial element in “curing” autism was the same parentectomy other psychoanalysts had recommended for asthmatic children—a severing of all ties between pathological parents and the child they had brought into the world. For decades, Bettelheim put such ideas into practice at the Orthogenic School, an institution the University of Chicago saw fit to associate itself with, and one whose inmates were subjected to mental and physical abuse at his hands.16

Parents, and especially mothers, were similarly portrayed as the progenitors of perhaps the most serious of all mental disorders, schizophrenia. The analyst Frieda Fromm-Reichmann, a refugee from Nazi Germany, was one of the first Freudians to suggest that psychoanalysis had a place in understanding and treating schizophrenia.17 As early as 1948 she spoke of “schizophrenogenic” parents, particularly mothers, who displayed a fateful mix of rejection and overprotection that amounted to “malevolence.” “The schizophrenic,” she suggested, “is painfully distrustful and resentful of other people, due to the severe early warp and rejection he encountered in important people of his infancy and childhood, as a rule, mainly in a schizophrenogenic mother.” Before psychoanalysts turned their attention to the disorder, there was mutual incomprehension between psychiatrist and patient. “The thought processes, feelings, communications, and other manifestations of the disturbed schizophrenic seemed nonsensical and without meaning,” she continued, but “psychoanalysts know that all manifestations of the human mind are potentially meaningful.” She went on to suggest loftily that “it is now recognized that the communications of the schizophrenic are practically always meaningful to him, and potentially intelligible and not infrequently actually understandable to the trained psychoanalyst.”18

Understandable, perhaps, or at least interpretable within a psychoanalytic paradigm, but not exactly treatable. The best face Fromm-Reichmann could put on the situation was that “the results of the psychotherapeutic efforts with disturbed schizophrenics, so far, are not too discouraging.” What did that mean? “Cures have not been to the psychoanalysts’ satisfaction as to number or durability”—a result she sought to explain away as “not because of the therapeutic technique used but because of the personal problems of the psychotherapist in his dealings with schizophrenics and because of the personality of the therapist.”19

Over the following decades, the limitations of talk therapy in the treatment of psychosis proved intractable, even in richly endowed private institutions like Chestnut Lodge, where Fromm-Reichmann practiced. A handful of recoveries was dwarfed by a vast preponderance of failures. The prospect of employing such time-intensive techniques in overcrowded and understaffed state hospitals was, of course, chimerical. Yet the psychoanalytic dominance of psychiatry, and its preeminence in the academy, helped ensure that the idea that the cause of schizophrenia was family pathology became the ruling orthodoxy. The pain and resentment such theories caused the families of schizophrenics remained largely hidden from view so long as psychoanalytic hegemony lasted, not only because of the internalized guilt that having a mentally ill relative created, but because any protests were readily dismissed as emanating from the “malevolent” parent who had fostered the madness in the first place. Once professional doubts began to surface, however, the latent anger and anguish would erupt in a fierce backlash against psychoanalysis, contributing to a rapid embrace among patients’ families of the alternative perspective offered by biological psychiatry.


IF PSYCHOANALYSIS WAS ILL-SUITED, as Freud had acknowledged, to the treatment of the hundreds of thousands of patients struggling with acute forms of psychosis who thronged the wards of the state hospitals, it had a much greater appeal to the substantial fraction of psychiatrists who practiced outpatient therapy. As long as sufficient numbers of patients saw analysis as the solution to their neuroses and unhappiness, it promised an attractive career, one that proved appealing to those recruits to medicine who found biomedical reductionism and the lack of sustained contact with their patients distasteful. When cheaper alternatives appeared in the therapeutic marketplace, the viability of the profession began to crumble, its ability to defend itself further undermined from within by the schisms that had characterized the Freudian enterprise from its earliest years. Orthodox Freudians at the New York Psychoanalytic Society and Institute poured scorn on the adulterated version of psychoanalysis associated with the Menningers and their followers. There were vicious fights between the groups surrounding such figures as Franz Alexander and Sándor Radó, and between those identifying with Anna Freud and Melanie Klein. Even within individual psychoanalytic institutes, petty jealousy, tensions, power struggles, and backbiting were the order of the day—scarcely an advertisement for the maturity and self-awareness that being analyzed was supposed to produce.20

The gap between promise and performance was troubling and had been evident to some observers very early on. Beyond the internal debates at the Rockefeller Foundation about what its massive funding of psychiatry had actually accomplished, Robert Morison was privately deeply concerned about the propensity of psychoanalysts to rely more on rhetoric than substance in advancing their case.21 When he succeeded Alan Gregg as head of the foundation’s medical division, Morison repeatedly pushed leading analysts to provide systematic evidence of the effectiveness of their interventions, only to be rebuffed. His disenchantment grew, and he soon concluded that “for some time to come it seems likely that university departments of psychology will offer better research possibilities than most departments of psychiatry.” Psychology, thanks to a new emphasis on its clinical applications, was emerging as a competitor in the mental health arena. With its focus on cognition and human behavior, it claimed to be more adept at treating the host of anxieties, fears, learning disabilities, and anger-management issues besetting patients, including the traumatized veterans of the war.22


THE NATIONAL INSTITUTE OF MENTAL HEALTH, which by the early 1950s had replaced the Rockefeller Foundation as the major source of funding for psychiatric training and research, had reached essentially the same conclusion. The NIMH’s first director, Robert Felix, interpreted his mandate broadly, funding not just psychiatrists, but also psychologists and other social scientists.23 The bulk of the research funding, both in dollars and number of projects supported, went to these other disciplines, including psychology, sociology, and anthropology.

The interest in funding work of this sort in the 1940s and 1950s was boosted by the toll of psychiatric casualties of war, but it also reflected the enormous fiscal and social costs of mental illness among the civilian population, a topic that greatly exercised the governors of individual states.24 One very important part of the NIMH’s intramural research capacity, its Biometry Branch, provided valuable ammunition for those seeking increased funding for basic research by periodically issuing reports estimating the extraordinary direct and indirect costs that the burden of mental illness imposed on the economy.25 Felix could be relied on to testify regularly before his congressional masters about the great progress being made, and the breakthroughs soon to be realized, provided that the flow of federal dollars was sustained. His sunny optimism was rarely scrutinized, for who could doubt the progressive powers of modern medical science? At times, as the historian of psychiatry Gerald Grob has noted, Felix’s colleagues quietly sought to attenuate his enthusiasm. But Felix was evidently more politically sagacious than they, and funding increased at an exponential rate. An initial budget of $9 million in 1949 grew to $14 million in 1955, $50 million by 1959, and $189 million by 1964.26

There was, however, no clear road map for spending this cornucopia of research dollars. Under Felix, the NIMH employed a scattershot approach, underwriting research on lobotomy and epilepsy that went primarily to medical investigators, but also giving grants to epidemiologists, to teams of researchers trying to make mental hospitals function as therapeutic institutions, to those examining psychological therapies, and to those proposing basic research with little by way of direct clinical applications. As an in-house history indicates, the decision was motivated in part by a recognition of how weak the understanding of the etiology of mental illness then was, making it “wisest to support the best research in any and all fields related to mental illness.”27

The very first grant in NIMH’s history was awarded to a psychologist, Winthrop Kellogg, in 1949 for a study on the “basic nature of the learning process.” That turned out to be symbolically appropriate, for in subsequent years psychology was the discipline that routinely received most of the research dollars dispensed by the NIMH.28 In 1964, for example, 62 percent of principal investigators were classified as social and behavioral scientists (overwhelmingly psychologists, who made up 55 percent of the total number of grantees), and they received 60 percent of that year’s grant funding. By way of contrast, psychiatrists were only 12 percent of the principal investigators, and their share of the research budget was a comparatively meager 15 percent.29


A WIDE RANGE OF SUBSPECIALTIES shared in the bonanza. Psychobiologists took their share, but so, too, did psychologists working on cognition, perception, personality, and social psychology; on group dynamics, motivation, and development; and on language and behavior, psychotherapeutic interaction, and operant conditioning. In some of these areas, the mental health relevance of the work was clear; in others, attenuated almost to the vanishing point. Central or peripheral, it scarcely seemed to matter to Felix and his subordinates, provided that the grant applications passed muster with peer reviewers. This, surely, was one of psychology’s key advantages. For the experimental, laboratory-based, and statistical character of most research in the field, and its conformance with the hypothesis-testing empiricism and mathematical formalism that was seen as the hallmark of “science,” made it ideally suited to survive the grant-review process. Over time, as leading cadres became experienced at grantsmanship and incorporated the lessons they had learned into the training of the next generation of psychologists, and as experimental design, data collection, and statistical sophistication advanced, these comparative advantages became self-reinforcing.

The contrast with psychiatry in these same years is illuminating. With positions of power and authority dominated by the psychoanalytically inclined, psychiatry was poorly placed when it came to competing for large research grants. Psychoanalysis, beginning with Freud, developed its theory and technique from individual clinical encounters and the case history in ways that simply did not lend themselves to the experimental, large-scale approaches the NIMH and other government funding agencies quickly came to prefer. Indeed, most scholars working within the analytic tradition were actively hostile toward such modes of knowledge generation, seeing them as deeply flawed and unlikely to address the questions that they argued were central to understanding mental illness. Psychoanalysis suffered also from some self-imposed structural disadvantages. Where academic psychologists were entrenched within the university system, the most prominent and influential psychoanalysts were located in free-standing institutes with no direct connection to academic medicine. Beyond this, the training they provided was oriented toward practice, not research. Leading psychoanalytic clinicians were to be found outside the ambit of the universities—at places like the Menninger Clinic in Kansas, Austen Riggs in the Berkshires, and Chestnut Lodge in Maryland, all establishments with at best tenuous links to the academic world.

In the years of psychoanalytic dominance, the NIMH was generous with funding to train analysts and encourage the expansion of their ranks in academic psychiatry. When it came to research grants, however, much less was forthcoming. Looking back, one psychoanalyst complained, “We couldn’t develop grants that satisfied the psychologists and social workers who were running the grant departments at NIMH. I went down to Washington (and) I got nowhere because we couldn’t formulate psychoanalytic research in a way that was ‘one, two, three, four, five.’ Psychoanalysis is not that way.”30

Between 1948 and 1963, NIMH research grants totaled $156 million. Of this, less than $4 million (about 2 percent) was directed to psychoanalysts or psychoanalytic institutes, and most of this money was, as the historian Nathan Hale, Jr., pointed out, “not directly for research in psychoanalysis; most were for psychoanalysts working in related fields, such as studies of family relationships in schizophrenia, autism, mental hospitals, psychosomatic studies of adults and children, and early infant development.” Over a longer time frame, and considering only grants for research on psychotherapy, the marginalization of psychoanalysts was striking: only 7 percent of the $30 million disbursed in this area between 1947 and 1973 went to analysts. By contrast, nearly 50 percent, or $14 million, was awarded to study behavioral therapy, a field dominated by psychologists.31 In time, psychoanalysts’ inability to secure major research funding would help to undermine their standing in medical schools.


THE EXPANSION OF THE UNIVERSITY SYSTEM after the Second World War owed much to the GI Bill, which provided funding for returning soldiers to pay for higher education. But in the long run, it was the ever-increasing research money provided by the federal government that proved a more durable basis for the growth of tertiary education, and transformed its operations. A product of total war and of the Cold War, the big science and big medicine underwritten by federal largesse rapidly transformed research universities into knowledge factories. Institutions and departments were ranked according to the dollars their research entrepreneurs succeeded in capturing from government and industry, and the practitioners making up the various academic guilds found their prestige, their influence, even their salaries, ever more tightly linked to their contributions (or lack thereof) to the pile of treasure to which the seats of modern learning became addicted. For a time, the flow of training dollars hid the danger this represented to psychoanalysis. NIMH-funded training grants increased from $4.25 million in 1948 to $84.6 million in 1965 and reached a peak of $111 million in 1974. But the rapid decline of training grants after that left psychoanalytic psychiatry bereft of the currency that mattered to its academic masters, so when challenges arose from other quarters, it was ill-equipped to defend itself.

Worse still, there was a cuckoo in the psychotherapeutic nest, a creature that owed its very existence to the Second World War. Psychology had emerged after 1945, not just as a highly successful competitor for research dollars, but as an alternative profession offering therapy to the mentally troubled. In the first half of the twentieth century, as a variety of social science disciplines organized and defined themselves within a university context, psychology had been split between the core of the discipline, which consolidated around a vision of an academic, laboratory, and research-based specialization, and a group with a more applied agenda. The progress of what as early as 1907 had been dubbed “clinical psychology” was halting and uncertain for most of this period.32 It owed what success it had to mental testing, built on the importation of the IQ test, first developed by the French psychologist Alfred Binet in 1905, and applied in such arenas as the identification and control of the “feeble-minded” and the disposition of juvenile delinquents.33 Attempts to diversify into the treatment of psychological disorders met with fierce resistance from organized psychiatry.34 Until the war, even as a marginal and stigmatized branch of medicine, psychiatry had far more legitimacy and power than the (heavily female) new discipline of applied psychology, and it was readily able to keep the upstart psychologists in a subordinate position. Reflecting this reality, in 1940 there was still not a single PhD program in clinical psychology.35 That year, a mere 272 members of the American Psychological Association identified themselves as practicing clinical psychology of any kind, and for the most part this meant mental testing, not administering psychotherapy.36


THE SHEER SCOPE OF MENTAL PROBLEMS among the armed forces, and the mismatch between the number of trained psychiatrists and the demand for treatment of psychiatric trauma among the troops, had prompted the recruitment of some psychologists to treatment teams—a move made easier by the fact that the treatments on offer were essentially supportive and psychotherapeutic in nature, and by the existence of military hierarchies that enabled medics to remain in overall charge.37 William Menninger, desperate for more manpower, welcomed their contributions (though preferring that they concentrate on mental testing rather than psychotherapy). As Ellen Herman has pointed out, by war’s end, there were more than 1,700 psychologists working for the military, many of them providing psychotherapy. “In 1946, a survey of every psychologist and psychologist-in-training who had served in the military showed a striking movement toward clinical work in the war years. Hundreds of them had practiced psychotherapy for the first time and many intended to return to school for further training in this field.”38 Federal funding, originally from the Veterans Administration and then from the newly established NIMH, made that possible on a mass basis. The center of gravity of psychology as a discipline was irrevocably altered.

There was fierce resistance in some departments of psychology to this shift toward clinical psychology. “Applied” work in university settings has traditionally been regarded with suspicion, and “theoretical” work routinely carried the highest prestige.39 Wedded to this view, and to the laboratory-based, research-oriented model of their discipline, some high-status departments like Harvard, Princeton, and the University of Pennsylvania rejected the very idea of establishing clinical psychology programs.40 Others, however, were more easily swayed, and in short order clinical training programs began to proliferate.

Chafing at their subordinate position in the social division of labor, clinical psychologists sought to bolster the legitimacy of their new profession and obtain a greater degree of professional autonomy. Leaders of the movement realized that their legitimacy depended on close ties to university departments and to a curriculum that combined clinical training with demonstrated competence in research methodology. At a 1949 conference held in Boulder, Colorado, and funded by the US Public Health Service’s Division of Mental Hygiene, the nuts and bolts of just such a program were hammered out. What became known as the “scientist-practitioner model” was the core of the new approach, which appropriated the mantle of science and combined it with supervised clinical training, emphasizing “the necessity of an academic background in general and experimental psychology as the foundation for training in clinical psychology.”41

Politically, this was extraordinarily astute. By requiring two years of basic training in academic psychology, the new program encouraged the model’s acceptance by existing university departments. The influx of federally funded clinical psychologists brought extraordinary amounts of new funding to the discipline, and the prospect of adding substantial numbers of new research faculty. The “scientist-practitioner” model ritually bowed to the superior knowledge and standing of the researchers, provided a “scientific” basis for the new professionals’ practice, and created a means of distinguishing the properly trained from the quack.42

As early as 1947, the Veterans Administration was underwriting the training of 200 clinical psychologists. From 1949 onward, the newly established NIMH advanced much larger sums to underwrite graduate and professional training, and while the bulk of the institute’s funding was committed to the training of psychiatrists, a substantial sum was diverted to clinical psychology, subsidizing the hiring of additional faculty in psychology departments, and providing stipends to would-be practitioners.43 The upshot was a dramatic expansion of the field. In 1945, the American Psychological Association had 4,173 members. By 1960, there were more than 18,000—a reflection of the fact that five times as many doctorates in psychology had been awarded in the 1950s as in the preceding decade. By the turn of the century, membership exceeded 80,000, and those numbers continued to rise, reaching a peak of more than 92,000 in 2008. National Science Foundation data suggest that, by 1964, more than two-thirds of American psychologists with a doctorate were working in the mental health field.

Ironically, it was the very dominance of psychoanalytic perspectives in postwar America that had done much to create the social space for psychology to expand its domain beyond the laboratory. The army and the traditional mental hospitals were both hierarchical organizations, and their bureaucratic structure kept psychologists duly subordinate to their medical superiors. William Menninger had told the clinical psychologists who worked for him during the war exactly where they stood: cooperation could proceed only if the psychologists acknowledged and abided by their dependent status. There ought to be no gainsaying that “certain kinds of painstakingly gathered clinical knowledge are prerequisites to carrying on psychotherapy” and these forms of knowledge necessarily needed to be taught by medically trained psychiatrists, those who alone possessed that knowledge in its entirety. Provided that psychologists “can accept the psychiatrist as the quarterback of a team that works together,” Menninger made clear, “the bugaboos of status, jurisdiction, equality and subordination become dead issues.”44

But psychoanalysis, with its office-based practice, removed the possibility of bureaucratic subordination, allowing psychologists to practice independent of psychiatric supervision. The incursion of clinical psychologists into their turf was a development that medically trained analysts resented and fiercely resisted. In 1955 Maxwell Gitelson, who would become president of the American Psychoanalytic Association the following year, wrote that he was “committed to the liquidation of lay therapy in the United States.”45 Yet “liquidation” was quite beyond his powers.

On the contrary, the experimental, laboratory-based, and statistical character of clinical psychology, and its conformance with the hypothesis-testing empiricism that was seen as the hallmark of “science,” made it far more capable of developing research programs that satisfied the requirements of peer review. Guided by their academic colleagues, clinically oriented psychologists were soon adept at modeling their grant proposals along these lines. Crucially, this led clinical psychology to develop therapeutic interventions that targeted particular symptom complexes for modification, and that could claim some degree of empirical validation.

On another front, and not coincidentally, the emphasis on a strong research component in the doctoral training of clinical psychologists greatly strengthened their ability to claim for themselves the prestige that accrued to the status of being a “scientist” rather than a mere technician. It also allowed them to boast of their superiority to the average practicing psychiatrist, whose medical education had not included instruction in how to conduct scientific research. As the executive secretary of the American Psychological Association, Dael Wolfle, pointed out in 1949 as the model was being adopted, “The average practicing physician or psychiatrist has neither the research interest nor the research skill that we attempt to develop in the student receiving his Ph.D. in clinical psychology.”46


THE APPROACH THAT CAME to dominate clinical psychology, and, after the demise of psychoanalysis in the 1980s, to characterize whatever residual dabbling in psychotherapy some psychiatrists still engaged in, was cognitive-behavioral therapy (CBT): a set of techniques that could be standardized and aimed at the narrowly focused treatment of particular symptoms, rather than at the nebulous, hard-to-operationalize global reconstruction of the personality that psychoanalysts promised. This sort of intervention lent itself more readily to quantitative assessment and evaluation, and it was much briefer, more focused, and cheaper than the notoriously interminable interventions associated with classical psychoanalysis. Though a handful of psychiatrists, most notably Aaron T. Beck, a disillusioned psychoanalyst based at the University of Pennsylvania, contributed in important ways to its development, most of its central exponents and protagonists came from the ranks of the psychologists—figures like Albert Bandura of Stanford University and Albert Ellis, who founded his own training institute in New York.

In the years following the Second World War, many sectors of academic psychology had embraced a behaviorism that had its roots in the work of John B. Watson, a movement whose foremost postwar exponent was B. F. Skinner of Harvard University. A younger generation, however, had become disenchanted with behavioral psychology’s singular focus on external actions, and its denial or neglect of human consciousness and the mind. Steering clear of introspection, and thus remaining sharply at odds with psychoanalysis, these psychologists nonetheless gave increasing weight to internal mental states—beliefs, desires, and motives.

The cognitive revolution that swept the field was translated among clinical psychologists into an array of techniques that sought to combine an emphasis on cognition and consciousness with behavioral strategies that could form the basis of new forms of psychotherapy. Cognitive distortions and maladaptive behaviors that fostered repetitive negative thinking were, in their eyes, a primary source of mental disturbance. Focusing directly on symptoms, they devised strategies to modify the distorted thoughts and self-defeating patterns of behavior that they claimed produced emotional distress. Applied initially to panic disorders and depression, those techniques in time came to be applied to a vast array of psychic troubles: anxiety, eating disorders, phobias, obsessive-compulsive disorders, personality disorders, anger management, even spousal abuse—a host of the kinds of problems clinical psychologists encountered in their growing office practices.47

The statistical validation that cognitive-behavioral therapies claimed for themselves would prove an important comparative advantage when psychotherapists sought insurance reimbursement, and these interventions helped clinical psychologists to legitimize their profession. Mental health care in the first four decades of the twentieth century had mostly consisted of treatment in public mental hospitals at state expense. In the postwar era, a new market had emerged for outpatient psychiatry. Those sessions had to be paid for privately, and it helped that Americans were used to treating health care of all sorts as a commodity purchased in the marketplace. Increasingly, many patients could offset some of these costs through employer-provided health insurance. The insurance industry was initially reluctant to pay for the treatment of any kind of mental illness, fearing that the associated costs would prove crippling.48 Over time, however, more and more middle-class Americans secured at least some degree of insurance coverage for mental health care.49

In this new environment, psychoanalysts found themselves facing a distinctly skeptical reception from insurance companies. Their form of psychotherapy required extensive and expensive sessions and could extend over many years, with no obvious end in sight. Analysts were now competing, besides, with a heavily feminized profession in clinical psychology that perforce had to settle for lower financial rewards. The rich might prefer the services of psychoanalysts, but when it came to less affluent patients, Freud’s followers were forced to confront the dismaying reality that their rivals provided much shorter treatments, for which they charged considerably less by the hour, and that apparently produced demonstrable results.


THOUGH CBT HAD A VERY DIFFERENT intellectual genealogy, it echoed Adolf Meyer’s notion that mental illness was rooted in faulty habits. Unlike Meyer, though, these therapists began to develop techniques for addressing maladaptive patterns of thinking, beliefs, and behaviors. For anxiety disorders, phobias, panic attacks, obsessive-compulsive behaviors, and some forms of depression, the various tools CBT provides—teaching about the nature of fear and anxiety, and offering patients techniques to avoid or alter self-destructive ways of thinking, reduce muscular tension, or desensitize themselves to feared stimuli via controlled exposure—have been shown to be effective, though not universally so. Their usefulness in the treatment of depression is more variable and uncertain, and improvements have not always been found to be lasting.50

The distress and disability associated with these conditions are often considerable, so even though the Cochrane review of CBT in generalized anxiety disorder found that fewer than 50 percent of patients showed a clinical response to treatment, that degree of improvement is still meaningful and helps explain the ability of CBT therapists to attract clients.51 For some of these patients, CBT is certainly useful and welcome. But just as Meyer’s suggestion that psychotic disorders and manic depression could be seen as the product of bad habits was implausible and found few followers, so, too, attempts to extend cognitive-behavioral approaches to encompass these graver forms of mental disturbance have been largely unavailing. As we shall see in Chapter 22, CBT for serious forms of mental illness (schizophrenia, bipolar disorder, and major depression) shows little evidence of broad effectiveness. A further caveat is that Cochrane reviews, the most systematic reviews of the evidence we have, emphasize that the available studies are of low to moderate quality, limiting the confidence one can invest in their findings. CBT can claim some empirical support for its approach, unlike many of its rivals, but its vaunted evidentiary base is less secure than its enthusiasts proclaim.52

Analysts insisted that their rivals were playing whack-a-mole, treating the symptoms of mental disorders and not their root causes. Psychodynamic interventions, they claimed, provided a deeper and more lasting transformation of the personality and attacked the roots of psychopathology. For a skeptical insurance industry, these claims were undercut by psychoanalysts’ continued inability to provide compelling evidence of the efficacy of their interventions, the same problem that Robert Morison of the Rockefeller Foundation had criticized them for in the late 1940s.

For a quarter century after the end of the Second World War, psychoanalytic ideas enjoyed a remarkable degree of authority in intellectual circles, spreading widely in popular culture. In the 1950s and 1960s, analysts made lucrative livings on both coasts and in a few cities in the interior, their incomes outpacing those of many other medical specialties. They dominated most academic departments of psychiatry in medical schools and attracted the most talented recruits to the specialty. Many psychoanalysts thought their position was unassailable. But theirs would prove to be a fragile hegemony, one that would be reduced to rubble in the space of little more than a decade.