IX
L’HYSTÉRIE MORTE?

Diseases disappear. Occasionally they vanish when the pathogen responsible for the illness needs a human reservoir and vector to survive, and public-health campaigns succeed in eliminating all existing outbreaks. Such was the case with smallpox, whose last naturally occurring case was documented in 1977, and whose demise was announced by the World Health Organization in May 1980. Only the survival of the virus in biological warfare laboratories gives us reason to be concerned about the possible reappearance of an illness that once disfigured and killed on a massive scale. Perhaps polio may soon follow, for new cases of the disease now number in the low hundreds each year, compared with the estimated 350,000 cases reported as recently as 1988. Occasionally, the discovery of a medical magic bullet ushers a dread disease off the stage. In the early twentieth century, between 15 and 20 per cent of male admissions to New York State’s mental hospitals suffered from the ravages of general paresis. In its early stages, the disorder was visible only through subtle signs, overlooked by all but the trained eye—minor disturbances of gait and articulation, the unequal reaction of the pupils to light. But, as the disease unfolded, neurological catastrophes accompanied ever more florid psychiatric symptomatology—delusions of immense sexual prowess, physical strength, wealth, and social power coexisting pathetically in bodies exhibiting progressive paralysis and decay—till fatuity and a dreadful end, flesh rotting and vanishing, bedsores suppurating, mental darkness descending, muscles failing, and death supervening, often from choking on one’s own vomit. The discovery that the source of these troubles was the ravages of tertiary syphilis was followed, albeit some decades later, by the introduction of penicillin and other antibiotics, a development that rendered such spectacles moot. No more paretics.

More frequently, though, diseases vanish because medical fashions change, understandings of disease alter, and previous ways of classifying nature and its pathologies are superseded. Where are the dropsies and the relapsing fevers with which every doctor was once familiar? Where are the episodes of chlorosis, or the sthenic and asthenic diseases that loomed so large in nineteenth-century medicine? Gone or reconceptualized every one. More to the point, for our present purposes, where are all the cases of hysteria that once thronged the waiting rooms of the nerve doctors—the paralyses and tics, the phobias and the phantasms, the amnesia and the somnambulism, the hemianesthesia and the histrionics, the inexplicable loss of voice and sight, the emotional turmoil and the faints, and the dramatic muscular contractions that used to culminate in the arc-en-cercle so familiar from the beginning to the end of the nineteenth century? Where are the hysterical invalids, so many of them women, who were so visible then?

All apparently vanished into the ether. The new Bible of neo-Kraepelinian mainstream psychiatry, the fourth edition of the Diagnostic and Statistical Manual produced by the American Psychiatric Association, can find no place in its vast and ever-expanding array of possible pathologies for a disorder that was once the bread and butter of out-patient neuro-psychiatry, and the inspiration for the theory and practice of Freudian psychoanalysis. Nearly 900 pages and counting, an array of psychiatric disorders now numbering several hundred, and yet no place any longer for hysteria. The term survives in the language, of course, employed as a term of abuse, an epithet most often directed at women who make a spectacle of their extreme emotional lability, or invoked when some collective disturbance is dismissed as mass hysteria. But clinicians report that the condition itself has shuffled off into oblivion. They have no more hysterics to present at grand rounds, to provide entertainment and enlightenment for their trainees. The disorder that haunted the imagination of the previous fin de siècle, the spectacle of the hysterical female (soon joined by legions of male psychiatric casualties of the “war to end all wars”), has apparently evaporated. In the words of one of its best-known modern historians, Étienne Trillat, “L’hystérie est morte, c’est entendu.”1

Its death (if dead it is) was certainly a lingering one. Psychoanalysis, as Carroll Smith-Rosenberg aptly put it, was “the child of the hysterical woman.”2 And parent and child each supported the other, till the collapse of psychoanalysis almost simultaneously brought about the demise of its hysterical parent (though in fact hysteria had shown signs of serious decline for some time, and psychoanalysis had long neglected the disease that had brought it into existence).

Freud had stitched together psychoanalysis, as both theory and practice, on the foundation of his own hysterical breakdown, and the experience of ministering to a handful of hysterical women in the 1890s. But once he had developed his analytic technique, and constructed his basic model of the conversion of psychic conflicts into physical symptoms, he seems rapidly to have lost interest in the subject. As his intellectual edifice grew ever more complex, and his gaze shifted to the contemplation of the epic problems posed by civilization and its discontents, hysteria quickly lost its initial centrality. Following their master’s lead, in this as in all else, Freud’s disciples likewise directed their attentions elsewhere. To be sure, those hysterics who showed up to have their psyches rearranged were rarely turned away. They were too lucrative for that. But they seem to have appeared less and less frequently. If psychoanalysts were abandoning hysterics, hysterics also seem to have been deserting psychoanalytic couches in droves.

Most hysterics, it turned out, did not want to be told that their disorders were all in their minds. Victims of a disorder that mysteriously mimicked all sorts of neurological diseases without any obvious organic cause, they were insistent that their disease was a real, physical entity. They did not take kindly to theories that suggested otherwise. It was not just the General Staffs of the First World War armies who equated psychological troubles with malingering and bad faith. Many of the alleged victims of psychopathology shared that view, and loudly proclaimed their entitlement to the status of being genuinely sick. Like the renegade psychiatrist Thomas Szasz, they saw mental illness as a fiction and a myth, a term that disparaged the reality of their sufferings and rendered them essentially fraudulent. Besides, the penetration of psychoanalytic ideas and institutions was slow, halting, and uneven. The French, for nationalistic reasons, would have nothing to do with such Teutonic and Semitic doctrines till as late as the 1960s, when they finally emerged in the Frenchified form propounded by Jacques Lacan. In the German-speaking world, the advent of the Nazis soon put paid to Freud’s “degenerate” Jewish ideas. And in Britain, with its eclectic psychiatric profession, and its distaste for wallowing in “morbid” introspection, Freudian ideas were never more than a minority taste, mostly confined to the chattering classes, a reality that remained even when Sigmund and Anna Freud were driven into exile in London. To be sure, analytic ideas created a small institutional bridgehead centered around the Tavistock Clinic in London. But the machinations of Edward Mapother and the members of the Institute of Psychiatry ensured that the gates to academic respectability, in the form of an affiliation with the University of London, were firmly shut in the analysts’ faces. And the British tradition of emotional reserve kept most affluent patients at bay. Only in the United States, irony of ironies, did psychoanalysis flourish. Freud’s contempt for the culture notwithstanding, psychoanalysis attracted a small but growing band of American adherents, professionals and patients, in the years leading up to the war against Hitler and Hirohito, and for a quarter century and more after its end Freudian doctrines became the hegemonic ideology of the American psychiatric elite.

The Second World War, of course, played a major role in the emergence of this dominance. Exploiting the memories of the shell-shocked soldiers of the First World War, American psychiatrists persuaded the military to allow them to examine their recruits to screen out the psychiatrically vulnerable, and so they did, to the tune of nearly two million men rejected as mentally unstable. But it made no difference. Once exposed to the horrors of modern warfare, or sometimes even the prospect of the horrors of the battlefield, the men of “the greatest generation” broke down in large numbers, just as their fathers had before them. There were more than a million admissions to American hospitals in the war years for neuro-psychiatric problems. Among combat units in the European theatre in 1944, admissions were as high as 250 per 1,000 men per year, an extraordinary rate. “Of the casualties severe enough to require evacuation during the major US campaign in the Pacific, at Guadalcanal in summer and fall 1942, 40 percent were psychiatric.”3 And the surge in the ranks of the psychiatrically impaired showed no signs of diminishing in the immediate aftermath of the conflict. In 1945, 50,662 neuro-psychiatric casualties crowded the wards of military hospitals, and to those who were institutionalized we must add the 475,397 discharged servicemen who were receiving Veterans’ Administration pensions for psychiatric disabilities by 1947.

These wartime experiences had a profoundly transformative effect on psychiatry itself. In 1940 psychiatrists had constituted a marginal and despised specialty, mostly still trapped within the walls of custodial asylums. The American Psychiatric Association had a total of only 2,295 members. By 1945 the military alone had some 2,400 physicians assigned to psychiatric duties. Many of these doctors had, of course, no prior background in the field, and had been rapidly indoctrinated with a thin veneer of knowledge to allow them to play their expected roles. Nonetheless, they rapidly acquired extensive experience with psychiatric disability, and many of them sought to stay in the field after the war.

The massive number of breakdowns among presumably previously mentally sound soldiers helped to reinforce putative links between overwhelming stress and mental pathology, and, under their psychoanalytically inclined leader, Brigadier William Menninger, the shock troops of this new wave of psychiatrists readily bought into a psychodynamic account of what was wrong with the soldiers they were charged with treating. Second World War veterans did not have shell shock, but “war neurosis” or “combat exhaustion” proliferated apace. The change in terminology was no accident. The doctors discovered that the less “psychiatric” the diagnosis the better, since a psychiatric label seemed to confirm victims in the sick role and make their recovery unlikely. Better to speak of exhaustion and hustle them back to the fight just as soon as possible. Forward, brief, simplified treatment communicated clearly to patients, treatment personnel, and the combat reference group that psychiatric casualties were only temporarily unable to function and fight. Conversely, evacuation of psychiatric casualties to distant medical facilities “weakened relationships with the combat group and implied failure in battle for which a continuation in the sick role was the only honorable explanation.”4 Giving the soldier a psychiatric diagnosis made that move on the patient’s part all the more likely, which was why “combat exhaustion” became the preferred term, with its implication that the overtaxed system would recover with little more than a brief rest and respite from the fighting. If a mere psychiatric label had such negative effects on outcomes, sustained psychiatric treatment appeared to make matters even worse, greatly increasing the chances of permanent disability. Whereas there had been debates and then an emerging consensus among many doctors during the First World War that shell shock was a form of masculine hysteria, there were few comparable moves when the next worldwide military conflict exploded.

Three sorts of treatment regime emerged under the pressure to deal with the profound renewed threats to morale and military efficiency that “combat exhaustion” posed: brief interventions lasting a day or two as close to the front lines as possible; removal to a more formal psychiatric facility containing a few hundred beds further up the supply chain, where up to two weeks of more sophisticated treatment were offered; or removal from the battlefield entirely to something that more closely approximated a more traditional mental hospital, where more elaborate interventions could be attempted. The last two venues restored only a very small fraction to a combat role, and many patients treated in them became permanent invalids. And treatment at the front lines generally consisted of little more than warm food and a sedative to secure a good night’s sleep, and the mobilization of guilt by the soldiers’ doctors, manipulating the soldiers’ feelings of solidarity with their units and their desire not to let down their fellow fighting men: an American version of tea and (not too much) sympathy.

On this foundation, American psychiatry in the post-war era swung decisively in a psychoanalytic direction, and it became increasingly an out-patient specialty dealing with the walking wounded. Where virtually the entire membership of the American Psychiatric Association worked in mental hospitals at the end of the 1930s, by 1958 as few as 16 per cent of a greatly expanded profession did so. All those psychiatrists whose livelihood now depended upon an office-based practice naturally gravitated away from the psychotic and towards an ever closer embrace of the various milder “neuroses.” It would be natural to conclude, therefore, that hysteria would enjoy a new day in the sun, an expanded place in the theorizing and therapies of the profession. Surely the psychoanalysts now seemingly so securely established as the elite of the American psychiatric profession would renew the focus on a disorder that had given birth to their specialty?

But hysteria turned out to be an elusive quarry. Many of the sufferers had fled, implicitly sharing the view of turn-of-the-century neurologist J. A. Ormerod that the label had acquired “the disagreeable connotation of a certain moral feebleness in the patient, and of unreality in the symptoms.” The Washington psychoanalyst Paul Chodoff commented in 1954 that “hysterical conversion phenomena undoubtedly occur less frequently than formerly”;5 and, two years later, another Washington psychiatrist, Henry Laughlin, was still more emphatic, asserting that “such symptoms are rarely seen in the civilian practice of psychiatry.”6 Scarcely a decade on, Ilza Veith, who interpreted the entire history of the disease through a psychoanalytic lens, complained, as she drew her discussion to a close, about “the nearly total disappearance of the illness.” It had become, she thought, “an apparently infrequent disease”—ironically enough, according to her account, precisely because Freud had understood its dynamics so well, and had communicated his message so well, that “hysteria had become subjectively unrewarding … the ‘old-fashioned’ somatic expressions have become suspect among the more sophisticated classes, and hence most physicians observe that obvious conversion symptoms are now rarely encountered and, if at all, only among the uneducated of the lower social strata.”7

“Where has all the hysteria gone?” asked the female analyst Roberta Satow.8 Gone to its grave, said Étienne Trillat, “and taken its secrets with it.”9 Satow’s query would soon find its echo in another mystery: “Where have all the psychoanalysts gone?” In 1970, with only a handful of exceptions, all the major departments of psychiatry in North America were headed by a psychoanalytically trained psychiatrist. A decade and a half later, virtually none of them was. Instead, neuroscience ruled the roost. It was a sudden and spectacular fall from grace, a story that surely includes elements of profound political miscalculation (a misreading by the analysts of the significance of the move to the neo-Kraepelinian reclassification of mental diseases, and the impact of that cataclysmic cognitive shift on the legitimacy of psychoanalytic approaches); the growing sense that psychoanalysis just did not work (and, not infrequently, diagnosed real organic disorders as neurotic illnesses, to sometimes disastrous effect); and, perhaps most notably of all, the psychopharmacological revolution (both via its direct effects, and through the massive infusion of Big Pharma money it brought in its train). That revolution massively affected the practice of psychiatry at many different levels: clinically; cognitively; organizationally; even politically. By the mid-1980s, American psychoanalytic training institutes, having previously been rigorous about excluding all but MDs from training analyses, were welcoming lay analysts for the first time, as the enrolment of the medically trained all but vanished.

For better or worse, we now live in a psychopharmacological age. Prozac and Valium, Thorazine and Zoloft, and a host of other psychoactive substances, are daily ingested by millions, and have made fortunes for those creating and peddling them to an ever-expanding market of eager (and sometimes not-so-eager) consumers. Since 1980, when the American Psychiatric Association promulgated the third edition of its Diagnostic and Statistical Manual (DSM-III), American psychiatry has achieved worldwide hegemony, and in many ways pills have replaced talk as the dominant response to disturbances of emotion, cognition, and behavior. Pharmaceutical corporations have underwritten the revolution, and have rushed to create and exploit a burgeoning market for an ever broader array of drugs aimed at treating some of the hundreds of “diseases” psychiatrists claim to be able to identify. And patients and their families have learned to attribute their travails to biochemical disturbances, to faulty neurotransmitters, and to genetic defects, and to look to their doctors for the magic potions that will produce better living through chemistry.

The re-biologization of psychiatry has been accompanied by what Mark Micale has wittily called the “exorcism” of hysteria from psychiatry—a systematic effort to root out the last lingering residues of psychiatry’s Freudian misadventure. The lack of concern among psychoanalysts about problems of descriptive psychopathology had led them to ignore the formation in 1974 of the American Psychiatric Association’s Task Force on Nomenclature and Statistics, the group charged with updating psychiatry’s Diagnostic and Statistical Manual to accord with a forthcoming revision of the International Classification of Diseases (ICD). Early protests from Howard Berk that “DSM gets rid of the castle of neurosis and replaces it with a diagnostic Levittown”10 were met with soothing words and swift action to marginalize him. The one psychoanalyst appointed to the panel found his suggestions routinely scorned and ignored, and resigned in protest. His departure was only one of a multitude of political miscalculations on the analysts’ part. Just months before the publication of the Task Force’s report, it finally dawned on some of them that the proposed document amounted to “a wholesale expurgation of psychodynamics from the psychiatric knowledge base.”11

There was a flurry of protest. Threats were made to mobilize the membership to reject the new manual. Elaborate negotiations ensued. The analysts were thoroughly outmaneuvered. Meekly, they agreed to a “compromise”: the term “neurosis,” whatever the word signified, was the psychoanalysts’ bread and butter, and comprised those disorders, including the now rarely seen hysteria, that were at the core of their enterprise; but, rather than being reinstalled into the body of the manual (a step that would have compromised the neo-Kraepelinians’ goals), it was agreed that the words “neurotic disorder” would be added in parentheses after the newly named “diseases” that had previously formed part of the kingdom of the neuroses. So it came to pass that “the appellative ‘neurosis’ and the clinical, psycho-dynamic tradition for which it stood had been marginalized to the relative obscurity of parentheses.” And soon not even that: a further revision some seven years later eliminated the parenthetical additions. Hysteria and indeed the whole array of “neurotic” disorders had been carved up and thrust out of sight. As Donald Klein, a leading psychopharmacologist and one of the authors of the revolution, later gloated: “The neurosis controversy was a minor capitulation to psychoanalytic nostalgia.”12

It is tempting to trace the residual nooks and crannies where bits and pieces of the old hysteria hang out, hidden from sight in the new neo-Kraepelinian consensus. After all, the various versions of the Diagnostic and Statistical Manual resemble the Yellow Pages in more than just their propensity to grow ever more elephantine as time goes by. (David Healy has pointed out that the number of psychiatric “illnesses” one can suffer from grew from 180 in the third edition, to 292 in the revised third edition, and to over 450 by the time the fourth edition appeared.) As in the Yellow Pages, if one looks diligently enough through the manual, one can find whatever one desires: in this case, the tools to pathologize virtually any species of human behavior, and categories and concepts that one can use in all sorts of creative fashions. “Shell shock” begat “combat exhaustion,” which begat the politically contrived diagnosis of “post-traumatic stress disorder,” or PTSD. Surely cases of classic conversion hysteria lurk under the disguise of the new scientistic categories: as “dissociative disorder: conversion type”; or as “histrionic personality type”; perhaps as “psychogenic pain disorder”; or under the catch-all categories of “undifferentiated somatoform disorder” or “factitious illness behavior”? That would suggest that hysteria has vanished as a medical diagnosis because of a fundamental redefinition of the psychiatric landscape, one that its enthusiasts compare to the replacement of superstition by science, and its critics see as more closely analogous to the recreation of the extraordinary and baroque nosologies that were a feature of eighteenth-century medicine.

The Canadian medical historian Edward Shorter has suggested a different explanation for hysteria’s strange evolution. There exist, he suggests, repertoires of psychosomatic illness that are characteristic of particular cultures and particular epochs in our history. Throughout history, he suggests, the phenomenon of hysterical conversion can be found: the “flight into illness” via the transformation of acute emotional anxiety into physical symptoms, motivated by the secondary gains the sick role can provide. A particular cultural and social setting and the reigning medical theories of the day provide a symptom pool from which the unconscious mind selects the kinds of somatization that then manifest themselves: swooning in the eighteenth century; paralyses, gait disturbances, seizures, and retreats into the role of permanent nervous invalid in the nineteenth century; eating disorders and chronic fatigue in the twentieth century. “By defining certain symptoms as illegitimate,” he claims, “a culture strongly encourages patients not to develop them or to risk being thought ‘undeserving’ individuals with no real medical problems. Accordingly, there is great pressure on the unconscious mind to produce only legitimate symptoms.”13

There is another possibility, however. Perhaps hysteria is not dead after all? Perhaps Charcot was right when he insisted that “L’hystérie a toujours existé, en tous lieux et en tous temps.”14 Perhaps the disease that even those who insist on its reality concede is the very instantiation of lability, a chameleon-like disorder that can mimic the symptoms of any other, and that seems to mold itself to the culture in which it appears, has just assumed a different guise? And one not so very different, it may be, from the disorders Charcot and Freud encountered and described.

Shorter has suggested that the grands gestes of Charcot’s Paris have been replaced by a more anodyne and elusive set of symptoms, chronic fatigue notable among them. Chronic fatigue syndrome has an obvious overlap with the neurasthenia of the late nineteenth century, and is a disorder that is similarly subjective and hard to disprove. Its sufferers insist, like other hysterics, that theirs is a real physical disorder, and, if it lacks the drama presented by the seizures, the hemi-paralyses, and the erotic writhings and moans of Charcot’s patients, it nonetheless presents with an impressive array of bodily symptoms: sore throats, memory loss, aching muscles and heads, insomnia, general lassitude. Fearful of being labeled classic hysterical malingerers, its victims have often opted for labels that seem more distinctively and solidly medical: Epstein-Barr virus; fibromyalgia; or myalgic encephalomyelitis (grim-sounding and serious, except when rendered in the form of its unfortunate acronym, ME). It has scarcely helped. Mainstream medicine has evinced skepticism, and the public at large has gleefully dismissed the disorder as “yuppie flu.”

Bitterly, the fatigued denounce their critics, the worst-placed rattling their wheelchairs in lieu of shaking their fists, accusing doctors of being “lamentably ignorant of the most basic facts of the disease.” Proudly they re-dedicate themselves to “the long uphill battle against ignorance and inertia.”15 Pesticides, hormones, chemicals, bacteria, viruses: something must surely be responsible for their suffering, and, if modern medicine pronounces itself unable to oblige with a physical account of their troubles, and proposes to ship them off to the tender mercies of the psychiatric profession, then they must seek help elsewhere. Some have opted for self-help or have turned to holistic practitioners, who are happy to display more sympathy and faith in the physical reality of their disorder, and to link it, as the nineteenth-century proponents of American nervousness once did, to the perils of civilization, only this time in the guise of a poisoned modern environment. Others have sought online support groups, where they can share their experiences and sense of grievance. The verbally and sometimes (ironic as that would be) almost physically violent response of the ME patients to the suggestion that their symptoms are psychosomatic, or “all in their heads,” is a clue to what may have happened to other, more dramatic cases of hysteria. Such patients desperately want a neurological diagnosis. That diagnosis will validate the reality of their disorder, and legitimize their suffering, but the neurologists who have grown to professional maturity in the post-Charcot world evince little or no interest in their troubles. Pausing only long enough, in the most plausible of cases, to subject them to batteries of tests and scans before pronouncing them physically normal, they suggest these troublesome patients go to see a psychiatrist. But that is the last thing these patients want.

The neurologists’ dismissal is not new. Bernard Sachs spoke for many of his neurological colleagues early in the twentieth century when he dismissed the hysterical as peripheral to the neurological enterprise:

While hysterical and neurasthenic patients, and others of the same order, are numerous enough, their ailments and sufferings are, after all, less important than the sufferings of those who are afflicted with various forms of organic spinal disease, say tabes, primary lateral sclerosis, and the like. Let us try to do more for these patients … and do not let us waste too much energy on what people are pleased to call psychotherapy.16

The reluctance of most neurologists to entangle themselves with such cases has, if anything, increased with time. The attention devoted to hysteria and related complaints in neurological textbooks was reduced almost to vanishing point after 1950: “non-organic problems featured only as something to rule out when looking for neurological disease.”17

Hysterical patients still present themselves in neurological waiting rooms, only to be turned away by doctors who have no interest in seeing or treating them. In the process, an age-old disorder becomes almost literally invisible. Shunned by the doctors they seek to consult to validate their symptoms, and defined out of existence by a pharmacologically oriented psychiatry (even were they willing to swallow their pride and accept the psychological roots of their discomforts), hysterics find themselves modern medicine’s untouchables.

And yet—as Sir Aubrey Lewis once sagely remarked, “a tough old word like hysteria dies very hard. It tends to outlive its obituarists.”18