CHAPTER ELEVEN

THE PSYCHIATRIC MYTH

In the late 1890s, it was time for a brilliant young medical student called Carl Gustav Jung to decide his professional future.155 What branch of medicine would Jung specialize in? The decision was difficult, as every door was open to him. If he chose one of the more established branches of medicine, his future would enjoy all the trappings of financial security and professional prestige. But then Jung was never a conventional man; and what is more, through marrying into a wealthy family he could afford to make unconventional professional decisions. So Jung followed his inclination and decided to do something that his peers, his seniors, and his family believed to be very foolish. He chose to specialize in psychiatry.

Why on earth, they asked, would he do that? After all, psychiatry at that time had still not been established as a legitimate medical specialism. It suffered from very low status among the medical professions for a variety of reasons. In the first place, psychiatric treatment just wasn’t very successful. While researchers and doctors practicing general medicine were gaining headway in understanding and treating the disorders of the body, psychiatry continued to slumber far behind in terms of its clinical success rate.156

Furthermore, the main subject matter of psychiatry (our internal mental lives) was far less accessible to medical study and treatment than were the disorders of the physical body. The mind was decidedly more complicated, not least of all because its problems could be caused by multiple factors—spiritual, moral, environmental, social—things not easily explained in terms of biological problems. Psychiatrists were dealing with something different, and were struggling to know what to do.

This was the state of affairs in the late 1800s when Jung shocked those close to him by choosing psychiatry. At that time, little did he know that the fortunes of psychiatry would soon change dramatically.

A key player in this change was a doctor called Emil Kraepelin. In the early 1900s, Kraepelin gained international repute by arguing that our emotional problems were not really problems of the soul or the mind or anything else that was difficult to pin down scientifically. Rather, underlying every mental disorder, he claimed, there was surely a specific brain or biological pathology. If psychiatrists were to treat mental problems successfully, therefore, they should direct their efforts at finding these underlying biological malfunctions.

With the help of this new biological vision, Kraepelin believed that psychiatrists would not only be finally free to explore new procedures that treated mental distress via the body, but could also better align their work with the general medical preference for biological explanations and treatments. Psychiatry, went the argument, was like the rest of medicine—it had just been looking in the wrong place. Jung might not have made a poor decision after all.

Kraepelin’s biological convictions gave momentum to the development of a barrage of new psychiatric treatments during the first half of the twentieth century. In the 1920s, these included interventions not for the squeamish: surgically removing parts of the patient’s body—their teeth, tonsils, colons, spleens, and uteri. The rationale for these highly painful and sometimes fatal treatments was that bacteria living in one of these bodily areas caused mental illness. So, it was thought, if you remove the body part, you cure the problem.

As Joanna Moncrieff’s work makes clear, this was the rationale behind other similar treatments being reported at this time in reputable psychiatric journals as reasonable cures for mental illness.157 These included injecting patients with horse serum, using carbon dioxide to induce convulsions and comas, injecting patients with cyanide, and giving them hypothermia. Again, the aim of using these procedures was to target the underlying virus or disease that psychiatrists were convinced must be at the root of the trouble.

Another treatment was malaria therapy: injecting the patient with the malaria parasite in the hope that the high fevers malaria produced would kill the virus then thought responsible for mental disease. You won’t be surprised to hear that the effects of this “therapy” were often devastating, because many patients never recovered from the malaria itself.158

Because the success rates of these early treatments were so poor, a suite of new procedures was developed during the 1930s. These included treatments like “insulin coma therapy,” which involved putting patients into a coma for two hours with high doses of insulin, then suddenly waking them with glucose. The aim was to generate in patients powerful seizures, which were thought to be somehow therapeutic. After this procedure, granted, patients would appear calmer, but they would often show memory loss and other neurological abnormalities such as loss of speech. Five percent of all patients actually died from this treatment, so once again psychiatrists found themselves searching for alternatives.159

Hope was kindled in the 1940s with the development of what was called at the time a ground-breaking new psycho-technology—otherwise known as the lobotomy. This involved surgically severing or destroying parts of the brain that were thought responsible for acute mental distress. This new treatment was so widely celebrated in psychiatry at the time that its inventor, António Egas Moniz, was actually awarded the Nobel Prize in 1949 for its invention.160 Tens of thousands of people in the United States were lobotomized before the treatment was finally abandoned in the 1970s because of its appalling effects (effects still being experienced by many people alive today). Fortunately, however, as lobotomies decreased in popularity, other treatments were ushered in.

One that grew in popularity during the 1940s (and which is still in use) is electroconvulsive therapy (ECT), which was brought to popular attention by the cult classic One Flew Over the Cuckoo’s Nest. This procedure involves inducing severe seizures in depressed patients by administering intense electric shocks to the brain.

Although many psychiatrists still swear by the healing effects of this controversial intervention, their claims about success are more than offset by the reams of research illustrating ECT’s pernicious side effects, poor remission rates, and responsibility for widespread neurological damage.161 Furthermore, recent reviews of electroshock research have shown no significant differences between real ECT and “sham” ECT (in which no electricity is administered at all, without the patient’s knowledge) after the treatment period. Indeed, research assessing the improvement rates of fake versus real ECT at six months has actually revealed a two-point difference on the Hamilton Scale in favor of the fake treatment—suggesting that if ECT has any positive effects at all, these are largely placebo effects.162

The point of listing some of psychiatry’s more outlandish treatments is that they all gained impetus and legitimacy from psychiatry’s enduring conviction that there must be a physical basis for mental disorder. As you will recall, this originated with Kraepelin’s assumption: if our emotional maladies are biologically caused, then the body is where our efforts must be directed.

Of course, for psychiatrists such as Carl Jung who felt these practices verged on barbarism, there were alternatives. Jung was part of a growing tribe of psychiatrists in Europe, the UK, and the United States who rejected Kraepelin’s biological vision and embraced the more interpersonal and less invasive talking cures. During the 1940s, ’50s, and ’60s, this group became very powerful indeed, especially in American psychiatry. In fact, the taskforces of the first and second editions of the DSM (in the 1950s and 1960s) were largely composed of these psychoanalytically informed psychiatrists.

But the talking cure and biological psychiatry were always strange bedfellows, and always antagonistic. After all, psychotherapists did not share the same biological convictions that Kraepelin used to save psychiatry from complete obscurity in the early 1900s—biological convictions that helped align psychiatry, intellectually at least, with neighboring streams of medicine.

Therefore, when criticisms of psychotherapy gained momentum in the 1970s, the ground was set to reject the talking cure and once again embrace Kraepelin’s early vision. This was reflected in the fact that Spitzer’s DSM-III taskforce included only one psychoanalyst, who was there, as Spitzer put it to me, “as a token gesture.” This neo-Kraepelinian revolution, as it was called, was given a significant boost by allegedly exciting new developments in drug treatments.

As I discussed in chapter 5, in the 1950s drugs were seen at best as soothing tonics, but as drug company sponsorship of psychiatry gained pace in the 1970s, 1980s, and 1990s, they were soon marketed as medical cures that targeted and cured discrete diseases. Many senior psychiatrists, who were often funded by the drug companies, legitimated this view in medical journals and the popular media while at the same time receiving financial rewards.

The drug revolution solved many problems for psychiatrists. The profession could now put a legacy of embarrassing clinical failures and devastating treatments to rest and embrace drugs as the first line of treatment. Psychiatry now possessed the psycho-technologies that not only brought it in line with the rest of medicine, thus increasing psychiatry’s status in the eyes of other medical doctors, but which also rendered psychiatrists distinct from the growing ranks of psychologists, counselors, pastoral counselors, family and marriage therapists, and other kinds of psychotherapists who were beginning to flood the treatment market in the 1980s and 1990s. Psychiatrists and doctors were largely the only professionals, after all, who had legal authority to assign psychiatric diagnoses and prescribe psychiatric drugs. So long as these activities remained exclusively in their hands, their distinctiveness and authority were assured.

Furthermore, psychiatrists now had a new and immensely powerful ally—the pharmaceutical industry—whose financial sponsorship would see the profession move from a medical backwater during the middle of the twentieth century to one of the most powerful medical specialties by its close. In short, the drug revolution advantaged psychiatry in many ways: ideologically, professionally, and financially. It is no wonder that many psychiatrists now regard the pharmacological revolution as a high point in psychiatric history.

Yet there were problems with this revolution. I refer not to those already covered (drugs not working as well as claimed; reductive biological theories remaining unsubstantiated; normality being medicalized through successive DSM and ICD manuals; and pharmaceutical industry sponsorship corrupting medical objectivity). I am talking about something far less easy to identify and for that reason far more difficult to challenge. I am talking about how this new biological vision began to seep into popular consciousness and began to alter our very understanding of emotional suffering.

In order to explain precisely what I mean by this, I’ll have to be a little philosophical for a while. But you can be sure this momentary change in tone has nothing to do with authorial indulgence, but is a necessary preface to all that follows in the remainder of this book.

2

As the idea began to take root that increasing forms of emotional suffering are essentially biological problems best treated with pills, psychiatry committed itself to stripping much emotional suffering of its spiritual, psychological, and moral meaning. After all, if much mental suffering is a result of biological misfortune, then it must be a purposeless experience best swiftly removed.

As this new “negative vision” of mental suffering began to gain currency, many older and more positive cultural associations attached to suffering (e.g., that it can be purposeful and necessary if handled productively, or simply that it is often an unavoidable part of life) began to lose their hold. Since I have defined the differences between the negative and positive visions of suffering elsewhere, let me quote what I wrote there:

The positive vision holds that suffering has a redemptive role to play in human life; as if from affliction there can be derived some unexpected gain, new perspective, or beneficial alteration. If this vision could have a motto, then Thomas Hardy captured it well: “If a way to the better there be, it first exacts a full look at the worst.” The positive vision of suffering, thus considered, sees pain as a kind of liminal region through which we can pass from a worse to a better place. A region from which can thus be derived something of lasting value for individual life. But the negative vision of suffering, on the other hand, asserts quite the opposite view—namely, that little of value can come of suffering at all. It says there is no new vista or perspective to be gleaned at its end, nor any immured insights to be unlocked from its depths. It is thus something to be either swiftly anesthetized or wholly eliminated, for what good is an experience whose most obvious features are pain and inconvenience?163

While this negative view of mental suffering is still controversial today, it has nevertheless served many psychiatrists well because it fits so neatly with their biological myth of mental distress: Since mental suffering is largely caused by biological problems, it is by implication largely purposeless. It is therefore right and proper to mitigate it in any way possible. And insofar as pills can achieve this, to deny their usage would be a basic dereliction of duty.

This simple equation gave many psychiatrists a growing sense of confidence in their own moral and clinical authority, as well as a rationale for claiming to be better equipped than other professionals to help improve our mental health. But this equation also has its faults. As we have seen, the view that suffering is caused by our biology still does not enjoy scientific backing. And this has led many critics to argue that the biological view is less a scientific reality than a convenient professional myth.

To explore this criticism further, we must first unpack what is actually meant by the term myth. To do this, let me share with you an event a friend once told me about concerning his five-year-old daughter.

He was driving her home along a country road late one clear summer’s night. She had been looking out of the car window quietly the whole time, until after about twenty minutes she finally spoke up. “Daddy,” she asked with great seriousness, “why is the moon following us home?”

The poetry of the question took him so off guard that he mumbled something about near objects seeming to pass by quicker than distant objects because of relative distances, and so forth.

His daughter wasn’t impressed. “Daddy, I think the moon is following us home because it’s lonely.” With this conclusion she appeared satisfied. She picked up her book and started humming contentedly to herself. She now had her myth.

We are not so different from that little girl. We seek myths to settle crucial questions for which we have no clear answers, but about which we feel we need answers so that we may turn our attention to other things. This is why every society throughout time has its multifarious myths about all aspects of life—about where we are from, about where we are going, about why we are here, and so on. Myths help soothe our anxiety about some of our most fundamental human uncertainties.

Take one of the greatest uncertainties of all: what happens to us after death. This issue provokes such universal anxiety that anthropologists haven’t found a single community upon this vast globe that does not have a myth about the afterlife. One community speaks of ethereal angels awaiting us at pearly gates; another of a cosmic mother we’ll all return to after death; still another of gamboling ancestors welcoming us with barrels of manioc wine. The myths are everywhere, all telling fantastically different stories but all, in effect, serving the purpose of providing answers to questions which if left unanswered could drive many of us to mad distraction.

Because the maddening questions are so plentiful, every society has its manifold myths, addressing every conceivable quandary, from the meaning of love or happiness to the purposes of fear, death, and birth. Myths speak to the realms of life that matter most, including “whence comes and how goes our suffering”—a matter for which, in contemporary Western societies, we turn to psychiatrists.

Surely the help that bio-psychiatry offers is located in something more substantial than myth. After all, apologists argue that psychiatric notions are scientific, not mythic; that they are a product of scientific investigation, not human imagination. Critics are keen to retort that this is clearly not the whole story. Many psychiatrists’ claims are no more substantiated than are the claims of religion. This is because, in so many areas that they survey, psychiatrists do not prove things but decide things: they decide what is disordered and what is not, decide where to draw the threshold between normality and abnormality, decide that biological causes and treatments are most critical in understanding and managing emotional distress.

Granted, many of these decisions are informed by research, yet none of these decisions, or the research upon which they are often based, is free from the subjective persuasions and interests of the players involved. DSM definitions are not fashioned in scientific laboratories but in committee rooms, drug research cannot be impartial when it is wedded to drug company interests, and the profession’s commitment to the biological vision of mental strife can’t be shorn from psychiatry’s historical struggle for biomedical status.

That many of the claims and pronouncements issued by psychiatrists, their lobby groups, and their professional associations cannot be scientifically substantiated has led critics to argue that psychiatry is no more objective than many traditional systems of belief. Take systems like Shintoism, Confucianism, Shamanism, or Christianity, for example. Each of these systems offers explanations about the causes and meaning of suffering, about what is normal or abnormal, sick or healthy, mad or bad. And each of these systems says different things about the experiences it aims to explain and manage. What is considered wrong, pathological, or odd in one society simply isn’t regarded that way in another. In what sense, then, can we conclude that our own psychiatric system has somehow transcended the bounds of culture and thus attained access to universal truth?

Abstract questions like these are always best answered by looking at concrete examples on the ground. So let us now look at one such example: the experience of hearing voices. As I have written elsewhere, in a society where this experience is chiefly associated with mental illness, any individual harassed by these visitations must also contend with the difficult idea that they are psychologically unwell—an idea which, if believed by the hearer, is likely to generate additional anxiety as well as compound, with each new episode, the hearer’s sense of abnormality. This means that in societies where these experiences are perceived negatively, individual sufferers will struggle not only with the experience itself but also with the consequences of how these experiences are socially perceived, defined, and managed.

By contrast, in a community where “hearing voices” does not invite the same cultural suspicion, or where these voices are seen (as in the poleis of ancient Greece) as possible signs of divine inspiration, the hearer is believed to be less mentally afflicted than potentially blessed. The individual subjected to this more favorable cultural diagnosis will invariably possess a far less tortured relationship with his internal voices and will therefore be freer from the burdens of shame and angst attending our first individual.

Thus an experience that can mark you as unhinged in one society can mark you as inspired in another. And because the way we are marked can shape how we feel, when trying to make sense of any human experience we must always relate it to the dominant myth through which people define and pronounce upon their experiences.164

This comparison reveals a major danger with psychiatric diagnosis. As soon as you are assigned a diagnosis of “depression” or “anxiety disorder” or “attention deficit disorder,” you become a protagonist in a larger myth—you now have a mental disorder that marks you as a patient. From then on it is harder to think of yourself as a healthy participant in normal life, or as a person in control of your own fate. You have a psychiatric condition that has seized control, that has set you apart, and that has made you dependent upon psychiatric authority.

Admittedly, for many people being diagnosed will bring initial relief (at least someone has named your suffering, and so presumably can treat it; when you’re in that kind of pain, the promise of any kind of help is welcome). But many other ways of responding to suffering (psychotherapeutic, spiritual, religious, etc.) can also bring initial relief, and these do not always entail the unfortunate side effects of being labeled psychiatrically unwell. Patients’ complaints about the burden of their diagnosis are widely documented in the mental health literature: being diagnosed can bring the additional stresses that accompany self-identifying as different, disordered, and in need of medical help. In other words, receiving a diagnosis can have negative secondary effects that are not always anticipated at the start.

In the work of the psychiatrist Marius Romme, there is a particularly striking example of precisely what these effects can be. Romme was working with a 38-year-old woman who had been given a diagnosis of schizophrenia. This woman had been hearing voices for a long time, but her medication was just not helping her. After enduring many years of failed drug treatment and the awful negative effects her medication caused, she was finally on the brink of suicide. But then one day she unexpectedly took a turn for the better—appearing much happier and more optimistic than before.

This sudden change followed her reading of a book by the psychologist Julian Jaynes, who argued that the ancient Greeks were different from many modern Western individuals, as they regularly understood their inner thoughts as coming from the gods. Whether Jaynes’s theory stood up to academic scrutiny or not did not matter to this patient. All that mattered was that the book provided her an alternative myth, a new way to understand or think about her internal world. This patient decided that she was probably an ancient Greek rather than a schizophrenic. And this simple change to the myth she embraced altered her mood significantly by changing her whole relationship to her voices, making her feel less frightened of them, less odd, and consequently less alone.165

So if the myth we embrace affects how we read and experience our psychological states, changing the myth through which we understand such states can be just as therapeutic as taking a pill or undergoing therapy. Consider, for example, another significant change many patients report once they reject the psychiatric view: they often no longer experience the stigma that accompanies being identified as psychiatrically unwell. This is an important point, because a popular justification for the biological vision of our emotional troubles is that it reduces the stigma of mental disorder. After all, if a patient has a biological disorder, they cannot be blamed for the way they are.

Groups like the National Alliance on Mental Illness in the United States and SANE in Britain take this position: The biological myth helps sufferers because it indicates to others they are not responsible for their predicament. They are like anyone else with a medical condition, and so should not be seen or treated otherwise.166

While in theory this position is sensible enough, in practice things seem to unfold very differently. Many people experience negative secondary effects from their diagnosis: some conceal their diagnosis from others out of shame (which can compound their isolation), while others become so identified with their label that they regularly declare it to others (which can in turn invite real rejection). Furthermore, researchers have shown that today’s most popular public perception of mental disorder is that it is biological in origin.167 This is particularly problematic in light of recent research revealing that patients whose emotional problems are believed to be caused by brain disorders are treated far more harshly by the average person than patients believed to have problems caused by social or psychological factors.

A research team at Auburn University revealed this troubling fact by asking volunteers to administer mild or strong electric shocks to two groups of patients if they failed at a given test. The results were alarming: the patients believed to have a brain disorder were shocked at a harder and faster rate than the patients believed to have a disorder that was social or psychological in origin, suggesting that we may attract harsher treatment when our problems are considered in brain-based terms.168

Results like these are obviously alarming but not entirely unexpected. After all, we know from other research that people who are believed to suffer from biological abnormalities are seen by the average person as more unpredictable and dangerous than “normal” people. Such perceptions have also been shown to lead “normal” people to avoid interacting too closely with the “mentally distressed”—an avoidance which can, once again, compound the sufferer’s isolation.169

Studies like those above show that being diagnosed with a psychiatric condition—with depression, anxiety, or one of the more severe disorders—often comes with powerful cultural baggage, especially when our suffering is perceived as being rooted in our biology. Paradoxically, then, the worldwide psychiatric campaigns that aim to reduce the stigma associated with mental illness by asserting that it is just like any other biological disease may well have helped bring about the very opposite of what they intended.170

By now you may be wondering whether I have exaggerated the extent to which the dominant myth in psychiatry is biological. Surely psychiatry takes into account the relevance of factors that are non-biological in nature? Is its dominant myth really as biological as I have made out?

Many psychiatrists do in fact recognize, at least in principle, that the causes of mental distress can be various mixtures of social, biological, and/or psychological factors.171 This view has been captured in what is dubbed the bio/psycho/social model of suffering, which holds that these diverse factors interact to precipitate and perpetuate emotional distress. The problem is that, in spite of this model, psychiatric treatment has become increasingly biological since the 1980s. As the British psychiatrist Professor Christopher Dowrick put it:

… the biopsychosocial model has little substance beyond the descriptive, and in everyday general practice is viewed mainly as necessary rhetoric. In reality, general practitioners work to what I have characterized as a “bio(psycho)” model of health care. We tend to see acute physical problems as most appropriate for us to deal with, followed by chronic physical and psychological problems. But we generally consider social problems to be inappropriate for medical attention, and can become irritated if we are presented with too many of them.172

While Dowrick states that the “bio(psycho)” model now dominates, others argue, and with justification, that even the “psycho” arm of psychiatry is on the decline. As a recent president of the APA admitted, psychiatry has allowed the “bio-psycho-social model to become the bio-bio-bio model”—meaning that prescribing pills is all that most psychiatrists now want to do.173 British psychiatrist Duncan Double also confirmed this when he bemoaned to me that even those who most ardently avow a bio-psycho-social view often tend, at bottom, to put the biological first in their treatment practices.

So while the bio-psycho-social model exists in theory, in practice it is far from being realized. This is borne out by the fact that pill treatments are up, long-term psychotherapy is on the decline, and psychiatrists rarely make social interventions. In short, bio-psychiatry advocates do not advance their negative model of suffering so much by way of their theoretical pronouncements as by way of their clinical practices—something implied by the fact that biological treatments are generally preferred.

3

The growing dominance of the bio-psychiatric myth of suffering raises many new and serious questions. Does this domination lead increasing numbers of us to view our suffering in entirely negative terms, as something to be erased or anaesthetized at all costs? After all, as diagnostic thresholds lower and the number of disorders increases, more and more realms of emotional distress have been medicalized. We are replacing traditional philosophical and religious ways of managing and understanding distress, which once allowed people to find meaning and opportunity in many forms of suffering,174 with a starker technological view that sidesteps the bigger humanistic questions: Cannot suffering often be a necessary call to change (and therefore a message to be understood rather than anaesthetized), or the organism’s protest against harmful social conditions (therefore requiring a social or psychological rather than a chemical response), or a natural accompaniment of our psychological development (therefore having vital lessons to teach if managed responsibly and productively)?175

The rise of the biological, negative view of suffering shores up a wider cultural ideology particular to our times, one in which emotional anesthetics—pills, alcohol, retail therapy, escapist activities—have become the preferred vehicles for managing distress. In what way, therefore, has the idea that we can consume our way out of depression become a myth that is convenient not only for psychiatry and Big Pharma but for the wider capitalist system in which we live?

Psychiatry is not an island. Its ideas and preferences shape, reflect, and respond to the dominant cultural shifts of our time. But psychiatry’s cultural embeddedness is not just pertinent for the regions in which it was developed. This is because, in recent years, Western psychiatry has aggressively expanded into new lucrative markets—China, India, Indonesia, South America, Eastern Europe—rapidly converting whole new parts of the globe to our culturally specific modes of misery management. What happens, then, when a system made in the United States or the UK is exported to communities that have never before embraced our medicalized view of emotional suffering? By exporting our remedies, can we therefore be confident that we are really helping heal the world?

Before you begin to formulate an answer, I ask you to first consider the bizarre series of facts that I’m now about to reveal to you, facts that require us to take a journey to a series of destinations somewhat farther afield than the United States.