CHAPTER ONE

“WE’RE ALL MAD HERE”

BY THE LATEST ACCOUNTING, MORE THAN HALF OF ALL Americans meet criteria for a psychiatric disorder at some time in their lives.1 The current system of diagnosing mental disorders contains hundreds of labels, ranging from well-known standards like schizophrenia to less familiar ones like hypoactive sexual desire disorder. But what is a psychiatric disorder? Does normal become meaningless if most of us have an abnormality of mind? Where do we draw the line between normal and abnormal?

In 2007 two reports were released documenting alarming increases in the diagnosis of childhood psychiatric disorders that were previously thought to be rare. Both reports triggered a public outcry. But the nature of the outcry was quite different.

The first report, from the U.S. Centers for Disease Control, examined the prevalence of autism among eight-year-old children in the year 2002. Based on data from fourteen sites, the CDC found that 1 in 150 children (0.66 percent) had an autism spectrum disorder. That number was more than ten times higher than prevalence estimates of autism in the 1980s and seemed to validate a growing concern that the nation was in the midst of an epidemic.

The response among families, advocacy groups, and the media was, understandably, one of unmitigated alarm. Alison Singer, spokeswoman for the advocacy organization Autism Speaks, captured the sense of urgency felt by many: “This data today shows we’re going to need more early-intervention services and more therapists, and we’re going to need federal and state legislators to stand up for these families.”2 Singer and others called for a vast increase in research funding “so we can find a cause and understand what is fueling this high prevalence.”3

Some families, and certain celebrities, insisted that vaccines were to blame; others weren’t so sure but worried that some kind of environmental toxin might be contributing to the rise in prevalence. Many scientists and educators cautioned that the apparent epidemic might simply be a product of greater awareness and a broadening of the definition of autism (to include a larger “autism spectrum”). But few doubted the urgent need to help affected children and their families.

The outcry over the second report was equally strong but dramatically different in its tone. The study, published in the Archives of General Psychiatry, examined trends in the diagnosis of child and adolescent bipolar disorder using data from a large survey conducted by the National Center for Health Statistics. The authors found that between 1994 and 2003 the rate of bipolar disorder diagnoses in children up to age nineteen increased fortyfold, from 0.025 percent to 1.0 percent of the population (approximately half the rate of bipolar disorder among adults).4 This time, the jump in prevalence was widely interpreted not as a public health emergency but as a scandal. For many, the findings confirmed the suspicion that psychiatry itself was deeply flawed. The blogosphere lit up with critics who claimed that psychiatry was pathologizing normal behavior, medicalizing childhood, and even colluding with pharmaceutical companies to create a market opportunity for drugging children. Many in the medical community also suspected that widespread misdiagnosis was occurring.

Two numbers, two very different reactions. Considered side by side, these two episodes dramatize the charged and complicated nature of defining psychiatric disorders. There are some remarkable parallels: in the same year, the public learned that two often disabling childhood disorders, once thought to be rare, were now being diagnosed in about 1 percent of children. In both cases, part of the story seemed to be an increasing public awareness of the condition and an expansion of diagnostic labels. The new autism estimates captured the broader autism spectrum, including Asperger syndrome. And the bipolar estimates reflected a broadened spectrum as well. Since the mid-1990s, some researchers and clinicians had argued for expanding the diagnosis beyond the classic symptoms of manic highs and depression to include children who exhibited chronic and explosive anger and irritability.

But there were important differences. Autism had always been a disorder of childhood, while prior to the 1990s many psychiatrists believed that bipolar disorder did not exist in children. The broadening of the autism spectrum may have been less controversial because it had a longer history. But there was another key difference. At the time the reports were published, there were few if any established drug treatments for autism. On the other hand, medications are a cornerstone of treating bipolar disorder. And many of these medicines—lithium, valproate, and antipsychotics—can have serious side effects. The prospect that such powerful drugs would increasingly be used to treat bipolar disorder in young children was clearly part of what alarmed many people. Some saw the expansion of the diagnosis as psychiatric imperialism and “disease-mongering.” Scientists who collaborated with pharmaceutical companies were accused of nefarious conflicts of interest, with the implication that psychiatric research was motivated by financial self-interest.

We still don’t know exactly why the prevalence of autism and bipolar disorder has been growing, but the controversy forces us to confront an important question: How do we draw the line between normal and disorder when it comes to how the mind functions? At what point are we just pathologizing normal, as some critics of psychiatry charge? Answering those questions requires that we first answer another question: What do we mean by normal?

Determining what is normal is a surprisingly difficult task, and that may explain why academic science has rarely tried to address it. But the definition of abnormal has been investigated and debated over and over again—perhaps in part because of a notion articulated a century ago by the great American psychologist William James, who believed that “the best way to understanding the normal is to study the abnormal.”5

Modern psychiatry has largely tried to define the abnormal without much reference to the normal. And as we’ll see, that’s created some problems. For the most part, we have described disorders by starting at the edges of human experience—identifying syndromes from the most striking and dramatic symptoms that people express. Working our way inward from those edges, normal becomes something of an afterthought—the ill-defined residual.

But without a basic map of how the mind and brain function, our definitions of abnormal and normal depend heavily on what behaviors we decide are unusual, bizarre, or problematic. And those decisions can easily be influenced by cultural trends, historical tradition, or the opinions of “authorities.”

A REVOLUTION IN PSYCHIATRY

SEVERAL YEARS AGO ONE OF MY COLLEAGUES POSED A QUESTION during a staff luncheon in our Department of Psychiatry: “Who do you think was the most influential psychiatrist of the last fifty years?”

The answer seemed obvious: Robert Spitzer. Robert Spitzer? Probably an unfamiliar name to most people, but the revolution he led transformed the way we view mental illness.

As recently as the 1970s, psychiatrists had no reliable criteria for making a diagnosis. A patient who reported hallucinations and bizarre behavior might receive a diagnosis of schizophrenia from one psychiatrist, borderline personality from another, or manic-depressive illness from a third. At the same time, the field began to acknowledge that its disorders were sometimes based on archaic views of human behavior. In 1973 the Board of Trustees of the American Psychiatric Association voted to remove homosexuality from its official manual of psychiatric disorders.

That same year, Science, a top-tier scientific journal, published an article challenging the foundations of “sane” and “insane.”6 The author, psychologist David Rosenhan, asked seven confederates to join him in a deception. They were each to present themselves to psychiatric hospitals with the complaint that they had been hearing voices. All eight of these “pseudopatients” were admitted to psychiatric hospitals and held for weeks. Their mission was to get discharged. “Each was told that he would have to get out by his own devices,” Rosenhan explained, “essentially by convincing the staff that he was sane” (p. 252). This turned out to be very difficult, and it took nearly three weeks, on average, for the pseudopatients to be discharged. Even though they exhibited no psychiatric symptoms during their hospital stays, the pseudopatients were admitted with a diagnosis of schizophrenia in all but one case, and their “normal” behavior was interpreted as evidence of illness.

In the early 1970s, another indictment of psychiatric diagnosis highlighted the need to change the way psychiatrists practiced. A study of hospital admission records revealed that a patient was much more likely to be diagnosed with schizophrenia (rather than an affective disorder, such as manic-depressive illness or depression) if he were admitted to a hospital in New York than if he were admitted to a hospital in London.7 Could mental illness in America really be so different from mental illness in the UK?

One obvious way to answer this is to show the same set of patients to psychiatrists in both countries and see if they agree on diagnosis. As part of the U.S./UK Cross-national Project, researchers showed videotapes of patient interviews to groups of psychiatrists in the United States, United Kingdom, and Canada.7 The results clearly showed that it was the psychiatrists, not the patients, that explained the transatlantic differences in diagnoses. When faced with the same patients, American psychiatrists were far more likely to make a diagnosis of schizophrenia than were the British psychiatrists. If small cultural differences among psychiatrists could have such big effects on the way they labeled symptoms, what hope was there of defining the boundaries of normal and abnormal?

The unreliability of psychiatric diagnosis led Robert Spitzer and his colleagues to overhaul the system. In 1980 they rolled out the third edition of the Diagnostic and Statistical Manual of Mental Disorders, or DSM-III, as it is better known. The two previous editions of the DSM (published before 1970) had been heavily influenced by Freudian concepts of psychopathology and offered few specifics about the definition of mental illnesses.

The third edition provided the field, for the first time, with an explicit set of criteria for diagnosing disorders. DSM-III also debuted a raft of conditions that are now familiar fixtures of popular culture: attention deficit disorder, panic disorder, posttraumatic stress disorder, borderline personality disorder, and others. Over successive editions of the manual, psychiatry has engaged in a cycle of lumping and splitting its diagnoses. Between the publication of DSM-I in 1952 and the latest major revision, DSM-IV, in 1994, the number of diagnostic labels in the book swelled from just over 100 to more than 350.

Today, the DSM is the most influential book in psychiatry. It is the reference manual every psychiatrist-in-training must learn to use before being considered competent to practice. Among other things, it provides the definitions of mental disorders that insurance companies use to determine whether psychiatric treatment is reimbursable. In many ways, DSM-III and its successors also fueled the modern era of medication treatment of mental illness. With clearly defined disorders to study, researchers and pharmaceutical companies could test whether new compounds were effective treatments for these conditions. Indeed, before a pharmaceutical company launches a psychiatric drug, it usually must demonstrate the drug’s effectiveness for a “DSM-defined” disorder. More than any other psychiatrist, Spitzer (and his colleagues) shaped the way we talk about mental illness.

But it’s no secret that the DSM has its limitations. Right up front, the manual acknowledges “that no definition adequately specifies precise boundaries for the concept of ‘mental disorder.’ ”8

The primary goal of the DSM, since 1980, has been to provide a practical and useful set of criteria—a common language—for diagnosing mental disorders in clinical practice and research. In essence, it presents a description of syndromes—agreed upon by a consensus of experts—that are associated with distress, disability, or “a significantly increased risk of suffering death, pain, disability, or an important loss of freedom.” Still, despite the claims of some critics, the DSM was never intended to be an authoritative statement about what’s normal and what isn’t. As Robert Spitzer himself noted, “It does not pretend to offer precise boundaries between ‘disorder’ and ‘normality.’ ”9

By design, the DSM also doesn’t attempt to tie disorders to the basic functioning of the mind and the brain. And so, as useful as it’s been in providing a common language for drawing a line between mental health and illness, the application of the DSM’s categories can be subject to the vagaries of cultural trends in how we label behavior. That’s something I witnessed in the course of my own training as a psychiatrist.

FROM EPIDEMIC TO ODDITY

“YOUR NEXT ADMISSION’S IN 314.”

I stopped by the nurses’ station on the way to room 314 and picked up a copy of Sarah Crane’s chart. It was 2:30 in the morning, and she would be my fourth admission of the night—I needed a quick summary of her history. I glanced at the note from the resident who had admitted her last month and skimmed a story that was by now a familiar one.

“Hello, Ms. Crane, I’m Dr. Smoller.”

A woman in her late twenties sat, with a blank stare, in the corner of the interview room, wrapped in a powder-blue wool blanket. She didn’t make eye contact.

“Can you tell me what brings you in tonight?”

“One of my alters tried to kill me,” she answered, matter-of-factly.

“Tried to kill you?”

“Yes.”

“Who tried to kill you?”

She didn’t respond.

“Ms. Crane, who tried to kill you?”

We sat there in silence for two or three minutes.

Then her eyes narrowed, and her face took on a stern scowl; she spoke in a voice that was low and gruff. “I did.”

In the late 1980s an alarming but previously obscure mental illness began to reach epidemic proportions in the United States. To accommodate the victims, psychiatric hospitals diverted their inpatient resources to open units specializing in the treatment of this disorder. The disorder was called “multiple personality disorder” (MPD) and was believed to be due to early traumatic sexual abuse, which itself was being recognized as vastly more common than previously suspected.

Even more striking, MPD was becoming epidemic not only on a national scale but also, one might say, on an individual level. Eve may have had three faces, but the modern MPD victim could have more than a hundred “alters,” each with its own personality, name, vocal inflection, and set of memories. Prior to 1970 fewer than two hundred cases had ever been reported, but between the mid-1980s and the mid-1990s, more than twenty thousand cases were diagnosed.10 And then, in 1994, the MPD label was dropped from the DSM.

In its place, the diagnosis of dissociative identity disorder (DID) appeared. The diagnostic criteria for MPD and DID are almost identical, but the name change signaled a retreat from the almost supernatural notion of coexisting multiple personalities. By the time MPD was stricken from psychiatry’s official list of diagnoses it had become the focus of a controversy that engaged feminists, victims’ rights advocates, litigators, and mental health professionals.

The concept of “recovered memory” played a key role in disorders like MPD and posttraumatic stress disorder. It helped cultivate a cottage industry of therapists who elicited and helped “recover” memories of childhood sexual and physical abuse in patients with a wide variety of symptoms.

Exemplified (and perhaps inspired) by the 1970s story of Sybil, who reportedly developed multiple personalities after suffering horrific abuse during her childhood, the prevailing explanation for MPD was that victims of overwhelming abuse develop separate personalities to handle their unbearable memories and seal them off from consciousness. Therapists were trained to draw these memories into awareness, sometimes through hypnosis or by interviewing patients while they were under the influence of Amytal (a barbiturate touted as a “truth serum”). Suddenly patients who had never known they were abused were discovering they had been horribly victimized.

Families were torn apart, and in a growing number of cases, patients sued the alleged perpetrators (usually a family member). The recovered memory phenomenon fueled a growing cultural panic about child abuse in the 1980s, reaching a peak with prosecutions of staff members of several preschool and day care centers. Responding to accusations of abuse, law enforcement officials and therapists elicited increasingly bizarre tales of ritual and satanic abuse that should have defied credulity. In the Little Rascals Day Care Center case, the center’s director was sentenced to twelve consecutive life terms based on the testimony of young children who described abuse that included the ritual killing of babies aboard a spaceship.

As research emerged demonstrating that recovered memory of severe trauma is a rare (if not implausible) phenomenon, a backlash ensued, including a new wave of litigation targeting therapists who encouraged or even induced false memories of sexual and ritual abuse. Paul McHugh, then chairman of psychiatry at Johns Hopkins University, likened the frenzied assertions about repressed and recovered memory and multiple personality disorder to the social hysteria of the late 1600s that produced the Salem witch trials.10 Skeptical psychiatric researchers challenged the advocates of repressed memory to verify their claims. Harvard psychiatrist Harrison “Skip” Pope and his colleagues at McLean Hospital asked why it was so difficult to find documented examples of repressed memory prior to the twentieth century if it is a natural or innate capacity of the brain in the face of severe trauma.

They scoured historical works of literature and nonfiction and were unable to find any descriptions of repressed traumatic memory. And then they did something unusual for a group of academics: they offered a $1,000 reward “to the first person who could produce an example of dissociative amnesia for a traumatic event in any work of fiction or nonfiction in any language, prior to 1800.” They posted their challenge in print and on websites and discussion groups all over the Internet and in multiple languages. It was an extraordinary approach to solving a highly contentious, politicized, and seemingly unending debate.

I spoke with Pope about the repressed memory challenge as we sat in his office at McLean Hospital, where he directs the Biological Psychiatry Laboratory. He is a man who speaks in paragraphs with a boyish enthusiasm that is uncommon for a Harvard professor and an erudition fitting for a descendant of Alexander Pope (another scholar with an interest in memory and forgetting: “Of all affliction taught a lover yet, ’Tis sure the hardest science to forget!”).

“I had always been struck,” he said, “by the fact that there did not seem to be any cases of repressed memory in Shakespeare or in Aeschylus or Euripides or Sophocles or the Aeneid or The Odyssey or the Bible or other things and wondered if maybe it’s just that I didn’t have a sufficiently comprehensive knowledge of literature or whether maybe this was an indication of the fact that this was not a natural human phenomenon.”

In the 1990s he had asked members of a university English Department to see if they could come up with any instance of repressed memory in literature before the nineteenth century, and they were unable to. Intriguing but hardly definitive. But a decade later, he realized that advances in technology had created an unprecedented opportunity. The reach of the Internet and the resources that are available online meant that he could launch a comprehensive test of his hypothesis. “My study is a study that right up front seeks to prove a negative and claims to have done so because it uses a technology that has not existed until the last ten years of humankind. Namely the power to ask a question of every single person in the world and then if nobody can answer the question to be able to say that there is no answer to the question.”

In 2006 Pope and his colleagues issued their repressed memory challenge on more than thirty high-volume websites across the Internet, from broad-interest sites like Google Answers to more specialized sites like “Great Books Forums.” They translated the challenge into French and German and posted it on websites hosted in those countries, and readers spread the challenge to other websites.

In 2007 Pope and his colleagues published the results of their quest in a medical journal—not a single case of repressed memory was uncovered by them or anyone else.11 They concluded that “dissociative amnesia” (thought to be a core component of multiple personality disorder) was best described as what psychiatry calls a “culture-bound syndrome”—that is, an entity constructed by and limited to a particular historical culture—in this case, twentieth-century western societies.

There is a postscript to this story. Shortly after their paper was published, a response to the challenge was submitted that did appear to describe an instance of repressed memory published before 1800. It concerned a scene from Nina, a one-act French opera by Nicolas Dalayrac, which premiered in 1786. In the opera, Nina faints after seeing her true love, Germeuil, in a pool of blood, apparently murdered by a rival whom her father wishes her to marry. When her father presents her to the murderer for marriage, she becomes delirious. She is sent to recuperate at her father’s country estate, where she develops amnesia for Germeuil’s murder, believing he is on a trip from which he will soon return. When Germeuil finally does reappear, miraculously having survived, Nina gradually regains her memory of him. Strictly speaking, even this case does not meet Pope’s challenge, because Nina’s forgetting seems to have involved amnesia due to delirium, and there’s no indication that she recovered a memory of the traumatic event. Nevertheless, Pope and colleagues awarded the prize for this entry, which moved the origin of “repressed memory” only fourteen years before the cutoff date of 1800.

Repressed memories are considered to be central to the etiology of MPD; given that the concept of repressed memory didn’t appear before 1786, it’s perhaps not surprising that the first case of a dual personality wasn’t reported until 1791. Eberhard Gmelin, a German physician, described a local woman who, while recovering from an infectious disease, developed attacks of nodding head movements followed by a sudden shift into the identity of a vivacious French woman who described herself (in fluent French) as a refugee of the Revolution who had fled to Germany.12 In these states, she had no memory of her German family, but just as suddenly, she would return to her true identity, with no recollection of her French alter ego. The second, and more famous, case of a “multiple personality” was that of Mary Reynolds, reported in 1816 by the New York physician S. L. Mitchell. Like the woman in Gmelin’s case, Reynolds had a second, more bubbly personality emerge following an illness that apparently included severe seizures.13 It seems likely that these cases actually represented a neurologic alteration of personality that can occur following seizures or a variety of brain insults. The more modern concept of MPD, with its emphasis on repressed memory, did not appear until the late nineteenth or early twentieth centuries.

The story of multiple personality disorder is one of several examples in which psychiatric diagnoses have risen and fallen from favor as notions of what is normal and what is an illness have shifted. Like “hysteria” and “fugue” before it, multiple personality disorder made the journey from epidemic to oddity.14 The point here is that our definitions of disorder can change, even within a generation, sometimes owing more to cultural preoccupation than scientific insight. And, I would argue, that’s more likely to happen when we construct descriptions of syndromes without a grounding in how the mind and the brain work—that is, the psychology and biology of normal.

CULTURE-BOUND

SOME CRITICS OF THE DSM’s APPROACH TO DISTINGUISHING “DISORDER” from “normal” have pointed to the influence of such cultural preoccupations as evidence that the whole enterprise is socially constructed and based on a largely Western, medical model of mental illness. For some disorders, that seems like an overstatement. For example, the psychotic disorder that Western psychiatry calls schizophrenia is recognized around the world, and rates of schizophrenia have been consistent across cultures, in the range of 2 to 5 per 1,000 population.15

But it’s undoubtedly true that social and cultural factors affect how people express distress, experience symptoms, and engage in healing. It’s also true that all definitions of mental illness involve some kind of value judgment about the bounds of normal. That is, defining the realm of abnormal or disorder depends on how a group (e.g., a society or a professional establishment) judges the boundaries of normal and the concept of deviance.

In his insightful book Crazy Like Us, the journalist Ethan Watters makes a compelling case that the Western mental health establishment has exported its concepts of mental illness and psychiatric disorder around the world, in essence infecting non-Western cultures and creating epidemics of DSM-defined mental illnesses where they never existed before. He documents examples of disorders—anorexia, PTSD, depression—that have taken hold in cultures around the world as a result of the West’s cultural hubris, well-intentioned naïveté, or even the marketing machinery of the pharmaceutical industry.

At the same time, cultures around the world have constructed their own conceptions of mental illness, and some behaviors that we might consider abnormal don’t neatly fit into any of the DSM’s categories.

SHRINKING PENISES

CONSIDER THE FOLLOWING CASE DESCRIBED IN 1965 BY A Taiwanese psychiatrist:

T. H. Yang, a thirty-two-year-old single Chinese cook from Hankow, in Central China, came to the psychiatric clinic in August 1957, complaining of panic attacks and various somatic symptoms such as palpitation, breathlessness, numbness of limbs, and dizziness. During the months just prior to his first visit, he had seen several herb doctors, who diagnosed his disease as shenn-kuei or “deficiency in vitality” and prescribed the drinking of boy’s urine and eating human placenta to supply chih (energy or vital essence) and shiueh (blood). At this time the patient began to notice also that his penis was shrinking and withdrawing into his abdomen, usually a day or two after sexual intercourse with a prostitute. He would become anxious about the condition of his penis and ate excessively to relieve sudden intolerable hunger pangs. Almost irresistible sexual desire seized him whenever he felt slightly better; yet he experienced strange “empty” feelings in his abdomen when he had sexual intercourse. He reported that he often found his penis shrinking into his abdomen, at which time he would become very anxious and hold on to his penis in terror. Holding his penis, he would faint, with severe vertigo and pounding of his heart. For four months he drank a cup of torn-biann (boy’s urine) each morning, and this helped him a great deal. He also thought that his anus was withdrawing into his abdomen every other day or so. At night he would find his penis had shrunk until it was only one centimeter long, and he would pull it out and then be able to relax and go to sleep.16

Most people would agree that this case describes a condition that is not “normal”—but what is it? If this man were to walk into the office of a Western psychiatrist steeped in the language of DSM-IV, he might receive any of several diagnoses: panic disorder (an anxiety disorder), major depression with psychotic features (a mood disorder), delusional disorder, somatic type (a psychotic disorder), hypochondriasis (a somatoform disorder), or any number of others, although the details of the case would make an awkward fit for the DSM categories. In fact, the diagnosis given to this patient is older than any of the DSM labels I just mentioned. He is a victim of koro.

Koro has been recognized for centuries in China17 but didn’t appear in the Western literature until the late nineteenth century.18 The classic presentation of koro is an acute state of panic in males caused by the belief that the penis is shrinking or even disappearing and that complete retraction will lead to death.18 Not surprisingly, early Western psychiatric accounts interpreted cases of koro in Freudian terms as a manifestation of “castration anxiety.” But there is another part of the koro story that makes it unlike your standard neurosis—it often occurs in epidemics.

In October and November 1967, an outbreak of koro occurred primarily among the Chinese population of Singapore. Rumors spread that koro was caused by eating pork from pigs that had been vaccinated against swine flu.19, 20 Fanned by media reports, the rumors triggered an epidemic of koro that ultimately sent hundreds of victims to emergency rooms and clinics fearing that they were about to die from genital retraction.20 The epidemic occurred when the Chinese, for whom pork was a dietary staple, felt threatened by Muslim Malays, who do not eat pork.18 An even larger epidemic, affecting more than two thousand men, women, and children, struck Thailand in 1976 following rumors that Vietnamese immigrants had poisoned Thai food and cigarettes with a powder capable of causing genital retraction.18, 20 Again, ethnic tensions seemed to be at the root of the outbreak because fears of invasion by the Communist Vietnamese were widespread.

Although koro has been commonly considered an Asian culture-bound syndrome, similar cases have been reported in Europe, Africa, and the United States.

An outbreak in Khartoum followed rumors that foreigners were roaming the city and, by handshakes, causing men’s penises to disappear.21 The panic appears to have begun in Nigeria or Cameroon in 1996 but spread to involve numerous countries over a several-year period.22 In the Western literature, a growing number of cases have been reported in which genital retraction fears have figured prominently. In some instances, the syndrome appears to be a complication of underlying medical or neuropsychiatric diseases, a phenomenon dubbed “secondary koro.” Thus, koro-like illness has been reported as a symptom of diseases ranging from brain tumors, epilepsy, and stroke to urologic disease, HIV infection, and even drug abuse.20

There is a debate in the ethnopsychiatric literature about how to categorize the various forms of koro. Is sporadic koro, affecting isolated individuals and resembling an anxiety or psychotic syndrome, really the same as the epidemic form that is often ignited by folk beliefs or ethnic tensions? Should “secondary koro” and “chronic koro” be considered separate subtypes? How do genital retraction syndromes differ from other cultural syndromes like dhat (an Indian syndrome) and shen k’uei (a Chinese syndrome), which involve anxiety and panic about “semen loss”?23 We can imagine, and experts have proposed, an elaborate classification of genital retraction syndromes and their causes. Now, chances are, a few minutes ago you didn’t know koro existed. But already you can see the complexities of trying to define the boundaries of disorder. When we’re classifying disorders based largely on descriptive syndromes instead of a road map of how the mind works, it’s easy to get into the kind of lumping and splitting of categories that many have criticized in the growth of the DSM.

A LINE IN THE SAND?

WE’VE SEEN THAT DEFINITIONS OF NORMAL AND ABNORMAL CAN BE highly contingent on time and place. They can rise and fall depending on the historical moment or cultural setting. Is there no way to ground the relationship between normal and abnormal functioning? One of the arguments I will make in this book is that there is. But doing so requires that we reverse the strategy that psychiatry has pursued for most of the past century. Rather than constructing disorders by labeling the extremes—the troubled mind and the broken brain—we must start with an understanding of the normal. What were the mind and the brain built to do? How do mental and neural functions develop? How are they organized? By understanding the basic architecture of the mind and the brain and how they make sense of the environment and experiences they encounter, we can begin to see where the dysfunctions are likely to occur and how they emerge from the normal spectrum of human experience. Our definitions of mental illness become less arbitrary. That doesn’t mean that cultural influences will no longer matter. Indeed, as we learn more about the fundamental structure of the mind, we can see more clearly how culture shapes our experience and judgments about behavior.

One of the most influential attempts to grapple with the basic organization of the mind has turned to evolution for answers. The functioning of our brains, like the rest of our bodies, evolved in response to the challenges that ancestral humans faced in their struggle to survive and reproduce. Our most fundamental mental processes are organized around the most important of these challenges: avoiding harm, making plans and decisions, selecting mates, negotiating social dominance hierarchies, and so on. Jerome Wakefield, a professor of social work and psychiatry at NYU, has proposed a simple but powerful definition of mental disorder: a disorder is a “harmful dysfunction.”24 The line between mental health and mental disorder is crossed when a behavioral or psychological condition causes harm to an individual and represents a dysfunction of some naturally selected mental mechanism. Wakefield’s definition struck a neat compromise in a contentious debate that had spanned the previous several decades.

On the one side were those who claimed that psychiatric diagnoses and the distinctions drawn between normal and abnormal behavior are inherently value judgments. The Rosenhan pseudopatient experiment and the American Psychiatric Association’s vote to depathologize homosexuality were certainly examples where the line between normal and abnormal seemed to be drawn based on cultural value judgments. The extreme version of this critique was exemplified by Thomas Szasz and the so-called antipsychiatry movement that arose in the late 1960s. Szasz, whose 1961 book The Myth of Mental Illness was probably the most influential statement of this position, claimed that psychiatry’s diagnostic labels were merely tools used for the exclusion and subordination of individuals. A less radical view is that psychiatric diagnoses may be useful, but they are ultimately just social constructions. On the other side of the debate were those who claimed that mental illnesses are biomedical disorders that can be defined just as objectively as diabetes or cirrhosis of the liver.

But both the strict “values” and the strict “biomedical” positions are ultimately incomplete. For one thing, the idea that psychiatric disorders are simply myths or social constructions ignores a vast body of evidence about the biological basis of mental illness. Our biological understanding of psychiatric disorders is admittedly limited, but decades of scientific research have established that people who meet the criteria for these disorders do have profiles of genetic risk and brain structure and function that differ from those who do not meet criteria—although the differences are usually matters of degree. What’s more, as a psychiatrist, I have seen the pain and desperation that individuals and their families have to bear when psychosis, mania, depression, or panic overtake the mind. I have seen people so overwhelmed by this pain they wanted to end their lives rather than face a future filled with these symptoms. I have also seen medication and psychotherapies transform suffering and save lives. And the notion that defining these conditions as illnesses is merely an exercise in mythmaking trivializes the suffering of those who must bear them.

At the same time, it is hard to argue against the claim that the definition of psychiatric disorders involves some normative judgment about behavior. Severe shyness and social inhibition can be diagnosed as a disorder (social phobia) when they impair functioning (e.g., by inhibiting someone from advancing in their career). But that impairment occurs in part because social inhibition is devalued by employers and the larger culture.

Wakefield’s notion that mental disorder is a “harmful dysfunction” accommodates both values and biology.25 The first necessary condition for a mental disorder is that it involves mental states or behaviors that are harmful to an individual according to social norms. The syndromes we call schizophrenia, bipolar disorder, depression, and so on clearly fulfill this criterion. But “harm” is not sufficient to define a mental disorder. Plenty of behaviors are harmful but we would not call them disorders—procrastination or illiteracy, for example.

The other requirement is that the mental states or behaviors result from failure of a biologically designed function. Our brains exist to perform certain functions. Natural selection has sculpted the contours of those functions by enhancing the reproductive success of early humans whose brains best met the challenges that life threw at them. Some of these are obvious—detecting and avoiding danger, mating and reproducing. Others are more subtle—not being cuckolded, recognizing the intentions of others, cooperating and competing effectively, and maximizing available resources. In modern times we have given these functions names like trust, attraction, empathy, selfishness, and so on.
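To make the logic of that definition concrete, here is a minimal sketch in Python (my own illustration, not Wakefield's formalism or any diagnostic instrument) showing that both conditions are necessary and neither is sufficient on its own:

```python
# A toy rendering of the "harmful dysfunction" test described above.
# For exposition only: in reality the two inputs stand for judgments that
# depend on social norms (harm) and evolutionary reasoning (dysfunction).

def is_mental_disorder(harmful_by_social_norms: bool,
                       failure_of_evolved_function: bool) -> bool:
    """Wakefield's two necessary conditions, taken together."""
    return harmful_by_social_norms and failure_of_evolved_function

# Procrastination or illiteracy: harmful, but no failed mechanism.
print(is_mental_disorder(True, False))   # False
# An oddly tuned evolved mechanism that causes no harm or distress.
print(is_mental_disorder(False, True))   # False
# Harm plus the failure of a naturally selected mental function.
print(is_mental_disorder(True, True))    # True
```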

THE NORMAL SIDE OF DEPRESSION

AN IMPORTANT IMPLICATION OF THE “HARMFUL DYSFUNCTION” model is that psychiatry’s DSM system may be diagnosing mental illness when no disorder is present. Modern psychiatric diagnoses are based almost entirely on clusters of symptoms, with little attention to the circumstances in which those symptoms occur. Take the example of depression. The DSM-IV diagnosis of depression (officially known as “major depressive disorder”) requires two weeks or more of at least five symptoms, including persistent depressed mood and/or loss of interest or pleasure in activities most of the day, nearly every day. The other symptoms are significant weight loss or gain, sleeping too much or too little, physical agitation or slowing, loss of energy, feelings of worthlessness or excessive guilt, impaired concentration or indecisiveness, and recurrent thoughts of death or suicidality.

To reach the level of a diagnosis, the symptoms must cause significant distress or impaired functioning, and they can’t be due to the effects of a drug or another medical illness. And there’s one more thing—the symptoms can’t be due to bereavement. That’s a key exclusion, because the grieving process normally involves most of the symptoms of depression. Imagine a mother whose child has just died of leukemia. For a month, she cries nearly every day, loses interest in sex, has trouble falling asleep, and can’t muster the energy to go back to work for three weeks. Should this woman be given a diagnosis of depression? Of course not. She is experiencing a normal grief reaction in the face of a devastating loss.
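For readers who find a rule easier to see laid out, here is a minimal sketch in Python (my own simplification, not the DSM's wording or a clinical tool; the symptom names and flags are shorthand I invented) of the checklist just described, with the bereavement exclusion at the end:

```python
# A simplified rendering of the DSM-IV major depression checklist described
# above. For exposition only: the symptom names are shorthand, and a real
# diagnosis is a clinical judgment, not the output of a script.

CORE = {"depressed_mood", "loss_of_interest"}
OTHER = {"weight_change", "sleep_disturbance", "agitation_or_slowing",
         "loss_of_energy", "worthlessness_or_guilt",
         "poor_concentration", "thoughts_of_death"}

def meets_criteria(symptoms: set, weeks: float,
                   distress_or_impairment: bool,
                   due_to_substance_or_illness: bool,
                   bereaved: bool) -> bool:
    """Return True if the symptom picture crosses the DSM-IV threshold."""
    count = len(symptoms & (CORE | OTHER))   # at least five qualifying symptoms
    has_core = bool(symptoms & CORE)         # including at least one core symptom
    return (count >= 5 and has_core
            and weeks >= 2
            and distress_or_impairment
            and not due_to_substance_or_illness
            and not bereaved)                # the bereavement exclusion

# A hypothetical symptom picture following a painful loss of any kind:
symptoms = {"depressed_mood", "loss_of_interest", "sleep_disturbance",
            "loss_of_energy", "poor_concentration"}
# After a death, the final flag keeps this on the "normal" side of the line...
print(meets_criteria(symptoms, weeks=3, distress_or_impairment=True,
                     due_to_substance_or_illness=False, bereaved=True))   # False
# ...but the identical picture after, say, a job loss counts as depression,
# an asymmetry the next paragraphs question.
print(meets_criteria(symptoms, weeks=3, distress_or_impairment=True,
                     due_to_substance_or_illness=False, bereaved=False))  # True
```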

But should bereavement be the only situation where depressive symptoms are considered normal? What about other painful losses, traumas, and stresses that many of us experience over the course of a lifetime?

A man pulls me aside at a dinner party to seek my advice: “I’m worried about a friend of mine. Howard’s been with our firm for twenty-five years, and three weeks ago, the company downsized and Howard got axed. He’s fifty-nine years old, and his work was his life. I saw him last week and I was really alarmed. He’s devastated—he just had this blank stare, he’s lost weight, and he looked like he hadn’t slept in a week. I tried to get him to come out golfing this weekend—something he’s always loved to do—but he just said, ‘No, some other time.’ He looks so lost and his wife says he just mopes around the house. I think he’s depressed. Is there some kind of medicine that could help?”

Does Howard need treatment for depression? He certainly seems to have symptoms of a major depressive episode. And we know that episodes of depression are often triggered by major stresses in vulnerable people. Here’s a man whose whole adult life was organized around his work, and now the core of his self-concept is gone. He clearly has suffered a terrible loss. Had his wife died, we would ascribe his symptoms to bereavement and his friend would probably not even have asked me about the need for medication. So why is one traumatic loss so different from another? How clear is the line between normal sadness and depression?

Wakefield and his colleagues asked this question using data from a large study of the prevalence of psychiatric disorders in the United States.26 They looked at people who met the criteria for a depressive episode and who said that their symptoms were triggered by either the death of someone close to them (“bereavement-triggered”) or by some other type of loss (“other loss–triggered”); they then divided these groups into “complicated” or “uncomplicated” cases. “Complicated bereavement” is a term used in the DSM to describe genuine cases of depression that are triggered by bereavement. According to the DSM, bereavement crosses the line from uncomplicated (normal) to complicated (true depression) when it is prolonged and accompanied by serious symptoms like impaired functioning, suicidal thoughts, or morbid preoccupation with worthlessness.

When Wakefield compared the uncomplicated bereavement group to the uncomplicated “other loss” group on nine indicators of major depressive disorder (things like the number of depression symptoms, suicide attempts, functional impairment, and treatment for depression), he found essentially no differences between them. On the other hand, complicated cases were significantly more severe for all of the indicators, whether they were triggered by bereavement or some other type of loss. In other words, there was no evidence that bereavement was a special kind of loss in terms of its connection to depression.

So what? Well, right now, if you experience two weeks of intense sadness, trouble sleeping, loss of interest, and trouble concentrating after losing your job, getting divorced, or some other major loss, you would qualify for a diagnosis of major depression. Wakefield and his colleagues estimate that if psychiatry treated these other losses the same way it treats bereavement and categorized uncomplicated cases as normal sadness, the prevalence of depression in the United States would drop by nearly 25 percent. Wakefield doesn’t claim that psychiatric diagnosis is inherently flawed—he’s just suggesting that it can be improved by adopting a framework that places it in the context of normal mental function and the situations that people find themselves in. We can’t define a line between normal and disorder by simply declaring a set of extreme behaviors as symptoms. Context matters. And we need to start by asking where these behaviors come from and how they fit into the full spectrum of human experience.

In other words, if we want to understand mental illness, we first need to understand how and why the mind functions the way it does. Perhaps that seems self-evident, but most attempts to define mental dysfunction—including the DSM—have not started with an account of normal function. So let’s look at one example of how understanding normal function can tell us something about disorder.

STEP ON A CRACK?

OUR MENTAL CAPACITY TO SENSE RISK AND AVOID HARM WAS clearly developed during our evolutionary past. An animal without this ability would not have survived long enough to reproduce. Natural selection promoted those mental mechanisms that could anticipate and avoid danger. What if our normal harm-avoidance mechanisms went awry? What would it look like if we saw danger where none exists?

In fact, many of the syndromes we refer to as anxiety disorders are exaggerated and inappropriate forms of detecting and responding to threats. For example, psychiatry defines obsessive-compulsive disorder (OCD) as an anxiety disorder in which individuals suffer from recurrent, anxiety-provoking, intrusive thoughts (obsessions) or repetitive behaviors aimed at preventing harm or relieving anxiety (compulsions). But the content of these obsessions and compulsions is not random; they tend to fall into certain domains.

Four groups of symptoms account for the majority of obsessions and compulsions: (1) contamination obsessions and washing compulsions; (2) aggressive obsessions and checking compulsions; (3) symmetry obsessions and ordering compulsions; and (4) hoarding compulsions.27 Each of these taps into fears and rituals we all may experience from time to time, and each likely reflects a dysfunction of a mental system that evolved to avoid danger and stay safe.

What evidence do we have that these harm-avoidance and precautionary systems exist in all of us? For one thing, we see them bubble up to the surface during certain moments in our lives. As little children, dependent on our parents and with little experience to distinguish what is safe from what is harmful, we are particularly vulnerable. Not surprisingly, early childhood offers a showcase for fears and rituals. Think bedtime—that fearsome and dreaded moment when parents leave their children at the mercy of monsters lurking under the bed.

Bedtime fears are common, and many young children develop elaborate rituals to quell their fears: repeatedly checking under the bed or reciting safety scripts like little shamans warding off evil spirits. And then there is the awesome responsibility children often feel to prevent harm to themselves or their caregivers—fears that sometimes fuel perfectionistic compulsions to avoid making mistakes or to get things “just right” (“Step on a crack, break your mother’s back”). As one group of scientists put it, “These rituals may resemble pathology when taken to an extreme, but within their appropriate ontogenetic context, they are crucial in teaching children to manage their anxiety about the outside world”(p. 858).28

There’s another life stage when intrusive fears and compulsive behaviors normally flare: pregnancy and the postpartum period, a time whose importance is hard to trump from an evolutionary perspective. Natural selection is fundamentally a race for reproductive fitness—that is, maximizing the transmission of an individual’s genetic makeup to subsequent generations. Preoccupations with the safety of the fetus and newborn are understandably common during pregnancy and early parenthood when our reproductive fitness is most directly at stake.

James Leckman and his colleagues at the Yale Child Study Center have been studying the biological basis of OCD for more than twenty years. Several years ago, they decided to explore the hypothesis that the preoccupations of early parenthood could be thought of as a normal variant of OCD. They interviewed parents during the eighth month of pregnancy and within the first three months after childbirth and found some intriguing parallels.29 Just before the baby was born, more than 80 percent of mothers and fathers experienced worries about “something bad happening to the baby” and more than a third had thoughts about doing harm to the baby.

When they were interviewed at two weeks and three months postpartum, more than 70 percent of parents continued to have preoccupations with their babies’ vulnerability or safety. In some cases these fears had a key feature seen in the obsessions of OCD: intrusive worries that an individual recognizes are irrational. Between 25 and 40 percent of parents had thoughts about doing harm to the baby. Parents reported graphic images of dropping or throwing the baby, scratching the baby with their fingernails, injuring the baby in a car accident—despite being sure they would never do something like that.28

More than 75 percent of parents also reported that they felt a compulsive need to check on the baby, even though they knew “everything was okay,” and, at two weeks postpartum, about 20 to 30 percent recalled “telling themselves that such compulsive checking was unnecessary or silly.”29 When new parents listened to recordings of their infant’s cries while undergoing brain scans, the brain’s fear centers lit up, and that activation correlated with OCD-like intrusive fears and compulsive harm-avoidant behaviors.28 The anxieties and preoccupations of early parenthood were greatest just before and just after the birth of the baby, and then began to decline.

So the perinatal period is a time of a normal increased sensitivity to avoiding harm and errors. It’s not that new parents suffer from a psychiatric disorder. Most parents who experience intrusive anxieties and compulsive safety behaviors report that they are brief and do not cause marked distress or interfere with functioning—that is, they don’t cross the threshold necessary for a diagnosis of OCD. But the perinatal period seems to tap into the same mental mechanisms that overtake the minds of those suffering from OCD.

DIRTY THOUGHTS

CONTAMINATION FEARS PROVIDE ANOTHER EXAMPLE OF THE CONTINUUM between normal and pathological obsessions and compulsions. At the extreme, OCD sufferers can wear their hands raw from excessive washing, or become housebound from obsessive contamination fears about the outside world. But the same fears are triggered when we avoid shaking hands with the sniffling, sneezing person in the next cubicle at work. The irrational fears surrounding AIDS victims that swept the United States in the 1980s demonstrated the powerful, and sometimes violent, forms these harm-avoidance responses can take once they are engaged.

More recently, fears about deadly flu epidemics and other germs have created a massive market for hand sanitizers: in a one-year period (2004–2005), sales increased by more than 50 percent,30 creating a community of Purell-soaked germophobes that has been dubbed “hand-sanitation nation.”31

The emotional counterpart of contamination sensitivity is the feeling of disgust, certainly a universal and familiar experience. Typically, disgust (literally, “bad taste”) is triggered most potently by the thought or act of oral contact with objects or fluids derived from animals or other humans (feces and decaying meat are two of the most universal triggers of disgust). Disgust most likely evolved as a mechanism for avoiding disease.32

But even those of us without OCD experience irrational disgust and contamination fears. In a series of intriguing studies, Paul Rozin and his colleagues at the University of Pennsylvania found that people’s feelings about food contamination often involve a degree of “magical thinking.”

What if I asked you to eat a bowl of your favorite soup but told you that it had been stirred by a washed-but-used flyswatter? Would you eat it? When Rozin asked a group of healthy adults, most said no. But 50 percent said they still wouldn’t eat the soup if it had been stirred by a brand-new flyswatter. In another test, subjects were offered two pieces of fudge that differed only in shape. They were happy to eat the fudge that was shaped like a muffin but rejected the fudge that was in the shape of dog feces, even though they knew it was just fudge.33

Neuroimaging studies have even pinpointed some of the brain regions that specialize in handling this function and appear to be overactive in people with OCD. When shown pictures of objects like public phones, toilets, or ashtrays and told to imagine coming into contact with them without washing afterward, individuals with OCD and contamination fears activate a system of brain regions involved in the processing of emotions, especially disgust.34 Interestingly, similar regions light up in healthy individuals given the same task, though to a lesser degree,35 suggesting again that OCD is an exaggeration of normal brain mechanisms.

A little ridge of cortex in the brain known as the insula is a key player in the biology of disgust.36 Among its other responsibilities, the insula is the brain’s clearinghouse for gut “feelings” and heart “aches”: it keeps tabs on bodily sensations and connects them to emotional responses.37 It is also the primary taste cortex, the region where our experience of taste is registered and integrated with our sense of smell.38, 39 Electrical stimulation of this area in the brain triggers nausea and stomach churning.40 So the insula is perfectly suited to handle disgust, an emotional response to bad tastes and smells. And indeed, brain imaging studies have confirmed that the experience of disgust activates the insula.36 When healthy volunteers are presented with disgusting tastes, odors, or pictures (spoiled food, mutilated bodies, etc.), the insula goes into overdrive.41

So disgust seems to be hardwired and we are evolutionarily prepared to find some things disgusting—things like feces and putrid food. Overcoming our contamination sensitivity takes effort or self-deception. Think about the “five-second rule”: food that falls on the floor is safe to eat if you retrieve it within five seconds. (Sadly, this turns out to be a myth because most of the transfer of bacteria from the floor to a piece of bologna happens within the first five seconds.)42

But wait—any parent knows that two-year-olds will put anything in their mouths. Where’s the hardwired disgust? They have to be taught that a dead cockroach is totally gross. That’s true. A child’s sense of disgust and contamination sensitivity emerges gradually, as demonstrated in a study of three- to twelve-year-old children who were given cookies and juice under a progressively more disgusting set of conditions.43 The children were offered a glass of apple juice. But before pouring the juice, the experimenter pulled a comb out of her purse, combed her hair, and returned the comb to her purse.

She then produced another comb, telling the child, “This is a brand-new comb that I bought yesterday, all washed and cleaned. I am going to stir your juice with this comb.”

After stirring the juice, she asked, “Will you drink some juice?”

If the child drank the juice, the experimenter produced another comb from her attaché case and said it was the one she used to comb her hair every day, but it was washed and clean. If the child was willing to drink juice stirred with this comb, the experimenter pulled a comb from her purse and said it was the one the child had seen her use to comb her hair (it was actually a clean duplicate of the original comb). Would the child drink the juice after she stirred it with a comb they’d just seen her use on her hair?

The answer depended on how old the children were: 77 percent of children ages three to six years would drink the juice compared to only 9 percent of the nine- to twelve-year-olds. In another version of the experiment, the experimenter brought forth a real (sterilized) grasshopper and dropped it in the juice. She asked the children if they would drink some juice from the bottom of the glass using a straw. Sixty-three percent of the youngest children were perfectly happy to oblige compared to only 19 percent of the older children.

So what happened to make a ten-year-old disgusted by the thought of drinking bug juice? One possibility is that the brain of a three-year-old simply doesn’t have the capacity to think of juice being contaminated by a floating bug.44 In other words, the development of disgust sensitivity has to wait until certain cognitive abilities come online. But social learning is another likely contributor: older kids have seen other people express disgust about contamination. Entomophagy (the practice of eating insects) is actually common in many parts of the world, but Americans find the idea revolting, and children’s disgust reactions are stronger to things their parents find disgusting.45

And that social learning seems to have a neural basis: the same brain structure that activates when we experience disgust also lights up when we see facial expressions of disgust in others. French neuroscientist Bruno Wicker and his colleagues performed functional MRI scans of subjects in two conditions. In the first condition, the subjects watched movies of actors who smelled the contents of a glass that contained either water, perfume, or a disgusting-smelling liquid (the contents of a toy with the pungent name “stinking balls”). The actors made facial expressions appropriate to the contents of the liquid they smelled (neutral, pleasure, or disgust, respectively).

In the second condition, the subjects were asked to inhale a series of pleasant smells (passion fruit, lavender, and so on) and a series of disgusting smells (including ethyl mercaptan, once dubbed the “smelliest substance in existence” by the Guinness Book of World Records). Both the sight of others expressing disgust and the direct experience of disgust lit up the anterior insula. In other words, watching others react with disgust triggers our own disgust center. Perhaps the ten-year-old learns that bugs are gross by seeing those around him react with disgust. A broader implication of this work, and one we will return to in Chapter 4, is that “we perceive emotions in others by activating the same emotion in ourselves” (p. 660).46

Contamination-related disgust is central to some forms of OCD, and it occurs in a less harmful form in daily life. That suggests there is a normal system for experiencing disgust and that, when the system goes awry, mental illness can result. Brain-imaging studies have found that the insula and other emotion-processing regions may contribute to obsessive-compulsive symptoms, but they also point strongly to dysregulation of a circuit connecting the frontal cortex to deeper structures like the basal ganglia, which are involved in avoiding errors and in adjusting our behavior to threats and rewards. The point is that we can begin to understand OCD not as some mysterious affliction but as a dysfunctional expression of safety mechanisms that we all have.

THE DISTRIBUTION OF NORMAL

IN 1754 A FRENCH MATHEMATICAL GENIUS NAMED ABRAHAM DE MOIVRE died in poverty and relative obscurity in London. Two years after his death, the third edition of his great work The Doctrine of Chances appeared, containing a discovery that has become an iconic symbol in scientific and popular culture. De Moivre was concerned with describing the outcomes of random events—for example, if you flip a coin one hundred times, what’s the probability that you’d observe thirty tails? He noticed that as the number of trials increased, the probabilities of the possible outcomes (e.g., each possible number of tails) formed a predictable pattern. For a fair coin, the single most likely outcome of one hundred flips is an even split of heads and tails: fifty tails. For numbers much less or much more than fifty, the probability trails off. Using these simple observations, de Moivre derived a formula that produced an intriguing result. Graphing the probabilities of each number of tails produces a curve shaped like a bell. As it turns out, this bell-shaped curve can describe the distribution of a remarkable range of physical, biological, and even social phenomena; it has clearly earned its other familiar name: the “normal distribution.”
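
In modern notation (a sketch of the idea, not de Moivre’s own presentation), his result says that the probability of getting exactly k tails in one hundred flips of a fair coin, given by the binomial formula, is closely approximated by a bell-shaped curve:

$$P(k \text{ tails}) = \binom{100}{k}\left(\frac{1}{2}\right)^{100} \approx \frac{1}{\sqrt{2\pi \cdot 25}}\, e^{-\frac{(k-50)^{2}}{2 \cdot 25}}$$

Here 50 is the mean (the expected number of tails) and 25 is the variance; its square root, 5, is the standard deviation. Plugging in k = 30 gives a probability of only about two or three in a hundred thousand, so thirty tails in a hundred flips would be a genuinely rare result.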

I bring up the normal curve to address the question I posed at the beginning of this book: What is normal? If you look up the word normal in most dictionaries, the first definition is usually one with a statistical basis—something like: “conforming to the usual standard, type, or custom”—that is, normal is the most common or perhaps the average. But the metaphor of a “normal distribution” usefully goes beyond this.

Normal distributions are entirely defined by two numbers: one is the mean (the average), and the other is the variance (or its square root, the standard deviation). In other words, in statistical terms, a normal distribution encompasses both the average and deviations from the average: variance is an essential part of normality. By analogy, we’ll see in this book that the biology of normal human functioning encompasses variations in how the brain processes the conditions of the physical and social environment it encounters. The result is a broad range of normal when it comes to temperament, empathy, trust, sexual attraction, and social cognition.
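
To make that concrete, here is the bell curve’s equation in its standard textbook form (a sketch in modern notation, with the mean written as \(\mu\) and the standard deviation as \(\sigma\)):

$$f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{(x-\mu)^{2}}{2\sigma^{2}}}$$

Change \(\mu\) and the whole curve slides along the axis; change \(\sigma\) and the bell widens or narrows. Both numbers are built into the formula, which is why variance is not a departure from normality but part of its definition.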

The recurring story of this book is that each of us finds our place in this great distribution by the intersection of three major players: evolution, genetic variation, and the particular environment and experiences we’ve encountered. The first—our shared evolutionary heritage—begins long before we’re born. The countless trials and errors of natural selection have compiled a basic text of biological instructions spelled out in the human genome. The overwhelming majority of letters in that text are shared by all humans and provide a common set of possibilities and constraints within which our minds develop, function, and interact with each other and our world.

But the other two players—genetic variation and experience—shape the unique trajectory we travel within the broad distribution of the possible.

NIGHT AND DAY

IF WE ACCEPT THAT NORMAL IS NOT ONE STATE—THE MOST COMMON, the average, or the ideal—but rather a distribution or a spectrum of human possibility, how are we supposed to draw the line between normal and abnormal? A distribution may have a bulge in the middle and tails at the ends, but there are no dividing lines in between.

If you’ve been waiting for me to give you my answer to the question of where the line between normal and abnormal is, here it comes. I don’t think there is one. Sorry. It’s not that I’m dodging the question; it’s that I don’t think it’s the right question to ask. There are no bright lines.

If that’s the case, why write a book about the biology of normal?

Actually, there are two reasons. When I talk about the biology of normal, I’m referring to an understanding of what the brain and the mind are designed to do and how they function across the spectrum of human endeavor. We are now beginning to build that understanding through an unprecedented convergence of anthropology, genomics, psychology, and neuroscience. The story that’s emerging is worth telling because it sheds light on how we become who we are. That’s the first reason.

The second is that characterizing the biology of normal can ground our understanding of how things can go awry and contribute to what we recognize as mental disorders. But, you might be asking, if I’m claiming there is no sharp line between normal and abnormal, how can we even say what a mental disorder is?

Wakefield’s harmful dysfunction model gives us one answer, but useful definitions of psychiatric disorders don’t depend on identifying a single “true” line between normal and abnormal. We draw lines to create useful and “real” distinctions all the time, despite the fact that such lines are at some level not really there. The practice of medicine has many examples. Hypertension is defined as a blood pressure greater than 140/90, but no one thinks that there’s a qualitative difference between a blood pressure of 141/90 and 139/90. And yet, high blood pressure can be deadly; hypertension has been a useful concept for research and clinical medicine.

Normal and abnormal are like night and day. That is, both are meaningful descriptions of two states that we recognize as different. But the line between them is impossible to draw. When exactly does day become night? We might decide to draw the line at sunset—a specific moment in time that we’ve constructed to separate the two. But that’s clearly somewhat arbitrary. Nevertheless, we’d all agree that day and night are meaningfully real. We schedule our lives around them; we make plans based on them. But we rarely worry about the moment that one becomes the other. We’re comfortable with the fuzziness of twilight.

The same principle applies to the distinction between normal and abnormal or between disorder and nondisorder. Any specific line we draw to define disorder will require a judgment. But that doesn’t mean that these disorders are simply fictions. There is clearly value in identifying syndromes that cause people harm and suffering: doing so allows us to develop treatments, to predict prognoses, and perhaps even to formulate strategies for prevention.

TOWARD A BIOLOGY OF NORMAL

WAKEFIELD’S HARMFUL DYSFUNCTION MODEL PROVIDES A FRAMEWORK for classifying disorder—the abnormal. But it also says something central to the subject of this book: there is much to be learned by understanding normal. To be on solid ground in defining and studying mental dysfunction, we first need to understand what functions are being “dys-ed.” We need to grasp what the brain and the mind are designed to do: How do they function? What problems are they designed to solve? The answers to these questions are what I refer to as the biology of normal.*

In the chapters that follow, we’ll see how research has begun to answer these questions that define who we are and what makes us tick. And along the way, we’ll see that unpacking the science of normal can help demystify the nature of mental illness. Indeed, with a century of science at our back, it’s time to turn William James’s maxim on its head: the best way to understand the abnormal is to study the normal.


* As I explained in the prologue, I’m using the phrase “biology of normal” as a shorthand for describing the underlying architecture of the brain and the mind. It involves multiple perspectives, including evolutionary biology, neuroscience, genetics, and psychology.