CHAPTER EIGHT

THE NEW NORMAL

WHEN IT COMES TO THE HUMAN MIND, WE’VE LONG HAD an uneasy relationship with the concept of normal. For one thing, it’s hard to define. And more than that, it has connotations that can imply something derogatory about people who struggle with life’s challenges. For some, normal is the way things ought to be. You’re either normal or you’re not, and abnormal is something alien and possibly inferior.

But I’ve argued for a very different view—another side of normal. Normal is not the ideal, the average, or even the state of being healthy. It is more like a landscape of human possibility whose contours have been shaped by the design features of the mind and brain. As I suggested in Chapter 1, like the statistician’s concept of the normal distribution, variation is part of its very definition.

A central premise of this book has been the deceptively simple claim that there is a biology of normal—a complex but fathomable basis for how the mind and the brain operate.

We are still filling in the details, but the broad outline is taking shape from a convergence of evolutionary biology, psychology, neuroscience, and genetics. To illustrate that premise, I’ve focused on what we’re learning about some of the most fundamental challenges that our brains and minds were designed to tackle. Of course, the examples I’ve given cover only a tiny fraction of our mental lives, but they reveal some important themes about the biology of normal. Here are a few.

1. THE RHYME BEHIND REASON

OUR MINDS ARE ORGANIZED AROUND SOLVING PROBLEMS THAT mattered to the reproductive fitness of our ancestors: avoiding harm, understanding the thoughts and feelings of other people, forming attachments, selecting mates, and learning from the past, to name a few. But, for the most part, natural selection is a sketch artist, providing the lines and shading that help us make sense of the world, but leaving us to fill in the details. It has given us mental rules of thumb, honed over millennia of biological competition, that we use to navigate our lives. We inherit the instructions for building neural circuitry and mental algorithms that tune our brains to process the salient signals out of the infinite noise of life. But it would be impossible to build a brain that anticipated all of the important contingencies of human life. And so the most important tool that evolution has given us is the capacity to adjust to life, to learn from experience.

2. GREAT EXPECTATIONS: THE POWER OF SENSITIVE PERIODS

SOME EXPERIENCES MATTER MORE THAN OTHERS. ONE OF THE most remarkable features of human development is that the brain uses the world to wire itself. Many of the universal functions of the mind are programmed early in life by experiences that evolution has prepared our brains to expect. Our brains lie in wait for the inputs we need to wire foundational systems like vision, language, attachment, and social cognition. This process of experience-expectant plasticity involves sensitive periods during which the brain is supersticky—acutely responsive to instructions from the world around it. But these windows of opportunity are a double-edged sword. On the one hand, sensitive periods ensure that we can extract what we need from the environment. Children will acquire language as long as they are exposed to some speakers, and they will form an attachment system as long as there is some sort of caregiver around. On the other, sensitive periods create windows of vulnerability. If the experiences we expect are corrupted or absent, the damage may be hard to undo. A child who suffers severe neglect early in life may have attachment problems that last a lifetime.

Our brains expect a good enough environment. We need certain inputs—exposure to visual information, language, caregiving—to wire key systems in the brain. Disrupting those inputs during sensitive periods can have profound effects on the downside, but trying to supercharge the environment is not likely to get you a supernormal child. It’s true that when the environment is less than good enough, enrichment can make a big difference. And specific skills and talents—like musical abilities—can be enhanced by training. But the fundamentals of brain development don’t need perfection. That may come as a disappointment to parents who are frantically trying to optimize every detail of their two-year-old’s environment in the hope of supersizing their abilities. But it should also be a relief: barring catastrophe, most children will develop just fine.

3. BUFFERING AND BUFFETING

THERE’S NO QUESTION THAT EARLY EXPERIENCE MATTERS, BUT our fate isn’t sealed by the age of five. The unique trajectory each of us travels from cradle to grave is continually shaped by the buffeting and buffering effects of genes and experience. And through it all, the brain continues to be capable of change—the result of experience-dependent plasticity. Even children who endure early neglect or disrupted attachments may do well if they later find a nurturing home. Cultural and social influences can shape our desires. Psychotherapy can extinguish long-standing fears. Love can mend a broken heart. As a psychiatrist I am continually humbled by the remarkable resilience of the human mind. Even in the face of tremendous adversity, we can find ways to adapt and carry on.

4. THE LITTLE THINGS

SOMETIMES THE EFFECTS OF NATURE AND NURTURE SEEM PLAIN and simple. The distinctive personality profile of Williams syndrome can be related to a chunk of missing DNA on chromosome 7. Exposure to alcohol or toxins in the womb can cause lifelong cognitive impairments. Major abuse and neglect can have devastating effects on emotional development. But most of the broad spectrum of individual differences we see—the variance within the normal distribution—has much more complex roots.

For example, consider the impact of a small change in how we perceive other people’s emotions. We’ve seen that specific variants of genes involved in emotion processing can cause slight differences in amygdala sensitivity, influencing temperament and adjusting the emotional lens through which we look at the world. Depending on the combination of genetic variants you carry, you may be slightly more (or less) sensitive to detecting fear or anger in the faces of other people. And we’ve seen that life experience can have similar effects: a child raised in a hostile environment may also be more attuned to seeing anger and aggression. Again and again we find that genetic variations and the vagaries of experience produce small differences in how our minds and brains are tuned to the world around us. They calibrate and recalibrate our brain circuits in subtle ways that are largely invisible to us, but over time they shape who we are and what we care about.

5. THE UNITY OF NATURE AND NURTURE

IN HIS BOOK NATURE VIA NURTURE, MATT RIDLEY TAKES ON THE age-old but misguided nature vs. nurture debate. Ridley explains that genes and environment are always interacting and it is meaningless to try to apportion their effects. Genes are only expressed in the context of the environments they inhabit. As he puts it, “Nature can only act via nurture. It can only act by nudging people to seek out the environmental influences that will satisfy their appetites” (pp. 92–93).1 In other words, genes affect our behavior only with the complicity of the environment.

In recent years, however, our understanding of gene expression has taken this insight even further—down to a molecular level—hammering perhaps the final nail in the coffin of the nature-nurture dichotomy. We now know that in addition to the genome, there is an epigenome—a parallel code through which the environment can turn genes on and off. We’ve only begun to unravel this enormously complex system, but we’ve already seen examples of how experience can modify the epigenome. In animal studies, variation in maternal care can cause long-lasting abnormalities in the stress response by attaching an off switch to the genes that regulate it. Some of the same changes have been found in human suicide victims who had been maltreated as children. In other words, nurture acts in part by modifying the chemistry of our chromosomes. In the nuclear core of our cells, nurture is nature.

The science of epigenetics has uncapped an entirely new source of normal variation. Over the course of our lives, epigenetic changes, or marks, accumulate and fluctuate, creating physical traces of the experiences that make us unique. That helps explain why identical twins—who are genetic clones—are not identical in their behavior, personality, or risk of diseases. Beginning at fertilization, each twin begins to accumulate differences in their epigenomes that alter the expression of their genes and nudge their development in different directions. Recent research has shown that some epigenetic changes can even be transmitted across generations, raising the possibility that we may inherit not only our ancestors’ genes but the effects of their environments.2 In other words, it’s conceivable that your genes are being affected by experiences your grandmother had. If that’s the case, we’re talking about inheriting nurture.

Epigenetic research has also revealed another force that shapes the trajectory of our lives—and it’s a little disturbing. It turns out that some, and maybe even most, of the epigenetic marks that regulate the expression of our genes are the result of random chance. At a molecular level, random (or “stochastic”) variation may turn genes on or off, with a cascading influence—like the proverbial “butterfly effect”—creating new twists and turns in the trajectory of development.2, 3 And so to natural selection, genes, and experience, we must add chance as a force in creating the distribution of normal.

PATHOLOGIZING NORMAL?

THERE’S ANOTHER THEME THAT EMERGES FROM STUDYING THE DEVELOPMENT and functioning of the mind. Mapping the biology and psychology of normal will not only demystify what makes us tick; it can also tell us something about how things go awry. The more we learn about the architecture of the mind, the more we see that conditions we recognize as disorders are variations of the same biological and psychological systems that operate in all of us.

In Chapter 1, I argued that normal and abnormal are like day and night: we recognize them as different, but there is no sharp line between them. And that creates a dilemma. If the science doesn’t support a clear boundary between normal and abnormal, doesn’t that undermine the whole idea of defining psychiatric disorders?

In April 2010 the American Psychiatric Association released its provisional plans for revising the most recent version of the DSM. Once again, a group of leading experts was charged with reevaluating and improving what has become the standard classification of mental illness. That news reenergized a chorus of vocal critiques within the media, blogosphere, and the mental health professions. Columnist George Will warned that “childhood eccentricities, sometimes inextricable from creativity, might be labeled ‘disorders’ to be ‘cured.’ If seven-year-old Mozart tried composing his concertos today, he might be diagnosed with attention-deficit hyperactivity disorder and medicated into barren normality.”4 Writing in the Wall Street Journal, historian Edward Shorter argued that psychiatry was pursuing a misguided program of “reshuffling” symptoms rather than identifying real diseases: “With DSM-V, American psychiatry is headed in exactly the opposite direction: defining ever-widening circles of the population as mentally ill with vague and undifferentiated diagnoses and treating them with powerful drugs.”5

The debate over how to define abnormal is far from simply “inside baseball” for mental health clinicians and scientists. Where we draw the line between mental health and mental illness has far-reaching implications. Insurance companies typically require a psychiatric diagnosis for reimbursement of mental health care. Government agencies use these categories to determine disability benefits. Pharmaceutical companies use the DSM categories to get approval for psychiatric drugs. National studies find that a little more than half of the U.S. population will meet the DSM’s criteria for at least one mental disorder in their lifetimes. And the evidence suggests the rates of several psychiatric disorders, including autism, ADHD, and depression, have been climbing over the past several decades.6–8 With more people seeking mental health care, the use of psychiatric medication has also increased substantially in recent years. Between the 1990s and 2000s, there was a relative 400 percent increase in the prescription of antidepressants to adults.9 And in less tangible ways, our views of normal and abnormal affect how we judge ourselves and one another.

BIBLE STORIES

IN RECENT YEARS, PSYCHIATRY’S SYSTEM OF CLASSIFYING MENTAL disorders—the DSM—has become a popular whipping boy. The field has been accused of “disease mongering”—basically creating, expanding, and hyping new definitions of disease that could apply to anyone.

The shortcomings of the DSM are self-evident. The fact is that all psychiatric disorders are currently defined by checklists of symptoms that are based on a consensus of experts. Many of the diagnostic criteria seem arbitrary. For example, a diagnosis of panic disorder requires recurrent unexpected panic attacks that are followed by a month or more of worry about additional attacks or a change in behavior as a result of the attacks. Why a month? Why not two? Panic attacks are defined by the presence of at least four out of thirteen anxiety symptoms (why four out of thirteen?) that have to reach their peak intensity within ten minutes (as though ten minutes is meaningfully different from fifteen). And in some cases, categories and criteria are based more on accidents of history than standards of evidence.

But before we simply dismiss the current diagnostic system out of hand, it’s worth appreciating some of the challenges the field has faced in trying to diagnose and treat people who are seeking help for symptoms that are often disabling.

So try this thought experiment. If you were asked to develop a better way of diagnosing mental illness, what would you recommend?

It’s not so easy. Imagine you are a psychiatrist. Your job is to help people suffering from mental distress. A woman comes to your office and begins to sob as she tells you that her life has become unbearable. She’s been crying daily for no reason. She hasn’t been able to sleep well in months. She spends most of her day lying in bed, ruminating with guilt about wasting her life. She hasn’t been able to work for the past two years. Nothing seems to matter anymore and even the things she used to enjoy seem meaningless. She’s begun to believe that her family wants her dead, and now she’s convinced they would be better off without her. She’s caused them nothing but pain. Last week she almost took an overdose of Tylenol but stopped herself because she was afraid she’d go to hell. “What is wrong with me?” she asks. “How do I stop feeling this way?”

What do you tell her? Does she have a disorder? Are there treatments you can offer her? Without a system of diagnosis, it’s hard to answer those questions.

Before 1980 psychiatric diagnosis was a little like the Wild West. There were no clear criteria that practicing psychiatrists and psychologists agreed on for deciding whether someone had a disorder or what kind of disorder they had. If there is no common language for diagnosing, say, depression, and if the definitions people use are idiosyncratic, how can we learn anything about it? If you want to begin developing answers to questions that are important to people seeking help (“How long will I feel this way?” “Is there a treatment that can help?” and even “Are my children likely to develop this problem?”) you need to start with a definition of the problem.

The arrival of the DSM-III in 1980 meant that, for the first time, mental health clinicians had a standard set of criteria for making diagnoses and treatment decisions. And researchers had a common starting point for testing the validity of these diagnoses and evaluating the effectiveness of treatments.

It’s clear that many of the disorders defined by the DSM capture important syndromes that affect many people. In 2001 the World Health Organization (WHO) catalogued the leading causes of chronic disability worldwide for young adults (ages fifteen to forty-four), including everything from heart disease and infectious disease to accidents and malnutrition. Remarkably, four of the top five slots were occupied by psychiatric disorders: depression (#1), alcoholism (#2), schizophrenia (#3), and bipolar disorder (#5).10 Our modern categories of mental illness have also been used to make important discoveries. In the past several years, by using advanced DNA chip technology and ever-larger studies, psychiatric genetic researchers have been able to identify specific DNA risk factors for schizophrenia, autism, and bipolar disorder.11–14 By pinpointing these genetic variations, we have opened new windows onto biological pathways that contribute to these disorders.

The DSM approach has clearly been useful; the problem is that it was only a starting point. Nevertheless, the appeal of some kind of formal criteria rapidly established the DSM as psychiatry’s bible. As it infiltrated clinical practice, the DSM went from being a standard to the standard for American (and, later, global) psychiatry. Scientists often focus their research on the existing categories of mental illness as though their validity were a given. For many people, both within and outside of psychiatry, the DSM categories have taken on the status of settled law.

Steven Hyman, a psychiatric neuroscientist and the former director of the National Institute of Mental Health (NIMH), has argued that this reification of the DSM has become an obstacle to improving the validity of psychiatric diagnosis and classification. Without a deep understanding of the causes of psychiatric disorders, there was no real alternative to starting with a simply descriptive system of classification. But the inevitable mismatch between rigid lists of specific diagnostic criteria and the real-world diversity of clinical presentations has become obvious. Hyman points out the irony that a classification system designed to advance research and clinical practice is now in danger of stifling them.

So what can be done to improve this state of affairs?

AN IMMODEST PROPOSAL

FOR THE REASONS I’VE ALREADY DISCUSSED, HAVING A SYSTEM for making diagnoses and treating those who suffer is important, and that requires drawing some boundaries between disorder and health. We can accept that drawing boundaries sometimes has pragmatic benefits, even though we may recognize that such boundaries inevitably involve imperfect judgments. And because our state of knowledge is evolving, the categories we construct today may not turn out to be the most scientifically sound categories possible. The challenge is to improve their validity in the most thoughtful, ethical, and conceptually coherent way that we can. That means recognizing that the lines we draw are provisional and being open to revising them as new evidence accumulates and practical priorities evolve. As the psychiatrist Kenneth Kendler and philosopher Peter Zachar have proposed,15–17 one way to do this is to iteratively refine our categories by testing how well they capture a coherent set of causal mechanisms and how well they serve the clinical purposes of diagnosis: predicting prognosis, optimizing treatment, maximizing distinctions from other diagnoses, and minimizing stigma.

Since the birth of modern psychiatry and psychology, efforts to understand mental function and mental illness have followed the notion attributed to William James nearly a century ago: “the best way to understand the normal is to study the abnormal.” And with good reason. As James’s colleague E. E. Southard put it, “Normality is baffling.”18

With only the most fragmentary picture of how the brain works, the focus in psychiatry was necessarily on the extremes—the qualitative differences that might shine a light into the black box. Lesions that knock out territories of the brain and functions of the mind provided crucial clues about neural circuitry and the architecture of mental function. Dramatic symptoms—psychosis, mania, compulsions, panic attacks, and self-induced starvation—provided a basis for constructing psychiatric syndromes.

Unfortunately, this approach has also constrained our understanding. Focusing on the abnormal led to a system of classification and diagnosis—the DSM—based on constructing categories from constellations of symptoms. Without a map of how these symptoms connect to the functional organization of the mind and brain, it’s hard to evaluate their validity.

So, as I proposed at the outset of this book, one hundred years after William James proposed his agenda for abnormal psychology, the time has come to turn his formula on its head: to encourage a different project for twenty-first-century psychiatry and psychology, guided by the principle that the best way to understand the abnormal is to study the normal. Rather than simply starting at the edges and working our way back, our goal should be to illuminate the full and vast distribution of normal. As we fill out the center, we can see its connections to the extremes—how and where the functions of the mind can be perturbed or disrupted. We’re hardly there yet, but as I’ve suggested in this book, the work is well under way.

To get there, we’ll need to begin moving beyond drawing boundaries around disorders by consensus definitions of the abnormal in favor of developing a basic understanding of how the mind and the brain develop and function—the biology and psychology of normal.

The first step is to have a conceptual framework—a way of organizing our understanding of the biology and psychology of normal. There are many ways to approach this,17 but I find Jerome Wakefield’s model of harmful dysfunction (introduced in Chapter 1) particularly helpful. You’ll recall that Wakefield defines disorders as harmful “failure of some internal mechanism to perform a function for which it was biologically designed. . . .”19

But understanding dysfunction has to start with an understanding of function. And that’s the second, more difficult step, for which the project of illuminating the biology and psychology of normal becomes essential. Of course, we can infer dysfunction even without knowing the details of function—before we knew the causes of delirium, it still would have been clear that something was wrong with brain function. But appreciating what the dysfunction is about—how it might be treated or prevented—requires a more basic understanding. That kind of understanding can dramatically improve how well we draw lines between normal and abnormal and how we interpret symptoms and disorders.

We can see the perils of ignoring these insights when we look at some of psychiatry’s ideas about mental illness before the modern era of evidence-based psychiatry. In 1953 the psychoanalyst John Rosen expressed the prevailing view of the cause of schizophrenia when he wrote, “A schizophrenic is always one who is reared by a woman who suffers from a perversion of the maternal instinct. Schizophrenia . . . is caused by the mother’s inability to love her child.” Sadly, it took too long for these kinds of theories to yield to the evidence that discredited them.

A “bottom-up” approach to the brain and the mind—that is, one that is based on how the mind works and is not constrained by our current diagnostic categories—may force us to reconsider some cherished dogma. Boundaries among diagnoses may need to be redrawn. Findings from our research group and others’ are revealing overlapping genetic influences on syndromes that have traditionally been considered quite different, including schizophrenia, autism, ADHD, bipolar disorder, depression, and anxiety disorders.20–24 We also now know that single variations in stretches of DNA called copy number variants can lead to a range of neurodevelopmental disorders, including autism, ADHD, epilepsy, schizophrenia, and intellectual disability.25–29

And some of the syndromes that the DSM has treated as qualitative categories—anxiety disorders or personality disorders, for example—may be more accurately and usefully treated as dimensions of normal. Genetic, psychological, and developmental studies of neuropsychiatric and behavioral conditions increasingly point to the conclusion that many (though not all) are quantitative extremes of a normal distribution.30–32

There may be other profound implications of grounding our approach to psychopathology in the biology of normal. For example, there’s at least one fundamental domain of the mind that’s been nearly invisible to psychiatry’s classification of mental dysfunction: anger and aggression. Like fear and anxiety, these are universal, evolved, and hardwired functions. They allow us to defend ourselves and others from harm and exploitation. Just as anxiety disorders are dysfunctions of a fear system and mood disorders are (at least partly) dysfunctions of reward systems, shouldn’t we expect dysfunctions of an aggression system? Psychiatry recognizes multiple disorders of anxiety and mood, but there is no category of anger or aggression disorders.

While I was writing this book, Thomas Insel, director of the National Institute of Mental Health, announced the launch of the Research Domain Criteria Project (RDoC). The project aims to develop new ways of classifying psychopathology by studying “basic dimensions of functioning . . . across multiple levels of analysis, from genes to neural circuits to behaviors, cutting across disorders as traditionally defined.”33, 34 This is an important effort that may well begin the transition to a bottom-up understanding of function and dysfunction.

And, finally, one of the great hopes of pursuing the biology and psychology of normal is that it will lead to more effective strategies for preventing and relieving suffering. The stark reality is that all of the widely used medications and psychotherapies for treating mental disorders—depression, bipolar disorder, schizophrenia, anxiety, obsessive-compulsive disorder—are based on a handful of discoveries that date to the 1970s or before. Medications have been helpful for many and lifesaving for some, but too often they fall short or have intolerable side effects. And for many other conditions—autism, intellectual disability, dementias—the options are even more limited. But we’ve seen how an understanding of how the brain and the mind work can open new doors. For instance, insights into the biology of social cognition led to the finding that oxytocin may enhance social functioning in autism spectrum disorders, and insights into the science of emotional memory led to the discovery of treatments that can extinguish and perhaps eliminate traumatic fears.

Illuminating the biology of normal means enlarging our view to understand not only our limitations but our talents, not only vulnerability but resilience. And, in the end, an appreciation of the full breadth of how the human mind and brain adapt to life will allow us to see ourselves and one another with compassion and wonder.