THE MEDICALIZATION OF MISERY
In late June 2011, I met Sarah Jones, a single mother of two and a care worker at a community center in West London. Sarah had a warm smile and a welcoming manner, and as she spoke about her family and work, her love for both shone through. But when the topic turned to her 7-year-old son, Dominic, she seemed suddenly overcome with anxiety.
“Dominic is a lovely boy, he really is, but last year he started getting agitated and aggressive. He was doing badly at school, being disruptive, and then he got into a fight. The school psychologist wanted Dominic to get a doctor’s assessment, and I felt under real pressure to go. So after seeing Dominic for twenty-five minutes, the doctor said he was suffering from ADHD [Attention Deficit Hyperactivity Disorder] because he had all the classic symptoms: hyperactivity, impulsivity, and inattention. The doctor said medication would help. So Dominic is now on pills—and yes, he seems less distracted sometimes, but he also doesn’t seem himself either. It feels like a part of his spirit has gone.” Sarah’s distress was palpable. “I just don’t know what to do.”
Month on month, year on year, increasing numbers of children like Dominic are being diagnosed with mental disorders like ADHD. In fact, diagnoses of ADHD have risen so sharply in the last ten years that 5.29 percent of the global child population is now thought to suffer from the condition (with prevalence rates in North America and Europe being pretty much equal at around 5 percent).26 This vaulting rise in ADHD is consistent with a growth in other childhood psychiatric disorders. If we add up the prevalence rates for all childhood disorders, for example, it is estimated that between 14 percent and 15 percent of children now suffer from a diagnosable mental disorder in any given year.27
But as high as these figures may be, they pale in comparison to those relating to the adult population. For example, the National Institute of Mental Health in the United States now claims that about 26.2 percent of all American adults suffer from at least one of the DSM disorders in a given year,28 while the Office for National Statistics on Psychiatric Morbidity in the UK reports a similar figure.29 This amounts to saying that at least one in four people on each side of the Atlantic is afflicted by a mental disorder in a given year, a figure made all the more startling when we recall that in the 1950s the rate was more like one in a hundred, and at the beginning of the twentieth century a meager one in a thousand. So what can account for this massive surge in mental disorders? Why, in just a few decades, have we apparently all become so psychiatrically unwell?
There are at least three hypotheses the mental health community uses to try to account for the escalating rates. As the book unfolds, we will look at each in greater depth, but to give you a quick snapshot, let me outline them briefly below.
The first goes like this: As the pressures of contemporary life have increased, so too have our levels of stress and strain, leading to an upsurge in poor mental health. While this explanation seems reasonable enough, as we will see later it is difficult to ascertain whether contemporary life is really so much more stressful than life many decades ago. Indeed, as many sociological studies have shown, social stress may have decreased rather than increased in recent years, therefore putting this hypothesis under strain.30
The second hypothesis is also problematic: it says that mental disorders have increased because today’s psychiatrists are better than those in the past at recognizing psychiatric disease. Perhaps advances in technology now allow clinicians to more readily spot and diagnose disorders that once slipped below their radar. While this hypothesis again has some obvious appeal, its weakness is that by and large diagnostic technology has not improved—there are still no objective tests that can confirm the validity of any psychiatric diagnosis, a fact supported by the continued low diagnostic reliability rates.
To be at our most generous, then, the first two hypotheses are, at best, plausible explanations that can partly account for the rise in disorder rates. But what if these hypotheses do not reveal the whole picture? What if they overlook a crucial yet not-so-obvious third possibility: that psychiatry, by progressively lowering the bar for what counts as mental disorder, has recast many natural responses to the problems of living as mental disorders requiring psychiatric treatment? In other words, has psychiatry, by redrawing the line between disorder and normality, actually created the illusion of a pandemic?
Let’s now look at this third hypothesis in greater depth.
2
In March 2011, a group of scientists undertook a comprehensive study of nearly one million Canadian schoolchildren between the ages of six and twelve, examining the medical diagnoses these children had received within the period of one year. The scientists were particularly interested in how many of them had been diagnosed with ADHD. Once the calculations were conducted and the results came in, the scientists were initially baffled by what they found: the precise month in which a child was born played a significant role in determining whether or not he or she would be diagnosed with ADHD.
As odd as this may sound, the figures published in the Canadian Medical Association Journal are plain to see. The line that charts the monthly diagnostic rates, rather than resembling a mountain range that peaks and dips from month to month, climbs steadily and diagonally from January through December. To translate this into numerical terms: 5.7 percent of all boys born in January were diagnosed, compared with 5.9 percent born in February and 6.0 percent born in March. The monthly rates rise incrementally from there, until boys born at the end of the year are 30 percent more likely to be diagnosed than boys born at the start.
If this figure seems startling to you, then just consider the female diagnostic rates: Girls born at the year’s end in December are 70 percent more likely to be diagnosed with ADHD than girls born in January. So what is going on here? Why are children born at the end of the year far more likely to be diagnosed with ADHD than children born at the beginning?
The clue to unraveling this puzzle has nothing to do with birth signs or weather patterns or cosmic shifts in the lunar calendar. It rather has to do with the simple fact that children in the same school year can be almost a full year apart in actual age, because children with birthdays just before the cutoff date for entering school will be younger than classmates born earlier in the year. In Canada, for example, children born at the beginning of the year (January) are eleven months older than classmates born at the end of the year (December). And at that age, an eleven-month gap represents an enormous difference in mental and emotional maturity.
As I was keen to find out more about the implications of this study, I interviewed Dr. Richard Morrow, one of its lead researchers.
“Well, the most important thing we noticed,” Morrow said candidly, “was that the younger kids in the classroom were far more likely to be diagnosed with ADHD because their relative immaturity was being mistaken for symptoms of ADHD.”
The relative immaturity of the younger children was, in effect, being wrongly recast as psychiatric pathology. “And this clearly explained for us,” continued Morrow, “why the younger you are in your class the more likely you are to be diagnosed with this condition. And this is happening not just in Canada, because we found that wherever similar studies have been conducted [e.g., the United States and Sweden] they have reached the same results—the younger you are in your class, the more likely you’ll get the diagnosis. It’s a pretty wide phenomenon.”
Morrow’s research is so important to us because it provides a clear example of what is known as medicalization—namely, the process by which more and more of our human characteristics are seen as needing medical explanation and treatment. Now, while in the Canadian study it is clear that the effects of medicalization can be deleterious, this is obviously not the case in all instances. Indeed, medicalization has often been a force for good. For example, it was right to use medicine to cure biological conditions that were once unhelpfully understood as religious problems (to be healed only by prayer or church attendance).
And yet, as we have seen, there are forms of medicalization that are clearly unhelpful. These are the forms that invasively spread medical authority where it was never designed to go. For instance, “problems” such as low achievement, certain kinds of truancy, or underperformance have attracted medical diagnoses and intervention in our children, as have many normal reactions to the demands of adult life that are labeled as so-called “stress disorders” to be biologically explained and pharmacologically treated.
The issue of medicalization is crucial because it concerns where the very limits of medical intervention should be drawn. At what point does medicalization begin to undermine the health of a population? At what point does it begin to turn what should be a matter for spiritual, philosophical, or political understanding and action into an issue to be managed by medicine alone? This question has particular relevance for psychiatry, which, as we will soon see, has been accused more often than any other medical specialism of incorrectly medicalizing our normal actions and responses. The question for us right now, then, is this: to what extent is that accusation true?
3
In an interview for a BBC documentary in 2007, the film’s maker, Adam Curtis, posed this very question to Robert Spitzer. He asked Spitzer whether the DSM had committed any errors. More precisely, he asked whether when creating DSM-III his taskforce had adequately distinguished between human experiences that were disordered and human experiences that were not. In effect, had the taskforce, when creating its list of mental disorders, wrongly labeled many normal human feelings of sadness and anxiety as indicators of medical disorders that required treatment?
Spitzer, with noticeable regret, admitted that this had occurred. He then went on to explain why.
“What happened is that we made estimates of prevalence of mental disorders totally descriptively, without considering that many of these conditions might be normal reactions which are not really disorders. And that’s the problem. Because we were not looking at the context in which those conditions developed.” In other words, Spitzer’s DSM only described the symptoms of each disorder, but never asked whether or not these so-called symptoms could, in some circumstances, actually be normal human reactions to difficult life situations.
An incredulous Curtis therefore said to Spitzer: “So you have effectively medicalized ordinary human sadness, fear, ordinary experiences—you’ve medicalized them?”
“I think we have, to some extent,” responded Spitzer. “How serious a problem it is is not known. I don’t know if it is 20 percent, 30 percent … I don’t know. But that is a considerable amount if it is 20 percent or 30 percent.”31
In this interview with Adam Curtis, Spitzer admitted that the DSM-III wrongly reclassified large parts of normal human experience—sadness, depression, grief, anxiety—as indicators of mental disorders that required medical treatment. This error occurred because his taskforce was only interested in the experiences that characterized the disorder. It was not interested in understanding the individual patient’s life or why they suffered from these experiences. Because these contextual factors were overlooked, experiences of sadness, anxiety, or unhappiness were often listed as symptoms of underlying disorders, rather than seen as natural and normal human reactions to certain life conditions that needed to be changed.32
You’ll remember that I met Spitzer in early May in his house in leafy Princeton. As we sat eating lunch, I took the chance to ask him about the Curtis interview. Once I had recounted to him his exchange with Curtis, he slowly put down his spoon and turned his head in my direction. It was immediately clear to me he was unwilling to elaborate on what he had previously said.
It also seemed clear that he had shifted his position since that earlier interview with Curtis. While he still agreed that normal reactions were being recast as psychiatric illness, he now seemed keener to locate the cause of this problem elsewhere: not in how the DSM was constructed and written, as he had confessed to Curtis, but in how the manual is being used. As Spitzer explained:
“[In clinical practice] there is often too much emphasis placed by some on the diagnostic criteria of the DSM. [In simple terms, if a person has this set of symptoms, then they have this disorder.] This approach ignores other things that are important when making an assessment, such as the context in which the person became ill. So there has been a move toward an overemphasis on diagnostic criteria, and a neglect of assessing the social context in which the person is living.”
In other words, problems emerge when a psychiatrist simply tries to match the patient’s experiences with one of the disorders in the book without investigating why the person is suffering as they are. After all, perhaps he is suffering because he has just lost his job, or someone dear to him, or because he’s struggling with his identity, with poverty, with failure in love or work—who knows, perhaps his life just hasn’t turned out as he’d hoped. There are countless understandable reasons why a person may suddenly start manifesting emotions or behaviors that can be easily misread as “symptoms” of “major depression” or “anxiety disorder”—reasons that may have nothing to do with the person being psychiatrically unwell.
What Spitzer told me, in essence, is that when clinicians ignore such contextual factors, they’ll see mental disorders where there are none. In these cases, diagnoses are assigned unnecessarily. And this of course helps us unravel what we encountered in the Canadian study—younger children in the classroom being diagnosed with ADHD because the context of their relative age had not been taken into account. When a consideration of context is omitted, in other words, damaging diagnostic oddities ensue.
4
In 1994, Spitzer’s revolutionary DSM-III had finally reached the end of its shelf life. It was now time for it to be replaced by a new edition of the manual, entitled DSM-IV. The person who replaced Spitzer as chair was a psychiatrist called Dr. Allen Frances. Frances was appointed for several reasons. First, at the time he was head of psychiatry at Duke University, so he was believed to have the credentials. The APA also made it clear it wanted someone who had worked across many fields, and here again Frances seemed to fit the bill: not only had he trained in psychoanalysis, but he had conducted research on other therapeutic approaches, including studies of medications for depression and anxiety. Furthermore, because Frances had been involved, if only minimally, in the construction of Spitzer’s DSM-III, he knew how diagnostic books are made and could apply that knowledge in the construction of DSM-IV.
When I interviewed Frances in May 2012, his DSM-IV was still being used and sold around the world. This meant that, apart from one minor revision in 2000, the manual he published in 1994 had for nearly two decades shaped research and practice within the global psychiatric community.33 What I wanted to know from Frances, therefore, was whether, with the benefit of hindsight, he felt his DSM-IV Taskforce had made any mistakes. In short, did his manual unleash any unintended negative consequences that he now regrets?
“Well, the first thing I have to say about that,” answered Frances confidently, “is that DSM-IV was a remarkably unambitious and modest effort to stabilize psychiatric diagnosis, and not to create new problems. This meant keeping the introduction of new disorders to an absolute minimum.”
What Frances meant by this was that his taskforce added only around eight new disorders to the main manual.34 This is indeed a modest number considering that Spitzer had introduced around eighty. And yet, from another standpoint, this claim to modesty is somewhat wobbly—it ignores that Frances also included an additional thirty disorders for “further study” in the appendix, and that he subdivided many existing disorders too. So if we include these appendix disorders and subdivisions (all of which patients can be diagnosed with), Frances actually expanded the DSM from 292 to 374 disorders.
But Frances, wrongly I believe, had chosen not to count these inclusions and subdivisions, for he continued: “Yet despite that conservatism, we learned some pretty tough lessons. We learned overall that even if you make minimal changes to the DSM, the way the world uses the manual is not always the way you intended it to be used.”
Letting his questionable claim of conservatism stand for a moment, I asked Frances to elaborate on the pretty tough lessons he had learned.
“We added a Bipolar II [this is for individuals who have episodes of depression along with hypomanic episodes, a milder form of mania that falls short of the full-blown condition]. We also added Asperger’s disorder [this was to cover people who didn’t have full-blown autism but who had considerable problems with autistic-like symptoms], and finally we added ADHD [for people who had attention issues coupled with hyperactivity]. And, well, these decisions helped promote three false epidemics in psychiatry.”
Trying to sound unfazed, I asked Frances to clarify what he meant by three false epidemics.
“We now have a rate of autism that is twenty times what it was fifteen years ago. By adding Bipolar II, we also doubled the ratio of bipolar versus unipolar depression, and that’s resulted in lots more use of antipsychotic and mood-stabilizer drugs. We also have rates of ADHD that have tripled, partly because new drug treatments were released that were aggressively marketed. So every decision you make has a tradeoff, and you can’t assume the way you write the DSM will be the way it’ll be used. There will be so many pressures to use it in ways that will increase drug sales, increase school services, increase disability services, and so forth.”
At this point in our interview I could not help but recall young Dominic and the Canadian schoolchildren, all of whom had been diagnosed with ADHD. Was the creator of the modern ADHD category now admitting that potentially millions of children just like them (not to mention the adults) were being wrongly diagnosed with this and other mental health conditions?
I put the question to him directly: “Are you saying that the way the DSM is being used has led to the medicalization of a number of people who really don’t warrant their diagnoses?”
“Exactly.”
“Can you put a figure on how many people have been wrongly medicalized?”
“There is no right answer to who should be diagnosed. There is no gold standard for psychiatric diagnosis. So it’s impossible to know for sure, but when the diagnosis rates triple over the course of fifteen years, my assumption is that medicalization is going on.”
Once in a while when conducting interviews, you hear a confession that hits like a thunderclap. This, for me, was one of those moments. Here was the creator of DSM-IV admitting that many of the new disorders his taskforce had included actually helped trigger the unnecessary medicalization and medication of potentially millions of people.
But is this the whole story? Could things actually be even worse? Frances’s admission relates only to the new disorders he included—but what about all the old disorders (around 292) that Frances imported directly into his DSM-IV? After all, Frances’s team significantly reformulated only four of the 292 disorders inherited from Spitzer. In other words, while Frances’s “conservatism” meant many new disorders were placed in the appendix rather than in the main text, did it not also allow the continued existence of countless disorders that frankly had woeful scientific support?
For example, some of the more eccentric disorders Frances’s taskforce incorporated into DSM-IV, and which also, incidentally, are contained in the ICD, included: Stuttering (disturbance in normal fluency and time patterning of speech); Premature Ejaculation (which requires no explanation); Caffeine-Related Disorders (caffeine withdrawal and dependency); Expressive Language Disorder (below average language skills); Social Phobia (shyness and/or fear of public speaking); Sexual Aversion Disorder (absence of desire for sexual activity); Reading Disorder (falling substantially below the reading standard for your age, intelligence, and age-appropriate education); Female Orgasmic Disorder (persistent or recurrent delay in or absence of orgasm); Noncompliance with Treatment (a diagnosis that can be given when the patient resists treatment); Conduct Disorder (repetitive or persistent violation of societal norms or others’ rights); Transsexualism (identifying with a gender not of your sex); Oppositional Defiant Disorder (for children with irritable mood swings, and who overly defy authority), to name a few.
Since most critics find indefensible the idea that the above problems are psychiatric disorders, I asked Frances why he had carried these and the other disorders from Spitzer’s DSM into DSM-IV. Why didn’t he, as chair of DSM-IV, simply remove them on the grounds that they were eccentric and enjoyed remarkably weak scientific support?
“If we were going to either add new diagnoses or eliminate existing ones,” Frances explained, “there had to be substantial scientific evidence to support that decision. And there simply wasn’t. So by following our own conservative rules we couldn’t reduce the system any more than we could increase it. Now, you could argue that is a questionable approach, but we felt it was important to stabilize the system and not make arbitrary decisions in either direction.”
“But one of the problems with proceeding in that way,” I pressed, “is that it assumes the DSM system you inherited from Spitzer was fit for its purpose. For example, it assumes that the disorders Spitzer included and the diagnostic thresholds Spitzer’s team set [i.e., the number of symptoms you need to warrant any diagnosis] were themselves scientifically established.”
“We did not assume that at all,” asserted Frances. “We knew that everything that came before was arbitrary [Frances quickly corrects himself]; we knew that most decisions that came before were arbitrary. I had been involved in DSM-III. I understood their limitations probably more than most people did. But the most important value at that time was to stabilize the system, not change it arbitrarily.”
I pressed harder now. “So you are essentially saying that you set out to stabilize the arbitrary decisions that were made during the construction of DSM-III?”
“In other words,” corrected Frances, “it felt better to stabilize the existing arbitrary decisions than to create a whole assortment of new ones.”
At this point I simply did not know what else to say. Frances, it seemed, had said it all. While his “conservatism” had stopped his taskforce from including excessive numbers of new disorders (if we exclude the appendix inclusions and subdivisions), it had also led him to import Spitzer’s mistakes into a DSM that, as I write, has now been in medical use for eighteen years. Not only did the eccentric and scientifically unestablished disorders remain, but so too did many of the low thresholds people had to meet in order to warrant receiving a diagnosis. This meant that the dramatic medicalization of normal human reactions to the problems of everyday life was allowed to proceed unchecked.
5
Toward the end of my interview with Allen Frances, I asked him whether he felt that the problem of medicalization would be solved someday soon. I asked this question because, as I write, a new edition of the DSM (called DSM-5) is being prepared for publication in May 2013. Would this new DSM, many years in the making, rectify the problems of the past?
“That question is crucial,” responded Frances passionately, “because the situation I think is only going to get worse. DSM-5 is proposing changes that will dramatically expand the realm of psychiatry and narrow the realm of normality—resulting in the conversion of millions more patients, millions more people from currently being without mental disorders to being psychiatrically sick. What concerns me about this reckless expansion of the diagnostic boundaries,” continued Frances, “is that it will have many unintended consequences which will be very harmful. The ones I am most particularly concerned about are those that will lead to the excessive use of medication, and most particularly antipsychotic medication because it leads to excessive weight gain.”
Frances was particularly disturbed by DSM-5’s proposal to make ordinary grief a mental disorder. While previous editions of the DSM highlighted the need to consider excluding people who are bereaved from being diagnosed with a major depressive disorder, in the draft version of DSM-5 that exclusion for bereavement has been removed. This means that feelings of deep sadness, loss, sleeplessness, crying, inability to concentrate, tiredness, and low appetite, which continue for more than two weeks after the death of a loved one, could actually soon warrant the diagnosis of depression, even though these reactions are simply the natural outcome of sustaining a significant loss.35
“Reclassifying bereavement as a symptom of depression will not only increase the rate of unnecessary medication,” said Frances angrily, “but also reduce the sanctity of bereavement as a mammalian and human condition. It will substitute a medical ritual for a much more important time-honored one. It seems to me there are cultural rituals—powerful and protective—that we shouldn’t be meddling with. But by turning a normal painful human experience into a medical illness we are doing precisely that.”
“Are there any other new inclusions in DSM-5 that worry you?”
“Yes, sure—there is the new Generalized Anxiety Disorder which threatens to turn the aches and pains and disappointments of everyday life into mental illness. There is Minor Neurocognitive Disorder that will likely turn the normal forgetfulness of aging into a mental illness. There is Disruptive Mood Dysregulation Disorder which will see children’s temper tantrums become symptoms of disorder. These changes will expand the definition of mental illnesses to include more people, exposing more to potentially dangerous psychiatric medications.”36
“So where do we go from here?” I asked Frances, feeling rather bleak. “What will happen when this version goes forward?”
“I am worried that the already existing diagnostic inflation will be made much worse,” responded Frances, “and excessive medication treatment will increase. This will also lead to a misallocation of resources away from the more severely ill, who really need help, and towards people who don’t need a diagnosis at all and will receive unnecessary and harmful treatment.”
“Will this do damage to the credibility of psychiatry itself?”
“Well, James, I think things have been unfortunate in that regard,” said Frances pensively, “because that’s already happened.”
For one reason or another, it just felt appropriate to end our interview there.
6
The criticisms Frances expressed to me regarding DSM-5 have now spread through the wider mental health community. During 2012 a flurry of damning editorials, including two eloquent pieces in The Lancet and one in the New England Journal of Medicine, strongly criticized the DSM-5’s pathologization of grief. This chorus of dissent has been echoed in over one hundred critical articles in the world press and in an appeal by over one hundred thousand grievers worldwide. Furthermore, in 2012 an online petition went live protesting against the changes proposed by the DSM-5. It was endorsed by over fifty organizations related to the mental health industry, including the British Psychological Society, the Danish Psychological Society, and the American Counseling Association.
The arguments they advanced were similar to those made by Allen Frances: by lowering diagnostic thresholds, more people may be unnecessarily branded mentally unwell; by including many new disorders that appear to lack scientific justification, there will be more inappropriate medical treatment of vulnerable populations (children, veterans, the infirm, and the elderly); and by deemphasizing the sociocultural causes of suffering, biological causes will continue to be wrongly privileged. In short, the petition concludes, “In light of the growing empirical evidence that neurobiology does not fully account for the emergence of mental distress, as well as new longitudinal studies revealing long-term hazards of psychotropic treatment, we believe that these changes pose substantial risks to patients/clients, practitioners, and the mental health profession in general.”37
With all this mounting pressure on the DSM to mend its ways, who better to ask than the chair of DSM-5 whether these criticisms would have any impact? I tried to get the interview, but I was unsuccessful. This is because, as I would learn, all members of the DSM-5 taskforce have been required by the APA to sign confidentiality agreements, which makes talking frankly about their work legally precarious. So instead I put the question to the next best person, Dr. Robert Spitzer. Did Spitzer think the DSM-5 committee should stop and reconsider before going through with publication?
“Well, they have already had to postpone publication several times,” said Spitzer candidly, “because of all the problems. So I just think the DSM committee should ask the APA for an extension until all the work has been done properly. But this is not happening, and I think it is because the APA wants it published next year—it needs the huge amount of money sales will bring.”
Indeed, the APA has spent around $25 million developing DSM-5 and, with its coffers now low, needs to recoup its investment. Since it relies on the $5 million a year the DSM makes from its global sales, the APA now faces a real dilemma, as Allen Frances pointed out: “Will it put public trust first and delay publication of DSM-5 until it can be done right? Or will it protect profits first and prematurely rush a second- or third-rate product into print?”
The answer came on December 1, 2012, when, having only minimally addressed a handful of the criticisms, the APA approved the new DSM-5 for global publication. As the chair of DSM-5, David J. Kupfer, joyously put it: “I’m thrilled to have the Board of Trustees’ support for the revisions and for us to move forward toward the publication.” What this approval means is that by the time you have this book in hand, on a shelf somewhere very near you will sit the manual that will influence mental health practice for decades to come.
What further damage all this will do, of course, remains to be seen. But if the legions of authoritative critics are to be believed, then the future does not look bright. We can expect ever more diagnosed schoolchildren like those in Canada, and vastly more medicated youths like young Dominic in London. In short, we can expect vaulting numbers of children and adults alike to become yet more statistical droplets in the ever-expanding pool of the mentally unwell.