9

SAD, MAD, AND BAD

Myths about Mental Illness

Myth #37 Psychiatric Labels Cause Harm by Stigmatizing People

How would you feel if your friends thought you had paranoid schizophrenia? David Rosenhan (1973b), a professor of psychology and law, posed this question as a means of suggesting that psychiatric diagnoses, or labels, are stigmatizing—meaning they cause us to view people who’ve received these labels negatively. He believed it self-evident that such labels as “paranoid schizophrenia” tainted patients with the stigma of mental illness, causing other people to treat them in prejudiced and even harmful ways. To reduce this stigma, Rosenhan argued that mental health professionals should avoid global diagnostic labels, such as “major depression,” in favor of objective behavioral descriptions, like “looks sad,” “cries a lot,” and “walks and talks slowly.”

In response, psychiatrist Robert Spitzer (1976) wondered whether this approach would really affect people’s attitudes or behavior. He rephrased Rosenhan’s question using behavioral terms rather than a diagnostic label: How would you feel if your colleagues thought that you had an unshakable but utterly false conviction that other people were out to harm you? Spitzer contended that the stigma of mental illness stems from people’s reactions to aberrant thoughts and behaviors, such as paranoid delusions, not to the psychiatric diagnoses that professionals use to classify mental disorders. Who’s right?

To many people, the answer to this question begins and ends with a famous paper by Rosenhan (1973a) entitled “On Being Sane in Insane Places.” Eight mentally healthy individuals—including Rosenhan himself—presented themselves to a total of 12 mental hospitals. According to plan, all pretended to exhibit mild anxiety and requested admission based on a supposed complaint of unusual auditory hallucinations, namely hearing voices that repeated the words “empty,” “hollow,” and “thud.” Interestingly, these “pseudopatients” (fake patients) were admitted on all 12 occasions: one admission carried a diagnosis of manic depression, the other 11 a diagnosis of schizophrenia. Once admitted, the pseudopatients stopped faking any symptoms of mental disorder. Aside from extensive note-taking for the purpose of data collection, the pseudopatients acted normally to see whether the hospital staff would discover their absence of illness and release them. Yet surprisingly, the pseudopatients were kept in the hospital for an average of 19 days and were discharged with the diagnoses they had received at admission; the original condition was merely reclassified as “in remission,” meaning “no longer displaying symptoms of illness.” Rosenhan interpreted these findings to mean that mental health professionals can’t distinguish normality from abnormality, because all of the pseudopatients retained their original diagnoses upon discharge.

The pseudopatients observed negligent and even abusive treatment of their fellow patients, much of which Rosenhan (1973a) attributed to the stigmatizing effects of labels. He claimed that “psychiatric diagnoses … carry with them personal, legal, and social stigmas” (p. 252) and cast patients in a hopeless light, as “the label sticks, a mask of inadequacy forever” (p. 257). Rosenhan concluded by conjecturing that “In a more benign environment, one that was less attached to global diagnosis, [the staff’s] behaviors and judgments might have been even more benign and effective” (p. 257).

Rosenhan’s study created a scientific and media sensation. In a flurry of comments on this article, scholars observed that Rosenhan (1973a) had used seriously flawed methodology, ignored relevant data, and reached unsound conclusions. In perhaps the most devastating critique, Spitzer (1976) contended that Rosenhan’s own data ironically offered the best evidence against his claims. For example, recall that the discharge diagnoses for all 12 admissions were amended to “in remission.” This change means that the abnormal behavior noted at intake was no longer present at discharge. Spitzer gathered data suggesting that “in remission” diagnoses were extremely rare, if not unheard of, in psychiatric hospitals. The fact that all 12 diagnoses were changed in the same unusual way shows just how capably the staff recognized normal behavior when the pseudopatients stopped faking symptoms. As Spitzer noted, this fact counters Rosenhan’s claim that mental health professionals can’t distinguish normality from abnormality.

Even today, countless sources inform readers that psychiatric labels are stigmatizing and potentially harmful. A website sponsored by the U.S. Substance Abuse and Mental Health Services Administration (http://mentalhealth.samhsa.gov/publications/allpubs/SMA96-3118/default.asp) asserts that “labels lead to stigma” and that “words can be poison,” listing “depressed, schizophrenic, manic, or hyperactive” as examples of hurtful labels. In a discussion of the dangers of diagnosis, sociologist Allan Horwitz and social worker Jerome Wakefield (2007) referred to the “vast evidence” that psychiatric diagnosis “leads to harmful stigma” (p. 23). Moreover, despite withering critiques, many scholarly texts still present Rosenhan’s (1973a) study in an uncritical fashion. This study is among the most frequently cited studies in introductory psychology textbooks (Gorenflo & McConnell, 1991), has been reprinted in several edited books of classic readings in psychology (Heiner, 2008; Henslin, 2003; Kowalski & Leary, 2004), and has been cited in more than 1,100 journal articles (see also Ruscio, 2004). For example, in his widely used abnormal psychology text, Ronald Comer (2007) wrote that Rosenhan’s study demonstrates “that the label ‘schizophrenic’ can itself have a negative effect not just on how people are viewed but on how they themselves feel and behave” (p. 432). In a lecture in the Great Ideas of Psychology audiotape series, psychologist Daniel Robinson (1997) told his listeners that “what Rosenhan’s study made clear is that once one is diagnosed as being an X, one is going to be treated as an X … because the setting has established that you are an X, and you will be an X forever more.”

Back in the 1970s, Spitzer had asked Rosenhan to provide access to his data to verify his conclusions. Granting access to data for an independent review by competent professionals is required by ethical standard 8.14 of the American Psychological Association (2002). Spitzer (1976) reported that Rosenhan agreed to provide the data once he completed a book about the study. But the book never materialized, and neither did Rosenhan’s data. Thirty years later, writer Lauren Slater (2004) featured Rosenhan’s work in a chapter of her book Opening Skinner’s Box: Great Psychological Experiments of the Twentieth Century. She gave readers the impression not only that Rosenhan’s conclusions were valid, but also that she’d replicated them in a follow-up study in which she presented herself as a pseudopatient at various mental hospitals: “Let me tell you, I tried this experiment. I actually did it” (Slater, 2004, p. 89). Spitzer and several other prominent mental health researchers repeatedly asked Slater to provide copies of records from her hospital encounters, but she didn’t comply. Only after Spitzer and his colleagues (Spitzer, Lilienfeld, & Miller, 2005) published a critique did Slater (2005) write that “I never did such a study; it simply does not exist” (p. 743). To this day, it’s not clear whether Slater’s claimed replication ever took place.

Even though Rosenhan and Slater never provided data for independent scientific review, many published studies have teased apart the influence of psychiatric diagnoses and aberrant behaviors on the stigma of mental illness. Some investigators have confounded these two potential sources of stigma—John Ruscio (2004) discussed fatal flaws in the widely cited studies of Ellen Langer and Robert Abelson (1974) and Maurice Temerlin (1968)—but researchers have conducted a number of better-controlled experiments. For example, the written description of a target individual can include a psychiatric diagnosis (such as bipolar disorder), a behavioral description (such as alternating periods of clinically elevated and depressed moods), both, or neither. By varying labels and behaviors independently, investigators can determine how these two factors influence judgments about people with mental illnesses. One early review led its authors to conclude that “It seems likely that any rejection directed towards psychiatric patients comes from their aberrant behavior rather than from the label that has been applied to them” (Lehmann, Joy, Kreisman, & Simmens, 1976, p. 332). A number of later studies support this conclusion (Ruscio, 2004).

Even though a substantial body of evidence indicates that psychiatric labels themselves don’t cause harm, the belief that diagnoses are responsible for the stigma associated with mental illness persists. Because the stigma itself is undeniably real, psychiatric diagnoses provide an easy target for the understandable frustrations experienced by those who suffer from mental illness and those who care for them. Yet the argument that diagnoses themselves, rather than the behaviors associated with them, produce this stigma was never plausible to begin with. First, let’s consider the fact that the stigma of mental illness substantially predates all psychiatric classification systems. The Diagnostic and Statistical Manual of Mental Disorders (DSM), which is used by mental health practitioners around the world, was originally published in 1952, and the most recent edition was published in 2000 (American Psychiatric Association, 2000). Even though less formal classifications existed for a few decades prior to the first DSM, the stigma of mental illness has been present for centuries.

Further problems for the argument that diagnoses themselves cause stigma are that diagnoses are confidential and one needn’t be diagnosed by a mental health professional to be stigmatized. Unless people care to share their formal diagnoses, others won’t even know what these diagnoses are. For example, once released from mental hospitals, the pseudopatients in Rosenhan’s (1973a) study would have had to tell people they’d been diagnosed with schizophrenia for anyone to know this information. Why would people concerned about being stigmatized tell others their diagnoses? In addition to or instead of the direct observation of aberrant behaviors, a plausible source of the stigma of mental illness is knowledge that someone has visited a mental health practitioner. It’s not uncommon to assume that anyone who sees a therapist must suffer from a mental disorder, and laypersons informally label each other all the time using derogatory terms like “crazy,” “loony,” or “nuts.” This process of “informal labeling,” as some have called it, is often sufficient to give rise to the stigma associated with mental illness, whether or not one actually receives a psychiatric diagnosis or shares it with others (Gove, 1982).

Psychiatric diagnoses play important roles that would be difficult to fulfill if we abandoned them. Diagnoses are essential for many purposes, including communication among mental health professionals; the coordination of research activities around the world; the provision of mental health services; reimbursement from insurance companies; and connecting patients to the most effective treatments. Certainly, no one believes that the DSM is perfect. We should make every effort to improve the existing psychiatric classification system, but attacking it on the unsupported grounds that diagnoses are stigmatizing is counterproductive.

Suppose that people in your life observed that you had an unshakable but utterly false conviction that everybody was out to harm you. It’s likely that any stigma associated with your mental illness would exist regardless of whether anybody knew you’d been diagnosed with paranoid schizophrenia. Rather than blaming stigma on psychiatric labels, Patrick Corrigan and David Penn (1999) discussed a number of more constructive ways to reduce stigma, including community-based educational and contact-oriented programs and compassionately conveying diagnoses in the context of humane and effective treatments.

Furthermore, several studies demonstrate that diagnostic labels can actually exert positive effects on stigma, probably because they provide observers with explanations for otherwise puzzling behaviors. In one study, peers rated essays written by children diagnosed with attention-deficit/hyperactivity disorder more positively than those written by non-diagnosed children (Cornez-Ruiz & Hendricks, 1993). In another study, adults rated mentally retarded children more favorably when the children received a diagnostic label than when they didn’t (Seitz & Geske, 1976). Similarly, Michelle Wood and Marta Valdez-Menchaca (1996) found positive effects of labeling children with expressive language disorder and suggested that a diagnostic label “may cause teachers to adopt a more supportive attitude toward the child … labeling can provide a more informative context in which to evaluate the relative strengths and weaknesses of a child with disabilities” (p. 587).

The history of clinical psychology and psychiatry reveals that as we come to better understand mental illnesses and as their treatment becomes more effective, stigma subsides. In the meantime, when individuals with mental illness experience stigma we should be careful not to place the blame where it doesn’t belong—namely, on psychiatric diagnoses that can help to identify the source of their suffering.

Myth #38 Only Deeply Depressed People Commit Suicide

Before reading on, close your eyes and picture a suicidal person. What do you see?

Odds are you’ll imagine a profoundly depressed individual, perhaps crying uncontrollably, contemplating whether life is worth living. There’s certainly a large grain of truth to this description: Clinical depression—often called “major depression”—is a powerful predictor of suicide attempts and completions (Cheng, Chen, Chen, & Jenkins, 2000; Coppen, 1994; Harwitz & Ravizza, 2000; Moscicki, 1997). Indeed, the lifetime risk of suicide for a person with major depression is about 6% (Inskip, Harris, & Barraclough, 1998). This percentage is considerably lower than the 15% figure that had long been accepted (Guze & Robins, 1970), but still far higher than the approximately 1% lifetime risk of suicide for a person drawn from the general population. Although friends, relatives, and loved ones sometimes think of depression merely as a “passing phase,” there’s no doubt that it’s often a life-threatening condition.

Yet many people who are aware of the link between depression and suicide assume that only depressed people take their own lives. For example, the director of a state suicide prevention foundation told a reporter “I didn’t know he was depressed” after she learned of her husband’s unexpected suicide (http://blog.cleveland.com/health/2008/03/boomers_suicide_trend_continue.xhtml). In one study of 331 undergraduates enrolled in introductory psychology courses, 43% responded “True” to the item, “If assessed by a psychiatrist, everyone who commits suicide would be diagnosed as depressed” (Hubbard & McIntosh, 1992). A later study of undergraduate education majors revealed lower numbers, but still found that 25% endorsed this item (MacDonald, 2007).

Many people are therefore surprised to learn that people who aren’t deeply depressed sometimes kill themselves. The belief that only clinically depressed people commit suicide is potentially dangerous, because friends, relatives, and significant others may assume erroneously that a person without serious depressive symptoms is “safe” and therefore doesn’t require immediate psychological attention. Yet research shows that between 13% and 41% (depending on the investigation) of people who commit suicide don’t meet diagnostic criteria for major depression. About 10% have diagnoses of either schizophrenia or substance use disorders, like alcoholism (Rihmer, 2007). In addition to depression, schizophrenia, and substance use disorders, other diagnoses significantly associated with suicide attempts, completions, or both are:

Still, there’s some controversy regarding the relation of these conditions to suicide attempts and completions, because some of them are frequently “comorbid” with major depression, meaning they often co-occur with it in the same individuals. So at least some of the apparent association of these conditions with suicidal behavior may be due to their overlap with depression (Cox, Direnfeld, Swinson, & Norton, 1994; Hornig & McNally, 1995). Nevertheless, a number of researchers have found that even after accounting for depressive symptoms, at least some of these conditions still predict suicidal behavior. For example, patients with borderline personality disorder, either with or without depression, are about twice as likely to attempt suicide as patients with depression alone (Kelly, Soloff, Lynch, Haas, & Mann, 2000). The evidence concerning whether panic disorder alone—that is, without comorbid depression—predicts suicide is more mixed (Vickers & McNally, 2004; Weissman et al., 1989).

For reasons that are mysterious, about 5-10% of people who commit suicide have no diagnosable mental disorder at all (Solomon, 2001). At least some of these individuals probably suffer from “subthreshold” symptoms of one or more mental disorders, meaning they barely fall short of meeting the formal diagnostic criteria for these conditions. But an undetermined number probably commit what some have termed “rational suicide,” a carefully considered decision to end one’s life in the face of terminal illness or severe and untreatable pain (Kleespies, Hughes, & Gallacher, 2000; Werth & Cobia, 1995).

There are other reasons to believe that depression isn’t necessarily the only, or even the most important, predictor of suicide. First, in some studies hopelessness has been a better predictor of suicide than depression itself (Beck, Brown, & Steer, 1989; Beck, Kovacs, & Weissman, 1975; Wetzel, 1976). That’s probably because people are most likely to kill themselves when they see no means of escape from their psychological agony. Second, although depression actually tends to decrease in old age (see Myth #9), the rates of suicide increase sharply in old age, especially among men (Joiner, 2005). One likely reason for this striking discrepancy between the age trends in depression and suicide is that the elderly are more medically frail and therefore less likely to survive suicide attempts, such as poisoning, than younger people. Another reason is that suicide attempts among the elderly tend to be more serious in intent (Draper, 1996). For example, compared with younger people, the elderly are more likely to use lethal means of attempting suicide, such as shooting themselves in the head (Frierson, 1991).

This discussion leads us to a closely related potential myth: Many people assume that the risk for suicide decreases as a severe depression lifts. In one survey of undergraduates, 53% responded “False” to the statement, “A time of high suicide risk in depression is when the person begins to improve” (Hubbard & McIntosh, 1992, p. 164). Yet there’s actually evidence that suicide risk may sometimes increase as depression lifts (Isaacson & Rich, 1997; Keith-Spiegel & Spiegel, 1967; Meehl, 1973), perhaps because severely depressed people begin to experience a return of energy as they improve (Shea, 1998). During this time interval they may be in a hazardous “window” during which they’re still depressed yet now possess sufficient energy to carry out a suicide attempt.

Nevertheless, the research support for this claim is mixed, because depressed patients who begin to experience improved mood but don’t fully recover may be more suicidal to begin with than other depressed patients (Joiner, Pettit, & Rudd, 2004).

So improvement in mood may not cause increased suicide risk, although the issue isn’t resolved. Still, it’s safe to say that one should never assume that a deeply depressed person is “out of the woods” once his or her mood begins to brighten.

Myth #39 People with Schizophrenia Have Multiple Personalities

“Today, I’m feeling schizophrenic—of two minds, if you like.”

“Most philosophers have a schizophrenic attitude toward the history of science.”

“We face a dangerously schizophrenic approach to educating our young.”

“There is, of course, an easy answer for this seeming moral schizophrenia: the distance between the principles and the policy … between the war on terror and the war in Iraq” (quotation from a journalist criticizing President George W. Bush’s approach to the war in Iraq).

These quotations, pulled from various Internet sites, reflect a prevalent misconception, namely, that schizophrenia is the same thing as “split personality” or “multiple personality disorder.” A popular bumper sticker and key chain even reads: “I was schizophrenic once, but we’re better now”; another bumper sticker reads “I used to be a schizophrenic until they cured me—now I’m just lonely.”

One prominent introductory psychology textbook goes so far as to say that “schizophrenia is probably the most misused psychological term in existence” (Carlson, 1990, p. 453). As this and other textbooks note, schizophrenia differs sharply from the diagnosis of dissociative identity disorder (DID), once known as multiple personality disorder (American Psychiatric Association, 2000). Unlike people with schizophrenia, people with DID supposedly harbor two or more distinct “alters”—personalities or personality states—within them at the same time, although this claim is scientifically controversial (Lilienfeld & Lynn, 2003). One familiar example of DID is the so-called “split personality,” in which two alters, often opposite to each other in their personality traits, coexist. In the split personality, one alter might be shy and retiring, the other outgoing and flamboyant. Robert Louis Stevenson’s classic 1886 novel, The Strange Case of Dr. Jekyll and Mr. Hyde, is probably the best-known illustration of the split personality in popular literature.

Nevertheless, many psychologists find the assertion that DID patients possess entirely distinct and fully formed personalities to be doubtful (Ross, 1990; Spiegel, 1993). It’s far more likely that these patients are displaying different, but exaggerated, aspects of a single personality (Lilienfeld & Lynn, 2003).

Even some articles in scientific journals confuse schizophrenia with DID. One recent article published in a medical journal featured the subtitle The dermatologist’s schizophrenic attitude toward pigmented lesions and went on to argue that although dermatologists have been on the forefront of educating the public about risk factors for skin cancer, many ignore patients’ concerns about their skin blemishes (Dummer, 2003). An article entitled Recent developments in the genetics of schizophrenia, which appeared in a journal devoted to the genetics of brain disorders, stated that “Schizophrenia, which is also called ‘split personality,’ is a complex and multifactorial mental disorder with variable clinical manifestations” (Shastry, 1999, p. 149).

The schizophrenia-multiple personality misconception is surprisingly widespread. In one survey, 77% of students enrolled in introductory psychology courses endorsed the view that “a schizophrenic is someone with a split personality” (Vaughan, 1977, p. 139). Later studies found this number to be a bit lower—about 50% among college students, 40% among police officers, and nearly 50% among people in the community (Stuart & Arboleda-Florez, 2001; Wahl, 1987).

This misconception has also found its way into popular culture. The 2000 comedy film, Me, Myself, and Irene, starring Jim Carrey, features a man supposedly suffering from schizophrenia. Yet he actually suffers from a split personality, with one personality (Charlie) who’s mellow and another (Hank) who’s aggressive. In the film, Carrey’s character switches unpredictably from “gentle” to “mental.” After the NBC show, My Own Worst Enemy, starring Christian Slater as a spy with a split personality, debuted in October of 2008, numerous television critics erroneously referred to Slater’s character as a schizophrenic (Perigard, 2008). The toy industry has contributed to the confusion too: one of G. I. Joe’s action-figure enemies goes by the fear-inspiring name of Zartan, whom the toy makers describe as an “extreme paranoid schizophrenic” who “grows into various multiple personalities” (Wahl, 1997). Unfortunately, few articles on schizophrenia in popular magazines even discuss the confusion between schizophrenia and DID (Wahl, Borostovik, & Rieppi, 1995), making it difficult for the public to comprehend the difference.

The schizophrenia–DID myth almost surely stems in part from confusion in terminology. Swiss psychiatrist Eugen Bleuler coined the term “schizophrenia,” meaning “split mind,” in 1911. Many laypersons, and even some psychologists, soon misinterpreted Bleuler’s definition. By schizophrenia, Bleuler (1911) meant that people afflicted with this serious condition suffer from a “splitting” both within and between their psychological functions, especially their emotion and thinking. For most of us, what we feel at one moment corresponds to what we feel at the next, and what we think at one moment corresponds to what we think at the next. If we feel sad at one moment, we often tend to feel sad a moment later; if we think sad thoughts at one moment, we often tend to think sad thoughts a moment later. In addition, what we feel at one moment usually corresponds to what we think at that moment; if we’re feeling sad, we tend to think sad thoughts, and vice versa. Yet in schizophrenia, all of these linkages are frequently ruptured.

As Bleuler observed, people with schizophrenia don’t harbor more than one coexisting personality; they possess a single personality that’s been splintered or shattered (Arieti, 1968). In modern psychological and psychiatric lingo, schizophrenia is a severe psychotic disorder marked by a dramatic loss of contact with reality (American Psychiatric Association, 2000). People with this condition typically suffer from confused thinking and unpredictable moods, and often experience delusions (fixed false beliefs, like believing that one is being followed) and hallucinations (sensory experiences in the absence of any actual sensory stimulus, like hearing voices).

Ironically, the first misuse of schizophrenia as multiple personality in the popular press may have been by a prominent psychologist (McNally, 2007). In 1916, a Washington Post journalist described an interview with G. Stanley Hall, then a faculty member at the Johns Hopkins University and the first president of the American Psychological Association. “Schizophrenia,” Dr. Hall told the reporter, “is a term much used by psychologists to describe a divided mind, of which the Jekyll-Hyde personality is one type” (“He calls it schizophrenia,” p. A5). Only a few years later, the confusion between schizophrenia and multiple personality in popular culture proliferated, although the extent to which Hall’s quotation fostered this confusion is unclear (Holzinger, Angermeyer, & Matschinger, 1998; McNally, 2007). By 1933, this confusion had even found its way into a dictionary article by well-known author T. S. Eliot, who wrote that “For a poet to be also a philosopher he would have to virtually be two men: I cannot think of any example of this thorough schizophrenia” (Turner, 1995, p. 350).

But should any of this matter? If people misuse the term schizophrenia, should we care? Regrettably, many people in the general public don’t appreciate the fact that schizophrenia is often a profoundly disabling condition associated with a heightened risk for suicide, clinical depression, anxiety disorders, substance abuse, unemployment, homelessness, and other serious complications (American Psychiatric Association, 2000; Gottesman, 1991). Nor do many laypersons appreciate the often devastating effects of schizophrenia on family members, friends, and loved ones. Trivializing this condition, as do many Hollywood films, can lead us to underestimate its severity and minimize affected individuals’ urgent need for effective treatment (Wahl, 1997). As psychologist Irving Gottesman (1991) noted, “everyday misuse of the terms schizophrenia or schizophrenic to refer to the foreign policy of the United States, the stock market, or any other disconfirmation of one’s expectations does an injustice to the enormity of the public health problems and profound suffering associated with this most puzzling disorder of the human mind” (p. 8). Words matter.

Myth #40 Adult Children of Alcoholics Display a Distinctive Profile of Symptoms

Imagine that you’ve just seen a psychologist for an initial evaluation. You’ve been feeling out of sorts of late, and have been dissatisfied with your relationships, friendships, and job. “What’s causing all of this?” you wonder. After keeping you waiting for a few anxious minutes in the waiting room, the psychologist calls you in to his office and asks you to sit down. He informs you that the results of his tests reveal that you’re suffering from the following problems:

What does this all mean, you ask sheepishly? The psychologist reassures you that these symptoms are entirely typical of someone with your family history. As the adult child of an alcoholic, he proclaims confidently, these problems are entirely to be expected. You breathe a sigh of relief, comforted by the realization that many of your previously inexplicable emotional difficulties stem from your father’s alcoholism. Moreover, if you’re totally honest with yourself, you’re forced to admit that this personality profile fits you to a T.

Your psychologist’s “diagnosis” is well in keeping with a great deal of popular literature. The symptoms listed above, along with a few others, comprise what’s commonly believed to be a specific personality “profile” found among adult children of alcoholics (ACOAs) (Logue, Sher, & Frensch, 1992).

The ACOA symptom profile is one of the most firmly entrenched concepts in all of folk psychology. Over 220,000 websites contain the phrase “adult children of alcoholics,” and hundreds of them advertise self-help groups and therapy programs intended to assist individuals with ACOA personality features. Such popular books as Wayne Kritsberg’s (1986) The Adult Children of Alcoholics Syndrome, Janet Woititz’s (1983) Adult Children of Alcoholics, and Robert Ackerman’s (2002) Perfect Daughters outline the hallmarks of the ACOA syndrome and describe techniques for alleviating or compensating for these problematic traits. Several widely publicized books have even attempted to account for the seemingly inexplicable behaviors of America’s most famous ACOA, former U.S. President Bill Clinton, in terms of the ACOA profile (Fick, 1998). For example, David Maraniss’s (1998) The Clinton Enigma attributed Clinton’s notorious sexual escapades to the impulse control problems of ACOAs, and his intense political ambitions to the overpowering desire of ACOAs to solve others’ problems.

When investigators have subjected the research literature on ACOAs to careful scrutiny, however, scientific support for this profile has evaporated. Kenneth Sher (1991) reviewed the major published studies on the personality characteristics of ACOAs and found surprisingly weak support for the ACOA syndrome. On average, children of alcoholics do exhibit some personality differences from children of nonalcoholics. For example, they tend to be somewhat more high-strung, outgoing, and prone to risk-taking than other individuals (Tarter, Alterman, & Edwards, 1985). Nevertheless, none of these differences map directly onto the standard ACOA profile, and most of the other features of the profile don’t distinguish ACOAs from non-ACOAs.

In addition, there’s little or no evidence that ACOAs display higher levels of “codependent” personality traits—that is, traits related to a tendency to help (or “enable”) people who are dependent on alcohol or other substances—than do non-ACOAs, The Oprah Winfrey Show and many other popular TV programs notwithstanding. Nevertheless, ACOAs are significantly more likely than non-ACOAs to label themselves as codependent, perhaps because they’ve read or heard in popular psychology sources that ACOAs are often codependent (George, La Marr, Barrett, & McKinnon, 1999).

A 1992 study by Sher and two collaborators, Mary Beth Logue and Peter Frensch, sheds further light on the ACOA syndrome. They demonstrated that a self-report checklist consisting of supposed ACOA statements drawn from the popular psychology literature (for example, “In times of crisis you tend to take care of others,” “You are sensitive to the difficulties of others”) did no better than chance at distinguishing ACOAs from non-ACOAs (Logue et al., 1992). Interestingly, Sher and his co-authors found that ACOAs were just as likely to endorse a checklist of extremely vague and generalized statements (for example, “Variety and change may at times bother you,” “You have too strong a need for others to admire you”) as they were the checklist of ACOA statements. Moreover, about 70% of ACOAs and non-ACOAs reported that both checklists described them “very well” or better.

As we pointed out in Myth #36, psychologists refer to these kinds of exceedingly uninformative and ill-defined personality descriptors as “P. T. Barnum statements” and the tendency of people to find them accurate as the “Barnum Effect” (Meehl, 1956). People suffering from puzzling psychological problems may be especially vulnerable to this effect, because they’re often searching for a neat and tidy explanation for their life difficulties. Psychologists term this phenomenon “effort after meaning.” People want to figure out how they became who they are, and the Barnum Effect capitalizes on this understandable tendency.

Barnum statements come in several flavors. Some are “double-headed” because they apply to people who are either above or below average on a characteristic and by definition apply to essentially everyone (Hines, 2003). The third statement in the ACOA profile at the beginning of this piece, which describes taking both too much and too little responsibility for others, is a prime example. One website describes ACOAs as both afraid of getting close to people and overly dependent on people. Still other Barnum items refer to trivial weaknesses that are so prevalent in the general population as to be virtually meaningless for assessment purposes (for example, “I sometimes have difficulty making decisions”) or to assertions that are impossible to disconfirm (for example, “I have a great deal of unrecognized potential”). The fourth statement in the ACOA profile, which refers to needs for approval, almost certainly fits the bill on both counts. Who doesn’t sometimes desire approval, and how could we prove that someone who appears fiercely independent doesn’t possess a deeply hidden need for approval?

The Barnum Effect probably accounts for much of the success of graphologists (see Myth #36), astrologers, crystal ball readers, palm readers, tarot card readers, and spirit mediums. All make extensive use of Barnum statements in their readings. You’re also likely to observe the Barnum Effect at work during your next visit to a Chinese restaurant. To see what we mean, just crack open your fortune cookie and read the message.

The results of Sher and his colleagues probably help to explain why you found the ACOA profile at the beginning of this piece to fit you so well. Their findings confirm the popular belief that the personality features of this profile are true of ACOAs. There’s only one little catch: They’re true of just about everyone.

Myth #41 There’s Recently Been a Massive Epidemic of Infantile Autism

Try “googling” the phrase “autism epidemic,” and you’ll find about 85,000 hits referring to what many consider a self-evident truth: The past 15 years or so have witnessed an astonishing increase in the percentage of children with autism.

According to the most recent edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM; American Psychiatric Association, 2000), autism is a severe disorder that first appears in infancy. About three fourths of individuals with autism are mentally retarded, and most are male. All suffer from marked language deficits, in severe cases resulting in complete muteness, and many don’t establish close emotional bonds with others. Most engage in stereotyped and ritualized activities, such as hair twirling, hand fluttering, and head banging, and display pronounced negative reactions to even trivial changes in their environments.

Once assumed to be an exceedingly rare condition—prior to the 1990s, the best estimates put the prevalence of autism at about 1 in 2,500 (DeFrancesco, 2001)—autism is now believed to afflict about 1 in 150 people (Carey, 2007). Between 1993 and 2003, U.S. Department of Education statistics documented an astonishing 657% increase in the rates of autism nationwide (Lilienfeld & Arkowitz, 2007). Understandably, many people have attempted to pinpoint the sources of this baffling upsurge. Some of them, including consumer advocate Robert F. Kennedy Jr. (2005) and tens of thousands of parents of autistic children, have pointed the finger squarely at vaccines containing the preservative thimerosal, which are commonly administered shortly before many children develop autistic symptoms (Kirby, 2005). One of thimerosal’s breakdown products is mercury, which can produce neurological damage at high doses (Figure 9.1). In one study, 48% of undergraduates agreed that “Autism is caused by immunization shots” (Lenz et al., 2009).

The claim that the rates of autism are skyrocketing has been popularized by a litany of high-profile media spokespersons. In 2005, NBC’s Meet the Press devoted an entire show to the autism epidemic and to claims by bestselling author David Kirby that thimerosal-bearing vaccines are causing it. In 2008, actress and former Playboy Playmate Jenny McCarthy, who has an autistic son, called for the resignation of the head of the Centers for Disease Control and Prevention (CDC), Julie Gerberding, for her “incompetence during the autism epidemic” and called for a director “who recognizes that we are experiencing an epidemic of autism” (http://adventuresinautism.blogspot.com/2008/03/jenny-mccarthy-calls-for-julie.xhtml). Former National Football League star quarterback Doug Flutie, who also has an autistic son, has similarly proclaimed publicly that the prevalence of autism is rising at a startling rate (http://www.dougflutiejrfoundation.org/About-Autism-What-is-Autism-.asp).

Figure 9.1 This T-shirt captures the sentiments of opponents of thimerosal-containing vaccines, most of whom believe that these vaccines explain the apparent recent epidemic of autism (Hg is the chemical symbol for mercury, which is a breakdown product of thimerosal).

Source: Photo courtesy of Zazzle.com.


In addition, both major party candidates in the 2008 U.S. presidential election endorsed the view that autism is increasing dramatically in prevalence. In response to a question at a 2008 town hall meeting, John McCain replied that “It’s indisputable that (autism) is on the rise amongst children, the question is what’s causing it… there’s strong evidence that indicates that it’s got to do with a preservative in vaccines” (he reiterated the claim that autism is rising during the third presidential debate in October, 2008). Less than 2 months later, Barack Obama told supporters at a rally that “We’ve seen just a skyrocketing autism rate. Some people are suspicious that it’s connected to the vaccines—this person included.” Many Americans appear to share McCain’s and Obama’s views: according to one informal Internet poll in 2008 by CBS News’ 60 Minutes, 70% of respondents believe there’s an epidemic of autism.

Yet there’s serious reason to doubt that autism is becoming more common. A far more likely explanation for the findings is a pronounced loosening of diagnostic practices over time (Gernsbacher, Dawson, & Goldsmith, 2005; Grinker, 2007). The 1980 version of the DSM (DSM-III) required individuals to meet all 6 of 6 criteria to be diagnosed with autism. In contrast, the 1994 version (DSM-IV), which is still in use with minor modifications, requires individuals to meet any 8 of 16 criteria to be diagnosed with autism. In addition, whereas DSM-III contained only two diagnoses relevant to autism, DSM-IV contains five such diagnoses—including Asperger’s syndrome, which is generally regarded as a mild form of autism—several of which describe relatively mild forms of the condition. So the diagnostic criteria for autism have become considerably less stringent from 1980 to the present, resulting in more diagnoses of this condition (Gernsbacher et al., 2005).

Additional influences may be at play. Because of disabilities laws passed by the U.S. Congress in the early 1990s, schools are now required to provide precise counts of the number of children with disabilities, including autism. As a consequence, educational districts are now reporting far more cases of autism, although this increase doesn’t necessarily reflect any changes in autism’s actual prevalence (Grinker, 2007; Mercer, 2010). Furthermore, the “Rain Man Effect,” which refers to the public’s heightened awareness of autism following the 1988 film of the same name (see “Mythbusting: A Closer Look”), may have made parents and teachers more likely to notice autistic symptoms in children (Lawton, 2005). The Rain Man Effect may produce what investigators term detection bias: heightened reporting of a condition resulting from a change in how readily observers detect it (Hill & Kleinbaum, 2005).

Indeed, several recent studies suggest that the autism epidemic may be an illusion. In one investigation, researchers tracked the prevalence of autism diagnoses between 1992 and 1998 in an area of England using the same diagnostic criteria at both time points (Chakrabarti & Fombonne, 2005). Contrary to what we’d expect if there were an autism epidemic, the authors found no increase whatsoever in the prevalence of autism over time. Another study found evidence for a phenomenon termed “diagnostic substitution”: As rates of the autism diagnosis soared in the United States between 1994 and 2003, diagnoses of mental retardation and learning disabilities combined decreased at about an equal rate. This finding suggests that diagnoses of autism may be “swapping places” with other, less fashionable, diagnoses. The same trend may be unfolding in the case of diagnoses of language disorders, which have become less frequent as autism diagnoses have become more popular (Bishop, Whitehouse, Watt, & Line, 2008).

All of these studies offer no support for an autism epidemic: They suggest that diagnoses of autism are skyrocketing in the absence of any genuine increase in autism’s prevalence. As a consequence, efforts to explain this epidemic by appealing to vaccines may be pointless. Putting that problem aside, there’s no solid evidence for any link between autism and vaccinations—including either injections containing thimerosal or injections for MMR (measles, mumps, and rubella; Institute of Medicine, 2004; Offit, 2008). For example, several large American, European, and Japanese studies revealed that even as the rates of vaccinations stayed the same or went down, the rates of diagnosed autism increased (Herbert, Sharp, & Gaudiano, 2002; Honda, Shimizu, & Rutter, 2005). Even after the government removed thimerosal from vaccines in 2001, the rates of autism in California continued to climb rapidly until 2007 (Schechter & Grether, 2008), paralleling similar findings in Denmark (Madsen et al., 2002).

None of these findings conclusively excludes the possibility that vaccines may increase the risk of autism in a tiny subset of children, as it’s difficult to prove a negative in science. But they provide no evidence for a link between vaccines and autism (Offit, 2008). Moreover, they rule out the possibility that vaccines can explain the supposed autism epidemic, as any possible overall effect of vaccines is so minuscule that studies haven’t been able to detect it.

Yet researchers haven’t always found it easy to get the word out. One scientist who’s published articles refuting the vaccine-autism link, Paul Offit, has been called a “terrorist” by protesters and has received hundreds of hostile e-mails, including death threats. Other scientists have endured similar harassment (Hughes, 2007).

Mythbusting: A Closer Look

Do Most Autistic Individuals Possess Remarkable Intellectual Skills?

The purported epidemic of autism is merely one of a myriad of unsupported beliefs regarding this condition (Gernsbacher, 2007). One other rampant myth is that most people with autism are savants (“savant” means a wise person): individuals with one or more isolated pockets of remarkable intellectual ability, often called “splinter skills” (Miller, 1999; O’Connor & Hermelin, 1988). Among these skills are “calendar calculation,” that is, the ability to name the day of the week given any past or future date (like March 8, 1602 or November 19, 2307), astonishing memory for specific facts (like knowing the exact batting averages for all major league baseball players over the past century), and exceptional musical talents (like being able to reproduce perfectly a complicated tune on a piano after hearing it only once). The belief that most people with autism possess remarkable abilities appears to be widespread, at least in the autism community. In one survey, parents (4.24 on a 6-point scale) and teachers (4.15 on a 6-point scale) of autistic children agreed mostly with the statement, “Most autistic children have special talents or abilities” (Stone & Rosenbaum, 1988, p. 410).

This belief almost surely stems in part from films, such as the 1988 Academy Award-winning movie, Rain Man, starring Dustin Hoffman (see Introduction, pp. 17–18), that portray autistic individuals as savants. Rain Man was inspired by an actual savant named Kim Peek, who knows approximately 9,000 books by heart—he can read a page from a book in 8 to 10 seconds and recall details from it months later—and can operate as a human Mapquest, providing precise directions from any U.S. city to any other (Treffert & Christensen, 2005).

Yet studies show that among autistic individuals, savants are the exception rather than the rule. Although estimates vary, most studies show that no more than 10%, and perhaps less, of people with autism display savant abilities (Heaton & Wallace, 2004; Rimland, 1978). This figure compares with a rate of about 1% among individuals without autism. It’s not known why only certain autistic individuals become savants, although research indicates that savants tend to have higher IQs than non-savants, suggesting that overall intellectual ability may play a role (Miller, 1999).

The misconception that most autistic individuals are savants may seem innocuous enough. But this belief may have contributed to a misguided treatment called facilitated communication (FC), which is premised on the unsubstantiated notion that autism is primarily a motor (movement) disorder, not a mental disorder. According to FC advocates, individuals with autism are essentially normal individuals trapped inside an abnormal body. Because of a motor impairment, they maintain, individuals with autism are unable to articulate words properly (Biklen, 1990). The existence of savants appears to provide a rationale for FC, because it implies that autistic individuals are often more intellectually capable than they superficially appear (Frontline, 1993).

Using FC, the argument goes, largely or entirely mute autistic individuals can type out words and sentences on a keyboard with the aid of a “facilitator,” who guides their hands and thereby compensates for their presumed motor impairment. In the early 1990s, shortly after FC was introduced to the United States, scores of ecstatic facilitators reported astonishing success stories of previously uncommunicative autistic individuals typing out eloquent sentences, at times speaking of their sense of liberation upon at last being able to express their imprisoned feelings. Yet numerous controlled studies soon showed that FC was entirely a product of unintentional facilitator control over autistic children’s hand movements. Without even realizing it, facilitators were leading children’s fingers to the keys (Delmolino & Romanczyk, 1995; Jacobson, Mulick, & Schwartz, 1995). Regrettably, FC has raised false hopes among thousands of desperate parents of autistic individuals. In addition, it’s led to dozens of uncorroborated accusations of sexual abuse against these parents—based entirely on typed communications that emerged with the aid of facilitators (Lilienfeld, 2005a; Margolin, 1994).

A great deal is at stake, as the public’s misunderstanding of the autism-vaccine link may be dangerous. Following a widely publicized, but since discredited, 1998 British study proclaiming a link between MMR vaccines and autism, vaccination rates for MMR in England plummeted from 92% to 73%, resulting in sudden outbreaks of measles and at least one death (Smith, Ellenberg, Bell, & Rubin, 2008). Although this decline in vaccinations might have been coincidental, it’s noteworthy that it immediately followed widespread media coverage of the autism-vaccine link. And the past several years have witnessed similar increases in measles in the United States, Italy, Switzerland, and Austria, all in areas in which many parents have refused to vaccinate their children (New York Times, 2008). Misconceptions matter.

Myth #42 Psychiatric Hospital Admissions and Crimes Increase during Full Moons

We’ll begin this myth with a riddle: Once every 29.53 days on average, an event of rather trivial astronomical significance occurs. But according to some writers, it’s an event of enormous psychological significance. What is it?

The answer: A full moon. Over the years, authors have linked the full moon to a host of phenomena—strange behaviors, psychiatric hospital admissions, suicides, traffic accidents, crimes, heavy drinking, dog bites, births, crisis calls to emergency rooms and police stations, violence by hockey players … the list goes on and on (Carroll, 2003; Chudler, n.d.; Rotton & Kelly, 1985).

This belief is hardly new: the word “lunatic” (which in turn has given rise to the slang term “looney”), meaning a psychotic person, derives from the Latin term luna, or moon. Legends of werewolves and vampires, terrifying creatures that supposedly often emerged during full moons, date back at least to the ancient Greeks, including Hippocrates and Plutarch (Chudler, n.d.). These legends were also enormously popular in Europe during much of the Middle Ages and later eras. In his great play Othello (Act 5, Scene 2), Shakespeare wrote that “It is the very error of the moon. She comes more near the earth than she was wont. And makes men mad.” In 19th century England, some lawyers even used a “not guilty by reason of the full moon” defense to acquit clients of crimes committed during full moons. Even today, Buddhist observance in some countries rules out playing outdoor sports during full moons (Full Moon Rules out Play, 2001).

The notion that the full moon is tied to myriad strange occurrences—often called the “lunar effect” or “Transylvania effect”—is deeply embedded in modern culture as well. A 1995 study by investigators at the University of New Orleans revealed that up to 81% of mental health professionals believe in the lunar effect (Owens & McGowan, 2006), and a 2005 study of surgical nurses in Pittsburgh, Pennsylvania, demonstrated that 69% believe that full moons are associated with increases in patient admissions (Francescani & Bacon, 2008). One study of Canadian college students revealed that 45% believe in the lunar effect (Russell & Dua, 1983). This belief has real-world implications; in 2007, the city of Brighton, England, instituted a policy to place more police officers on the beat during full moon nights (Pugh, 2007).

The lunar effect is also a familiar fixture in scores of Hollywood films. For example, in the 1985 Martin Scorsese comedy, After Hours, one of the police officers mutters “There must be a full moon out there” after the main character behaves oddly late at night. In the 2009 film, Underworld: Rise of the Lycans, one of the human characters transforms himself repeatedly into a werewolf during full moons.

In recent decades, psychiatrist Arnold Lieber (1978, 1996) popularized the idea of a correlation between the full moon and behavior. For Lieber and his followers, much of the rationale for the lunar effect stems from the fact that the human body is four-fifths water. Because the moon affects the tides of the earth, so the argument goes, it’s plausible that the moon would also affect the brain, which is, after all, composed largely of water. Yet on closer inspection, this argument doesn’t “hold water,” if you can forgive the pun. As astronomer George Abell (1979) noted, a mosquito sitting on your arm exerts a more powerful gravitational force on your body than does the moon. Similarly, the gravitational force of a mother holding her baby is about 12 million times greater than the gravitational force of the moon on that baby. Furthermore, the tides that the moon produces on earth depend not on its phase—that is, on how much of it is visible to us on earth—but on its distance from earth (Kelly, Laverty, & Saklofske, 1990). Indeed, during a “new moon,” the phase at which the moon is invisible to us on earth, the moon exerts just as much gravitational influence as it does during a full moon.

These flawed explanations aside, we can still ask the question of whether the full moon exerts any meaningful effects on behavior. Because well over 100 published studies have examined this issue, scientists now have something close to a definitive answer. In 1985, psychologists James Rotton and Ivan Kelly reviewed all of the available research evidence on the lunar effect. Using meta-analytic techniques (see p. 32), they found no evidence that the full moon was related to much of anything—murders, other crimes, suicides, psychiatric problems, psychiatric hospital admissions, or calls to crisis centers (Rotton & Kelly, 1985). Rotton and Kelly did unearth a few scattered positive findings here and there, which isn’t surprising given the dozens of studies they examined. Nevertheless, even these few positive findings were open to decidedly “non-lunar” explanations. For example, one team of investigators reported that traffic accidents were more common during full moon nights than other nights (Templer, Veleber, & Brooner, 1982). Yet as Rotton and Kelly pointed out, this finding was marred by a serious flaw. During the time period studied by the researchers, full moons fell more often on weekends— when there’s more traffic—than on weekdays (Hines, 2003). When the researchers reanalyzed their data to take this confound into account, their positive findings vanished (Templer, Brooner, & Corgiat, 1983). Boldly flouting the conventional conclusion of psychology review articles that “more research is needed in this area,” Rotton and Kelly ended their article by concluding that no further research on lunar effects was necessary (p. 302).

Later analyses of the lunar effect yielded equally negative results. Investigators have examined whether the full moon is linked to suicides (Gutiérrez-García & Tusell, 1997), psychiatric hospital admissions (Kung & Mrazek, 2005), dog bites (Chapman & Morrell, 2000), emergency room visits or runs by ambulances to emergencies (Thompson & Adams, 1996), births (Kelly & Martens, 1994), and heart attacks (Wake, Fukuda, Yoshiyama, Shimada, & Yoshikawa, 2007), and virtually all have come up empty-handed. There’s also no evidence that emergency phone calls to operators or police departments go up during full moons (Chudler, n.d.). Because it’s often difficult or impossible to prove a negative in science (see p. 199), determined proponents of the lunar effect can maintain that this effect will one day emerge with better data. Still, it’s safe to say that if a lunar effect exists, it’s so tiny in size as to be essentially meaningless (Campbell, 1982; Chudler, n.d.).

If so, why are so many intelligent people convinced of it? There are at least two potential reasons. First, psychologists have discovered that we’re all prone to a phenomenon that Loren and Jean Chapman (1967, 1969) called illusory correlation (see Introduction, p. 12). As you may recall, illusory correlation is the perception of an association between two events where none exists. It’s a statistical mirage.

Although several factors probably give rise to illusory correlation (Lilienfeld, Wood, & Garb, 2006), one that deserves particular attention is the fallacy of positive instances. This fallacy refers to the fact that when an event confirms our hypotheses, we tend to take special note of it and recall it (Gilovich, 1991; see also p. 31). In contrast, when an event disconfirms our hypotheses, we tend to ignore it or reinterpret it in line with these hypotheses. So, when there’s a full moon and something out of the ordinary happens, say, a sudden surge of admissions to our local psychiatric hospital, we’re likely to remember it and tell others about it. In contrast, when there’s a full moon and nothing out of the ordinary happens, we usually just ignore it (Chudler, n.d.). In still other cases, we might reinterpret the absence of any notable events during a full moon so that it’s still consistent with the lunar hypothesis: “Well, there was a full moon and there weren’t any psychiatric admissions tonight, but maybe that’s because it’s a holiday and most people are in a good mood.”
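To see how this statistical mirage arises, consider the following small simulation—a hypothetical sketch, not data from any actual study. It generates thousands of nights in which strange events occur at the same rate whether or not the moon is full, yet it still produces a steady supply of memorable “full moon plus strange night” coincidences for the fallacy of positive instances to feed on.

```python
import random

random.seed(42)

N_NIGHTS  = 10_000
P_FULL    = 1 / 29.5   # roughly one full-moon night per lunar cycle
P_STRANGE = 0.20       # chance of "something odd" on ANY night (assumed value)

# Tally the full 2 x 2 table: (full moon?, strange event?)
counts = {(fm, ev): 0 for fm in (True, False) for ev in (True, False)}
for _ in range(N_NIGHTS):
    full_moon = random.random() < P_FULL
    strange   = random.random() < P_STRANGE   # independent of the moon by design
    counts[(full_moon, strange)] += 1

# What actually happened: the event rate is the same with or without a full moon.
for fm in (True, False):
    total = counts[(fm, True)] + counts[(fm, False)]
    print(f"Full moon={fm!s:5}: strange events on {counts[(fm, True)] / total:.0%} of nights")

# What the fallacy of positive instances "remembers": only the confirming cell.
print(f"Memorable 'full moon + strange night' stories: {counts[(True, True)]}")
```

Both rows print roughly the same strange-event rate, yet the simulation still turns up dozens of vivid full-moon coincidences—which is all a selective memory needs to manufacture a correlation.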

The illusory correlation hypothesis dovetails with the findings of a study revealing that psychiatric hospital nurses who believed in the lunar effect wrote more notes about patients’ strange behavior during a full moon than did nurses who didn’t believe in the lunar effect (Angus, 1973). The nurses attended more to events that confirmed their hunches, which probably in turn bolstered these hunches.

A second explanation is more conjectural, but no less fascinating. Psychiatrist Charles Raison and his colleagues (Raison, Klein, & Steckler, 1999) speculated that modern society’s belief in the lunar effect may stem from a correlation that once existed, and that some observers of the time misinterpreted as causation (see Introduction, p. 13 for a discussion of the confusion between correlation and causation). Prior to the advent of contemporary outdoor lighting, Raison and his collaborators suggested, the bright light of the full moon deprived people who slept outdoors, including homeless people with mental disorders, of sleep. Because sleep deprivation often triggers erratic behavior in patients with certain psychological conditions, especially bipolar disorder (once known as “manic depression”), and in patients with epilepsy, the full moon may once have been associated with a heightened rate of bizarre behaviors. Of course, the moon didn’t directly cause these behaviors: It contributed to sleep deprivation, which in turn contributed to bizarre behaviors. Nowadays, according to Raison and his co-authors, we no longer find this correlation, at least in large cities, because outdoor lighting largely cancels out the effects of the full moon.

This clever explanation may turn out to be wrong. But it reminds us of a key principle: Even false beliefs may derive from what was once a kernel of truth (see Introduction, p. 17).

Chapter 9: Other Myths to Explore

Fiction: Psychiatric diagnoses are unreliable. Fact: For most major mental disorders (such as schizophrenia and major depression), reliabilities are comparable with those of major medical disorders.
Fiction: Most psychotic people in Western society would be viewed as “shamans” in non-Western cultures. Fact: People in non-Western cultures clearly distinguish shamans from people with schizophrenia.
Fiction: Hallucinations are almost always a sign of serious mental illness. Fact: Ten percent or more of college students and community residents without psychotic disorders have experienced waking hallucinations while not on drugs.
Fiction: Most people with agoraphobia can’t leave their houses. Fact: Only severe agoraphobia results in its sufferers becoming housebound.
Fiction: Most people who experience severe trauma, like military combat, develop posttraumatic stress disorder (PTSD). Fact: Even for the most severe traumas, only 25–35% of people typically develop PTSD.
Fiction: The symptoms of PTSD were first observed following the Vietnam War. Fact: Clear descriptions of PTSD date from at least the U.S. Civil War.
Fiction: Most phobias are traceable directly to negative experiences with the object of the fear. Fact: Most people with phobias report no direct traumatic experiences with the object of their fear.
Fiction: People with fetishes are fascinated with certain objects. Fact: People with fetishes obtain sexual arousal from certain objects, like shoes or stockings.
Fiction: Psychosomatic disorders are entirely in “people’s heads.” Fact: Psychosomatic disorders, now called psychophysiological disorders, are genuine physical conditions caused or exacerbated by stress and other psychological factors. They include asthma, irritable bowel syndrome, and some headaches.
Fiction: People with hypochondriasis are typically convinced they’re suffering from many different illnesses. Fact: People with hypochondriasis are typically convinced they’re suffering from one serious undetected illness, like cancer or AIDS.
Fiction: Most people with anorexia have lost their appetite. Fact: Most patients with anorexia nervosa don’t lose their appetites unless and until their illness becomes extremely severe.
Fiction: All people with anorexia are female. Fact: About 10% of people with anorexia are male.
Fiction: Eating disorders, especially anorexia and bulimia, are associated with a history of child sexual abuse. Fact: Controlled studies suggest that rates of child sexual abuse are probably no higher among eating-disordered patients than among patients with other psychiatric disorders.
Fiction: Almost all people with Tourette’s syndrome curse. Fact: The proportion of Tourette’s patients with coprolalia (uncontrollable cursing) ranges from 8% to 60% across studies.
Fiction: The brains of children with attention-deficit/hyperactivity disorder (ADHD) are over-aroused. Fact: Studies suggest that the brains of ADHD children are under-aroused.
Fiction: Autistic individuals have a particular talent for generating prime numbers. Fact: There’s no good support for this claim, which derives from a few widely publicized examples.
Fiction: All clinically depressed people suffer from extreme sadness. Fact: Up to a third of clinically depressed people don’t suffer from extreme sadness, but instead suffer from “anhedonia,” an inability to experience pleasure.
Fiction: Depressed people are less realistic than non-depressed people. Fact: Mildly depressed people tend to be more accurate than non-depressed people on many laboratory tasks.
Fiction: Depression has been demonstrated to be due to a “chemical imbalance” in the brain. Fact: There’s no scientific evidence for a genuine “imbalance” in any neurotransmitter in depression.
Fiction: Children can’t become seriously depressed. Fact: There’s strong evidence that clinical depression can occur in childhood.
Fiction: The rates of depression in women increase dramatically during the postpartum period. Fact: The rates of nonpsychotic depression are no higher immediately after giving birth than at other times, although the rates of psychotic depression are.
Fiction: People with bipolar disorder, formerly called “manic depression,” all experience both depressed and manic episodes. Fact: Manic episodes alone are sufficient for the diagnosis of bipolar disorder.
Fiction: Suicide typically happens without warning. Fact: Two thirds to three fourths of individuals who commit suicide had previously expressed their intentions to others.
Fiction: Most people who commit suicide leave a suicide note. Fact: Only a minority of people who commit suicide, about 15–25% in most studies, leave suicide notes.
Fiction: People who talk a lot about suicide are extremely unlikely to commit it. Fact: Talking repeatedly about suicide is one of the best predictors of killing oneself.
Fiction: Asking people about suicide increases their risk for suicide. Fact: Although no controlled experiment has examined this claim directly, there’s no research support for it.
Fiction: Suicides are especially likely during the Christmas holidays. Fact: Suicide rates either remain the same or even decrease slightly during the Christmas holidays.
Fiction: Suicide is especially common during the dark days of winter. Fact: Across the world, suicide tends to be most common during the warmest months.
Fiction: The age group at highest risk for suicide is adolescents. Fact: The age group at highest risk for suicide is the elderly, especially older men.
Fiction: More women than men commit suicide. Fact: More women than men attempt suicide, but more men than women succeed.
Fiction: Families play a major role in causing or triggering schizophrenia. Fact: Although familial criticism and hostility may trigger relapse in some cases of schizophrenia, there’s no evidence that they play a causal role in the disorder’s onset.
Fiction: All people with catatonic schizophrenia are inactive, lying in a fetal position. Fact: People with catatonic schizophrenia sometimes engage in frenzied and purposeless motor activity or perform odd gestures.
Fiction: People with schizophrenia virtually never recover. Fact: Follow-up studies suggest that one half to two thirds of people with schizophrenia improve markedly over time.
Fiction: Virtually all people who use heroin become addicted to it. Fact: Many regular heroin users never become addicted, and some former addicts lose their addictions when they move to a new environment.
Fiction: Most transvestites are homosexual. Fact: True transvestites, who obtain sexual arousal from cross-dressing, are nearly all heterosexual males.

Sources and Suggested Readings

To explore these and other myths about mental illness, see American Psychiatric Association (2000); Finn and Kamphuis (1995); Furnham (1996); Harding and Zahniser (1994); Hubbard and McIntosh (1992); Joiner (2005); Matarazzo (1983); Murphy (1976); Raulin (2003); Rosen and Lilienfeld (2008).