8

KNOW THYSELF

Myths about Personality

Myth #31 Raising Children Similarly Leads to Similarities in Their Adult Personalities

How did you become who you are?

This is just about the most fundamental question we can ask about personality. Mull it over for a few minutes, and you’re likely to generate a host of responses. If you’re like most people, the odds are high that many of your answers pertain to how you were raised by your parents. “I’m a moral person because my parents taught me good values.” “I’m a daredevil because my father wanted me to take risks in life.”

Few beliefs about personality are as firmly held as what Judith Rich Harris (1998) termed “the nurture assumption,” the idea that parenting practices make the personalities of children within a family more similar to each other—and to their parents (Pinker, 2002; Rowe, 1994). For example, in her 1996 book, It Takes a Village, former First Lady and now U.S. Secretary of State Hillary Clinton argued that parents who are honest with their children tend to produce children who are honest; parents who are unduly aggressive with their children tend to produce children who are aggressive; and so on (Clinton, 1996). Moreover, we can find this assumption in hundreds of scholarly articles and books. For example, in an early edition of his widely used personality textbook, Walter Mischel (1981) presented the outcome of a thought experiment:

Imagine the enormous differences that would be found in the personalities of twins with identical genetic endowment if they were raised apart in two different families … Through social learning vast differences develop among people in their reactions to most stimuli they face in daily life. (Mischel, 1981, p. 311)

The nurture assumption also forms the bedrock of numerous theories that rely on parent-to-child socialization as a driving force of personality development (Loevinger, 1987). Sigmund Freud proposed that children learn their sense of morality (what he termed the “superego”) by identifying with the same-sex parent and incorporating that parent’s value system into their personalities. Albert Bandura’s “social learning theory” holds that we acquire behaviors largely by emulating the actions of our parents and other authority figures. The fact that our personalities are molded largely by parental socialization is undeniable. Or is it?

It’s true that children tend to resemble their parents to some extent on just about all personality traits. But this finding doesn’t demonstrate that this resemblance is produced by environmental similarity, because biological parents and their children share not only environment but genes. To verify the nurture assumption, we must find systematic means of disentangling genes from environments.

One method of doing so capitalizes on a remarkable natural experiment. In about one out of every 250 births, the fertilized egg, or “zygote,” splits into two copies called identical twins; for this reason, they’re also called “monozygotic” twins. Identical twins therefore share 100% of their genes. By comparing the personalities of identical twins raised apart from birth with identical twins raised together, researchers can estimate the effect of shared environment: the combined environmental influences that increase the resemblance among family members.
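The logic of this comparison can be written compactly. In the additive (“ACE”) model that behavior geneticists commonly use, trait variance is partitioned into additive genetic (a²), shared environmental (c²), and nonshared environmental (e²) components. As a sketch, and assuming adoption agencies don’t selectively place separated twins in similar homes:

$$
r_{\text{MZ apart}} \approx a^2, \qquad r_{\text{MZ together}} \approx a^2 + c^2, \qquad \text{so} \quad c^2 \approx r_{\text{MZ together}} - r_{\text{MZ apart}}.
$$

If twins reared apart turn out to be about as similar as twins reared together, the estimated shared environmental component is close to zero.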

The largest study of identical twins reared apart, conducted by University of Minnesota psychologist Thomas Bouchard and his colleagues, examined over 60 pairs of identical twins separated at birth and raised in different homes. Playfully termed the “Minnesota Twin” studies after the state’s baseball team, this research reunited many adult twin pairs at the Minneapolis-St. Paul airport for the first time since their separation only a few days following birth.

Bouchard and his colleagues, including Auke Tellegen and David Lykken, found that these twins often exhibited eerie similarities in personality and habits. In one case of male twins raised in different countries, both flushed the toilet both before and after using it, read magazines from back to front, and got a kick out of startling others by sneezing loudly in elevators. Another pair consisted of two male twins who unknowingly lived only 50 miles apart in New Jersey. To their mutual astonishment, they discovered that they were both volunteer firefighters, big fans of John Wayne westerns and, although fond of beer, drinkers of only Budweiser. While attending college in different states, one installed fire detection devices, the other fire sprinkler devices. Amazing as these anecdotes are, they don’t provide convincing evidence of genetic influence. Given enough pairings among unrelated individuals, one could probably detect a number of equally bizarre coincidences (Wyatt, Posey, Welker, & Seamonds, 1984).

More important was Bouchard and his colleagues’ remarkable finding that on questionnaire measures of personality traits—like anxiety-proneness, risk-taking, achievement motivation, hostility, traditionalism, and impulsivity—identical twins raised apart were as similar as identical twins raised together (Tellegen et al., 1988). Being raised in entirely different families exerted little or no impact on personality similarity. Other studies of identical twins raised apart have yielded similar results (Loehlin, 1992). Walter Mischel was wrong. In fact, he deleted his thought experiment from later editions of his personality textbook.

Another method of investigating the nurture assumption takes advantage of what Nancy Segal (1999) called “virtual twins.” Don’t be fooled by this term, because they’re not twins at all. Instead, virtual twins are unrelated individuals raised in the same adoptive family. Studies of virtual twins indicate that unrelated individuals raised in the same household are surprisingly different in personality. For example, one study of 40 children and adolescents revealed weak resemblance within virtual twin pairs on personality traits, like anxiety-proneness, and on most behavior problems (Segal, 1999).

The results of identical and virtual twin studies suggest that the extent to which you’re similar to your parents in extraversion, anxiety, guilt-proneness, and other traits is due almost entirely to the genes you share with them. This research also suggests some counterintuitive advice to parents and would-be parents. If you’re stress-prone and want your children to turn out to be stress-free as adults, don’t stress out over it. It’s unlikely that your parenting style will have as large a long-term impact on your children’s anxiety levels as you think.

That’s not to say that shared environment has no effect on us. For one thing, shared environment generally exerts at least some influence on childhood personality. But the effects of shared environment usually fade away once children leave the household and interact with teachers and peers (Harris, 1998). Interestingly, as Bouchard notes, this finding offers yet another example of how popular wisdom gets it backward. Most people believe that our environments exert increasing or even cumulative effects on us over time, whereas the opposite appears to be true, at least insofar as personality is concerned (Miele, 2008).

In addition, it’s likely that extremely neglectful or incompetent parenting can produce adverse effects in later life. But within the broad range of what psychoanalyst Heinz Hartmann (1939) called the “average expectable environment,” that is, an environment that affords children basic needs for nourishment, love, and intellectual stimulation, shared environmental influence on personality is nearly invisible. Finally, at least one important psychological characteristic seems to be influenced by shared environment: antisocial behavior (by the way, don’t confuse “antisocial” behavior with “asocial” behavior, which means shyness or aloofness). Studies of children adopted into criminal homes often show that being raised by a criminal parent increases one’s risk of criminality in adulthood (Lykken, 1995; Rhee & Waldman, 2002).

It’s easy to see why most people, including parents, find the nurture assumption so plausible. We observe that parents and their children tend to be similar in personality, and we attribute this similarity to something we can see—parenting practices—rather than to something we can’t—genes. In doing so, however, we’re falling prey to post hoc, ergo propter hoc reasoning, the mistake of assuming that because A comes before B, A causes B (see Introduction, p. 14). The fact that parenting practices precede the similarity between parents and children doesn’t mean that they produce it.

Mythbusting: A Closer Look

Birth Order and Personality

The finding that shared environmental influences—those that make people within a family more similar—exert a minimal effect on adult personality doesn’t imply that nonshared influences—those that make people within the same family different—aren’t important. In fact, studies demonstrate that the correlations for all personality traits within identical twin pairs are considerably less than 1.0 (that is, considerably less than a perfect correlation), which strongly suggests that nonshared environmental influences are operating. Yet researchers have had a devil of a time pinpointing specific nonshared environmental influences on personality (Meehl, 1978; Turkheimer & Waldron, 2000).
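To see why, recall the additive model sketched earlier: identical twins reared together share all of their genes and their shared environment, so whatever keeps their correlation below 1.0 must reflect nonshared environment (along with measurement error):

$$
e^2 \approx 1 - r_{\text{MZ together}}.
$$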

One promising candidate for a nonshared influence on personality is birth order, a variable that’s long been a darling of popular psychology. According to scores of self-help books, such as Kevin Leman’s (1988) The New Birth Order Book: Why You Are the Way You Are and Cliff Isaacson and Kris Radish’s (2002) The Birth Order Effect: How to Better Understand Yourself and Others, birth order is a potent predictor of personality. First-borns, these books assure us, tend to be conformist and perfectionistic, middle-borns diplomatic and flexible, and later-borns nontraditional and prone to risk-taking.

Research paints a different picture. In most studies, the relations between birth order and personality have been inconsistent or nonexistent. In 1983, Swiss psychologists Cecile Ernst and Jules Angst surveyed over 1,000 studies of birth order and personality. Their conclusion, which surely produced angst (we couldn’t resist the pun) among advocates of the birth order hypothesis, was that birth order is largely unrelated to personality (Ernst & Angst, 1983). More recently, Tyrone Jefferson and his colleagues examined the relations between birth order and the “Big Five” personality dimensions, which emerge from analyses of almost all broad measures of personality. These traits, which are conveniently recalled by the water-logged mnemonics CANOE or OCEAN, are conscientiousness, agreeableness, neuroticism (closely related to anxiety-proneness), openness to experience (closely related to intellectual curiosity), and extraversion. Jefferson and co-authors found no significant relations between birth order and self-reported measures of any of the Big Five traits. Using ratings from peers (such as friends and co-workers), they found modest associations between birth order and a few aspects of agreeableness, openness, and extraversion (with later-borns being more outgoing, inventive, and trusting than earlier-borns), but these findings didn’t hold up using spouse ratings (Jefferson, Herbst, & McCrae, 1998).

On the basis of analyses of scientists’ attitudes toward revolutionary theories, like Copernicus’s theory of a sun-centered solar system and Darwin’s theory of natural selection, historian Frank Sulloway (1996) argued that birth order is a predictor of rebelliousness, with later-borns being more likely to question conventional wisdom than earlier-borns. But others have found Sulloway’s analyses unconvincing, in part because Sulloway wasn’t “blind” to scientists’ birth order when classifying their attitudes to scientific theories (Harris, 1998). Moreover, other investigators haven’t been able to replicate Sulloway’s claim that later-borns are more rebellious than earlier-borns (Freese, Powell, & Steelman, 1999).

So birth order may be weakly related to a few personality traits, although it’s a far cry from the powerful predictor that folk psychology would have us believe.

Myth #32 The Fact That a Trait Is Heritable Means We Can’t Change It

In the 1983 film Trading Places, two wealthy businessmen disagree about whether nature (genetic make-up) or nurture (environment) is responsible for success in life. To settle their dispute, they arrange for Louis Winthorpe III, an employee at their investment firm (played by Dan Aykroyd), to lose his job, home, money, and girlfriend. In his place, they hire Billy Ray Valentine, a con artist living on the street (played by Eddie Murphy), and give him the home and social status previously enjoyed by Winthorpe. If success depends on nature, Valentine should fail in his new position and end up back on the street, whereas Winthorpe should overcome temporary setbacks and rise again. The film reflects the perspective that was dominant at the time: Winthorpe became a victim of his new circumstances and Valentine thrived in his.

As recently as the early 1980s, the suggestion that genes could play a role in shaping human traits or behavior was exceedingly controversial. More than a century earlier, Charles Darwin (1859) had proposed his landmark theory of evolution by natural selection; three decades earlier, James Watson and Francis Crick (1953) had discovered the molecular structure of DNA (the genetic material). Yet many scholars dismissed these revolutionary discoveries as irrelevant to social and behavioral science. They simply assumed that our behavior was shaped only by our environments—cultural beliefs and practices, family members and other important people in our lives, physically or psychologically traumatic events, diseases, and the like. The question of how nature and nurture affect us wasn’t up for debate. “Lower” animals might act on inherited instincts, but human behavior wasn’t influenced by genes.

Of course, an awful lot has changed since then. Scientists have now firmly established the influence of genes on personality and many other aspects of human behavior. Even so, misconceptions about the heritability of psychological traits persist. Perhaps the most widespread myth is that heritable traits can’t be changed, a message that might be discouraging if it were true. For example, in his well-regarded book In the Name of Eugenics: Genetics and the Uses of Human Heredity, Daniel Kevles (1985) wrote that the famous statistician Karl Pearson “outraged both physicians and temperance reformers … by his outspoken insistence that a tendency to contract tuberculosis was heritable—which made a mockery of public health measures to combat it” (p. 67). In their enormously controversial book, The Bell Curve: Intelligence and Class Structure in American Life, Richard Herrnstein and Charles Murray (1994) also committed several errors when writing about heritability. In particular, they referred to “the limits that heritability puts on the ability to manipulate intelligence” and said that “even a heritability of [60%] leaves room for considerable change if the changes in environment are commensurably large” (p. 109). This statement suggests that a higher heritability wouldn’t leave room for much change. These authors, like many others, implied mistakenly that highly heritable traits are difficult or impossible to change. As we’ll soon discover, even a heritability of 100% doesn’t imply unmodifiability. To see why, we need to understand what heritability means and how researchers study it.

Scientists define heritability as the percentage of individual differences (differences across people) in a trait that’s due to genetic differences. Let’s consider the trait of extraversion, or how outgoing and sociable someone is. If extraversion were 0% heritable, differences between shy and gregarious people would be due to environmental factors only, not genes. At the other extreme, if extraversion were 100% heritable, all differences on this trait would be produced genetically and unrelated to environmental factors. It turns out that extraversion, like most other personality traits, is about 50% heritable (Plomin & Rende, 1991). But what does it mean to say that something is partially heritable? Two aspects of the deceptively simple heritability concept are crucial to grasp.

First, despite what many people believe, heritability concerns differences across people, not within people. Even The Bell Curve co-author Charles Murray got this wrong in a 1995 interview on CNN: “When I—when we—say 60 percent heritability, it’s not 60 percent of the variation. It is 60 percent of the IQ in any given person” (quoted in Block, 1995, p. 108). But heritability has no meaning within a “given person.” Whatever your IQ, you can’t say that you get 60% of this from your genes and the other 40% from your environment. Instead, this statistic means that across people in a population, 60% of their differences in IQ are due to differences in their genes and 40% to differences in their environments.
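One way to keep the population-level meaning straight is to write heritability as a ratio of variances. In the simplest additive sketch (real models add terms for gene-environment interplay):

$$
h^2 = \frac{\sigma^2_G}{\sigma^2_P} = \frac{\sigma^2_G}{\sigma^2_G + \sigma^2_E},
$$

where the two variances in the denominator are the genetic and environmental variances across people in the population. A 60% heritability for IQ means that this ratio is .60 in the population studied; nothing in the formula refers to any single person’s score.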

Second, heritability depends on the ranges of genetic and environmental differences in a sample. When studying the behavior of genetically identical organisms raised in different conditions, heritability would be 0%. Because there’s no genetic variability across organisms, only environmental differences could exert any effect. Scientists sometimes attempt to minimize hereditary differences in behavior by working with specially bred strains of nearly genetically identical white mice. By eliminating just about all genetic variation, the effects of experimental manipulations are easier to detect. In contrast, when studying the behavior of genetically diverse organisms under exactly identical laboratory conditions, heritability would be 100%. Because there’s no environmental variability, only genetic differences could exert any effect. Scientists comparing the yield of genetically different seed varieties can grow them in virtually identical soil, temperature, and lighting conditions to eliminate just about all environmental influences. To study the heritability of psychological traits, we should ideally include a broad range of genes and environments.
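A toy simulation makes this sample-dependence concrete. The short sketch below is our own illustration rather than anything from the studies cited here: it generates phenotypes under the simple additive model P = G + E and shows that the very same trait can come out 0%, 50%, or 100% “heritable” depending solely on how much genetic and environmental variation the sample contains.

```python
import random
from statistics import variance

def estimated_heritability(genetic_sd: float, env_sd: float, n: int = 50_000) -> float:
    """Proportion of phenotypic variance that is genetic, under P = G + E."""
    g = [random.gauss(0.0, genetic_sd) for _ in range(n)]  # genetic values
    e = [random.gauss(0.0, env_sd) for _ in range(n)]      # environmental deviations
    p = [gi + ei for gi, ei in zip(g, e)]                  # phenotypes
    return variance(g) / variance(p)

print(estimated_heritability(1.0, 1.0))  # ~0.50: both genes and environments vary
print(estimated_heritability(0.0, 1.0))  #  0.00: genetically identical organisms, varied environments
print(estimated_heritability(1.0, 0.0))  #  1.00: diverse genotypes, identical environments
```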

So how do scientists estimate heritability? It’s not as simple as looking for similarities among members of intact families, because this approach confounds the influence of genes and environment. Children share not only their genes with their siblings, parents, and even other biological relatives, but also many aspects of their environment. The trick is to design a study in which genes and environments vary systematically and independently. For example, in most twin studies researchers test the similarities of siblings—who shared the same uterus and were born at virtually the same time—raised in the same homes. Specifically, they compare the similarity of identical twins, who share 100% of their genes, with that of fraternal twins, who share 50% of their genes on average (see Myth #31). Aspects of the environment relevant to the causes of personality traits are usually shared to a comparable extent by identical and fraternal twins. So we can treat environmental influences as about equal for each type of twin and test for differences due to shared genes. If identical twins are more similar to one another on a personality trait than are fraternal twins, this finding suggests that the trait is at least somewhat heritable; the larger the difference between identical and fraternal twins, the higher the heritability.
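A back-of-the-envelope version of this comparison is Falconer’s classic formula, which doubles the gap between the identical-twin and fraternal-twin correlations. The helper below is our own minimal sketch, and the correlations fed into it are hypothetical, chosen to mirror the figures discussed in this chapter (roughly 50% heritability and near-zero shared environment for traits like extraversion); modern twin studies fit fuller statistical models.

```python
def falconer_ace(r_mz: float, r_dz: float) -> dict[str, float]:
    """Rough ACE variance components from correlations of twins reared together."""
    a2 = 2 * (r_mz - r_dz)  # additive genetic component (heritability)
    c2 = 2 * r_dz - r_mz    # shared environmental component
    e2 = 1 - r_mz           # nonshared environment plus measurement error
    return {"heritability": a2, "shared_env": c2, "nonshared_env": e2}

# Hypothetical twin correlations for a personality trait such as extraversion:
print(falconer_ace(r_mz=0.50, r_dz=0.25))
# -> {'heritability': 0.5, 'shared_env': 0.0, 'nonshared_env': 0.5}
```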

Using twin studies and other informative research designs, investigators have consistently found moderate heritability for personality traits like extraversion, conscientiousness, and impulsivity, as well as cognitive ability and vulnerability to psychopathology (Plomin & Rende, 1991). Even attitudes toward such political issues as abortion and such ideological positions as liberal or conservative are heritable, more so than affiliation with the Democratic or Republican political parties (Alford, Funk, & Hibbing, 2005). Judith Rich Harris (1995) reviewed evidence that the portion of individual differences in personality traits that’s environmental has little to do with shared environmental factors, such as similar rearing by parents, and much more to do with nonshared environmental factors, such as differences in people’s exposure to peers (see Myth #31).

The heritability of intelligence has been especially controversial. By the 1990s, there was strong agreement that IQ is heritable, but estimates of how much ranged from 40% to 80% (Gottfredson, 1997). Why is this range so large? Many factors can affect heritability estimates across studies, such as the socioeconomic status or age of participants. In a sample of 7-year-old children, Eric Turkheimer and his colleagues found that heritability was only 10% among the poorest families, but 72% among the richest families (Turkheimer, Haley, Waldron, D’Onofrio, & Gottesman, 2003). Other researchers have found that the heritability of IQ increases across the lifespan (Plomin & Spinath, 2004). As we develop into adults, our inherited traits and preferences exert an increasing influence on our environments. This phenomenon occurs both actively, as we select and create our own environments, and passively, as when people treat us differently. The net effect is that our inherited traits exert a greater impact on the development of our intelligence over time. Heritability estimates for IQ are low in childhood, when our parents largely shape our environments, but they increase to as high as 80% by the time we’re adults.

As our earlier examples illustrate, even knowledgeable authors have misunderstood what heritability means. Most critically, the heritability of a trait like IQ doesn’t mean we can’t modify it (Gottfredson, 1997). Instead, a high heritability means only that current environments exert a small effect on individual differences in a trait; it says nothing about the potential effects of new environments, nor does it imply that we can’t successfully treat a disorder. For example, phenylketonuria (PKU) is a 100% heritable condition involving the inability to metabolize (break down) the amino acid phenylalanine. This condition can lead to irreversible problems in brain development, such as mental retardation. Products that contain phenylalanine, including anything made with the artificial sweetener aspartame (trade names Equal and NutraSweet), carry a warning on their labels for people with PKU; check the back of any Diet Coke can to see what we mean. By ridding their diets of phenylalanine, people with PKU can ward off harmful effects. So the fact that PKU is entirely heritable doesn’t mean we can’t modify it. We can.

In the same way, an enriched environment could stimulate the development of intellectual ability even if IQ is highly heritable. In fact, if research told us what constitutes an optimal learning environment and we provided this environment equally to all children in a study, the heritability of IQ would be 100% because only genetic variation would remain. So the good news is that we don’t need to be troubled by high heritability. It doesn’t mean we can’t change a trait, and it may even indicate that we’ve made good progress in improving bad environments. Who knows—maybe in a few decades we’ll use heritability estimates to measure the success of our programs!

Myth #33 Low Self-Esteem Is a Major Cause of Psychological Problems

On the morning of April 20, 1999—perhaps not coincidentally, Adolf Hitler’s 110th birthday—two teenage students dressed in black trench coats strolled calmly into Columbine High School in Littleton, Colorado. Although essentially unknown prior to that morning, Eric Harris and Dylan Klebold were to become household names in America by the day’s end. Armed with an assortment of guns and bombs, they gleefully chased down and slaughtered 12 students and a teacher before killing themselves.

No sooner did the Columbine tragedy occur than a parade of mental health experts and social commentators took to the airwaves to speculate on its causes. Although these pundits invoked a host of possible influences, one emerged as the clear front-runner: low self-esteem. The opinions expressed on one website were typical:

The shootings at Columbine and other schools across the country continue the frightening pattern of kids shooting kids … While keeping guns out of the hands of our children is critical, teaching them to value themselves and others is even more important. (www.axelroadlearning.com/teenvaluestudy.htm)

Others have explained the supposed recent epidemic of school shootings across America in terms of a marked decline in children’s self-esteem (we say “supposed” because the claim that school shootings have become more common is itself a myth; Cornell, 2006). The handful of mental health professionals who’ve questioned this prevailing wisdom publicly haven’t always been well received. During a 1990s televised talk show, a psychologist patiently attempted to spell out the multiple causes underlying teen violence. One associate producer, believing the psychologist’s arguments to be needlessly complicated, angrily waved a large card at her that read simply “SELF-ESTEEM!” (Colvin, 2000).

Indeed, many popular psychologists have long maintained that low self-esteem is a prime culprit in generating many unhealthy behaviors, including violence, depression, anxiety, and alcoholism. From Norman Vincent Peale’s (1952) classic The Power of Positive Thinking onward, self-help books proclaiming the virtues of self-esteem have become regular fixtures in bookstores. In a bestselling book, The Six Pillars of Self-Esteem, self-esteem guru Nathaniel Branden insisted that one:

cannot think of a single psychological problem—from anxiety and depression, to fear of intimacy or of success, to spouse battery or child molestation—that is not traceable to the problem of low self-esteem. (Branden, 1994)

The National Association for Self-Esteem similarly claims that:

A close relationship has been documented between low self-esteem and such problems as violence, alcoholism, drug abuse, eating disorders, school dropouts, teenage pregnancy, suicide, and low academic achievement. (Reasoner, 2000)

The perception that low self-esteem is detrimental to psychological health has exerted an impact on public policy. In 1986, California funded a Task Force on Self-Esteem and Personal and Social Responsibility to the tune of $245,000 per year. Its goal was to examine the negative consequences of low self-esteem and to find a means of remedying them. The prime mover behind this task force, California state assemblyman John Vasconcellos, argued that enhancing the self-esteem of California’s citizens could help balance the state’s budget (Dawes, 1994).

The self-esteem movement has also found its way into mainstream educational and occupational practices. Many American schoolteachers ask children to generate lists of what makes them good people in the hopes of enhancing their pupils’ self-worth. Some athletic leagues award trophies to all schoolchildren to avoid making losing competitors feel inferior (Sommers & Satel, 2005). One elementary school in Santa Monica, California banned children from playing tag because the “children weren’t feeling good about it” (Vogel, 2002), and still other schools refer to children who spell poorly as “individual spellers” to avoid hurting their feelings (Salerno, 2009). A number of U.S. companies have also leapt on the self-esteem bandwagon. The Scooter Store, Inc., in New Braunfels, Texas, hired a “celebrations assistant” who is assigned to toss 25 pounds of confetti each week to employees in an effort to boost their self-worth, and the Container Store has instituted “Celebration Voice Mailboxes” to provide continual praise to its workers (Zaslow, 2007).

Moreover, the Internet is chock full of educational books and products intended to boost children’s self-esteem. One book, Self-Esteem Games (Sher, 1998), contains 300 activities to help children feel good about themselves, such as repeating positive affirmations emphasizing their uniqueness, and another book, 501 Ways to Boost Your Children’s Self-esteem (Ramsey, 2002), encourages parents to give their children more say in family decisions, such as allowing them to choose how to be punished. One can order a “Self-Esteem Question Deck” of cards consisting of questions designed to remind oneself of one’s accomplishments—like “What is a goal you have already achieved?” and “What honor have you received in the past that you are proud about?” Or one can even buy a self-esteem cereal bowl emblazoned with positive affirmations, like “I’m talented!” and “I’m good looking!”

But there’s a fly in the ointment. Most research shows that low self-esteem isn’t strongly associated with poor mental health. In a painstaking—and probably painful!—review, Roy Baumeister, Jennifer Campbell, Joachim Krueger, and Kathleen Vohs (2003) canvassed all of the available evidence—over 15,000 studies’ worth—linking self-esteem to just about every conceivable psychological variable. Contrary to widespread claims, they found that self-esteem is minimally related to interpersonal success. Nor is self-esteem consistently related to smoking, alcohol abuse, or drug abuse. Moreover, they discovered that although self-esteem is positively associated with school performance, it doesn’t seem to cause it (Mercer, 2010). Instead, better school performance appears to contribute to high self-esteem (Baumeister et al., 2003). It’s likely that some earlier researchers had misinterpreted the correlation between self-esteem and school performance as reflecting a direct causal effect of self-esteem (see Introduction, p. 13). Furthermore, although self-esteem is associated with depression, this correlation is only moderate in size (Joiner, Alfano, & Metalsky, 1992). As a consequence, “low self-esteem is neither necessary nor sufficient for depression” (Baumeister et al., 2003, p. 6).

Still, readers with high self-esteem needn’t despair. Self-esteem seems to afford two benefits (Baumeister et al., 2003). We say “seems” because the findings are merely correlational and may not be causal (see Introduction, p. 13). That said, self-esteem is associated with greater (1) initiative and persistence, that is, a willingness to attempt tasks and to keep at them when difficulties arise, and (2) happiness and emotional resilience.

Self-esteem is also related to a tendency to view oneself more positively than others do. High self-esteem individuals consistently regard themselves as smarter, more physically attractive, and more likeable than other individuals. Yet these perceptions are illusory, because people with high self-esteem score no higher than other people on objective measures of intelligence, attractiveness, and popularity (Baumeister et al., 2003).

When it comes to violence, the story becomes more complicated. There’s some evidence that low self-esteem is associated with an elevated risk of physical aggression and delinquency (Donnellan, Trzesniewski, Robins, Moffitt, & Caspi, 2005). Yet high self-esteem doesn’t protect people against violence. To the contrary, a subset of high self-esteem individuals—specifically, those whose self-esteem is unstable—is at highest risk for physical aggression (Baumeister, 2001). These individuals tend to be narcissistic and believe themselves deserving of special privileges, or so-called narcissistic “entitlements.” When confronted with a challenge to their perceived worth, or what clinical psychologists term a “narcissistic injury,” they’re liable to lash out at others.

Interestingly, Harris and Klebold appeared to be anything but uncertain of themselves. Both were fascinated with Nazism and preoccupied with fantasies of world domination. Harris’s diaries revealed that he saw himself as morally superior to others and felt contempt for almost all of his peers. Harris and Klebold had frequently been teased by classmates, and most commentators assumed that this mistreatment produced low self-esteem, bolstering Harris and Klebold’s risk for violence. These commentators probably fell prey to post hoc, ergo propter hoc (after this, therefore because of this) reasoning (see Introduction, p. 14), which may be a key source of the low self-esteem myth. Tempting as it may be, we can’t draw the inference that because teasing precedes violence, it necessarily produces it. Instead, Harris and Klebold’s high self-esteem may have led them to perceive the taunts of their classmates as threats to their inflated sense of self-worth, motivating them to seek revenge.

In a series of clever experiments, Brad Bushman, in collaboration with Baumeister, asked participants to write essays expressing their attitudes toward abortion (see also Myth #30). A research assistant pretending to be another participant evaluated each essay. Unbeknownst to participants, this evaluation was a complete ruse. In fact, Bushman and Baumeister randomly assigned half of the participants to receive positive comments (“No suggestions, great essay!”) and half to receive negative comments (“This is one of the worst essays I have read!”). Participants then took part in a simulated “competition” allowing them to retaliate against their essay evaluator with a loud and annoying blast of noise. Narcissistic participants responded to negative evaluations by bombarding their opponents with significantly louder noises than other participants. Positive essay evaluations produced no such effect (Bushman & Baumeister, 1998).

Consistent with these findings, bullies and some aggressive children tend to have overly positive perceptions of how others view them (Baumeister et al., 2003). Christopher Barry and his colleagues asked aggressive and nonaggressive children to estimate their popularity among their peers and compared their ratings with actual popularity ratings obtained from peers. Aggressive children were more likely than nonaggressive children to overestimate their popularity; this tendency was especially marked among narcissistic children (Barry, Frick, & Killian, 2003; Emler, 2001).

The implications of these findings are troubling, especially considering the popularity of self-esteem programs for at-risk teenagers. The National Association for Self-Esteem recommends 13 programs—many of which fly under the banner of “affective education programs”—designed to bolster the self-esteem of troubled youngsters (http://www.self-esteem-nase.org/edu.php). Moreover, many prisons have developed self-esteem programs to reduce repeat offending. The research we’ve described suggests that these programs could produce negative consequences, especially among participants at high risk for aggression. The one thing that Eric Harris and Dylan Klebold didn’t need was higher self-esteem.

Myth #34 Most People Who Were Sexually Abused in Childhood Develop Severe Personality Disturbances in Adulthood

“Scarred for life.” Phrases like this appear in a seemingly endless parade of popular psychology books written for sexual abuse victims. The self-help literature is replete with claims that childhood sexual abuse produces lasting personality changes and deep psychological wounds. Other popular psychology books, such as Jade Angelica’s (1993) A Moral Emergency, refer to the “cycle of child sexual abuse.” According to these books, many or most sexually abused individuals become abusers themselves. Some self-help books go further, implying that sexual abuse leaves in its wake a distinctive “personality profile.” Low self-confidence, intimacy problems, reluctance to commit to others in relationships, and fears of sex are among its tell-tale signs (Bradshaw, 1991; Frederickson, 1992).

The profound personality alterations induced by early sexual abuse are self-evident truths in many popular psychology circles. A popular article (Megan, 1997) maintained that “Like scar tissue, the effects of sexual abuse never go away, experts say, continuing to influence victims in various ways, such as by contributing to drug and alcohol abuse, low self-esteem, divorce and distrust.” Or take The Courage to Heal, a 1988 self-help book by Ellen Bass and Laura Davis that’s sold over a million copies. The authors informed readers that

The long-term effects of child sexual abuse can be so pervasive that it’s sometimes hard to pinpoint exactly how the abuse affected you. It permeates everything: your sense of self, your intimate relationships, your sexuality, your parenting, your work life, even your sanity. Everywhere you look, you see its effects. (Bass & Davis, 1988, p. 37)

In addition, scores of Hollywood films, including Midnight Cowboy (1969), The Color Purple (1985), Forrest Gump (1994), Antwone Fisher (2002), and Mystic River (2003), powerfully depict adult characters who’ve experienced longstanding personality changes following sexual abuse in childhood.

Understandably, many laypersons believe that the close link between child sexual abuse and personality changes is well established. In one survey of 246 citizens of rural Oregon, 68% of males and 74% of females expressed the view that child sexual abuse “always” results in obvious behavioral changes (Calvert & Munsie-Benson, 1999).

There’s little doubt that child sexual abuse, especially when extreme, can produce harmful effects (Nelson et al., 2002). Yet the most telling finding in the research literature on the apparent long-term consequences of child sexual abuse is the absence of findings. Numerous investigations demonstrate that the typical reaction to a history of child sexual abuse is not psychopathology, but resilience (also see “Mythbusting: A Closer Look”).

In 1998, Bruce Rind and his colleagues conducted a meta-analysis (see p. 32) of the research literature on the correlates of child sexual abuse in college students. They had earlier conducted a similar review using community samples, which yielded almost identical results (Rind & Tromovitch, 1997). Their 1998 article appeared in the American Psychological Association’s Psychological Bulletin, one of psychology’s premier journals. Chock full of dense numerical tables and the technical details of statistical analyses, Rind and colleagues’ article seemed an unlikely candidate for the centerpiece of a national political firestorm. Little did Rind and his colleagues know what was in store.

Rind and his co-authors reported that the association between a self-reported history of child sexual abuse and 18 forms of adult psychopathology—including depression, anxiety, and eating disorders—was weak in magnitude (Rind, Tromovitch, & Bauserman, 1998). The average correlation between the two variables was a mere .09, an association that’s close to zero. Moreover, a history of an adverse family environment, such as a highly conflict-ridden home, was a much stronger predictor of later psychopathology than was a history of sexual abuse. As Rind and his co-authors cautioned, the effects of early abuse are difficult to disentangle from those of a troubled family environment, particularly because each can contribute to the other. Surprisingly, they found that the relation between sexual abuse and later psychopathology was no stronger when the abuse was more severe or frequent.
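To put that number in perspective, squaring a correlation gives the proportion of variance that two variables share:

$$
r^2 = (.09)^2 \approx .008,
$$

so in these samples a history of abuse accounted for less than 1% of the differences among people in adult psychopathology.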

The “Rind article,” as it came to be known, provoked a furious media and political controversy. Radio talk-show personality Dr. Laura Schlessinger (“Dr. Laura”) condemned the article as “junk science at its worst” and as a “not-so-veiled attempt to normalize pedophilia” (Lilienfeld, 2002). Several members of Congress, most notably Representatives Tom DeLay of Texas and Matt Salmon of Arizona, criticized the American Psychological Association for publishing an article that implied that sexual abuse isn’t as harmful as commonly believed. On the floor of Congress, Salmon referred to the article as the “emancipation proclamation of pedophiles.” Eventually, on July 12, 1999, the Rind article was denounced by the House of Representatives in a 355 to 0 vote, earning it the dubious distinction of becoming the first scientific article ever condemned by the U.S. Congress (Lilienfeld, 2002; McNally, 2003; Rind, Tromovitch, & Bauserman, 2000).

Several critics have raised thoughtful challenges to Rind and colleagues’ findings, especially the extent to which they’re generalizable to more severely affected samples. For example, college samples may not be ideal for studying the negative psychological effects of child sexual abuse, because people with severe personality disturbances may be less likely to attend college than other people (Dallam et al., 2001). Nevertheless, the central thrust of Rind and colleagues’ conclusion, namely that many individuals escape from a history of early sexual abuse with few or no long-term psychopathological consequences, appears to hold up well (Rind, Bauserman, & Tromovitch, 2002; Ulrich, Randolph, & Acheson, 2006).

Nor is there evidence that survivors of child sexual abuse exhibit a unique profile of personality traits. In a 1993 review, Kathleen Kendall-Tackett and her co-authors found no evidence for the so-called “signature” of sexual abuse. Although some sexually abused individuals suffered from psychological problems in adulthood, no consistent pattern of specific symptoms emerged across abuse victims (Kendall-Tackett, Williams, & Finkelhor, 1993). Instead, different victims typically experienced very different symptoms.

Research calls into question other widely accepted claims regarding sexual abuse victims. For example, a 2003 article by David Skuse and his colleagues found only weak evidence for the oft-cited “cycle of child sexual abuse,” the popular belief that the abused typically become abusers themselves. Slightly less than one eighth of their sample of 224 men who’d been sexually abused in childhood became sexual molesters as adults. Because the rate of sexual molesters among adults without a sexual abuse history was 1 in 20 in their sample, Skuse and co-authors’ findings raise the possibility that early abuse increases one’s risk of becoming an adult abuser. But their findings indicate that the cycle of abuse isn’t close to being inevitable (Salter et al., 2003).

Perhaps not surprisingly, many therapists reacted to all of these findings, especially those of Rind and his colleagues, with disbelief. The claim that many child sexual abuse victims lead normal adult lives didn’t jibe with their clinical experiences.

In attempting to explain this wide gulf between clinical perception and scientific reality, selection bias emerges as a major suspect. Because almost all individuals whom clinicians see in their everyday practices are distressed, including those who’ve been sexually abused, clinicians can be seduced into perceiving an illusory correlation (see Introduction, p. 12) between child sexual abuse and psychopathology (Chapman & Chapman, 1967; Cohen & Cohen, 1984). This misperception is almost certainly a consequence of the fact that most clinicians have minimal access to two crucial cells of “The Great Fourfold Table of Life,” namely, those cells consisting of sexually abused and non-abused individuals who do not experience psychological problems (again see Introduction, p. 12). If clinicians interacted in therapy with non-distressed individuals as much as they do with their distressed clients, they’d probably find that accounts of childhood sexual abuse would turn up just about as often.

Mythbusting: A Closer Look

Underestimating Childhood Resilience

The research we’ve reviewed on child sexual abuse and later psychopathology imparts a valuable but often unappreciated lesson: Most children are resilient in the face of stressors (Bonanno, 2004; Garmezy, Masten, & Tellegen, 1984). Popular psychology has underestimated childhood resilience, often portraying children as delicate creatures who are prone to “crack” when confronted with stressful events (Sommers & Satel, 2005). Yet this “myth of childhood fragility” (Paris, 2000) runs counter to scientific evidence.

For example, on July 15, 1976, 26 schoolchildren ranging in age from 5 to 14 were victims of a nightmarish kidnapping in Chowchilla, California. Along with their bus driver, they were taken hostage on a school bus for 11 hours and buried underground in a van for 16 hours. There, they managed to breathe through a few small air vents. Remarkably, the children and driver managed to escape, and all survived without injury. When found, most of the children were in shock, and some had soiled themselves. Two years later, although most were haunted by memories of the incident, virtually all were well adjusted (Terr, 1983).

To take a second example, much of the popular psychology literature informs us that divorce almost always exacts a serious long-run emotional toll on children. One website dealing with divorce says that “children really aren’t ‘resilient’” and that “divorce leaves children to struggle for a life-time with the residue of a decision their parents made” (Meyer, 2008). On September 25, 2000, Time magazine lent credibility to these claims with a cover story entitled “What Divorce Does to Kids,” accompanied by the ominous warning that “New research says the long-term damage is worse than you thought.” This story was sparked by a 25-year investigation by Judith Wallerstein (1989), who tracked a group of 60 divorced families in California. Wallerstein reported that although children in these families initially seemed to recover from their parents’ divorces, the effects of divorce were subtle and enduring. Many years later, these children experienced difficulties with forming stable romantic relationships and establishing career goals. Yet Wallerstein’s study didn’t include a control group of families in which one or both parents had been separated from their children for reasons other than divorce, such as accidental death. As a result, her findings may reflect the effects of any kind of stressful disruption in the family rather than divorce itself.

In fact, most better-designed studies show that although children almost always find divorce stressful, the bulk of them survive divorces without much, if any, long-term psychological damage (Hetherington, Cox, & Cox, 1985). By and large, these investigations show that 75% to 85% of children are coping quite well in the wake of their parents’ divorces (Hetherington & Kelly, 2002). Moreover, when parents experience severe conflict prior to the divorce, the apparent adverse effects of divorce appear to be minimal (Amato & Booth, 1997; Rutter, 1972). That’s probably because children find the divorce to be a welcome escape from their parents’ bitter arguing.

Myth #35 People’s Responses to Inkblots Tell Us a Great Deal about Their Personalities

Is an inkblot always just an inkblot? Or can it be something far more, perhaps a secret passageway to hidden personality traits and psychological disorders?

The most familiar version of the inkblot test, developed by Swiss psychiatrist Hermann Rorschach, figures prominently in popular culture. Andy Warhol painted a series of mammoth inkblots inspired by the Rorschach Inkblot Test, and Mattel markets a game called “Thinkblot,” which encourages players to generate creative responses to amoeboid black-and-white shapes. A successful rock band even calls itself “Rorschach Test.” The 2009 film Watchmen stars a character named Rorschach, who sports a mask consisting of an inkblot.

We can trace the Rorschach Inkblot Test (often known simply as “The Rorschach”) to Hermann Rorschach’s dabblings with inkblots in childhood. Rorschach, a failed artist, apparently received the inspiration for the test that later bore his name from a popular European parlor game. First published in 1921, the Rorschach consists of 10 symmetrical inkblots, 5 in black and white, 5 containing color. Readers can view an inkblot similar to the Rorschach inkblots in Figure 8.1 (because of concerns about influencing test responses, the Rorschach’s publisher cautions against reproducing the actual blots).

But the Rorschach is much more than an icon of popular culture. It’s a cherished tool of clinicians, many of whom believe it can penetrate into the deepest and darkest recesses of the unconscious. In the 1940s and 1950s, psychologists Lawrence Frank and Bruno Klopfer referred to the Rorschach as a “psychological X-ray,” and over half a century later many clinicians still regard it as an essential means of unearthing psychological conflicts (Wood, Nezworski, Lilienfeld, & Garb, 2003). One estimate places the number of Rorschach tests administered per year at 6 million worldwide (Sutherland, 1992). A 1995 survey of members of the American Psychological Association revealed that 82% of clinical psychologists use the Rorschach at least occasionally in their practice and that 43% use it frequently or all of the time (Watkins, Campbell, Nieberding, & Hallmark, 1995). In 1998, the American Psychological Association’s Board of Professional Affairs hailed the Rorschach as “perhaps the single most powerful psychometric instrument ever envisioned” (American Psychological Association Board of Professional Affairs, 1998, p. 392). Perhaps not surprisingly, 74% of undergraduates in one survey said that the Rorschach and closely related tests are helpful in psychiatric diagnosis (Lenz, Ek, & Mills, 2009).

Figure 8.1 A blot similar to one of the 10 Rorschach inkblots (the test’s publisher strongly discourages reproduction of the actual blots). According to Rorschach proponents, different kinds of responses are indicative of oppositionality, obsessiveness, and a host of other personality traits.

Source: Anastasi & Urbina (1997), p. 413.


The Rorschach is merely one of hundreds of projective techniques, most of which consist of ambiguous stimuli that clinicians ask respondents to interpret. Psychologists refer to these methods as “projective” because they assume that respondents project key aspects of their personalities onto ambiguous stimuli in the process of making sense of them. Using a kind of psychological reverse-engineering, test interpreters work backwards to try to infer respondents’ personality traits. One of the first such techniques was the Cloud Picture Test developed around the turn of the century by German psychologist Wilhelm Stern, which asks respondents to report what they see in cloud-like images (Aiken, 1996; Lilienfeld, 1999). There’s even a variant of the Rorschach test for blind individuals, the Cypress Knee Projective Technique, which asks respondents to place their hands around the knotty outgrowths of cypress tree roots and describe their mental imagery (Kerman, 1959).

Researchers subjected the Rorschach to a steady drumbeat of scientific criticism from the 1940s through the 1970s. They argued that the Rorschach was subjective in its scoring and interpretation and that almost none of its supposed personality correlates held up in careful research. One author, educational psychologist Arthur Jensen, commented in 1965 that “the rate of scientific progress in clinical psychology might well be measured by the speed and thoroughness with which it gets over the Rorschach” (Jensen, 1965, p. 509).

The modern version of the Rorschach, the “Comprehensive System” (CS) developed by psychologist John Exner in 1974, was a heroic effort to rescue the Rorschach from a barrage of scientific attacks. The CS provides detailed rules for scoring and interpretation and yields over 100 indices that purportedly measure almost every imaginable feature of personality (Exner, 1974). For example, responses (see Figure 8.1 for this and the examples to follow) involving reflections (“I see a poodle looking at itself in the mirror”) supposedly reflect narcissism. After all, the word narcissism derives from the Greek mythical character Narcissus, who fell in love with his reflection in the water. Responses involving unusual details (“That tiny speck of ink on the right part of the blot looks like a piece of dust”) ostensibly indicate obsessiveness. And responses to the white space nestled within the blots rather than to the blots themselves (“That white area over there looks like a hand broom”) ostensibly indicate rebelliousness toward authority.

Yet controlled research offers virtually no support for these assertions. James Wood and his colleagues found that the overwhelming majority of Rorschach scores are essentially unrelated to personality traits. The lone possible exception is dependency (Bornstein, 1996), which a few researchers have found to be associated with a higher than expected number of responses involving mouths and food (orthodox Freudians, who believe that excessive gratification during the oral stage of infancy produces dependency, would surely delight in this finding). Nor is the Rorschach especially useful for diagnostic purposes: Rorschach scores are negligibly related to clinical depression, anxiety disorders, or antisocial personality disorder, a condition marked by a history of criminal and irresponsible behaviors (Wood, Lilienfeld, Garb, & Nezworski, 2000).

Nevertheless, the Rorschach does a serviceable job of detecting conditions marked by thinking disturbances, such as schizophrenia and bipolar disorder (once known as manic depression) (Lilienfeld, Wood, & Garb, 2001). This fact isn’t terribly surprising, because people who produce bizarre responses to inkblots (for example, “It looks like a giraffe’s head exploding inside of a flying saucer” in response to the card in Figure 8.1) are more likely than other people to suffer from disordered thoughts. As psychologist Robyn Dawes (1994) noted, the use of the Rorschach for detecting thought disorder is actually non-projective in that it relies on the extent to which respondents don’t perceive certain shapes in inkblots.

Moreover, the evidence that the Rorschach contributes to the detection of psychological characteristics above and beyond simpler methods—what psychologists call “incremental validity”—is weak. In fact, a few studies demonstrate that when clinicians who already have access to questionnaire or life history information examine Rorschach data, their predictive accuracy decreases. This is probably because they place excess weight on information derived from the Rorschach, which tends to be less valid than the data derived from the other sources (Garb, 1998; Lilienfeld et al., 2001, 2006).

Why is the Rorschach still enormously popular despite the meager evidence for its clinical utility? The phenomenon of illusory correlation (see Introduction, p. 12) probably contributes to this test’s mystique. When researchers have asked participants to peruse Rorschach protocols, these participants consistently perceive certain Rorschach indicators as linked to certain personality traits even when the pairing of Rorschach indicators with personality traits in the protocols is entirely random (Chapman & Chapman, 1969). In many cases, these participants are relying excessively on the representativeness heuristic (see Introduction, p. 15), erroneously leading them to conclude that certain Rorschach indicators are valid for detecting personality characteristics. For example, they may assume incorrectly that inkblot responses that contain morbid content, such as skeletons or dead bodies, are strongly associated with certain traits, such as depression, with which they share a superficial resemblance. Studies demonstrate that clinicians are vulnerable to the same mirages (Chapman & Chapman, 1969).

Second, studies show that the CS tends to make normal individuals appear disturbed. A 1999 study by Thomas Shaffer and his colleagues revealed that a sample of normal individuals comprising college students and blood bank volunteers obtained grossly pathological scores on the Rorschach. For example, 1 in 6 scored in the pathological range on the Rorschach Schizophrenia Index, purportedly a measure of schizophrenia (Shaffer, Erdberg, & Haroian, 1999). Paradoxically, the Rorschach’s tendency to overpathologize individuals can mislead clinicians into concluding that it possesses remarkably sensitive diagnostic powers. Not infrequently, a clinician will find that a respondent produces normal results on questionnaires, but abnormal results on the Rorschach. The clinician may conclude from this discrepancy that the Rorschach is a “deep” test that uncovers hidden psychological disturbances that more “superficial” tests miss. More likely, the clinician is merely being fooled into perceiving psychopathology in its absence (Wood, Nezworski, Garb, & Lilienfeld, 2001).

So, returning to the question posed at the outset of this piece: To paraphrase Sigmund Freud, sometimes an inkblot is just an inkblot.

Myth #36 Our Handwriting Reveals Our Personality Traits

“Cross your T’s and dot your I’s” is the universal refrain of teachers charged with the task of transforming their students’ messy scribbles into legible penmanship. For many children, learning to write their name in cursive is a significant milestone. Yet pupils’ handwriting somehow ends up being as distinctive as their fingerprints or earlobes. Therefore, it seems plausible that handwriting analysis—known as graphology—could help to reveal our psychological make-up.

Graphology is merely one branch of the group of pseudoscientific practices known as “character reading.” At various times, character readers have assumed that they could acquire a window on our psychological make-up by interpreting the features of the face (physiognomy), creases on the hand (palmistry), bumps on the head (phrenology), features of the belly button (omphalomancy), patterns of forehead wrinkles (metoposcopy), patterns on tea leaves (tasseography), directions of light rays reflected from fingernails (onychomancy), or our favorite, the appearance of barley cakes (critomancy) (Carroll, 2003).

Graphologists have attracted legions of followers and persuaded much of the public that their craft is grounded in science. Until it went bankrupt recently, the Chicago-based International Graphoanalysis Society boasted a membership of about 10,000. Hundreds of graphologists have found gainful employment in Southern California, and graphology has even found a home in public schools. For example, in Vancouver, Canada, a graphologist claimed to have secretly identified actual and potential sexual molesters amidst the local teaching ranks. Many corporations, especially in Israel and some European countries, consult graphologists on personnel matters. Some financial institutions hire graphologists to determine whether applicants will prove to be trustworthy borrowers (Beyerstein & Beyerstein, 1992).

Graphology’s modern history begins with the 17th-century Italian physician Camillo Baldi. Baldi inspired a group of Catholic clergy, among them the Abbé Jean-Hippolyte Michon, who coined the term “graphology” in 1875. Michon is the father of the “analytic” approach, which ascribes personality traits to writers based on specific writing “signs,” such as the shapes or slants of letters. Michon’s student, Crepieux-Jamin, broke with his mentor to found the “holistic” school. Rather than attending to individual elements of letters and lines, holists advocate an impressionistic approach in which the analyst intuits an overall “feel” for individuals’ personalities on the basis of their writing. Although most modern graphologists embrace the analytic approach, the various schools of graphology can’t even agree on which signs are indicators of which traits. For instance, one well-known graphologist believes that a tendency to cross one’s ts with whip-like lines indicates a sadistic personality, whereas another equally prominent analyst says that this style merely indicates a practical joker (there’s no scientific evidence that either graphologist is right).

Proponents of the analytic approach claim to have identified hundreds of specific handwriting indicators of personality traits. Among them are little hooks on the letter S, which some graphologists claim reveal a willingness to snag others’ belongings. Wide spacing between words supposedly denotes a tendency toward isolation. Writers whose sentences drift upward are optimists, whereas those whose lines sag downward are pessimists. Those who write with letters displaying different slants are unpredictable. Writers with large capital Is have big egos. A 2008 article in the Los Angeles Times claimed that then-presidential candidate John McCain’s tendency to sign his first name with letters slanted in opposing directions offered evidence of his “maverick” personality, whereas his opponent Barack Obama’s tendency to shape his letters smoothly offered evidence of his flexibility (Iniquez, 2008). Perhaps our favorite is the claim that large, bulbous loops on gs, ys, and similar letters—ones that dangle below the lines—reveal a preoccupation with sex. Perhaps they do, although this preoccupation may lie more in the mind of the graphologist than of the writer (Beyerstein, 1992).

Some even embrace the bizarre claims of “graphotherapeutics,” a New Age psychotherapy that purports to eliminate individuals’ undesirable personality traits by removing problematic graphological signs from their writing. So, if you’re a hopeless pessimist, you need do nothing more than start writing your sentences with an upward slant to change your attitude toward life.

Graphologists offer a variety of rationales for their practice; we’ll examine the five most common of them here (Beyerstein & Beyerstein, 1992).

Writing is a form of expressive movement, so it should reflect our personalities. Although research links a few global aspects of temperament to certain gestures, the kinds of characteristics loosely related to expressive body movements are far more general than the narrow traits graphologists claim to infer from writing. A general tendency to be irritable or domineering may be slightly correlated with body language, but the relationships are much too weak to allow us to draw conclusions about people’s personalities.

Handwriting is brainwriting. True enough. Studies have shown that people’s “footwriting” is similar to their handwriting (if you’re skeptical, try signing your name on a piece of paper with a pencil stuck between the big toe and second toe of your preferred foot), suggesting that writing style is more a function of our brains than our limbs. Nevertheless, the fact that writing or, for that matter, sneezing and vomiting are controlled by the brain doesn’t imply they’re correlated with anything else the brain controls, such as personality traits.

Writing is individualized and personality is unique, so each must reflect the other. The fact that two attributes are idiosyncratic isn’t grounds to conclude that they bear a specific relationship to one another. Faces are sufficiently different to serve as personal identification on a driver’s license, but they say nothing about one’s driving ability.

The police and courts use graphology, so it must be valid. This claim illustrates what logicians term the “bandwagon fallacy”: If a belief is widespread, it must be correct. Of course, many convictions held by an overwhelming majority of people at some point in time, such as the belief that the world is flat, have turned out to be just as flatly wrong. Moreover, much of graphology’s undeserved positive reputation stems from the confusion of graphologists with questioned document examiners (QDEs). A QDE is a scientifically trained investigator who establishes for historians, collectors, or the courts the origins and authenticity of handwritten documents. QDEs pass judgment only on the probability that a given individual wrote the document in question, not on that individual’s personality.

Personnel managers swear by graphologists’ usefulness in selecting employees. Some do, but most don’t. Furthermore, there are several reasons why managers may be falsely convinced of graphology’s utility. First, graphologists often attend to many non-graphological clues that could point to the best candidate, even if they do so unintentionally. For instance, the contents of handwritten application letters are chock full of biographical information, some of which (like previous job history or a criminal record) can predict job performance. Second, for reasons of expense, employers rarely submit the scripts of all applicants to graphologists. Graphologists usually see the scripts of only short-listed applicants, those already selected using valid hiring criteria. Most people in this pool are already qualified for the job, and there’s rarely an opportunity to determine whether the rejected applicants would have done as well or better.

Scientific tests of graphologists’ ability to recognize job-relevant aptitudes are virtually unanimous in their negative verdict. Well-controlled tests ask all participants to write the same sentences, and ask graphologists to offer personality judgments or behavioral predictions based on this writing. By asking all participants to transcribe the same sentences, researchers eliminate differences in content that could provide indirect cues to personality. In a thorough review, Richard Klimoski (1992) found that graphologists did no better than chance at predicting job performance. Geoffrey Dean (1992) conducted by far the most complete review of scientific tests of graphology. After performing a meta-analysis (see p. 32) of over 200 studies, Dean found a clear failure on the part of graphologists to detect personality traits or forecast work performance.

Why are so many people convinced that graphology has merit? First, graphology appears compelling because it capitalizes on the representativeness heuristic (see Introduction, p. 15). We’ve already encountered claims that individuals whose sentences slant upward tend to be “uplifting” or optimistic. Another striking example is the assertion by some graphologists that people who cross their ts with the bar considerably above the stem are prone to daydreaming. Daydreamers, after all, seem to have their heads in the clouds.

Second, the assertions of graphologists can seem remarkably specific even when they’re hopelessly vague. The mistaken sense that something profoundly personal has been revealed by a character reader stems from what Paul Meehl (1956) called the “P. T. Barnum Effect,” after the cynical circus entrepreneur who joked that he “liked to give a little something to everybody” in his acts. Researchers have discovered that most of us fall prey to this effect, which is the tendency of individuals to find statements that apply to just about everyone to be specific to them (Dickson & Kelly, 1985; Furnham & Schofield, 1987). The Barnum Effect works well because we’re adept at finding meaning even in relatively meaningless information. In one study, participants rated descriptions of someone else generated by a certified graphologist as highly applicable to themselves, and just as applicable as Barnum statements crafted to apply to everyone.

Will future controlled research prove any kinder to graphology? Of course, it’s always possible that some positive evidence will surface one day. But if the dismal scientific track record of graphology is any indication, we hope we can be forgiven for suggesting that the handwriting appears to be on the wall.

Chapter 8: Other Myths to Explore

Fiction: Astrology predicts people’s personality traits at better than chance levels.
Fact: Astrology is useless for predicting people’s personality traits.

Fiction: People’s drawings can tell us a great deal about their personalities.
Fact: Human figure drawing tests have low validity for detecting almost all normal and abnormal personality traits.

Fiction: Positive self-affirmations (“I like myself”) are a good way of boosting self-esteem.
Fact: Research suggests that positive affirmations are not especially helpful, particularly for people with low self-esteem.

Fiction: Most people who were physically abused in childhood go on to become abusers themselves (the “cycle of violence”).
Fact: Most people who were physically abused as children don’t become abusers as adults.

Fiction: There’s strong evidence for the concept of “national character.”
Fact: The evidence for “national character” stereotypes (such as the beliefs that the French are arrogant or the Germans rigid) is mixed and inconclusive.

Fiction: Obese people are more cheerful (“jollier”) than non-obese people.
Fact: There’s little association between obesity and cheerfulness; in fact, most research suggests a slight positive association between obesity and depression.

Fiction: Open-ended interviews are the best means of assessing personality.
Fact: Open-ended (“unstructured”) interviews possess low or at best moderate validity for assessing personality, and tend to be less valid than structured interviews.

Fiction: A clinician’s number of years of experience using a personality test predicts his or her accuracy in clinical judgments from this test.
Fact: For most personality tests, number of years of experience in using the test is uncorrelated with accuracy.

Fiction: More information is always preferable to less information when making diagnostic judgments.
Fact: In some cases, more assessment information leads to less accurate diagnostic judgments, because invalid assessment information can dilute the influence of valid information.

Fiction: Anatomically correct dolls are a good way of determining whether a child was sexually abused.
Fact: Anatomically correct dolls misidentify large numbers of non-abused children as abused, because many non-abused children engage in sexualized doll play.

Sources and Suggested Readings

To explore these and other myths about personality, see Dean (1987); Furnham (1996); Garb (1998); Hines (2003); Jansen, Havermans, Nederkoorn, and Roefs (2008); Lilienfeld, Wood, and Garb (2000); McCrae and Terracciano (2006); Ruscio (2006).