6

I’VE GOT A FEELING

Myths about Emotion and Motivation

Myth #23 The Polygraph (“Lie Detector”) Test Is an Accurate Means of Detecting Dishonesty

Have you ever told a lie?

If you answered “No,” the odds are high that you’re lying. College students admit to lying in about one in every three social interactions—that’s about twice a day on average—and people in the community admit to lying in about one in every five interactions—that’s about once a day on average (DePaulo, Kashy, Kirkendol, Wyer, & Epstein, 1996).

Attempts to deceive others in everyday life are as difficult to detect as they are common (Ekman, 2001; Vrij & Mann, 2007). We might assume that, as frequent as lying is, we’d be good at identifying it. If so, we’d be wrong. Contrary to what’s depicted in the television show Lie to Me, starring Tim Roth as deception expert Dr. Cal Lightman, a large body of research reveals surprisingly few valid cues of deception (DePaulo et al., 2003). Moreover, most people, including those whose professions call for spotting deception, like judges and police officers, often do no better than chance at detecting lies (Ekman & O’Sullivan, 1991; Ekman, O’Sullivan, & Frank, 1999). Indeed, most of us are dead wrong about the bodily cues that give liars away. For example, even though about 70% of people believe that shifty eyes are good indicators of lying, research shows otherwise (Vrij, 2008). To the contrary, there’s evidence that psychopaths, who are pathological liars, are especially likely to stare others in the face when telling blatant fibs (Rimé, Bouvy, Leborgne, & Rouillon, 1978).

If we can’t determine who’s lying or telling the truth by watching each other, what else can we do? History reveals a veritable parade of dubious methods to detect suspected liars, such as the “rice test” of the ancient Hindus (Lykken, 1998). Here’s the idea: If deception leads to fear, and fear inhibits the secretion of saliva, then an accused individual shouldn’t be able to spit out rice after chewing it because it will stick to the gums. In the 16th and 17th centuries, many accused witches were subjected to the “ordeal of water,” also called the “dunking test.” Accusers submerged the accused witch in a cold stream. If she floated to the surface, there was both good and bad news: She survived, but was deemed guilty—presumably because witches are supernaturally light or because water is so pure a substance as to repel a witch’s evil nature—and therefore sentenced to death. In contrast, if she didn’t float to the surface, there was again both good and bad news: She was deemed innocent but that was scant consolation to her because she’d already drowned.

Beginning in the 20th century, some enterprising researchers began to tinker with physiological measures to distinguish truth from lies. In the 1920s, psychologist William Moulton Marston invented a device —the first polygraph or so-called “lie detector” test—that measured systolic blood pressure (that’s the number on the top of our blood pressure reading) to detect deception. Under the pen name Charles Moulton, he later created one of the first female cartoon superheroes, Wonder Woman, who could compel villains to tell the truth by ensnaring them in her magic lasso. For Marston, the polygraph was the equivalent of Wonder Woman’s lasso: an infallible detector of the truth (Fienberg & Stern, 2005; Lykken, 1998). Beyond the pages of comic books, Marston’s blood pressure device spurred the development of modern polygraph testing.

A polygraph machine provides a continuous record of physiological activity—such as skin conductance, blood pressure, and respiration— by plotting it on a chart. Contrary to the impression conveyed in such movies as Meet the Parents (2000) or such television shows as The Moment of Truth, the machine isn’t a quick fix for telling us whether someone is lying, although the public’s desire for such a fix almost surely contributes to the polygraph’s enduring popularity (Figure 6.1). Instead, the examiner who asks questions typically interprets the polygraph chart and arrives at a judgment of whether the person is lying. Physiological activity may offer helpful clues to lying because it’s associated with how anxious the examinee is during the test. For example, being nervous causes most of us to sweat, which increases how well our skin conducts electricity. Yet interpreting a polygraph chart is notoriously difficult for several reasons.

Figure 6.1 In the 2000 comedy, Meet the Parents, former CIA agent Jack Byrnes (portrayed by Robert De Niro) administers a polygraph test to Greg Focker (portrayed by Ben Stiller) in an attempt to determine whether Focker would make a suitable son-in-law. Most portrayals of the lie detector in films and television programs erroneously depict the technique as essentially infallible.

Source: Photos 12/Alamy.


For starters, there are large differences among people in their levels of physiological activity (Ekman, 2001; Lykken, 1998). An honest examinee who tends to sweat a lot might mistakenly appear deceptive, whereas a deceptive examinee who tends to sweat very little might mistakenly appear truthful. This problem underscores the need for a baseline measure of physiological activity for each examinee. For investigating specific crimes, the most popular lie detector format is the Comparison Question Test (CQT; Raskin & Honts, 2002). This version of the polygraph test includes relevant questions concerning the alleged misdeed (“Did you steal $200 from your employer?”) and comparison questions that try to force people to tell a lie that’s irrelevant to the alleged misdeed (“Have you ever lied to get out of trouble?”). Almost all of us have fibbed to get out of trouble at least once, but because we wouldn’t want to admit this awkward little fact during a polygraph test, we’d presumably need to lie about it. The rationale of the CQT is that the comparison questions provide a meaningful baseline of the subject’s physiological responses to known lies, against which his or her responses to the relevant questions can be interpreted.
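
To make that scoring logic concrete, here is a schematic sketch in Python. It is purely illustrative: real examiners score entire charts by hand or with proprietary software, and the readings, threshold, and function name below are invented for the example, not taken from any actual polygraph system.

```python
# Schematic illustration of the CQT rationale described above: the examinee's
# arousal on questions about the misdeed is compared against arousal on
# comparison questions (the presumed "known lies"). All numbers are invented.

def naive_cqt_judgment(relevant_responses, comparison_responses):
    """Flag 'deceptive' if average arousal to relevant questions exceeds
    average arousal to comparison questions. Note that this logic detects
    only relative arousal, not lying itself."""
    relevant_avg = sum(relevant_responses) / len(relevant_responses)
    comparison_avg = sum(comparison_responses) / len(comparison_responses)
    return "deceptive" if relevant_avg > comparison_avg else "truthful"

# Hypothetical skin-conductance readings (arbitrary units):
print(naive_cqt_judgment([8.1, 7.9, 8.4], [6.2, 6.8, 6.5]))  # -> deceptive
print(naive_cqt_judgment([5.0, 5.2, 4.9], [6.1, 6.4, 6.0]))  # -> truthful
```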

But this rationale is dubious, because comparison questions don’t control for a host of crucial factors. Moreover, as David Lykken (1998) noted, there’s no evidence for a Pinocchio response: an emotional or physiological reaction uniquely indicative of deception (Cross & Saxe, 2001; Saxe, Dougherty, & Cross, 1985; Vrij, 2008). If a polygraph chart shows more physiological activity when the examinee responded to relevant than to comparison questions, at most this difference tells us that the examinee was more nervous at those moments.

But here’s the rub. This difference in anxiety could be due to actual guilt, indignation or shock at being unjustly accused, the realization that one’s responses to relevant—but not comparison—questions may lead to one’s being fired or imprisoned, or even the experience of unpleasant thoughts associated with the alleged misdeed (Ruscio, 2005). Not surprisingly, the CQT and related versions of the polygraph test suffer from a high rate of “false positives”—innocent people whom the test deems guilty (Iacono, 2008). As a consequence, the “lie detector” test is misnamed: It’s an arousal detector, not a lie detector (Saxe et al., 1985; Vrij & Mann, 2007). This misleading name probably contributes to the public’s belief in its accuracy. Conversely, some individuals who are guilty may not experience anxiety when telling lies, even to authorities. For example, psychopaths are notoriously immune to fear and may be able to “beat” the test in high pressure situations, although the research evidence for this possibility is mixed (Patrick & Iacono, 1989).

Further complicating matters is the fact that polygraph examiners are often prone to confirmation bias (Nickerson, 1998), the tendency to see what they expect to see. Examiners have access to outside information regarding the alleged misdeed and have often formed an opinion about examinees’ guilt or innocence even before hooking them up. Gershon Ben-Shakhar (1991) noted that an examiner’s hypothesis can influence the polygraph testing process at several stages: constructing the questions, asking these questions, scoring the chart, and interpreting the results. To illustrate the role of confirmation bias, he described an exposé aired by CBS News magazine 60 Minutes in 1986. The 60 Minutes producers hired three polygraph firms to determine who stole a camera from a photography magazine’s office. Even though there was no actual theft, each polygraph examiner expressed supreme confidence that he’d identified a different employee who’d been subtly suggested as a suspect prior to testing.

Another reason why most polygraph examiners are convinced of the machine’s accuracy probably stems from the undeniable fact that the polygraph is good for one thing: eliciting confessions, especially when people fail it (Lykken, 1998; Ruscio, 2005). As a consequence, polygraphers are selectively exposed to people who fail the polygraph and later admit they lied (although we’ll learn in Myth #46 that some of these confessions may be false). What’s more, these examiners often assume that people who flunk the test and don’t admit they committed the crime must be lying. So the test seems virtually infallible: If the person fails and admits he lied, the test “worked,” and if the person fails and doesn’t admit he lied, the test also “worked.” Of course, if the person passes, he’ll essentially always agree that he was telling the truth, so the test again “worked.” This “heads I win, tails you lose” reasoning renders the rationale underlying the polygraph test difficult or impossible to falsify. As philosopher of science Sir Karl Popper (1963) noted, unfalsifiable claims aren’t scientific.

In a comprehensive review, the National Research Council (2003) criticized the CQT’s rationale and the studies claiming to support its effectiveness. Most were laboratory investigations in which a relatively small number of college students performed simulated (“mock”) crimes, like stealing a wallet, rather than field (real-world) studies with large numbers of actual criminal suspects. In the few field studies, examiners’ judgments were usually contaminated by outside information (like newspaper reports about who committed the crime), rendering it impossible to distinguish the influence of case facts from polygraph test results. Moreover, participants usually weren’t trained in the use of countermeasures, that is, strategies designed to “beat” the polygraph test. To use a countermeasure, one deliberately increases physiological arousal at just the right times during the test, such as by biting one’s tongue or performing difficult mental arithmetic (like subtracting repeatedly from 1,000 by 17s) during the comparison questions. Information on countermeasures is widely available in popular sources, including the Internet, and would almost surely reduce the lie detector’s real-world effectiveness.

Given these limitations, the National Research Council (2003) was reluctant to estimate the CQT’s accuracy. David Lykken (1998) characterized an accuracy of 85% for guilty individuals and 60% for innocent individuals as charitable. That 40% of honest examinees appear deceptive provides exceedingly poor protection for innocent suspects, and this problem is compounded when polygraphers administer tests to many suspects. Let’s suppose that intelligence information is leaked, evidence suggests it came from 1 of 100 employees who had access to this information, and all of them undergo polygraph testing. Using Lykken’s estimates, there’d be an 85% chance of identifying the guilty individual, but about 40 other employees would be falsely accused! These numbers are worrisome given that the Pentagon has recently beefed up its efforts to screen all 5,700 of its current and future employees every year, partly in an attempt to minimize the risk of infiltration by terrorists (Associated Press, 2008).
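
For readers who want to see where those numbers come from, here is a minimal back-of-the-envelope sketch in Python. It simply multiplies Lykken’s rough accuracy estimates by the counts in the hypothetical 100-employee scenario above; the figures are illustrative, not data from any actual screening program.

```python
# Back-of-the-envelope arithmetic for the screening scenario described above,
# using Lykken's (1998) charitable estimates: 85% of guilty examinees fail the
# test, while only 60% of innocent examinees pass it (so 40% of them fail).

hit_rate = 0.85          # probability a guilty examinee fails (is flagged)
false_alarm_rate = 0.40  # probability an innocent examinee fails (is flagged)

n_employees = 100
n_guilty = 1
n_innocent = n_employees - n_guilty    # 99 innocent employees

expected_guilty_caught = n_guilty * hit_rate               # ~0.85
expected_innocent_flagged = n_innocent * false_alarm_rate  # ~39.6

print(f"Chance the guilty employee is flagged: {hit_rate:.0%}")
print(f"Expected innocent employees flagged:   {expected_innocent_flagged:.0f}")
```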

Mythbusting: A Closer Look

Is Truth Serum a Lie Detector?

We’ve seen that the polygraph test is far from a perfect tool for sorting truths from lies. But could truth serum be better? As early as 1923, an article in a medical journal referred to truth serum as a “lie detector” (Herzog, 1923). In a number of films, including Jumping Jack Flash (1986), True Lies (1994), Meet the Parents (2000), and Johnny English (2003), characters who’ve been hiding something suddenly begin uttering the truth, the whole truth, and nothing but the truth after taking a swig of truth serum. For decades, governmental intelligence agencies, such as the CIA and the former Soviet KGB, supposedly used truth serum to interrogate suspected spies. Even as recently as 2008, Indian police reportedly administered truth serum to Ajmal Amir Kasab, the lone surviving terrorist in the devastating attacks in Mumbai, India (Blakely, 2008). Since the 1920s, psychotherapists have occasionally used truth serum in an effort to excavate buried memories of trauma (Winter, 2005). For example, the 1993 sexual abuse allegations against pop singer Michael Jackson emerged only after an anesthesiologist administered truth serum to 13-year-old Jordan Chandler. Prior to receiving truth serum, Chandler denied that Jackson had sexually abused him (Taraborrelli, 2004).

Yet like the lie detector, “truth serum” is misnamed. Most truth serums are barbiturates, like sodium amytal or sodium pentothal. Because the physiological and psychological effects of barbiturates are largely similar to those of alcohol (Suzdak, Schwartz, Skolnick, & Paul, 1986), the effects of ingesting a truth serum aren’t all that different from those of having a few stiff drinks. Like alcohol, truth serums make us sleepy and less concerned about outward appearances. And like alcohol, truth serums don’t unveil the truth; they merely lower our inhibitions, rendering us more likely to report both accurate and inaccurate information (Dysken, Kooser, Haraszti, & Davis, 1979; Piper, 1993; Stocks, 1998). As a consequence, truth serums greatly increase the risk of erroneous memories and false confessions. Moreover, there’s good evidence that people can lie under the influence of truth serum (Piper, 1993). So Hollywood aside, truth serums aren’t any more likely than polygraphs to detect fibs.

Still, polygraph tests remain a popular icon in the public’s imagination. In one survey, 67% of Americans in the general public rated the polygraph as either “reliable” or “useful” for detecting lies, although most didn’t regard it as infallible (Myers, Latter, & Abdollahi-Arena, 2006). In Annette Taylor and Patricia Kowalski’s (2003) survey of introductory psychology students, 45% believed that the polygraph “can accurately identify attempts to deceive” (p. 6). Moreover, polygraph testing has been featured prominently in more than 30 motion pictures and television shows, typically with no hint of its shortcomings. By the 1980s, an estimated 2 million polygraph tests were administered in the United States alone each year (Lykken, 1998).

Due to increasing recognition of their limited validity, polygraph tests are seldom admissible in courts. In addition, the Employee Polygraph Protection Act of 1988 prohibited most private employers from administering lie detector tests. Yet in a bizarre irony, the federal government exempted itself, allowing the polygraph test to be administered in law enforcement, military, and security agencies. So a polygraph test isn’t deemed trustworthy enough for screening would-be convenience store clerks, yet officials use it to screen employees at the FBI and CIA.

Were he still alive, William Moulton Marston might be disappointed to learn that researchers have yet to develop the psychological equivalent of Wonder Woman’s magic lasso. For at least the foreseeable future, the promise of a perfect lie detector remains the stuff of science fiction and comic book fantasy.

Myth #24 Happiness Is Determined Mostly by Our External Circumstances

As Jennifer Michael Hecht (2007) observed in her book, The Happiness Myth, virtually every generation has had its share of sure-fire prescriptions for how to attain ultimate happiness. From the vantage point of the early 21st century, some of these fads may strike us as positively bizarre. For example, throughout much of history, people have sought out a seemingly endless array of purported aphrodisiacs, such as the rhinoceros horn, Spanish fly, chili peppers, chocolate, oysters, or more recently, green M&M candies, to enhance their sex lives and drives (Eysenck, 1990). Yet research suggests that none of these supposed libido-lifters does much of anything beyond exerting a placebo effect (Nordenberg, 1996). In late 19th century America, “Fletcherizing” was all the rage: according to champions of this dietary craze, chewing each piece of food precisely 32 times (that’s one chew for each tooth) would bring us happiness and health (Hecht, 2007). Some of today’s happiness fads may well strike early 22nd century Americans as equally odd. How will future generations perceive those of us who spend thousands of our hard-earned dollars on aromatherapy, feng shui (the Chinese practice of arranging objects in our rooms to achieve contentment), motivational speakers, or mood-enhancing crystals? One has to wonder.

All of these fads reflect an underlying tenet central to much of popular psychology: Our happiness is determined mostly by our external circumstances. To achieve happiness, the story line goes, we must find the right “formula” for happiness, one that exists primarily outside of us. More often than not, this formula consists of lots of money, a gorgeous house, a great job, and plenty of pleasurable events in our lives. Indeed, as far back as the 17th and 18th centuries, British philosophers John Locke and Jeremy Bentham maintained that people’s happiness is a direct function of the number of positive life events they experience (Eysenck, 1990). Today, one has to go only as far as Amazon.com to happen upon a treasure trove of advice books that instruct us how to achieve happiness through wealth, such as Laura Rowley’s (2005) Money and Happiness: A Guide to Living the Good Life, Eric Tyson’s (2006) Mind over Money: Your Path to Wealth and Happiness, and M. P. Dunleavy’s (2007) Money Can Buy Happiness: How to Spend to Get the Life You Want. As American social critic Eric Hoffer commented wryly, “You can never get enough of what you don’t need to make you happy.”

Yet over 200 years ago, America’s “first First Lady,” Martha Washington, offered a view sharply at odds with much of modern popular culture: “The greater part of our happiness or misery depends on our dispositions, not our circumstances.” Indeed, in recent decades, psychologists have begun to question whether the “truism” that our happiness is mostly a function of what happens to us is really true. The late psychologist Albert Ellis (1977) insisted that one of the most prevalent—and pernicious—of all irrational ideas is the notion that our happiness and unhappiness derive mostly from our external circumstances rather than our interpretations of them. Ellis was fond of quoting Shakespeare’s Hamlet, who said that “There is nothing either good or bad, but thinking makes it so.” Psychologist Michael Eysenck (1990) even described the #1 myth about happiness as the notion that “Your level of happiness depends simply on the number and nature of the pleasurable events which happen to you” (p. 120).

Still, many of us are deeply resistant to the notion that our happiness is affected more by our personality traits and attitudes than our life experiences, and we’re especially resistant to the notion that happiness is influenced substantially by our genetic make-up. In one survey, 233 high-school and college students gave a low rating (2.58 on a 7-point scale) to an item evaluating the perceived importance of genes to happiness (Furnham & Cheng, 2000).

So was Martha Washington right that our happiness “depends on our dispositions, not our circumstances”? Let’s consider two provocative findings. First, Ed Diener and Martin Seligman screened over 200 undergraduates for their levels of happiness, and compared the upper 10% (the “extremely happy”) with the middle and bottom 10%. Extremely happy students experienced no greater number of objectively positive life events, like doing well on exams or hot dates, than did the other two groups (Diener & Seligman, 2002). Second, Nobel Prize-winning psychologist Daniel Kahneman and his colleagues tracked the moods and activities of 909 employed women by asking them to record in detail their previous day’s experiences (Kahneman, Krueger, Schkade, Schwarz, & Stone, 2004). They found that most major life circumstances, including women’s household income and various features of their jobs (such as whether these jobs included excellent benefits), were correlated only minimally with their moment-by-moment happiness. In contrast, women’s sleep quality and proneness toward depression were good predictors of their happiness.

Other research has offered support for what Philip Brickman and Donald Campbell (1971) called the hedonic treadmill. Just as we quickly adjust our walking or running speed to match a treadmill’s speed (if we don’t, we’ll end up face first on the ground), our moods adjust quickly to most life circumstances. The hedonic treadmill hypothesis dovetails with research demonstrating that ratings of happiness are much more similar within pairs of identical twins, who are genetically identical, than within pairs of fraternal twins, who share only 50% of their genes on average (Lykken & Tellegen, 1996). This finding points to a substantial genetic contribution to happiness and raises the possibility that we’re each born with a distinctive happiness “set point,” a genetically influenced baseline level of happiness from which we bounce up and down in response to short-term life events, but to which we soon return once we’ve adapted to these events (Lykken, 2000).

More direct evidence for the hedonic treadmill comes from studies of people who’ve experienced either (1) extremely positive or (2) extremely negative, even tragic, life events. One might expect the first group of people to be much happier than the second. They are—but often for only a surprisingly brief period of time (Gilbert, 2006). For example, even though the happiness of big lottery winners reaches the stratosphere immediately after hitting the jackpot, their happiness pretty much falls back down to earth—and to the levels of most everybody else—about 2 months later (Brickman, Coates, & Janoff-Bulman, 1978). Most paraplegics—people paralyzed from the waist down—return largely (although not entirely) to their baseline levels of happiness within a few months of their accidents (Brickman et al., 1978; Silver, 1982). And although young professors who’ve been denied tenure (meaning they’ve lost their jobs) are understandably crushed after receiving the news, within a few years they’re just about as happy as young professors who received tenure (Gilbert, Pinel, Wilson, Blumberg, & Wheatley, 1998). Most of us adapt fairly quickly to our life circumstances, both good and bad.

Research also calls into question the widespread belief that money buys us happiness (Kahneman, Krueger, Schkade, Schwarz, & Stone, 2006; Myers & Diener, 1996). As an illustration of the striking disconnect between money and happiness, the average life satisfaction of Forbes magazine’s 400 richest Americans was 5.8 on a 7-point scale (Diener, Horowitz, & Emmons, 1985). Yet the average life satisfaction of the Pennsylvania Amish is also 5.8 (Diener & Seligman, 2004), despite a difference in wealth of billions of dollars. It’s true that to be happy we need to have enough money to be comfortable in life. Below about $50,000, household income is moderately related to happiness, probably because it’s hard to be happy when we need to worry about putting food on the table or paying next month’s rent. But above $50,000, the relation between money and happiness essentially vanishes (Helliwell & Putnam, 2004; Myers, 2000). Yet this fact didn’t stop major league baseball players, whose average yearly salary was $1.2 million (not including commercial endorsements), from going on strike in 1994 for higher salaries.

Still, Martha Washington may not have been entirely right. Certain momentous life events can affect our long-term happiness for better or worse, although less powerfully than most of us believe. For example, divorce, widowhood, and being laid off from work can result in lasting and sometimes permanent decreases in happiness (Diener, Lucas, & Scollon, 2006). Yet even for divorce and the death of a spouse, many people eventually adapt more or less completely over time (Clark, Diener, Georgellis, & Lucas, 2008).

So although our life circumstances certainly can affect our happiness in the short run, much of our happiness in the long run is surprisingly independent of what happens to us. More than we might wish to admit, happiness is at least as much a function of what we make of our lives as our lives themselves. As psychologist and happiness expert Ed Diener noted, “A person enjoys pleasures because he or she is happy, not vice-versa” (quoted in Eysenck, 1990, p. 120).

Myth #25 Ulcers Are Caused Primarily or Entirely by Stress

Little more than two decades ago, it was virtually inconceivable that taking a pill would be the treatment of choice for peptic ulcers—sores in the lining of the stomach or small intestines. But breakthrough medical developments, a daring personal “experiment,” and painstaking research transformed medical opinions about ulcers. Prior to the mid 1980s, most physicians and laypersons were convinced that ulcers were caused primarily by stress. They also believed that spicy foods, excess stomach acid, smoking, and alcohol consumption played important secondary roles in ulcer formation. Today, we know otherwise, thanks to the pioneering work of Barry Marshall and Robin Warren, who received the Nobel Prize for groundbreaking research that radically changed our thinking about ulcers and their treatment (Marshall & Warren, 1983).

Many psychologists influenced by the writings of Sigmund Freud once assumed that ulcers resulted from underlying psychological conflicts. Psychoanalyst Franz Alexander (1950) suggested that ulcers are linked to infantile cravings to be fed and feelings of dependency. In adulthood, these conflicts supposedly become rekindled and activate the gastrointestinal system (stomach and intestines), which is associated with feeding.

The idea that specific emotions and conflicts are associated with ulcers was discredited by research, only to be replaced by the popular belief that stress, along with eating habits and lifestyle choices, was the prime culprit. As Thomas Gilovich and Kenneth Savitsky (1996) noted, the belief that stress causes ulcers may stem from a misapplication of the representativeness heuristic (see Introduction, p. 15). Because stress often causes our stomachs to churn, it seems reasonable to suppose that stress can cause other stomach problems, including ulcers. Still, ulcers aren’t limited to over-achieving executives of Fortune 500 companies. About 25 million Americans of all socioeconomic stripes will suffer the gnawing pain of ulcers during their lifetimes (Sonnenberg, 1994).

Despite the widespread public perception of an intimate link between stress and ulcers, a few scientists long suspected that an infectious agent might be responsible for at least some ulcers. Yet it wasn’t until Marshall and Warren (1983) pinpointed a link between peptic ulcers and a curved bacterium—dubbed Helicobacter (H.) pylori—lurking in the lining of the stomach and intestines that scientists made real progress toward identifying a specific disease-causing agent.

Marshall and Warren first discovered that H. pylori infection was common in people with ulcers, but uncommon in people without ulcers. To demonstrate that the microscopic invader was the culprit in producing ulcers, Marshall bravely (some might say foolishly) swallowed a cocktail of the organisms and developed gastritis, an irritation of the stomach lining, which lasted several weeks. Still, Marshall’s daring stunt wasn’t conclusive. He ended up with a wicked stomach ache, but no ulcer. So he was unable to show a direct tie between H. pylori and ulcer formation. This result actually isn’t all that surprising given that although the bacterium is present in about half of all people, only about 10–15% of people who harbor the organism develop ulcers. Moreover, a single such demonstration, especially when conducted by the person who advances the hypothesis under study, can at best provide only suggestive evidence. The medical community, while intrigued and excited by these early findings, patiently awaited more convincing research.

The clincher came when independent researchers across the world cultured the bacterium and demonstrated that treating the H. pylori infection with potent antibiotics reduced the recurrence of ulcers dramatically. This finding was important because drugs that merely neutralize or inhibit the production of stomach acid can effectively treat ulcers in the majority of cases, but 50–90% of ulcers recur after treatment stops (Gough et al., 1984). The fact that antibiotics decreased the recurrence of ulcers by 90–95% provided strong evidence that H. pylori caused ulcers.

Nevertheless, as is so often the case, public opinion lagged behind medical discoveries. By 1997, 57% of Americans still believed that stress is the main cause of ulcers, and 17% believed that spicy foods cause ulcers (Centers for Disease Control and Prevention, 1997). Yet 3 years earlier, the U.S. National Institutes of Health had proclaimed the evidence that H. pylori caused ulcers convincing, and recommended antibiotics to treat people with ulcers and H. pylori infections (NIH Consensus Conference, 1994). Even today, the media promotes the singular role of negative emotions in generating ulcers. In the 2005 film, The Upside of Anger, Emily (played by Keri Russell) develops an ulcer after her father abandons the family and her mother frustrates her ambitions to become a dancer.

Because the great majority of people infected with H. pylori don’t develop ulcers, scientists realized that other influences must play a role. Soon there was widespread recognition that excessive use of anti-inflammatory medications, like aspirin and ibuprofen, can trigger ulcers by irritating the stomach lining. Moreover, researchers didn’t abandon their quest to identify the role of stress in ulcer formation. In fact, stress probably plays some role in ulcers, although studies show that the widespread belief that stress by itself causes ulcers is wrong. For example, psychological distress is associated with higher rates of ulcers in human and non-human animals (Levenstein, Kaplan, & Smith, 1997; Overmier & Murison, 1997). Moreover, stress is linked to a poor response to ulcer treatment (Levenstein et al., 1996), and stressful events—including earthquakes and economic crises—are associated with increases in ulcers (Levenstein, Ackerman, Kiecolt-Glaser, & Dubois, 1999). Additionally, people with generalized anxiety disorder, a condition marked by worrying much of the time about many things, are at heightened risk for peptic ulcers (Goodwin & Stein, 2002). Nevertheless, anxiety may not cause ulcers at all. Developing an ulcer and the pain associated with it may lead some people to worry constantly, or people may be predisposed to both excessive anxiety and ulcers by genetic influences common to the two conditions.

We can understand the fact that stress may contribute to the development of ulcers in terms of a biopsychosocial perspective, which proposes that most medical conditions depend on the complex interplay of genes, lifestyles, immunity, and everyday stressors (Markus & Kitayama, 1991; Turk, 1996). Stress may exert an indirect effect on ulcer formation by triggering such behaviors as alcohol use and lack of sleep, which make ulcers more likely.

The jury is still out regarding the precise role that stress plays in ulcer formation, although it’s clear that stress isn’t the only or even the most important influence. In all likelihood, stress, emotions, and the damage wrought by disease-producing organisms combine to create conditions ripe for the growth of H. pylori. So if you’re having stomach problems, don’t be surprised if your doctor suggests that you learn to relax—as she pulls out a pen and pad to write you a prescription for powerful antibiotics.

Myth #26 A Positive Attitude Can Stave off Cancer

Is cancer “all about attitude”? Perhaps negative thinking, pessimism, and stress create the conditions for the cells in our body to run amok and for cancers to develop. If so, then self-help books, personal affirmations, visualizing the body free of cancer, and self-help groups could galvanize the power of positive thinking and help the immune system to prevail over cancer.

Scores of popular accounts tout the role of positive attitudes and emotions in halting cancer’s often ruthless progression. But this message has a subtle, more negative twist: If positive attitudes count for so much, then perhaps stressed-out people with a less than cheery view of themselves and the world are inflicting cancers on themselves (Beyerstein, 1999b; Gilovich, 1991; Rittenberg, 1995). The truth or falsity of the link between patients’ attitudes and emotions, on the one hand, and cancer, on the other, thus bears important consequences for the 12 million people worldwide diagnosed with cancer each year, and for those engaged in a protracted battle with the disease.

Before we examine the scientific evidence, let’s survey some popular sources of information about whether psychological factors cause or cure cancer. Dr. Shivani Goodman (2004), author of the book, 9 Steps for Reversing or Preventing Cancer and Other Diseases, wrote that one day she was able to “suddenly make sense” of her breast cancer. When she was a child, every morning she heard her father say the Jewish prayer: “Thank you, God, for not making me a woman” (p. 31). Her epiphany was that her breasts were her “symbol of femininity,” and that unconsciously she was “rejecting being a woman, along with the notion that she deserved to live” (p. 32). Once she identified her toxic attitudes, she claimed “to change them into healing attitudes that created radiant health” (p. 32).

Similarly, in her book, You Can Heal Your Life (1984), Louise Hay boasted of curing her vaginal cancer with positive thinking. Hay contended that cancer developed in her vagina because she experienced sexual abuse as a child. Her recommendation to chant self-affirmations like, “I deserve the best, I accept it now,” to cure cancer stemmed from her belief that thoughts create reality. Rhonda Byrne (2006), author of the blockbuster bestseller, The Secret (which has sold over 7 million copies), gushed with a similar message. She related the tale of a woman who, after refusing medical treatment, cured her cancer by imagining herself cancer-free. According to Byrne, if we send out negative thoughts, we attract negative experiences into our lives. But by transmitting positive thoughts, we can rid ourselves of mental and physical ailments. After Oprah Winfrey plugged The Secret in 2007 on her popular television program, one viewer with breast cancer decided to stop her recommended medical treatments and use positive thoughts to treat her illness (on a later show, Oprah cautioned viewers against following in this viewer’s footsteps). In Quantum Healing, self-help guru Deepak Chopra (1990) claimed that patients can achieve remission from cancer when their consciousness shifts to embrace the possibility that they can be cured; with this shift, the cells in their bodies capitalize on their “intelligence” to defeat cancer.

The Internet overflows with suggestions for developing positive attitudes through healing visualizations, not to mention reports of seemingly miraculous cures of cancer in people who found meaning in their lives, quieted their turbulent emotions, or practiced visualization exercises to harness the power of positive thinking and reduce stress. For example, the website Healing Cancer & Your Mind suggests that patients imagine (a) armies of white blood cells attacking and overcoming cancer, (b) white blood cells as knights on white horses riding through the body destroying cancer cells, and (c) cancer as a dark color slowly turning paler until it’s the same color as the surrounding tissue.

Self-described “healers” on the Internet offer manuals and advice on how to vanquish cancer. Brent Atwater, who claims to be a “Medical Intuitive and Distant Energy Healer,” wrote a manual to “Help Survive Your Cancer Experience” containing the following advice:

(1) Separate YOUR identity from the Cancer’s identity.

(2) You are a person, who is having a Cancer “experience.” Recognize that an “experience” comes and goes!

(3) Your Cancer “experience” is your life’s reset button! Learn from it.

Few would quibble with the idea that maintaining positive attitudes in the face of the most taxing life and death circumstances imaginable is a worthy goal. Yet many popular media sources imply that positive attitudes and stress reduction help to beat or slow down cancer. Does evidence support this claim? Many people who’ve had cancer certainly think so. In surveys of women who’d survived breast cancer (Stewart et al., 2007), ovarian cancer (Stewart, Duff, Wong, Melancon, & Cheung, 2001), and endometrial and cervical cancer (Costanzo, Lutgendorf, Bradley, Rose, & Anderson, 2005) for at least 2 years, between 42% and 63% reported they believed their cancers were caused by stress, and between 60% and 94% believed they were cancer-free because of their positive attitudes. In these studies, more women believed their cancers were caused by stress than by an array of other influences, including genetic endowment and environmental factors, such as diet.

Yet meta-analyses of research studies (see p. 32) tell a different story. They contradict the popular belief in a link between stressful life events and cancer, with most studies revealing no connection between either stress or emotions and cancer (Butow et al., 2000; Duijts, Zeegers, & Borne, 2003; Petticrew, Fraser, & Regan, 1999). Interestingly, in a recent study of job stress among 37,562 U.S. female registered nurses who were followed for up to 8 years (1992–2000), researchers observed a 17% lower risk of breast cancer among women who experienced relatively high stress in their jobs compared with women who experienced relatively low job stress (Schernhammer et al., 2004). Researchers who followed 6,689 women in Copenhagen for more than 16 years discovered that women who reported they were highly stressed were 40% less likely to develop breast cancer than those who reported lower stress levels (Nielsen et al., 2005). The once popular idea of a “cancer prone personality,” a constellation of such personality traits as unassertiveness, shyness, and avoidance of conflict that supposedly predisposes people to cancer, has similarly been discredited by controlled research (Beyerstein, Sampson, Stojanovic, & Handel, 2007).

Scientists have also failed to unearth any association between either positive attitudes or emotional states and cancer survival (Beyerstein et al., 2007). Over a 9-year period, James Coyne and his colleagues (Coyne et al., 2007b) tracked 1,093 patients with advanced head and neck cancer whose tumors had not spread. Patients who endorsed such statements as, “I am losing hope in my fight against my illness,” were no more likely to die sooner than patients who expressed positive attitudes. In fact, even the most optimistic patients lived no longer than the most fatalistic ones. Kelly-Anne Phillips and her associates (2008) followed 708 Australian women with newly diagnosed localized breast cancer for 8 years and discovered that the women’s negative emotions—depression, anxiety, and anger—and pessimistic attitudes bore absolutely no relation to life expectancy.

These and similar findings imply that psychotherapy and support groups geared to attitude and emotional adjustment aren’t likely to stop cancer in its tracks or slow its progression. But psychiatrist David Spiegel and his colleagues’ (Spiegel, Bloom, & Gottheil, 1989) widely publicized study of survival in breast cancer patients suggested otherwise. These researchers discovered that women with metastatic (spreading) breast cancer who participated in support groups lived almost twice as long as women who didn’t attend support groups—36.6 months versus 18.9 months. Nevertheless, in the following two decades, researchers failed to replicate Spiegel’s findings (Beyerstein et al., 2007). The accumulated data from psychotherapy and self-help groups show that psychological interventions, including support groups, can enhance cancer patients’ quality of life, but can’t extend their lives (Coyne, Stefanek, & Palmer, 2007a).

So why is the belief that positive attitudes can help to fight off cancer so popular? In part, it’s almost surely because this belief appeals to people’s sense of hope, especially among those who are seeking it desperately. In addition, cancer survivors who attribute their good outcomes to a positive attitude could be falling prey to post hoc, ergo propter hoc (after this, therefore because of this) reasoning (see Introduction, p. 14). The fact that someone maintained a positive attitude before their cancer remitted doesn’t mean that this attitude caused the cancer to remit; the link could be coincidental.

Finally, we may be more likely to hear about and remember cases of people who’ve fought off cancer with a positive outlook than cases of those who didn’t survive cancer even with a positive outlook. The former cases make for better human interest stories, not to mention better subjects of television talk shows.

Although visualizations, affirmations, and unsubstantiated advice on the Internet probably won’t cure or stave off cancer, that’s not to say that a positive attitude can’t help in coping with cancer. People with cancer can still do a great deal to relieve their physical and emotional burdens by seeking quality medical and psychological care, connecting with friends and family, and finding meaning and purpose in every moment of their lives. Contrary to widespread belief, people with cancer can take a measure of comfort in the now well-established finding that their attitudes aren’t to blame for their illness.

Chapter 6: Other Myths to Explore

Fiction: Voice stress analyzers can help to detect lying.
Fact: Voice stress analyzers detect only vocal changes sometimes associated with arousal, not lying per se.

Fiction: “Positive thinking” is better than negative thinking for all people.
Fact: People with high levels of “defensive pessimism,” for whom worrying is a coping strategy, tend to do worse on tasks when forced to think positively.

Fiction: If we’re upset about something, we should just try to put it out of our minds.
Fact: Research on “thought suppression” by Daniel Wegner and others suggests that trying to put something out of mind often increases its probability of recurrence.

Fiction: Women have better social intuition than men.
Fact: Studies show that women are no better at accurately guessing the feelings of others than are men.

Fiction: People are especially sad on Mondays.
Fact: Most research finds no evidence for this claim, which seems to stem from people’s expectancies about feeling depressed on Mondays.

Fiction: People who are prone to good moods tend to have fewer bad moods than other people.
Fact: The tendency toward positive moods (positive emotionality) is largely or entirely independent of the tendency toward negative moods (negative emotionality).

Fiction: Most women’s moods worsen during their premenstrual periods.
Fact: Studies in which women track their moods using daily diaries show that most don’t experience mood worsening during premenstrual periods.

Fiction: Living in modern Western society is much more stressful than living in undeveloped countries.
Fact: There’s no systematic support for this belief.

Fiction: Being placed in control of a stressful situation causes ulcers.
Fact: This claim, which derived largely from a flawed 1958 study on “executive monkeys” by Joseph Brady, is probably false; to the contrary, having control over a stressful situation is less anxiety-provoking than having no control.

Fiction: Familiarity breeds contempt: we dislike things we’ve been exposed to more frequently.
Fact: Research on the “mere exposure effect” indicates that we typically prefer stimuli we’ve seen many times before to those we haven’t.

Fiction: Extreme fear can turn our hair white.
Fact: There’s no scientific evidence for this belief, and no known mechanism that could allow it to occur.

Fiction: Sex makes advertisements more effective.
Fact: Sex makes people pay more attention to advertisements, but often results in less recall of the product’s brand name.

Fiction: Women have a “G-spot,” a vaginal area that intensifies sexual arousal.
Fact: There’s little or no scientific evidence for the G-spot.

Fiction: Men think about sex an average of every 7 seconds.
Fact: This claim is an “urban legend” with no scientific support.

Fiction: Beauty is entirely in the eye of the beholder.
Fact: There are large commonalities across cultures in their standards of physical attractiveness.

Fiction: People who are most distinctive in certain physical features are typically viewed as most attractive by others.
Fact: People who are most statistically average in their physical features are typically viewed as most attractive by others.

Fiction: Athletes shouldn’t have sex prior to a big game.
Fact: Studies show that sex burns only about 50 calories on average and doesn’t cause muscle weakness.

Fiction: Exposure to pornography increases aggression.
Fact: Most studies show that pornography exposure does not increase the risk of violence unless the pornography is accompanied by violence.

Fiction: Most children who frequently “play doctor” or masturbate were sexually abused.
Fact: There’s no scientific support for this belief.

Sources and Suggested Readings

To explore these and other myths about emotion and motivation, see Bornstein (1989); Croft and Walker (2001); Eysenck (1990); Gilbert (2006); Hines (2001); Ickes (2003); Lykken (1998); Nettle (2005); Norem (2001); O’Connor (2007); Radford (2007); Tavris (1992); Wegner (2002).