CHAPTER 3
HEADS I WIN, TAILS IT’S CHANCE
The psychologist Martin Seligman first set forth the highly influential “learned helplessness” theory of depression in 1967. He reasoned that depressed people see the world too negatively because they are scarred by early hardship and learn to feel helpless. Studies on animals had supported the theory, and in 1979 two of Seligman’s students, Lauren Alloy and Lyn Abramson, decided to test it on undergraduates. They asked their test subjects to press a button and observe whether it turned on a green light. The subjects effectively had no control over the light; the researchers would decide whether or not to turn it on when the subjects pressed the button. In several different trials, the researchers varied how often the light came on after the button was pressed: in one experiment, it lit up 75 percent of the time; in another experiment, 50 percent; in a third, only 25 percent. Students were divided into two groups, based on tests they’d previously taken: those with no symptoms of depression, and those with some depression.
Alloy and Abramson made an unwelcome discovery: their teacher was wrong. Depressed students didn’t underestimate how much control they had; normal students overestimated it. This observation, which they called “depressive realism,” was strongest when the green light came on most often. When the light came on only 25 percent of the time after they pressed the button, about half of both depressed and normal groups realized they had no control over it. But when the light came on 75 percent of the time, only 6 percent of normal students realized they had little control versus 50 percent in the depressed group.
Aware that this artificial test didn’t replicate real-world decisions because there were no performance-based rewards or penalties, the researchers introduced motivation—money. Students were told they would gain or lose up to five dollars (which was worth a lot more in 1979 than it is now) each time the green light went on. As before, the experimenters manipulated the frequency of the light. When the students made money off the green light, normal students again thought they had more control over it than they actually did. But when the light meant losing money, the normal students were more realistic. In all cases, depressed students accurately judged how much control they had.
Alloy and Abramson had discovered something new: depression led to more, not less, realistic assessments of control over one’s environment, an effect that was only enhanced by a real-world emotional desire, like making money. In the decades since they published their paper, their results have been replicated many times in other experiments. As counterintuitive as the idea of depressive realism may be, it is hard to deny.
 
 
A FEW YEARS EARLIER, psychologists Ellen Langer and Jane Roth had tested the concept of an “illusion of control” in our daily decisions. They devised an experiment in which ninety Yale students were asked to call out heads or tails just before thirty coin tosses. Unbeknownst to the students, the results were rigged. One group was repeatedly told, early in the thirty coin tosses, that their guesses were correct even when they weren’t; this was called the descending outcomes group. Occasionally, researchers would show the coin to the students when they guessed right in order to reinforce the impression that they were being told the truth. The opposite, ascending group was repeatedly told that their guesses were incorrect early in the thirty tosses (even when they were right), and then told they were correct more and more often as the study went on. A third group, the random group, was told the truth throughout the thirty coin tosses. Afterward, students were asked how good they thought they were at predicting coin tosses and whether they thought they could improve with practice.
The ascending and random groups were realistic: they were convinced the results were the product of chance, that they hadn’t done especially well, and that they wouldn’t do better with practice. The descending group, however, thought they’d done rather well and would do better with practice.
Langer and Roth found it remarkable that highly intelligent students at a prestigious college, who clearly knew that coin tosses are completely random events, could be fooled by early apparent success into consistently overestimating their sense of control. The researchers titled their paper “Heads I Win, Tails It’s Chance” and concluded that normal people have an illusory sense of control, especially if things seem to go well for them.
Building on this work, Shelley Taylor, a psychologist at the University of California at Los Angeles, spent much of her career developing the concept of “positive illusions.” Studying how we react to sickness, she first thought that normal people who became ill and then recovered would return to their former worldviews. But she found that breast cancer patients saw the experience of serious illness and subsequent recovery as transformative; they didn’t just go back to being who they were. They became different, and two-thirds of them said they’d changed for the better. But this sense of well-being came at a price. Taylor dryly noted, “From many of their accounts there emerged a mildly disturbing disregard for the truth.” The women emerged with a greater sense of control over their disease or their recovery than was actually the case. The typical patient consistently overestimated her likely survival compared to the known statistics and her own medical status. Interviewing the oncologists and psychotherapists who cared for these patients, the researchers found that the patients’ unrealistically optimistic attitudes correlated with better psychological adjustment. That is, the psychologically healthier patients were the most unrealistic. Taylor had discovered “positive illusions”—the opposite of depressive realism, a kind of healthy illusion found not just in a trivial button-pushing test, but in life-threatening illness.
If all this is correct—if there is depressive realism, and if there are normal illusions that have positive effects—then we have to reconsider what it means to be mentally healthy. We tend to see mental health as “being normal”—happy, realistic, fulfilled. Yet Taylor showed that we sacrifice realism in the interest of happiness. These counterintuitive data lead me to two conclusions about what it means to be normal:
1. The skew of happiness: Under normal conditions, normal people overestimate themselves. We think we have more control over things than we do; we’re more optimistic than circumstances warrant; we exaggerate our skills, beauty, and intelligence. “Heads I win; tails it’s chance” is our unconscious philosophy of life. More than a hundred separate studies have documented that people estimate themselves as more likely to experience positive events than their peers.
One study even quantified this principle. When sixteen studies of life satisfaction were standardized on a 0 to 100 scale—with 0 reflecting abject misery and 100 bliss—the average score was 75, meaning that most people are mostly happy with their lives. More important, the spread of scores was very tight: almost everyone scored between 70 and 80. In fact, 90 percent of people scored above 50 (which would theoretically be an average level of satisfaction). In other words, there is a skew to normal life: almost everyone feels happier than average, which means that “average” satisfaction is uncommon.
2. The perils of success: Leston Havens, a wise psychotherapist, once commented to me that he had known many people who had been improved by failure, and many ruined by success. Failure deflates illusion, while success only makes illusion worse, as shown in the coin toss and button-pushing studies (in which believing one was correct early on, or winning money, enhanced the illusion of control). Most normal, mentally healthy people have these features: they overestimate how happy they are; and when things go well, this illusion only gets worse.
This isn’t a settled debate, and these interpretations could be proven wrong. But if they’re correct, they raise several questions. Why do positive illusions occur? Can we only arrive at realism through personal hardship? Or are some of us inherently more likely than others to become realistic? Is depression the royal road to realism?
We tend to assume that most relationships are linear: if some is good, we presume that more is better. But for many things the relationship is curvilinear: too little is harmful, and so is too much; the amount in the middle is just right. Scientists call this the inverted U-curve, but we can also see it in the fairy tale of Goldilocks and the three bears, where the girl tries bowls of porridge and beds until she finds the ones that are just right. We might call this the Goldilocks principle.
In biology, it’s generally accepted that anxiety is curvilinear. A moderate amount is good for the organism, keeping it vigilant, ready to defend itself or flee. Too little would make an organism vulnerable to predators or other danger; too much would cause excessive stress, making the organism less capable of handling danger. Illusion may play a similar role, suggests Taylor, noting that most of the patients in her studies of physical illness were not completely out of touch with reality; they were far from psychotic, or even neurotic. They were basically normal people, in touch with reality, who, in relation to their medical illness, were overly optimistic. They were, in short, only a little unrealistic. Too little illusion, she suggests, leaves us all too realistic: we see the stark hopelessness of the facts and give up. Too much illusion, as Freudians argue, renders us unable to respond properly to the world’s challenges. Positive illusion in people with medical illness is the moderate, in-between amount: enough to help them cope with adversity, to prepare responses to life’s challenges, and to meet them.
 
 
WHETHER ONE SUCCEEDS by luck or skill, the absence of early hardship often exacts a cost later on: when difficult times arrive, one is vulnerable. Early triumph can promote future failure.
In contrast, early failures repeatedly experienced by a person predisposed to depression inoculate against future illusion. As with the ascending group in the Yale experiment, later success fails to swell one’s head, because one remembers one’s failures and respects the role of chance in life. The philosopher Karl Jaspers once said that how a man responds to failure determines who he will become. Through suffering, one becomes more realistic about the world, and thus better able to change it. Lincoln suffered immensely; Churchill suffered much; so did Sherman. Others who were luckier in their early lives—including, as we’ll see later in the book, McClellan and Neville Chamberlain—failed where the mentally ill leaders succeeded.
Of course, everyone suffers. But life’s pain can come harshly or gently, earlier or later. For the lucky, suffering is less frequent, less severe, and delayed until it can’t be avoided. The unlucky, who endure hardships and tragedies early in their lives—or the challenge of mental illness—seem, not infrequently, to become our greatest leaders.