26


AN EPIDEMIC

Now the question was: Had Rosenhan outright invented pseudopatients to up his “n”—or the number of subjects in his data set—to lend more legitimacy to his findings? Had getting away with his exaggerated symptoms emboldened him to go ten steps further and invent pseudopatients? Did he get caught up with a book deal and out of desperation decide to fill in the blank pages? This elaborate ruse no longer seemed impossible: There were Mary Peterson’s letters and the undergraduates’ journals and their odd placement within Rosenhan’s files; there was Chestnut Lodge and his “famous artist” pseudopatient Laura Martin, whose case conference sounded a bit too perfect; and then there was Carl, who so closely resembled one of Rosenhan’s friends, but one who had never participated in the study.

I hadn’t wanted to believe that the man I had so admired could turn out to be this—whatever this was. My goal was no longer just to find the pseudopatients; I was now seeking out proof that they didn’t exist. So I spent the next months of my life chasing ghosts. I wrote a commentary for the Lancet Psychiatry asking for help. I made a speech at the American Psychiatric Association, calling for anyone who had ever met David Rosenhan to contact me. I hunted down rumors, spending a month pursuing a lead that St. Elizabeths Hospital in Washington, DC, was one of Rosenhan’s locations, just because the Wikipedia page on his study included an image of the hospital as its key art. I even hired a private detective, who got no further than I had. I contacted everyone who had ever entered Rosenhan’s orbit, and was shocked to find, the further I moved from his inner circle, how many people wanted no part in the retelling of his story, including one former secretary who may have had access to some of his work during the writing of “On Being Sane in Insane Places.” When I reached her, all she would offer was, “Well, he did often use some ‘creative thinking.’” She laughed, and then her tone darkened: “I have nothing nice to say, so I won’t say anything at all.”

All navigable roads led back to Bill and Harry. Students, fellow professors, and friends either knew nothing about the study or led me straight back to the two I had already found.

I researched lying and found a splashy Daily Mail article that claimed to offer “scientifically proven” ways to spot a liar using textual analysis that scoured writing for “minimal self-references and convoluted phrases” and “simple explanations and negative language.” Unfortunately, when I ran this by a real expert, Jamie Pennebaker, a University of Texas social psychologist who studies lying, he said that it was impossible to suss out a liar from text alone, and that anyone who told me otherwise was probably lying to me.

I ran all of this researched skepticism past Florence. She had often called Rosenhan a “storyteller” and said that he might have been happier as a novelist than a researcher, but would the fantastical side of him go this far? At first, Florence doubted he would. But upon further reflection she wrote me an email:

“I continue to wonder whether some of these folks were fabricated… it would certainly explain why David never completed the book.”

It was a good point. His publisher, Doubleday, sued him in 1980 in the Supreme Court of New York to recoup the first installment of his advance for Locked Up (by then he had changed the title from the original Odyssey into Lunacy), a book that was already seven years late and would never be delivered. Had the editor’s encouraging comments, in which he also suggested adding more detail about the “vague” pseudopatients, spooked Rosenhan? To almost everyone I spoke with, his abandonment of the study that made his career was the most concerning, even damning, evidence that something was seriously amiss.

After the publication of “On Being Sane in Insane Places,” Rosenhan returned to researching altruism, publishing a paper on the effects of success and failure on childhood generosity. After 1973, he jumped from topic to topic, from mood and self-gratification to the joys of helping to moral character to pseudoempiricism to the study of nightmares experienced after an earthquake. The research all seemed a bit unfocused. In fact, one colleague told me, after all of his success with his famous paper and professorship at Stanford, “David became sort of less involved academically… less research oriented generally.”

His most successful work after the study was a textbook on abnormal psychology, published with Martin Seligman, which as of this writing is in its fourth edition and is still used in classrooms around the country. He researched juror behavior, including one paper on how note-taking aids jurors’ recall of facts and another on their ability (or, rather, inability) to disregard facts that judges had ruled inadmissible. He also joined forces with Lee Ross and Florence Keller as trial consultants—psychologists who help with trial preparation, such as jury selection and opening and closing statements—making them early adopters in the use of the social sciences to aid legal analysis.

His research on “intense religiosity,” which friends cite as his most beloved work, though it was never published, found that a shocking percentage of Stanford students believed not only in God (75 percent) but also in creationism (59 percent), leading Rosenhan to conclude that although “for most of this century religiosity was negatively correlated with intelligence and social class, there is increasing evidence that the direction of that correlation has reversed sharply.”

Okay, but: As interesting as this sounds, does it sound realistic to you that 59 percent of the Stanford student body believed in creationism into the 1990s?

Perhaps I’m being unfair, my antennae now hyperattuned to any signals of fraud. All of this is to say that after he published his classic work, the study that would help bring psychiatric care as he knew it crashing down, he never again published research on the subject of serious mental illness and psychiatric hospitalization, save for a brief follow-up.

His dual professorship in law and psychology, which came along with not only a higher salary than his psychology peers’ but also the benefit of two separate offices, afforded him a cloak of invisibility that some of his students and colleagues thought was shady. “Whenever you’d try to find him in the Psychology Department, he would be in his law office,” one former graduate student told me. “Whenever you went to find him in law, he’d be back in psychology.” He seemed to be everywhere and nowhere.

Eleanor Maccoby, one of the most respected psychologists in her field of developmental psychology, who worked with Rosenhan for forty years and even headed the tenure committee when Rosenhan received the honor, didn’t soft-pedal during our interview at her retirement home on the eve of her one hundredth birthday. “I was suspicious of him,” she said. “Many of us were.” When his tenure came up for review, the department was divided, she recalled. Of the study, she said, “Some people were doubtful about it. It was impossible to know what he had really done, or if he had done it.” Though they ultimately decided to grant him tenure because of his talents as a lecturer, this doubt shadowed him throughout his professional career. “His reputation gradually shrank away,” she said.

Marshmallow test creator Walter Mischel, who passed away in 2018, told me that he didn’t have much contact with Rosenhan, despite having edited an early draft of his study. In a private correspondence, however, he was more forthcoming: “I never really connected with Rosenhan, found him a pain when I was chair, and thought he avoided work like the plague. I also was not drawn to his research, and made a point of staying away from it and from him.”

I contacted a woman who had loved Rosenhan years ago and still held on to his memory, even if that love had long ago soured. She agreed to speak with me on one condition: that I never ask her about their affair. It was a tough agreement to uphold, especially when she took out a box of recordings of his lectures that she had kept for decades.

“He could make you feel like the most important person in the world, just in the way he talked to you,” she told me. She had worked with many psychologists and said that they all shared one common trait. “Look at their focus of study and you can count on it that that’s what they have a problem with. That’s why they study that particular area.”

“Oh, that’s funny,” I said. “What would Rosenhan’s problem be?”

“Well, morality, altruism, being a decent person, I suppose,” she said. She was laughing but in a raw way. “I mean, I always used to say, ‘He’s polishing his halo again.’ He had an uncanny ability with all his training on personality and character and so on; he had an uncanny way of projecting himself. That you saw him exactly the way he wanted to be seen.”

Rosenhan’s research assistant Nancy Horn was one of the holdouts who refused to believe that Rosenhan was capable of such dishonesty. She gave me a resounding “absolutely not possible” when I broached the possibility that he had made up a good deal of the paper. His Swarthmore student Hank O’Karma, the author of one of the undergraduate journals that Rosenhan kept in his pseudopatient files, was similarly adamant that he couldn’t have. Rosenhan’s son, Jack, to whom Florence and I posed the possibility over lunch at a diner in Palo Alto, also dismissed it, adding, “My dad was a storyteller. That’s true. But I do not think that he would ever do anything that would mess with this research.”

When I presented Bill with the facts, he seemed uncertain. “I don’t know,” he said. “Seems unlikely to me. It’s hard for me to imagine.”

Harry disagreed. “I never thought of him as a BS artist as an undergraduate. I felt neglected as a graduate student, but that’s a different issue. But this…,” he said, referring to what Rosenhan had written about Harry’s experience, “is total fiction.”

All of the little things—the wig, the lying about his hospitalization dates, the exaggeration in his medical records, the playing with numbers, the dismissal of Harry’s information, the unfinished book, his never tackling the subject again—all of these piled up. Rosenhan no longer seemed to be the man I’d believed in.

It wasn’t the first time a paper published in a journal as esteemed as Science had been called into serious question, even exposed as an outright fraud. One of the most ignoble examples is social psychologist Diederik Stapel, once famous for his article published in Science about a correlation between filthier train platforms and racist views at a Utrecht station. The media hailed the piece. He followed it up by claiming to find a link between carnivorous appetites and selfishness. Then the ground fell out from under him. The New York Times called him “perhaps the biggest con man in academic science.” For years he had invented data in more than fifty papers. Stapel’s story, though extreme, revealed not only that this level of con could happen but that the environment—where journals select articles that will make splashy news, where there’s pressure to massage or omit contradictory data until the results look significant (a practice known as “p-hacking”), where negative, non-sensational studies go unrewarded and unpublished, where grant money and livelihoods depend on publishing (the “publish or perish” problem)—provided a hothouse for people like Stapel looking to exploit the system.

Right now the field of psychology—especially social psychology—is in the midst of a “replication crisis,” and a few critics have turned their sights on some of the field’s most cited works, from “power posing” to “the facial feedback hypothesis” to “ego depletion.” Brian Nosek of the University of Virginia started the “Reproducibility Project,” which repeated one hundred published psychological experiments and could successfully reproduce the findings of fewer than half of them.

Walter Mischel’s marshmallow study (the one Bill’s daughter took part in at Stanford), in which preschoolers who were able to restrain themselves in the face of a fluffy snack showed greater achievements later in life, has since been questioned. A replication of the study published in Psychological Science in 2018 found that the correlation between the ability to delay gratification in childhood and achievement later in life was “half the size” of the effect reported in Mischel’s original work. Furthermore, once you controlled for education, family life, and early cognitive ability, the correlation between denying a marshmallow and later behavior dropped to a big fat zero. Yet the marshmallow test and its follow-ups (though admittedly never intended to be used this way) helped shape public school educational policies.

Stanley Milgram and his shock tests—using the same machine that Rosenhan used during some of his early studies before he arrived at Stanford—have also been challenged. Psychologist and author Gina Perry revealed in her book Behind the Shock Machine that Milgram and his cohorts coerced participants into delivering shocks, suggesting that the conclusions of the study—that we all are susceptible to blindly following authority—may not be as cut-and-dried as the experiment alleged, though there have been many replications of his research (including a 2017 paper out of Poland in which 72 out of 80 participants were willing to shock other innocent subjects at the highest level).

Among the hardest hit has been Philip Zimbardo, the architect of the famous prison study, which took place in Stanford’s basement in 1971 while Rosenhan was working on “On Being Sane in Insane Places.” Zimbardo and his researchers recruited students through a newspaper ad and assigned them roles as “inmates” or “guards.” Guards abused inmates; inmates reacted as real prisoners would. One even famously screamed, “I’m burning up inside… I want to get out!… I can’t stand another night! I just can’t take it anymore!” The whole demonstration evidently revealed the ingrained sadism at the core of all of us, if given the power and opportunity. Zimbardo became an overnight expert and his work was even consulted in a 2004 congressional hearing on Abu Ghraib prisoner torture. When Zimbardo first saw the photographs of the abused, he told the New York Times, “I was shocked. But not surprised… What particularly bothered me was that the Pentagon blamed the whole thing on a ‘few bad apples.’ I knew from our experiment, if you put good apples into a bad situation, you’ll get bad apples.” Some argue that this perspective released the aggressors from responsibility. If we all have a monster inside of us, waiting to emerge in the right context, then how can we blame or punish people when it inevitably does?

The study, some say, even helped push the dial away from prison reform, since prison was deemed, thanks in part to Zimbardo, “not reformable.” But the study’s critics, of whom there were many, landed a few more concrete hits in recent years. A 2018 Medium piece by journalist Ben Blum blew up the internet (in certain circles). Blum had tracked down one of the “inmates”—the one who screamed “I’m burning up inside”—and found out that his pain was a performance. “It was just a job. If you listen to the tape, you can hear it in my voice: I have a great job. I get to yell and scream and act all hysterical. I get to act like a prisoner. I was being a good employee. It was a great time.” Blum further discovered that Zimbardo had coached the guards and even thanked one of the more aggressive ones. “We must stop celebrating this work,” personality psychologist Simine Vazire tweeted. “It’s anti-scientific. Get it out of textbooks.”

Psychologist Peter Gray, who had removed Zimbardo from his Psychology textbook in 1991, long before the Medium article, told me in an interview that he sees this as a “prime example of a study that fits our biases… There is a kind of desire to expose the problems of society, but in the process cut corners or even make up data.” He said this is happening more often now because there are greater numbers of postdoctorates competing for fewer jobs and grant resources. “There is an epidemic of fraud.”

This epidemic is not limited to social psychology, but is mirrored across all disciplines, from the heavily data-oriented fields of cancer studies and genetics to dentistry and primate studies. In 2016, Australian researcher Caroline Barwood and colleague Bruce Murdoch were convicted of cooking the books on a “breakthrough” study on Parkinson’s—and nearly went to jail for it. Korean stem-cell researcher Hwang Woo Suk and Harvard evolutionary biologist Marc Hauser are just two more celebrated academics to face allegations that they had fabricated their work and committed academic fraud. This happens outside academia, too, of course, wherever big business interests are at stake. There’s Elizabeth Holmes and her blood testing company, Theranos, which raised $700 million before the Wall Street Journal’s John Carreyrou helped expose the company as a “massive fraud.” Richard Horton, editor of the Lancet, wrote in a 2015 op-ed, “Much of the scientific literature, perhaps half, may simply be untrue… Science has taken a turn towards darkness.” One of the leaders of the push to uncover academic fraud is Stanford’s John Ioannidis, who authored a scathing 2005 paper titled “Why Most Published Research Findings Are False.” He’s found that out of thousands of early papers on genomics, only a tiny fraction stood the test of time. He then followed forty-nine studies that had been cited at least a thousand times and found that seven had been “flatly contradicted” by further research.

I notice fraud everywhere now. In the fall of 2018, Cornell University professor Brian Wansink resigned after thirteen of his papers—including one that showed how serving bowl size affects food consumption—were retracted and Cornell found he had committed “academic misconduct in his research and scholarship, including misreporting research data.” Around the same time, thirty-one papers published by Dr. Piero Anversa, a former Harvard Medical School professor and cardiac stem cell researcher, were singled out as including “falsified and/or fabricated data” and retracted. If you want to see, in real time, the scourge this has become, check out a blog called Retraction Watch, which strives to post every single academic retraction and keeps a top ten list of the most highly cited retracted papers.

And this fraud, played out every day in our academic journals and our newspapers (or more likely our social media feeds), breeds an anti-science backlash born of distrust. We’ve seen this most dangerously in the recent measles outbreak spurred on by the anti-vaxxer movement (whose theories are based on the fraudulent Wakefield study, published in the Lancet, one of the world’s oldest and most respected journals, and since retracted). How many times, people wonder, can we be told that this or that was “proven” in studies—only to be warned that the opposite is true the very next day—before we start to doubt all of it?

As we’ve seen, this doubt is particularly corrosive to psychiatry.

We still don’t know exactly how so many psychiatric drugs work or why they don’t work for a significant percentage of people. All of the current treatments for mental illness are “palliative, none are even proposed as cures.” We still don’t have clear-cut preventive measures and we still haven’t figured out how to improve clinical outcomes for everyone or even how to prolong life expectancy. Though serious mental illnesses, like schizophrenia, clearly have heritable components, genetic research has yielded interesting but mostly inconclusive results.

The lay public today is fully aware of the deep connections between Big Pharma and psychiatry, which were cemented during the creation of the DSM-III and have only expanded since. No wonder there’s been a falling-out with the drugs, as direct-to-consumer advertising promised all manner of advancements and cures. But the new drugs, called “atypicals” or “second-generation” antipsychotics because they were marketed as having fewer side effects, have failed to deliver on many of their promises. Second-generation drugs come with their own issues, including excessive weight gain and metabolic disorders, and, in 2010, the New York Times reported that they were “the single biggest target” of the False Claims Act, resulting in billions of dollars spent settling charges of fraud. (Johnson & Johnson, for example, agreed to pay $2.2 billion in 2013 for hiding the host of side effects of its drug Risperdal, which include stroke and diabetes.)

Author and journalist Robert Whitaker, who has created a powerful arena for challenging traditional psychiatry on his blog, Mad in America, based on his 2001 book of the same name, sums up the outrage: “For the past twenty-five years, the psychiatric establishment has told us a false story. It told us that schizophrenia, depression, and bipolar illness are known to be brain diseases… It told us that psychiatric medications fix chemical imbalances in the brain, even though decades of research failed to find this to be so. It told us that Prozac and the other second-generation medications were much better and safer than the first-generation drugs, even though the clinical studies had shown no such thing. Most important of all, the psychiatric establishment failed to tell us that the drugs worsen long-term outcomes.”

In the face of such rampant distrust, some of the “best and brightest” cling to their arsenal with a delusional level of certainty. A well-known psychiatrist (whom I will allow to remain nameless since he doesn’t see patients these days—he’s that high up; apparently the more successful you are, the fewer hours you spend with patients) lectured me about how to fix the broken system: “They just need to take their drugs,” he said, sipping his wine. “What we have is just as effective as the drugs that treated you.” The blind arrogance of this comment made me laugh out loud. Though some people are bigger drug advocates than others, most reasonable doctors acknowledge the limitations of psychiatric medications. The hardest part of coping with a serious mental illness, according to people I’ve interviewed who live with one, is the more subtle negative symptoms—the cognitive impairments, the parts of the illness that make life harder to navigate and are not ameliorated by the medications available. It feels like “your life is taken away from you. That all the things you once enjoyed are gone,” said a twenty-year-old who had recently been diagnosed with schizophrenia.

But I’m not here to rail against the drugs. There are plenty of places where you can get that perspective. I see that these drugs help many people lead full and meaningful lives. It would be folly to discount their worth. We also can’t deny that the situation is complicated. If I know this and you know this, then that arrogant doctor, a leader in the field, does, too. Yet there he sits, sipping wine and spouting absurdities.

The reputation, the distrust, the lack of progress: All of this has contributed to a worldwide shortage of mental health care workers. Some say it’s the pay—for many years psychiatrists were the third-lowest-paid medical specialists (though this, as we’ll see, is starting to change). Psychiatry was once seen as a humanistic medical science; yet as of 2006, only 3 percent of Americans received any kind of psychotherapy—from “problem-based” cognitive behavioral therapy to open-ended psychodynamic treatment. Freud’s officially “dead.” His work has been reexamined as “sexist, fraudulent, unscientific, or just plain wrong… Psychoanalysis belongs with the discarded practices like leeching.” In the meantime, psychiatry has shifted from a soft science to a hard one, and in doing so has become largely mechanical and mundane.

These issues partially explain why I didn’t receive the smug reaction from the mental health community that I expected when I started to share my investigation outside the small world around Rosenhan. A few expressed shock, but many claimed not to be surprised. Psychiatrist Allen Frances listened to my case, then interrupted: “Before we get to that, could you go after the Koch brothers next?” But then he let the news sink in. That study was key to Robert Spitzer’s work. Without it, “Spitzer could never have done what he did with DSM-III,” he said. To find out that at least part of it was flimsy—if not worse—was far from vindicating; it was disheartening.

One psychiatrist friend started ranting about how the study was “ridiculous” and that Rosenhan’s focus on labeling was “total bullshit.” She wouldn’t concede that his larger points—namely about how patients are treated because of those labels—had any validity. Eventually she got so red in the face that I promised I wouldn’t bring it up again.

At a research conference in Europe, where I was invited to speak about my illness, I agreed to meet a small group of research-oriented psychologists and psychiatrists for dinner after my talk. We met at a hotel bar that seemed plucked out of Midtown Manhattan and joined four people at a table, all of whom were drinking martinis. I ordered a Manhattan, ignoring a voice warning me that it was never a good idea to drink bourbon cocktails at a professional event with strangers. The psychiatrists joked that they were going to “stay on New York time” so that they could just party through the conference. They talked a bit about my presentation and asked a few questions, but it was clear they were in vacation mode, so the questions veered off-track.

One person asked: “How do schizophrenics feel about your book?”

I wasn’t aware that there was one way that people with schizophrenia felt about anything, let alone my book. I looked back at him blankly, until one of the psychologists spoke for me. “Schizophrenics don’t read.” No one reacted. Was this a joke or was this truly the way a clinician felt about his patients?

Later, at a crowded restaurant, our table got rowdier the more alcohol we consumed. At some point, the subject of Rosenhan came up and I spoke a little about my research.

The psychologist who’d made the comment about people with schizophrenia not reading interrupted me. “I don’t understand why you’re even focusing on this study,” he said, his voice thick. “I have no idea why you would do something that is so anti-psychiatry.”

When I told him about my growing suspicions about the study, he got even more aggressive.

“Something like this is bad for all of us,” he said, making a sweeping motion around the table, his voice rising in the now near-empty restaurant. The same person who had been happy to dismiss the study as “anti-psychiatry” immediately bristled at evidence that it wasn’t aboveboard. Could it be that keeping this study intact benefited the narrative sold to many people in and outside the field—that we’re making steady progress, that the bad old days are behind us?

“You have an opportunity to do something good and instead you focus on this,” he said, now pounding on the table. “Whether you like it or not, you’re a symbol, and you should do something good with that power.”

Perhaps it was the jet lag, or the latent frustration of getting nowhere with the pseudopatients, or the growing certainty that the study was fabricated and the feelings of disappointment I had about the man behind it, or the mixture of red wine and Manhattans. Perhaps it was the fact that he called me a symbol (a symbol of what?). Whatever the cause, I lost it. I disappeared into the restaurant’s closet-size bathroom, gazed into my own bleary eyes in the mirror, and mouthed, Get yourself together—remembering my own mirror image, the one who would not thrive as I had. I calmed myself enough to return to the table, my eyes red and my mascara smeared, where I couldn’t help but launch back in. “I’m not trying to attack psychiatry. Give me a positive story to write and I will,” I said, standing at the head of the table and speaking too loudly.

He looked up at me, resigned, put down his wine, and said, “Give me ten years.”

We don’t have ten years.