WHEN I RETURNED to Penn from England in January 1976, my research group was swollen with talent, and we were eager to take up John Teasdale’s challenge to the helplessness theory of depression. Three remarkable women led this feisty group: Lyn Yvonne Abramson, in cutoff jeans and outsize tie-dyed blouses, looked like a hippie, which she decidedly was not. Naturally beautiful, she just didn’t much care about what she wore. Hailing from rural Minnesota, she mixed some Finnish bloodlines inclined to depression with a dollop of Sioux. She came to Penn as a graduate student by way of a stellar record at the University of Wisconsin. She was deep and scholarly but, completely out of keeping with female graduate students of the era, refreshingly assertive, even loud. Her doe-eyed innocence and enthusiasm carried us along and marked her as the voice and intellectual leader of the team.
Lyn’s closest comrade was Lauren Bersh Alloy. Lauren was the rare Penn undergraduate whose record was so outstanding that she was accepted as a graduate student. Lauren did not take no for an answer. She was a tough Philadelphia kid and had inherited “dauntless” genes: her grandfather’s parents were murdered in a Russian pogrom at the turn of the twentieth century, and as a six-year-old, her grandfather had journeyed alone, mostly on foot, from Russia to England. Her father, Phil Bersh, was a well-known, traditionally minded professor of animal learning at Temple. Unlike the rest of the graduate student body, Lauren dressed in fashionable clothes, sported a giant diamond engagement ring, and wore makeup, the very opposite of Lyn. She was officially Dick Solomon’s PhD student but was inseparable from Lyn and soon fell into my intellectual penumbra. Together Lauren and Lyn would soon make a discovery that rocked the conventional field of depression.
The third member of the triumvirate (the wrong-gender word, I know) was Judy Garber. Judy was fleeing a disastrous graduate student year at Florida State University. She visited Penn and pleaded with me to take her on as an unpaid research assistant so she could redeem herself. Our agreement was that if she shone, I would help her restart her career as an academic psychologist. She did shine and proved invaluable. She carried out the mundane duties, such as feeding the rats their daily chow, but soon became the bookish center of the group.
Judy was central because, unlike the rest of us, she came from a background in social psychology and knew “attribution” theory, which we—clinical and experimental psychologists—did not. She got us to read the literature and then led discussions of attribution theory. I was wowed by her teaching and scholarship, but I was not impressed with attribution theory. “Attribution” is an ambiguous word that here means the causal story a person tells herself to explain an event, and the theory derives from Julian Rotter’s “locus of control” theory.1 Rotter pointed out that when a bad event occurs, one can attribute its cause either internally, to oneself (“I am to blame”), or externally, to an event, circumstances, or another person (“He was out to get me”). When facing inescapable noise or unsolvable problems, one can see one’s helplessness as either internal (“I am defective”) or external (“This damn problem is unsolvable”). It made sense to us that people who attributed their failure internally would feel unworthy and lose self-esteem, whereas people who attributed the cause of failure externally would not. So far so good. But attribution of blame was only the jumping-off point of Teasdale’s critique.
He further wondered when helplessness would last rather than just evaporate, and when it would bleed into everything in one’s life rather than undermine trying (“response initiation”) only in the original setting. Judy pointed us in the direction of Bernard Weiner’s2 refinement of attribution theory. Attributions need not be confined to the internal-versus-external dimension. The cause can also be thought of as temporary (“I had a hangover”) or permanent (“I am stupid”), and as specific to the original situation (“I am bad at anagrams”) or global (“I can never solve my problems”).
So to respond to John Teasdale, we reformulated the helplessness theory of depression to take into account the way people think about the causes of their helplessness. A person who interprets the cause as internal will feel unworthy and lose self-esteem. A person who interprets the cause as permanent will be helpless long into the future. A person who interprets the cause as global will be helpless in entirely different situations. Crucially, when this way of thinking becomes habitual, it is a “depressive attributional style,” internal, stable (permanent), and global for bad events: “It’s me, it’s going to last forever, and it’s going to undermine everything I do.” Nondepressed people who have this unfortunate style will, when they encounter setbacks, become depressed more easily than people without it. I called the style “pessimism” and hypothesized that it was a risk factor for depression in exactly the same way that smoking was a risk factor for lung cancer and heart disease.
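For readers who want the reformulation spelled out mechanically, here is a minimal, purely illustrative Python sketch. The class, the helper function, and the simple majority rule are my own assumptions for exposition, not any instrument or scoring scheme from the research described here; all the sketch encodes is the three dimensions and what it would mean for explanations of bad events to be habitually internal, stable, and global.

    # Illustrative only: a toy encoding of the reformulated model's three
    # dimensions for explaining bad events. The majority rule below is an
    # assumption for exposition, not a validated scoring method.
    from dataclasses import dataclass

    @dataclass
    class Attribution:
        """How a person explains one bad event."""
        internal: bool   # "It's me" vs. "It's the circumstances"
        stable: bool     # "It will last forever" vs. "It's temporary"
        global_: bool    # "It undermines everything" vs. "just this situation"

    def looks_pessimistic(history: list[Attribution]) -> bool:
        """Tag a habit of explanation as 'pessimistic' if, across many bad
        events, the explanations are mostly internal, stable, and global."""
        if not history:
            return False
        half = len(history) / 2
        return (sum(a.internal for a in history) > half
                and sum(a.stable for a in history) > half
                and sum(a.global_ for a in history) > half)

    # Three setbacks, each explained as "me, forever, everything."
    setbacks = [Attribution(internal=True, stable=True, global_=True)] * 3
    print(looks_pessimistic(setbacks))  # True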
John Teasdale heartily agreed with us, and with him as a coauthor we published the theory.3 This became a pattern in my own work: when wrong, I tried not to get defensive. I acknowledged my shortcomings in print and, if possible, reformulated or even jettisoned ideas accordingly. I tried to work with my critics, not against them. Unlike on Broadway, where criticism means a flop, in science it is the key to progress. I have Ludwig Wittgenstein’s splendid example here in mind. Wittgenstein’s first book—in which, he confided to Bertrand Russell, he had “solved all the problems of philosophy”—was grounded in logical atomism.4 Midway through his stormy career, he came to believe that this view of language—that words can correspond to the logical atoms of the world—was wrong. In Philosophical Investigations,5 his last major work, published after his death, he recanted and held that the meaning of a word was just its use in the language, not its correspondence to the world. While I suspect that Wittgenstein was a bit of a self-conscious poseur, I believe his openness to changing his mind is the very model of how knowledge progresses.
With the reformulated theory to guide us, we set out to test it on several fronts throughout 1977 and the first half of 1978. The critical member of the research team, Chris Peterson, soon joined us. But before I could really understand just how critical he was, a year of transformation, 1978–1979, awaited me in California.
THE CENTER FOR Advanced Study in the Behavioral Sciences sits majestically aloof from the world atop a live-oak-dotted hill in Stanford, California. It had been the sought-after destination for young, ambitious scholars going on sabbatical since it was founded twenty-five years before, and I spent my second sabbatical there. Dave Krantz, a distinguished mathematical psychologist, said of it, “It is the first honor I’ve gotten that wasn’t for my potential.” There I realized what my Guggenheim Museum dream meant, and there I had my heart broken.
Suzanne Miller and I rented a modern house in pricy Woodside, horse and horsefly country in the wild hills that separated Stanford from the Pacific Ocean. I had left Kerry and my two children the summer before, and Suzanne and I then rented an apartment together in Washington Square in the center of Philadelphia. Ours was a rocky relationship. She left her assistant professorship at the University of Western Ontario to be with me and was scouting for her next job. The way other men looked at her—and, more, the way she looked back—troubled me. I worried that she was still in the romance market and wanted to trade up. Perhaps, I hoped, the slow-paced idyllic atmosphere of the center would settle her down.
More than forty of us were in residence, and the cast of characters included some of the brightest lights of my generation. Jim Heckman, quiet and shy, would go on to win a Nobel Prize in economics. Nan Keohane would become president of Wellesley and then president of Duke. Marina Whitman, professor of economics, would increase her salary tenfold when she left the center to become financial vice president of General Motors.
Beth Loftus, just beginning her work on defective memory in eyewitness testimony and in alleged sexual abuse, started our year off by sending us each a survey about our opinions on memory. “Do we have all our memories stored up someplace in the brain?”
My reply was typically contentious: “I think we have almost nothing at all stored. I am a bridge player, and I have looked at the queen of spades literally half a million times. I only have the foggiest idea of what she looks like. She is mostly black in color of course, but is she wearing a crown? Carrying a scepter? I don’t know. You could put a pearl necklace on her, and I wouldn’t notice the difference.
“Outside the first-floor men’s room in College Hall at Penn,” I continued, “is a large bronze plaque dedicated to the memory of ‘Pomp,’ Albert Monroe Wilson, a beloved black janitor from the nineteenth century. I gave a quiz to the male denizens of College Hall: ‘Who is Albert Monroe Wilson?’ Despite passing the plaque several times a day, no one could identify Albert Monroe Wilson. Memory is overrated, and computer storage is the wrong model for memory. Unlike any machine, we ‘store,’ and perhaps only ever know, the gist of things.”
Days at the center could be contentious, but for the most part the place was paradise—picking fresh figs from the tree outside my carrel for breakfast, writing the morning away, then heading off on an unscheduled adventure at lunch with any of the yeasty people who happened to sit with me: Seymour Kety, Janet Spence, John Whiting, Dick Neustadt, Dave Featherman, Dick Thompson, or Jim Fries. After lunch, the director, Gardner Lindzey, hosted my favorite time of the day, the volleyball game. Gardner had the cushiest of jobs—running the center. He excused himself after one game, saying that he had to go write a letter.
“D?” I inquired.
“I actually got a note about you and volleyball,” he confided, never short of repartee. “I had mentioned what a powerful spiker you are, and my correspondent said, ‘Of course, but can Seligman set up?’” This stung. I prided myself on setting others up in volleyball and believed I did the same in science with my students and colleagues. But others apparently perceived me as being out only for myself. We were discussing Erik Erikson in a seminar that week, and I wondered now about the later stages of a successful life, especially the stage of generativity. Was I only a spiker? Would I ever truly reach the stage of setting others up?
I came to the center to write a textbook on abnormal psychology with my friend, Dave Rosenhan. W. W. Norton, the tony New York publisher, guaranteed us each a substantial income, $100,000 spread over five years, as an advance. This was the first real money I’d earned from writing. Dave and I spent hundreds of hours schmoozing about abnormality, and I spent hundreds of hours in my little carrel overlooking Palo Alto, reading and writing. The chapters on depression and anxiety flowed effortlessly, since I’d been thinking about these for a decade. Some of the other chapters were hard slogs. Psychosomatic disorders was one. I read my way through a literature filled with neuromythology and still couldn’t put the topic together sensibly. But the chapter on sex came together, and I ventured a new theory on sexual disorders. Any new theory is way out of place in a textbook, but I put it in anyway and refined it in later editions. I proposed that sexual development has five layers, and each can go awry. The deeper the layer, the harder it is to change. Layer 1, the very deepest, is gender identity: feeling like a man or a woman. It gets laid down hormonally in the third month of pregnancy, and if the hormonal bath is abnormal, transsexuality—feeling one is a man trapped in a woman’s body or vice versa—results. Layer 2 is object choice: being attracted to members of the same or the other sex. This too is largely hormonal but gets added a bit later in gestation, with neuroanatomical consequences. Layer 3 is sexual preference: the objects (breasts, bottoms) and the fetishes (feet, pain, children) that turn you on. These are acquired in late childhood or early adolescence. Layer 4 is sex role: stereotypically masculine or feminine behavior. This is acquired socially through childhood and adult experience. Layer 5 is sexual performance: adequacy in intercourse. Disruption here can happen anytime in adulthood. These are layers of depth, implying how hard or easy they are to change, for example in therapy. We have almost no choice about layers 1 and 2. Sexual identity and sexual object choice are difficult if not impossible to change, whereas preferences, roles, and performance are increasingly easier to change. I still believe this theory is largely true, but as far as I know it has never made its way out of our textbook and into sex research or the clinic.6
By the middle of 1979, the textbook was drafted. But this turned out not to be the main event of my sabbatical.
Most of us at the center, myself included, were “loners.” But one group gathered there to carry out a single purpose, and this group transformed me. These individuals came from, in my view, a low-prestige part of the scientific jungle—applied social science—and I had not, in my narrow world of pure experimental psychology, even heard of any of them.
Their purpose was to found a new field around aging and to make it respectable. “Why,” I wondered as I first got to know them, “does the world need a field of aging?”
They launched a volley of answers back at me, answers I found unsettling. First, they said that psychology had so-called developmental psychology, but “development” referred only to infants and toddlers, barely to adolescents, and not at all to the middle-aged or elderly. This narrowness rested on the dubious assumption that not much changed after childhood. Adult development, they contended, was a time of both continuity and, more importantly, change. My year at the center and my own life demonstrated to me that this was true.
Second, they said that mainstream social science had adopted as self-evident a simplistic and misleading methodology. The routine methods of the mainstream were experimental and cross-sectional, and the laboratory experiments suffered from the thorny problem of external validity. They got an A for rigor but a D– for realism. They rarely captured reality, and so they were rarely taken seriously or applied to the issues in the world that mattered.
“Do you really believe, Marty, that the response of giving up when loud noise is inescapable is anything like the response of a mother to the death of her child or the reaction to unemployment or mandatory retirement?” They were too polite to confront me quite that directly, but I had enough imagination to harbor exactly these doubts about my own work.
What they contended was wrong with “cross-sectional” methods was an entirely new challenge for me. All of the experiments I’d done were cross-sectional; we looked at one time slice and assumed continuity into the next time slice of a person’s life. But aging was a field that doubted continuity; it suggested that people may change as they age and that what you find early may not hold later.
Cross-sectional studies, my new colleagues contended, are lazy shortcuts that can result in completely wrong conclusions. Let’s say you want to know the effect of aging on worrying about money. Further, let’s say you do a cross-sectional study of three age groups: people born in 1915, 1930, and 1945. The results would be clear-cut, with the oldest group the most worried about money and the youngest group the least worried. You conclude that worrying about money grows with age.

Wrong! The members of the 1915 group were in their formative teenage years when the Great Depression hit, whereas those born in 1930 and 1945 became teenagers in progressively more prosperous times. Financial insecurity is not an age effect but a “cohort” effect, determined not by age per se but by the historical moment at which an individual happens to hit adolescence.
A much better alternative is a longitudinal study, one that repeatedly tests the very same people as they age. This would reveal no change in financial insecurity as those born in 1915 got older, and perhaps even a lessening of their worry about money. Longitudinal studies are undoubtedly better than cross-sectional studies, but they are hideously expensive and labor intensive. Imagine following several hundred people born in 1915 for fifty years, tracking them down, and trying to convince them to take the same old questionnaires every decade.
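A tiny, hypothetical simulation makes the confound vivid. The worry scores below are invented numbers chosen only to illustrate the logic, not data from any study mentioned here: worry is fixed within each birth cohort, yet a single 1979 snapshot appears to show worry rising with age, while following one cohort across decades shows no change at all.

    # Hypothetical illustration of a cohort effect masquerading as an age effect.
    # Each birth cohort carries a fixed level of financial worry (set by the era
    # in which it hit adolescence), independent of how old its members are now.
    worry_by_cohort = {1915: 8, 1930: 5, 1945: 2}   # 0-10 scale, invented numbers

    def worry(birth_year: int) -> int:
        """Worry depends only on cohort here, never on current age."""
        return worry_by_cohort[birth_year]

    print("Cross-sectional snapshot, 1979:")
    for birth in (1915, 1930, 1945):
        print(f"  age {1979 - birth}: worry = {worry(birth)}")
    # Worry climbs with age in this one slice, tempting the wrong conclusion
    # that worrying about money grows as people get older.

    print("Longitudinal study, cohort born 1915:")
    for survey_year in (1945, 1955, 1965, 1975):
        print(f"  age {survey_year - 1915}: worry = {worry(1915)}")
    # A flat line: the apparent 'age effect' was a cohort effect all along.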
My colleagues’ third justification for their new field was political: the changing demographics of America. The baby boomers (born between 1946 and 1964) were now the dominant taxpayers and wanted Congress to fund a science that would tell them how they could age well, with better physical and mental health.
So this aging group gathered at the center with a clear purpose: to establish a respectable scientific field, to seduce the best young scientists into it, and if possible to build support for the fledgling National Institute on Aging (NIA).
This group was indeed trying to seduce me. Flattered, I began to attend their seminars and did the reading, even though a year before I would have labeled them “softheaded.” I got to know each of them well. Matilda Riley, a gorgeous silver-haired seventy-year-old and second in command of the NIA, was their official boss, but the power behind the throne was Paul Baltes. Paul grew up in the Rhineland at the end of World War II. His parents had owned a struggling restaurant and put him to work in the kitchen as a kid. When Paul was eight, a drunken, belligerent customer was arguing with his mother at the cash register. Paul came in from the kitchen and talked the customer down. After that, they sent Paul out to deal with difficult customers. Paul grew into a soft-spoken dynamo, the Henry Kissinger of academic psychology, and I was astonished to find myself, surely a difficult customer, listening well to his vehement critique of experimental psychology and even liking him in the process. I started to sip their Kool-Aid.
George Vaillant, a Harvard-bred psychoanalyst and a proper WASP Brahmin, was by his provenance the group member most likely to repel me. He had just published Adaptation to Life,7 the life stories (to date) of the two hundred most “promising” members of the Harvard classes of 1939 through 1944. He was the custodian of the Grant Study, a longitudinal study centered on the outlandish idea of investigating what is right in life as opposed to what cripples life and stultifies growth. It started with the most privileged and talented young people to discover what leads to success and what leads to failure. George’s theory of success was “mature defenses.” Those Harvard undergraduates who reacted to setbacks with “immature” defenses, such as denial and repression, seemed to fail later in life: broken marriages, alcoholism, and underemployment. Those who reacted “maturely,” with humor, sublimation, or altruism, went on to lead much more fulfilled and successful lives.
This was all too positive for me. It reminded me of Horatio Alger, Norman Vincent Peale, and the chapel services at the Albany Academy. It felt like the empty “American” propaganda I was wise enough to reject in favor of a career working to undo that which cripples life.
George, however, listened well and was charming, humble, and agreeable, talents that enabled him to interview two hundred Harvard snobs every five years and remain in their good graces. We shared an interest in fine wine and befriended Dick Graff, owner of the renowned boutique vineyard Chalone. George and I were wine groupies, and Dick, for his part, turned out to be a fervent psychology groupie. The three of us became the unofficial sommeliers for the center and gave wine tastings for the class. Dick spent a good deal of time with Suzanne and me. Suzanne flirted openly with him, and Dick confided to me that she was the only female he had ever found attractive. I probably should have taken more notice of her other flirtations, since unbeknownst to me she had already successfully traded up—to Walter Mischel, professor in the Stanford psychology department down the hill. Walter was famous for the ingenious “marshmallow” test.8 He was a polished, wealthy Viennese émigré and a collector of modern paintings. Within the year, Walter left his wife, Harriet, for Suzanne, and Suzanne left me for Walter.
While I left the center bereft of family and broken of heart, I had nevertheless grown scientifically. I began my year there a snobby, dyed-in-the-wool experimental psychologist, convinced that the well-controlled experiment was the royal road to truth. After all, no other method isolated the cause so perfectly. But I had now become convinced that the laboratory experiment came at too great a cost: the variables workable in the laboratory were anemic and too often bore only an unconvincing relationship to the real world. My “aging” friends convinced me that longitudinal methods undertaken outside the laboratory, observing large groups of people across time, had better external validity and that the statistical methods developed for longitudinal studies could also home in on cause. Study real people—not just white rats and college sophomores. Study real outcomes—divorce, depression, cancer—not just laboratory analogues. Study how real people actually change over time in their natural habitats.
Study well-being in addition to misery.
These seemed like the “right” questions that the Godhead floating over the Guggenheim Museum wanted me to take on.