Ideology is a conceptual framework, it’s the way people deal with reality.
Everyone has one. You have to. To exist, you need an ideology.
—Alan Greenspan, October 23, 2008
IT’S AS EASY to fall in love with an idea as with a person. Big ideas are especially alluring. They bring order to the world, give meaning to life. When we join political parties or churches or governments, we find soul mates with the same worldview, the same values—and life feels complete. We may even talk of being “wedded” to our ideas. Much of our identity is defined by what we believe and we actively seek confirmation of those beliefs. Actually, we go even further: Our brains treat differently any information that might challenge our closely held beliefs.
In 2004, a team of cognitive neuroscientists set out to see what this process actually looks like. Drew Westen, at Emory University, was interested in what psychologists call “motivated reasoning” and what Freud called defense mechanisms: the processes by which people adjust what they know to avoid bad feelings such as anxiety and guilt. He theorized that the brain’s neural networks would try to satisfy two kinds of constraints: cognitive constraints—we want to put information together in a way that feels rational—and emotional constraints, meaning we want to feel good about the information we take in.
To test his theories, Westen and his team recruited fifteen committed Democrats and fifteen committed Republicans to submit to fMRI scans of their brains while reading political material. As they lay in the scanner, they read pairs of quotes attributed either to President George W. Bush or to presidential candidate John Kerry. In each pair, one statement was entirely compatible with the candidate’s position and the other contradicted it. Westen wanted to find out whether the brain would treat the contradictions of the preferred candidate in the same way as it would treat the contradictions of a disliked candidate.
The experiment found that the partisan participants gave a far rougher ride to the contradictions that came from the candidate they opposed.
“They had no trouble seeing the contradictions for the opposition candidate,” Westen wrote. “But when confronted with potentially troubling political information, a network of neurons becomes active that produces distress. Not only did the brain manage to shut down distress through faulty reasoning—but it did so quickly. The neural circuits charged with regulation of emotional states seemed to recruit beliefs that eliminated the distress and conflict.”1
But, said Westen, the brain didn’t stop at eliminating the uncomfortable contradictions. It worked overtime “to feel good, activating reward circuits that give partisans a jolt of positive reinforcement for their biased ‘reasoning.’ ”2
In Westen’s experiment, the reward circuits the brain was using were the same ones activated when a junkie gets a fix. In other words, when we find the thoughts we agree with, or are able to eliminate the ones that make us uncomfortable, we feel the same kind of euphoria and reassurance that an addict feels when reunited with his drug of choice: all is right with the world. At least for a while.
The brain doesn’t like conflict and works hard to resolve it. This may be one reason why, when we gather with like-minded people, we are more likely to seek out common ground than areas of difference: quite literally, it feels better. But it also feels rational, even when it isn’t. Which means that when we work hard to defend our core beliefs, we risk becoming blind to the evidence that could tell us we’re wrong.
Alice Stewart arrived in Oxford, England, in 1941 to work as a resident physician at the Radcliffe Infirmary. She was, by all accounts, an outstanding doctor, the youngest woman at the time to enter the Royal College of Physicians. Her colleagues considered her to be a wonderful teacher and an outstanding diagnostician, full of boundless energy, with an appetite for big challenges and hard problems. Doctors were much needed during the war and the fact that, as a mother with two small children, she couldn’t be called up for military service made her even more valuable, while her failing marriage meant she was willing and able to go wherever she was needed.
While in Oxford, Stewart treated patients but also led a number of research projects into problematic, puzzling disease patterns. One of them involved trying to figure out why munitions workers filling shells with TNT seemed so susceptible to jaundice and anemia. In wartime, the munitions factory was staffed by “the ragtag of the population,”3 which raised the question: Were they getting ill because they were vulnerable anyway, or was TNT the culprit? What started as a laboratory study soon became a field study: By persuading her healthy medical students to work in the plant and emulate the lives of the factory workers, Alice was able to prove that the diseases were a consequence not of the workers’ weaker health but of their exposure to TNT. Subsequent projects—one investigating high turnover among laborers working with carbon tetrachloride, another into miners suffering from lung disease—meant that, without having deliberately chosen to do so, Alice found herself working in the field of social medicine and epidemiology. It was an emerging discipline awash with hard problems.
Growing concern about the connection between high rates of illness and low social status led to the creation in 1942 of Oxford’s Institute of Social Medicine. Why did poorer people suffer approximately twice the rate of infant mortality; ear, mastoid, and respiratory illnesses; ulcers; and heart disease? What was the relationship between poverty and illness and what, in the light of the newly formed National Health Service, could be done about it? Stewart was recruited to the institute by one of the founding fathers of epidemiology, John Ryle, and she brought to her work all the indignant energy that was her hallmark.
“Practicing medicine without asking these larger questions is like selling groceries across the counter,” she said. “You go in with an illness; the doctor sells you a pill. It’s no more responsible than that. Nobody goes out and asks, ‘Who didn’t come in because he was too sick to come? Why are so many people coming in with this, and so few with that?’ ”4 But when Ryle died in 1950, Stewart’s progress ground to a halt. His institute was demoted to the “Social Medicine Unit” and Stewart lost her mentor and her status.
Abandoned by the Oxford establishment, and left with a tiny salary but no building, no funding, and no work, in a field that commanded little respect or kudos, Alice found that the only way she could make her mark was by identifying—and solving—a hard problem. The burning issues of the day—lung cancer, cardiovascular disease, and polio—were already crowded with researchers. This left just one: leukemia. Incidence of the disease was increasing at a rate that made it look like an epidemic, but the number of patients was still so small that the disease was difficult to study using statistics, the traditional tool of epidemiology. Two anomalies caught Alice Stewart’s eye. Leukemia was affecting children aged two to four. That was odd because children are typically healthy at that age: they’ve survived infancy and haven’t yet started school. And the children dying from leukemia weren’t poor: In fact, they came from counties with better medical care and lower overall death rates. How could that be? Stewart decided to interview the mothers of leukemia victims to see if she could find anything in their lives that might account for this pattern. She didn’t know what she was looking for, so her questions started with conception.
“It was a needle-in-a-haystack search,” says Gayle Greene, who first met Alice Stewart in 1992. Even at the age of eighty-six, Stewart was so dazzling that Greene was inspired to write her biography.
“Alice didn’t know what she was looking for, so she asked questions about everything: exposure to infection, inoculation, cats, dogs, hens, shop-fried fish and chips, highly colored drinks, colored sweets, and have you had an X-ray?”5
Stewart proposed interviewing all of the mothers of children who had died of leukemia and other forms of cancer between 1953 and 1955. But she couldn’t get mainstream funding for her work. A mere £1,000 was found, from the Lady Tata Memorial Fund for Leukemia Research, to pay for Alice’s pioneering study. With such minimal resources, she had to be inventive. She designed her questionnaire and took it in person to all the medical officers in 203 county health departments in the country. With characteristic tenacity, she persuaded them to use their own people and local records to answer all the questions on her survey. Her tiny grant was all spent on train fares as she went up and down the country, laden with carbon paper and manila envelopes.
“When the Americans tried to do a study like ours,” Alice later recalled, “they gave up because it cost too much. But it cost us so little because I was making use of existing records.”6
Her study compared five hundred leukemia deaths, plus five hundred deaths from other forms of cancer, with one thousand live children of the same age, sex, and region. When the surveys started to come back, the results leapt out. The common denominator wasn’t the colored sweets, the pets, or even the fish and chips.
“ ‘Yes’ was turning up three times for every dead child to one for every live child, for the question, ‘Had you had an obstetric X-ray?’ Yes was running three to one. It was a shocker. They were as like as two peas in a pod, the living and the dead. They were alike in all respects except on that one score. And the dose was very small, very brief, a single diagnostic X-ray, a tiny fraction of the radiation exposure considered safe. And it was enough to almost double risk of an early cancer death.”
The recognition that X-raying pregnant mothers so dramatically increased the chances of childhood cancer was the kind of finding epidemiologists dream of: a hard problem with good data pointing to a clear solution. But, like the thorough scientist she was, when her excitement died down, Stewart questioned her results over and over again and she asked colleagues to check them before she published them. When her article, “Preliminary Communication: Malignant Diseases in Childhood and Diagnostic Irradiation In-Utero,” appeared in The Lancet in 1956, it caused a stir. The Nobel Prize was mentioned and Alice was asked to repeat her survey in Scotland. Over the next eighteen months, Stewart and her team continued to collect data. Within a three-year period they had traced 80 percent of all childhood cancer deaths in England between 1953 and 1955. Publishing a full report in the British Medical Journal in 1958, they were able to conclude definitively that a fetus exposed to an X-ray was twice as likely to develop cancer within the next ten years as was a fetus that had not been exposed.
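For readers curious about the mechanics, here is a minimal sketch, in Python, of the case-control arithmetic that underlies a finding like Stewart’s. The counts are hypothetical, chosen only to echo the “three to one” pattern she described, not her actual tallies; her published analysis put the relative risk near two.

```python
# A minimal, illustrative sketch of case-control arithmetic.
# The counts below are hypothetical, not Alice Stewart's actual data.

def odds_ratio(exposed_cases, unexposed_cases,
               exposed_controls, unexposed_controls):
    """Odds of exposure among cases divided by odds among controls.

    For a rare disease such as childhood leukemia, the odds ratio
    approximates the relative risk associated with the exposure.
    """
    odds_cases = exposed_cases / unexposed_cases
    odds_controls = exposed_controls / unexposed_controls
    return odds_cases / odds_controls

# Hypothetical survey: 1,000 children who died of cancer ("cases")
# matched against 1,000 live children ("controls"). "Yes" to the
# obstetric X-ray question turns up three times as often among cases.
ratio = odds_ratio(exposed_cases=150, unexposed_cases=850,
                   exposed_controls=50, unexposed_controls=950)
print(f"Estimated odds ratio: {ratio:.1f}")  # ~3.4 with these made-up counts
```

The design choice is what made the study affordable: because cases are compared with matched controls on exposure alone, the method needs only survey answers and existing records, not the follow-up of an entire population.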
“We reckoned that a child a week was dying from this practice. We thought that doctors would stop X-raying on the mere suspicion that we were right and we felt that we must hurry to cover all the deaths that occurred in the next ten years, because once they stopped X-raying, there would be no further cases.”
On the contrary, doctors carried on X-raying pregnant mothers for the next twenty-five years. Not until 1980 did major American medical organizations finally recommend that the practice be abandoned. The United Kingdom followed suit a year later.
Why did it take so long? How could so many doctors, the world over, have been so blind? Stewart’s findings were clear, her data voluminous and, initially, greeted with acclaim. To us now, and to Alice at the time, it seemed obvious that the practice of X-raying pregnant women should stop immediately. What happened?
Many like to lay the blame on a personality clash with fellow epidemiologist Richard Doll.
“Doll was really influential and he was on the Medical Research Council and he truly did not want to let her into the story,” argues Gayle Greene. “I think he knew that she was a better scientist than he and I think she had principles that he did not have and he could not forgive her for that.”
Doll rushed out a paper disputing Stewart’s findings—a tiny, quick study that he acknowledged was “not very good” and whose results he later described as “unreliable.” But Doll was a dominant figure in the British medical establishment and his voice carried a long way. Alice Stewart’s daughter, Anne Marshall, remembered the impact Doll’s opposition had on her mother.
“I don’t know if Mum was upset by Doll but he certainly made her think again and again. And then she’d settle down and do the work—and she knew she was right. She didn’t enjoy a fight but, if she felt strongly about something, she was very good at having one.”7
It didn’t help that Stewart was an unconventional scientist; she was a divorced mother with two children, at a time when there were few women—and even fewer mothers—in science and when divorce was still not entirely respectable. Looking after her children alone didn’t leave Alice much time to network, build alliances, or seek out support.
“She wasn’t a political person,” says Gayle Greene. “She was doing her research, raising her family, and that took thirty-two hours of the day! Compare that to Doll who was such a schmoozer, such a political creature. So smooth! When I met him, I thought: this is a guy who’s done a lot of PR! You would never say that meeting Alice. She was authentic, genuine, a very disarming person—I mean everybody loved her, but she was not playing the game.”
Doll was a major obstacle. But personality alone doesn’t explain why, worldwide, the practice of X-raying unborn children persisted. At the Harvard School of Public Health, Brian MacMahon also set out to refute Stewart’s findings—but he found exactly what she had found: Cancer mortality was 40 percent higher among children whose mothers had been X-rayed. In the early 1960s, one of the largest radiation studies examined six million X-ray subjects in New York, Maryland, and Minnesota; that too confirmed Alice’s findings. New statistical methods and the advent of computers all served to make collecting and analyzing data easier and more accurate—but all the subsequent studies did, over and over again, was show that Alice Stewart, with her paper surveys and carbon paper, had been right all along. So why did doctors continue a practice that study after study showed to be so dangerous? How could they be so blind to all the data?
In part, the sexiness of X-rays was to blame. Ever since their discovery in 1895, X-rays had developed an aura of mastery and mystique. They were used as an exquisite and expensive form of portraiture in the 1890s and were even used as the ultimate tool for finding a ring that had been mistakenly baked into a cake.8 Shoe stores boasted of X-ray machines that ensured a perfect fit: “The salesman, the purchaser and even a purchaser’s advisory friend can visually know exactly how well a shoe is fitting, both under pressure and otherwise,” claimed the 1927 patent for the “shoe fluoroscope.” “With this apparatus in his shop, a shoe merchant can positively assure his customers that they need never wear ill-fitting boots and shoes; that parents can visually assure themselves as to whether they are buying shoes for their boys and girls which will not injure and deform the sensitive bone joints.”9
With so much investment in it, neither shoe salesmen nor doctors wanted to hear that there might be any risks associated with the new technology. They were wedded to it.
“No one likes to be told they’ve been doing something wrong all their lives!” That’s how Anne Marshall explains the reaction to her mother’s findings. “That’s what the radiologists and obstetricians took from Mum’s work—that they’d been doing something wrong. There were lots of them and they liked what they were doing and wanted to keep on doing it.”
“Doctors’ enthusiasm about radiology was so enormous that medical centers had invested in all kinds of X-ray equipment,” explains Gayle Greene. “They didn’t like being told that they were not only not helping their patients—but they were actually killing them! People are very resistant to changing what they know how to do, what they have expertise in and certainly what they have economic investment in.”
But Alice Stewart’s survey of childhood cancers did something even more radical and provocative than question standard medical practice. Her findings struck at the heart of a Big Idea central to scientific thinking at the time. Threshold theory maintained that, while a large dose of something like radiation would be dangerous, there was always a point—a threshold—below which it was safe. (That point is what, today, we might call a tipping point.) But Alice Stewart was arguing that in this case there was no threshold: no dose of radiation, however small, was safe for fetuses. It wasn’t just shoe shops and medical centers; a cornerstone of scientific orthodoxy was under attack.
She had to be wrong. If she was right, too many other assumptions had to be reexamined. What Alice Stewart had provoked in her scientific colleagues was cognitive dissonance: the mental turmoil evoked when the mind tries to hold two entirely incompatible views. It could not be true that threshold theory was right—but also that such tiny doses of radiation caused cancers. It could not be true that radiation was a new wonder tool—and that it also killed children. It could not be true that doctors cured patients—and made them sick. The dissonance produced by mutually exclusive beliefs is tremendously painful, even unbearable. The easiest way to reduce the pain—the dissonance—is to eliminate one of the beliefs, restoring consonance. It was easier for scientists to cling to their beliefs: in threshold theory, in the benevolence of X-rays, and in the idea of doctors as authoritative, smart, good people. Alice Stewart and her findings were sacrificed to preserve the Big Idea. Dissonance is eliminated when we blind ourselves to contradictory propositions. And we are prepared to pay a very high price to preserve our most cherished ideas.
The theory of cognitive dissonance was initially developed by Leon Festinger around the same time that Alice Stewart was studying childhood cancers. He had developed much of his theory studying nineteenth-century religious millenarian movements, but he yearned for a live, modern case study to test his ideas. In September 1954, he found his opportunity in a newspaper story.
PROPHECY FROM PLANET CLARION CALL TO CITY: FLEE THAT FLOOD.
IT’LL SWAMP US ON DEC. 21, OUTER SPACE TELLS SUBURBANITE.
The story described a suburban housewife, Marian Keech, who believed, on the basis of automatic writing, that the earth would be flooded on December 21. The fact that Mrs. Keech and her adherents held such a specific belief about an event destined to occur on a specific date made this a perfect test case for Festinger’s research: What would happen when a deeply held belief—a Big Idea—was disconfirmed by events? That the end of the world was due in months, and not years, made his research practical, too. When, as Festinger predicted, the flood failed to take place, would Mrs. Keech surrender her belief in the light of experience? Festinger’s theory suggested that she would continue in her faith but also, crucially, that it would become stronger than ever.
Festinger, even more unconventional in his research methods than Alice Stewart, set out with a few of his colleagues from the University of Minnesota to infiltrate Mrs. Keech’s community. For two months, they monitored the beliefs and varying levels of commitment among a small group with whom Mrs. Keech had shared the automatic writing, which she believed came to her via extraterrestrial messengers. Dr. Thomas Armstrong, a physician at Eastern Teachers College in Collegeville, and his wife, Daisy, became Keech’s devoted followers and they, in turn, recruited numerous students until there was a core of some fifteen devotees.
Mrs. Keech’s messages described an apocalyptic vision, according to which Lucifer had returned to Earth in disguise and was leading scientists to build ever greater weapons of destruction. Their work would culminate in the Earth falling apart and the disruption of the entire solar system. While forces of light struggled to reclaim humanity, man’s only hope was that enough people would be open to the light to escape another explosion.
Festinger went out of his way to point out that Keech and the Armstrongs weren’t crazy and weren’t psychotic. “True, Mrs. Keech put together a rather unusual combination of ideas—a combination peculiarly well adapted to our contemporary, anxious age,” Festinger wrote. “But scarcely a single one of her ideas can be said to be unique, novel or lacking in popular support.”10 There was nothing in her belief system that people haven’t believed before—or since.
Central to these beliefs was the prediction Mrs. Keech received that the world would end with an enormous flood on December 21. Only true believers would be saved. “The Supreme Being is going to clean house by sinking all of the land masses as we know them now and raising the land masses now under the sea. There will be a washing of the world with water. Some will be saved by being taken off the earth in spacecraft.”11
The messages were so bizarre and the belief system so open to ridicule that Festinger took pains to document just how serious and how real the group’s commitment was. These were not a bunch of kids pretending. One particularly devout member, Kitty O’Donnell, quit her job, quit school, lived off her small savings, and moved into an expensive apartment because she did not expect to need what little remained of her cash. Two members—Fred Burden and Laura Brooks—gave up their college studies; Laura Brooks threw away many of her personal possessions. Dr. Armstrong was eventually asked to resign his college position; the amount of time he spent talking to students about flying saucers had caused a flurry of parental complaints. But he was not dismayed, considering this merely “part of the plan.”12 His wife chose not to bother getting her dishwasher repaired: “It isn’t worth it, because the time is so short now.”13 And when Mrs. Keech received a call from a salesman of cemetery lots, she calmly explained that burial was “the least of my worries.”14 However ludicrous the prophecies may seem to us, this group lived their lives in the sincere belief that the flood was imminent.
Mrs. Keech and her followers confidently expected flying saucers to transport them to other planets before the cataclysm occurred. Several false alarms, when messages seemed to promise spacemen who never arrived, tested their faith. But on each occasion, the group either reinterpreted the messages to fit events or blamed themselves for faulty understanding. On the eve of the promised flood, the group spent the day together in “peaceful idleness” confidently awaiting their rescue. Arthur Bergen, a teenage member of the group, complained that his mother had threatened to call the police if he weren’t home by two the next morning. “The believers smilingly assured him that he need not worry—by that time they would all be aboard a saucer.”15 Warned not to wear metal of any kind, they fastidiously eliminated it from their clothing—zippers, snaps, belt buckles, bra clips—and removed foil from chewing gum, watches from wrists.
The last ten minutes were tense. When one clock said 12:05, a chorus of people pointed out that a slower clock was more accurate. But even when the slower clock confirmed midnight, no one appeared and nothing happened. No flood, no flying saucers. Mrs. Keech continued to receive long, confusing messages from “the Creator” but by two A.M. Arthur Bergen had to take a cab home to his mother. By four thirty A.M., the group was distraught, close to tears, and some were beginning to show signs of doubt. How would they handle the disconfirmation of their passionately held beliefs? This was the question at the heart of Festinger’s two-month study.
At four forty-five A.M. Mrs. Keech received a new message. “Not since the beginning of time upon this Earth has there been such a force of Good and light as now floods this room and that which has been loosed within this room now floods the entire Earth.”16 The goodness of the group had saved the world from flood.
The group was jubilant: the belief system was intact. But a greater change overcame Mrs. Keech. Previously highly reticent, she was now eager to call the local newspaper and share her good news. Another member of the group insisted the news go further, to the Associated Press; the Creator surely wouldn’t want the story to be an exclusive. Despite—or because of—the initial challenge to their belief, their faith now was stronger than ever and the believers more energetic in their proselytizing. Evidence had not upset belief. Just as Festinger had hypothesized, disconfirmation had actually made their belief stronger.
And they never lost it. While Mrs. Keech eventually left Lake City, she continued to receive automatic messages that she relayed to the faithful. The Armstrongs were as devout as ever, their faith “boundless and their resistance to disconfirmation sublime.” Of the eleven members of the Lake City group, each of whom had witnessed unequivocal disconfirmation firsthand, only two completely gave up their belief in Mrs. Keech’s writings—and they were the two who had been least committed from the outset.
Festinger’s academic account of this episode can’t resist some of the humor implicit in it, but the thrust of his argument is deadly serious. He and subsequent psychologists argued that we all strive to preserve an image of ourselves as consistent, stable, competent, and good. Our most cherished beliefs are a vital and central part of who we are—in our own eyes and the eyes of our friends and colleagues. Anything or anyone that threatens that sense of self produces pain that feels just as dangerous and unpleasant as hunger or thirst. A challenge to our big ideas feels life-threatening. And so we strive mightily to reduce the pain, either by ignoring the evidence that proves we are wrong, or by reinterpreting evidence to support us.
Psychologist Anthony Greenwald called this phenomenon the “totalitarian ego.”17 It operates, he said, just like a police state: locking away threatening or incompatible ideas, suppressing evidence, and rewriting history, all in the service of a central idea or self-image. Marian Keech’s followers would reinterpret events to fit their expectations, because not doing so would have destroyed their sense of who they were in the world. If doctors and scientists who read Alice Stewart’s research believed it, and acted upon it, then they would have had to accept that they had harmed patients. But doctors don’t like to think of themselves as sources of harm; they go into medicine to be, and do, good. Scientists embracing Alice’s findings would have had at least to question the big idea of threshold theory, but scientists like big ideas, organizing principles, perhaps more than the rest of us do. They’re what hold the data together, in just the same way that our beliefs and values hold our sense of self together. Acknowledging error, in these areas that are so vital for our self-definition, feels far too costly. Even as late as 1977, the National Council on Radiation Protection argued that doctors must have X-rayed only those fetuses that were destined to get cancer. How they could have known which these were, the council never explained. But that scientists should have developed so convoluted an argument illustrates how hard the mind will work to defend its most cherished and defining beliefs.
Festinger argued that, as individuals, we are all highly driven to make sense of the world and of our place in it. And we do so by gathering around us the ideas, and the people, that verify our story, so to speak. The work that Drew Westen and other scientists have done more recently has served to illustrate that cognitive dissonance is not just a theory; it has a physical reality in the way the brain handles information that we like—and the way it handles information that causes us distress. The fact that, as we already know, we’re drawn to people similar to ourselves merely reinforces this process. Mrs. Keech and her followers would have had difficulty maintaining their faith if they had been isolated and alone: confirmation of each other by each other kept their commitment secure. In just the same way, the medical profession stuck together, led by the socially adroit Richard Doll, holding out against Alice Stewart’s findings.
Social support makes it easier to do things, or believe in ideas, that would feel a lot more uncomfortable if we were on our own. That social support always comes in the form of family, friends, or colleagues who share, and act on, the big ideas that bring them together. But institutional power is a particularly seductive form of social support. After all, if you are in a position of tremendous institutional or political power, then not only are you hugely confirmed by the colleagues who share your beliefs, but questioning them would threaten everything: job, position, reputation, future career.
There’s an exquisite moment in Errol Morris’s film about Robert McNamara, The Fog of War, when McNamara talks about meeting the former North Vietnamese foreign minister Nguyen Co Thach years after the end of the Vietnam War. As U.S. secretary of defense from 1961 to 1968, McNamara had been as convinced a Cold Warrior as any of his cabinet colleagues. His job, as he saw it then, was not to question the war but to prosecute it effectively. His passion, both for the ideology of the administration and for the success of his career, he later saw, had blinded him to any understanding of his enemy.
“Mr. McNamara, you must never have read a history book,” McNamara recalled Thach saying to him. “If you had, you’d know we weren’t pawns of the Chinese or the Russians. McNamara, didn’t you know that? Don’t you understand that we have been fighting the Chinese for a thousand years? We were fighting for our independence. And we would fight to the last man. And we were determined to do so. And no amount of bombing, no amount of U.S. pressure would ever have stopped us.”
Cold War ideology had blinded McNamara and his colleagues to the fundamental, primary motivation of the Vietnamese. They weren’t fighting to become part of a greater communist bloc. They were fighting to become free from all imperial powers. But for anyone to have questioned Cold War orthodoxy within the Johnson administration at the time would have jeopardized status, reputation, and position. McNamara’s blindness doomed him to failure; he could not win the war because he did not understand his opponent. Far from making sense of events, the big idea left him powerless to understand them. And so, ultimately, he lost everything he had striven so hard to protect.
Economic models work in ways very similar to such ideologies: pulling in and integrating the information that fits the model, leaving out what can’t be accommodated. The economist and Nobel laureate Paul Krugman compared such models to ancient maps. At first, they were wildly misleading but did incorporate a lot of information: secondhand travelers’ reports, guesses, and anecdotes. As the standard for accuracy rose, much of that information wasn’t deemed good enough, so it got left out—with the result that by the eighteenth century, most of Africa went blank. But Krugman, whose beautiful models have brought him such fame, recognizes how incomplete they can be.
“I think there’s a pretty good case to be made that the stuff that I stressed in the models is a less important story than the things I left out because I couldn’t model them, like spillovers of information and social networks.”18
The problem with models, in other words, is that they imply that whatever does not fit into them isn’t relevant—when it may be the most relevant information of all. But we treasure our models and personal big ideas because they help us to make decisions about what to do with our lives, whom to befriend, and what we stand for. A profound and innate part of who we are, they become so deeply entwined in all aspects of our lives that we may forget how profoundly they filter what we see, absorb, and remember. As our brains give our preferred ideas a smooth, easy flow, impeding distressing contradictions, the riverbed of our beliefs gets deeper and its sides grow higher.
In the case of someone like Mrs. Keech, it was very obvious (to everyone but her) just how peculiar her ideas were. But when ideas are widely held, they don’t stand out as much; they can even become the norm. We may not see them as ideology and we don’t see their proponents as zealots. But appearances can be deceptive.
“Greenspan’s willful blindness was incredible,” says Frank Partnoy, professor of law and finance at the University of San Diego. “He had a highly simplistic view of how markets behaved. He believed in the core of his soul that markets would self-correct and that financial models could forecast risk effectively.” Partnoy doesn’t criticize the former chairman of the Federal Reserve lightly or from the lofty perch of someone observing financial shenanigans from a safe distance. He sold derivatives on Wall Street from 1993 to 1995 and knew, firsthand, in gritty detail, just how convoluted, disingenuous, obscure, and risky they were. He eventually left the industry, utterly disillusioned by how fraudulent it was. But his years on Wall Street had shown him, up close and personal, what the derivatives market was all about. And he watched with mounting frustration and disbelief as the so-called Maestro Greenspan failed to do anything about it.
“There was just so much happening in markets that Greenspan didn’t understand—because it was inconsistent with his worldview,” says Partnoy. “It really illustrates the dangers of having a particular fixed view of the world and not being open to evidence that your worldview is wrong until it is too late.”
Greenspan’s worldview took shape when, in his late twenties and early thirties, he became a devoted acolyte of Ayn Rand. By this time, Greenspan had given up on his career playing bebop with a big touring band and turned to economics. He dropped out of Columbia’s Ph.D. program to form a consulting firm while developing a close personal, and passionately intellectual, relationship with Rand and her fellow Objectivists. Rand’s appeal to such men remains somewhat mysterious: an adulterous, failed screenwriter who’d emigrated from revolutionary Russia, she seems an unlikely muse for corporate titans and gurus of economic theory. Her understanding of how markets worked derived from the traumatic experience of having lived through the Russian Revolution, during which her family lost everything. But she had never trained as an economist, had never run a business, and wrote extraordinarily hideous, often impenetrable prose that purported to be philosophy. Nevertheless, Greenspan was smitten, springing to Rand’s defense when her novel Atlas Shrugged received an unfavorable critical review. Defending the book in 1957, he wrote to the New York Times: “Justice is unrelenting. Creative individuals and undeviating purpose and rationality achieve joy and fulfillment. Parasites who persistently avoid either purpose or reason perish as they should.” Greenspan could not conceive that a book with such qualities might have flaws.
What Greenspan admired in Rand, and what he embraced with evangelical fervor, was the belief that, if he were only liberated from the regulations and constraints imposed by government, man would attain ever greater heights of freedom, creativity, and wealth.
“I am opposed,” Rand told Mike Wallace in 1959, “to all forms of control. I am for an absolute laissez-faire, free, unregulated economy. I am for the separation of state and economics.”19
In Rand’s world, those who could do well would be freed from all constraint to express and articulate the full capacity of their talents; they would achieve joy and fulfillment. Those who weren’t up to it—“parasites,” she called them—would fail and get out of the way. It’s a touchingly romantic idea, as long as you assume that you will be one of the successful ones. It reminds me of adolescents who think all their troubles would be over if only their parents would stop telling them how to behave.
Greenspan wasn’t in love with Ayn Rand, but he was in love with her ideas and they framed everything he did. In his autobiography, The Age of Turbulence, he describes her as a “stabilizing force” in his life.20 Describing himself as a convert, Greenspan was clearly in awe of her, though proud that he had been able to “keep up with her most of the time.” Ayn Rand was right there, standing next to him, when, in 1974, he was sworn in as chairman of President Gerald Ford’s Council of Economic Advisors. And his ideas hadn’t changed a bit when he took over the Federal Reserve in August 1987. The man with a religious belief in the evils of regulation was now in charge of the money supply.
“I do have an ideology,” Greenspan told the U.S. Congress. “My judgment is that free, competitive markets are by far the unrivaled way to organize economies.”21
“He wanted to do whatever he could to deregulate the market,” says Partnoy, who has studied Greenspan’s career critically for many years. “But he was very clever about it. Rather than lobby upfront for the repeal of Glass-Steagall, he pressed for a series of small incremental changes. I think of it as Swiss cheese: put a few holes in, then a few more—and eventually there’s no cheese left! He honestly believed that we would all be better off if regulated markets got smaller and smaller, and the deregulated markets got bigger and bigger. That’s how you get to the promised land.”
What’s so striking, however, is that all the time Greenspan was nibbling away at regulation, the market was being rocked by a series of warning tremors offering strong evidence that its most deregulated sectors—the very sectors Greenspan was so eager to help grow—threatened to blow everything up.
In 1994, when Greenspan raised interest rates from 3 percent to 3.25 percent, the marketplace was full of derivatives that assumed interest rates would stay low. When interest rates rose instead, all hell broke loose. David Askin, one of the most active traders in complex mortgage derivatives, ran a six-hundred-million-dollar fund that went up in smoke in a matter of weeks and filed for bankruptcy on April 7. Five days later, Gibson Greetings, Air Products, Dell, Mead Corporation, and Procter & Gamble admitted to billions of dollars in losses from derivatives, many of which even their internal financiers did not understand. Congressional hearings were held, in which George Soros testified that “there are so many of them and some are so esoteric, that the risks involved may not be properly understood even by the most sophisticated investors.”22
In May 1994, Greenspan’s Fed raised the rate another half percent—and there was a bloodbath on Wall Street. Property and casualty insurers lost more than they had paid out on Hurricane Andrew in 1992; hedge funds, banks, securities firms, and the life insurance industry lost billions. According to Frank Partnoy, virtually every kind of institution, from every sector of the economy, suffered massive losses.
In Orange County, California, the seventy-year-old county treasurer—and college dropout—Robert Citron bet twenty billion dollars of public money on derivatives sold to him by Merrill Lynch; in December 1994, the county filed for bankruptcy. And Orange County wasn’t alone. Dozens of smaller municipalities, from California to Georgia to Maine and Montana, along with public utilities, city colleges, and pension funds, had invested in collateralized mortgage obligations and other derivatives, losing millions.
In 1994, Procter & Gamble sued Bankers Trust for the huge losses it had suffered from derivatives. The company’s lawyers used taped phone calls that demonstrated just how deliberately and knowingly the bankers had misled the firm. For once, a chink of light fell on the “dark market” of derivatives, and what it showed wasn’t pretty—and wasn’t regulated. At the same time, Kidder, Peabody, then part of General Electric, under the legendary Jack Welch, discovered losses of $350 million. It turned out that no one at Kidder, Peabody really understood what the firm’s traders were up to. The quarter in which the losses were reported was the first in fifty-two quarters that earnings came in lower than the year before. But after all this mayhem, the only legislation that emerged actually made life far harder for would-be plaintiffs when Congress limited securities lawsuits in 1995.
Frank Partnoy has chronicled each of these debacles, from what he calls “Patient Zero” in 1987 through Enron in 2001 to the banking crisis twenty years later. “It was foolish,” he wrote, “to deregulate markets simply because large institutions instead of individuals were involved. It was a well-established economic principle that markets with large sophistication and information gaps did not function well. The more they carved up markets, the harder it was for anyone to keep tabs on risk.”23
Each of these debacles reinforced the same lesson: derivatives were a “dark market.” No one knew what went on in these deals and because there was no statutory requirement to report anything, even the parties to the deals often did not know what they had. Had there been any reporting requirement, at least those with the most at stake might have gained some insight into their own exposure. Instead, the freedom to report nothing meant that not only did the government not really know what was going on; no one did.
That this could continue was possible only because so many people shared Greenspan’s ideology. The Financial Times journalist Gillian Tett compares such blind faith to the medieval Church.
“If this was a religion, Alan Greenspan was the pope,” says Tett. “He blessed derivatives. Then you had the high priests up at the altar, passing out blessings in the financial Latin that the congregation doesn’t understand. The pope is saying it’s all miraculous and wonderful and the blessings come in the form of cheap mortgages.”24
A few dissenting voices had the temerity to stand up to Greenspan and argue that derivatives posed a major risk and required oversight. In 1988, stockbroker Muriel Siebert testified before the congressional subcommittee on telecommunications and finance, in the wake of the 1987 market crash. The major problem with the market, she said, was derivatives.
“Program trades and index arbitrage end up bringing the volatility and rampant speculation of the futures pits to the floor of the Big Board. Futures have become the tail wagging the dog.”
In 1996, Brooksley Born was appointed to head the Commodity Futures Trading Commission, the agency uniquely tasked with regulating the twenty-seven-trillion-dollar derivatives market. And after the string of catastrophes in 1994, she was one of the few eager to impose some oversight. But Greenspan was having none of it. In 1998, Born’s CFTC issued a “Concept Release” outlining how regulation might work; Greenspan instantly issued a statement condemning it. Born was trying to devise mechanisms to monitor risks to a major segment of the economy; Greenspan marshaled all his friends, allies, and political capital to blast Born’s proposals out of existence.
“It seemed totally inexplicable to me,” Born recalled later. “It was as though the other regulators were saying, ‘We don’t want to know.’ ”25
Six weeks later, the hedge fund Long Term Capital Management, whose partners included Greenspan’s former deputy at the Fed, David Mullins, became insolvent. The size of the failure threatened the entire U.S. economy.
“Long Term Capital Management was exactly what I’d been worried about,” Born said. “No regulators knew it was on the verge of collapse. Why? Because we didn’t have any information about the market.”26
At last, the LTCM crisis provoked some support in Congress for regulation. But Greenspan once again moved quickly to quash it, saying “I know of no set of supervisory actions we can take that can prevent people from making dumb mistakes. I think it is very important for us not to introduce regulation for regulation’s sake.”
And Greenspan and his friends won the day: no regulation for over-the-counter derivatives was introduced. They could continue to trade without any capital requirements or rules against manipulation or even fraud. Greenspan’s free market was allowed to get bigger and bigger, and Brooksley Born resigned.
Three years later, in 2001, the sixth largest corporation in America, Enron, went bust. Inside an intricate web of malfeasance lay deadly derivatives, tied to the company’s stock price, which left its investors with nothing. Those investors weren’t all Enron employees; many were small investors, like Mary Pearson, a Latin teacher who testified before Congress after the company failed.
“I am just a pebble in the stream, a little bitty shareholder. I did not lose billions but what I did lose seems like a billion to me. I was going to use my Enron stock as my long-term health care. I was disappointed in the people that I put my trust in years ago. And after a little time passed on, bitterness came into being, and bitterness will eat you alive if you let it. But sometimes at night I do feel real bitter over what I have lost, because it was a big part of my future, and I do not know how I am going to handle the future now. All I can do is hope and pray I do not get sick.”27
This was not a narrative Greenspan could see. From his perspective, “creative individuals and undeviating purpose and rationality” went on to “achieve joy and fulfillment” and the parasites perished. Until, that is, the banking crisis of 2008.
“This was my worst nightmare coming true,” said Brooksley Born. “Nobody really knew what was going on in the market. The toxic assets of our biggest banks were in over-the-counter derivatives and caused the economic downturn that made us lose our savings, lose our jobs, lose our homes.”
Just as Rand had wished, the state and economics had been separated; Greenspan had proved true to the big idea of his life, but blind to the realities of it. Even after the biggest financial catastrophe of his lifetime, when Greenspan went to testify to Congress about what had gone wrong, he held fast to his big idea. It wasn’t wrong; it was just flawed.
CHAIRMAN WAXMAN: You had an ideology. “My judgment is that free, competitive markets are by far the unrivaled way to organize economies. We have tried regulation, none meaningfully worked.” That was your quote. Now our whole economy is paying its price. Do you feel that your ideology pushed you to make decisions that you wish you had not made?
MR. GREENSPAN: Well, remember that what an ideology is, is a conceptual framework with the way people deal with reality. Everyone has one. You have to. To exist, you need an ideology. The question is whether it is accurate or not. What I am saying to you is, yes, I found a flaw. I don’t know how significant or permanent it is, but I have been very distressed by that fact. I found a flaw in the model that I perceived is the critical functioning structure of how the world works.
CHAIRMAN WAXMAN: In other words, you found that your view of the world, your ideology, was not right, it was not working?
MR. GREENSPAN: Precisely. That’s precisely the reason I was shocked, because I had been going for forty years or more with very considerable evidence that it was working exceptionally well.28
Greenspan’s performance was mesmerizing drama: a proud old man wriggling to protect himself from the sharp, hard prongs of fact. His adversaries had prepared long and hard for this inquisition, and neither they nor the nation as a whole was willing to offer him room to maneuver. They demanded his recantation, but he fought hard to evade the cognitive dissonance implicit in events that would not change shape just to fit his ideology. Despite failure after failure, he still could not deny his big idea. He could admit only a flaw, not that he was wrong. Just like Mrs. Keech, he acknowledged only that he had gotten a minor detail wrong. He couldn’t see, and wouldn’t see, the financial wreckage strewn right across his career, but instead insisted that his big idea had worked just fine for forty years. The free-market economist Friedrich von Hayek once said that “without a theory, the facts are silent.” But for Greenspan, with his theory, the facts became invisible.
“Greenspan was blind to two things,” says Partnoy. “He missed the fact that in the modern regulated state, you can’t have a truly free market. There are always partially regulated markets and therefore there are opportunities for people to exploit information traps. And he didn’t understand that even to the extent that the market isn’t regulated, there are serious potential downsides to a free-for-all. That’s why in the U.S. and UK we have common law. Because if those things are absent, you have problems. Not just unfairness and injustice—but also just this kind of volatility and destabilization.”
Of course, Greenspan did not act alone. He had the support of the powerful, he had the support of the crowd—as long as the economy was doing fine. You could say that he was blind, but he operated within a collective myopia that reinforced his ideology. And he hasn’t recanted.
“Greenspan isn’t atoning,” says Partnoy. “It would be very hard for him to do that. He’d be escaping from a long tunnel.”
In that, Greenspan is more like Marian Keech than like Richard Doll. In 1997, his reputation secured by his having proved the connection between smoking and lung cancer, Doll did recant, although with the most modest of mea culpas. In a paper titled “Risk of Childhood Cancer from Fetal Irradiation,”29 he quietly announced the death of threshold theory.
“The association between the low dose of ionizing radiation received by the fetus in utero from diagnostic radiography, particularly in the last trimester of pregnancy, and the subsequent risk of cancer in childhood provides direct evidence against the existence of a threshold dose below which no excess risk arises, and has led to changes in medical practice.”
But by the time Doll changed his mind, millions of pregnant women had been X-rayed. And Alice Stewart had moved on to confront the nuclear power industry.