Say what you will about Michael Moore, he has the distinction of being one of the few American liberals to come even close to beating the Republicans at their own game. His films, from Roger & Me to Sicko, are carefully crafted pieces of political propaganda, designed to amuse and outrage in equal measure. Sicko, Moore’s critique of the American health care system, contains a number of astonishingly clever set pieces. He starts with the plight of a group of American 9/11 rescue workers who are unable to get health care for various ailments they suffer as a result of volunteering to help at the site of the collapsed World Trade Center buildings. He then contrasts this with the health care facilities provided to detainees at the American prison camp in Guantanamo Bay. (The video he uses to describe these facilities is of politicians and military officials who were obviously reacting against the suggestion that detainees were being abused. Because of this, most of them go overboard describing the lavish facilities and superlative care being provided.) Moore takes the rescue workers to Cuba, in order to get them some of that free U.S. government health care being provided to the accused terrorists. After failing to secure entry to the Guantanamo prison, he decides to take them to a local clinic in Cuba, where they discover that they can refill their prescriptions at a fraction of what they are paying in the United States and that a comprehensive public system of care is available.
The idea that America treats its enemies better than its own citizens—its own 9/11 “heroes”—is a clever way of redirecting a nationalist impulse, which people on the right feel quite keenly. Judging from the number of websites and books attacking Moore and his work, he has managed to make a lot of conservatives crazy, in much the same way Palin makes liberals crazy. And yet there is a clear price to be paid for this rhetorical effectiveness. Inevitably, he winds up cutting a lot of corners. In his Fahrenheit 9/11, he devolves into fairly explicit conspiracy-mongering. Even in Sicko, where it’s not too difficult to find dramatic examples of how the American health care system creates desperation and suffering, he plays fast and loose with a lot of the details. For instance, in his profile of the socialized medicine systems in Canada, France, and the U.K., public health care is consistently described as being “free.” Thus the contrast between U.S. health care and public health care systems is presented as a choice between paying for your health care and getting it for free, which makes it seem like a no-brainer. But of course public health care is not free: it is paid for through taxes.
Now there are a lot of things to be said in favor of public health care, particularly universal single-payer systems. Health care winds up costing a lot less when paid for through taxes than when paid for through private insurance, for a number of rather subtle reasons. The United States spends an eye-popping 18 percent of GDP on health care, and yet Americans do not actually consume more health care than people in many countries that spend closer to 10 percent of GDP on health care. In fact, by now the United States government is spending a greater fraction of GDP to deliver health care to just the elderly and the poor than other countries spend to deliver equally good care to their entire populations.1 In other words, Americans have a catastrophically inefficient way of organizing the delivery of health care. And yet the reasons for this are far too difficult to explain in a movie, much less a movie that aims to be entertaining. It is much easier simply to draw an invidious comparison between “expensive” private health care and “free” public health care.
One can see here the central dilemma that modern liberals face: between taking the “high road” and taking the “low road.” What makes it a dilemma, as opposed to just a choice, is that both options seem to involve huge compromises. The problem with the high road is that most of the time it doesn’t work. The problem with the low road is that, for almost any progressive public policy, it requires misrepresentation, which can have untoward effects down the line. Much of the dynamic of American liberalism in the past three decades has involved the search for a third option. Success so far has been limited.
The temptation to oversimplify the issues is not confined to Moore; it is an inevitable consequence of any attempt to frame complex policy questions in an intuitively compelling way. George Lakoff’s take on health care, for instance, is not much more subtle than Moore’s. Lakoff sets up the issue by drawing a contrast between “insurance companies,” which he regards as essentially evil, and “government,” which is concerned with the welfare of citizens: “Insurance companies get their money by denying care, by saying no to as many people in need as they can get away with, while maximizing the premiums they get from the healthy people. Health insurance will always work this way.”2 Health care, by contrast, should be “part of the moral mission of government, where the role of the government is protection and empowerment, which in turn is based on a morality of empathy and responsibility.”3 Thus the “moral bottom line,” in Lakoff’s view, is a simple choice between a “life-affirming” and a “life-denying” system.4
Again, this may be a compelling way of framing the issue. “Life-denying,” after all, is just a fancy way of referring to death, and who wants to be on the side of death rather than of life? Many people who have had to settle a claim with an insurance company have come away scarred from the experience, and American health insurance companies are apparently particularly difficult to deal with. Yet it is preposterous to claim that the basic business model of an insurance company is to make money by denying claims. Many insurance companies are actually mutual societies (which is to say, cooperatives), where any money that is not paid out in the form of claims is refunded to policyholders at the end of the year. (Similarly, Kaiser Permanente, which is often portrayed as a big bad corporation, committed to denying health care to as many people as possible, was founded as a cooperative, and the core businesses in the consortium are still run as nonprofits.5)
Insurance companies create value by pooling risk. Because we have difficulty knowing what our future health care needs will be, there are significant advantages to be had from pooling our own health care savings with those of others. Once you get a couple thousand people together, it becomes possible to predict with confidence what percentage of that group will get diabetes, suffer from heart attacks, contract cancer, and so on, and so you know with much greater confidence how much to set aside, and thus what premiums to charge. There’s nothing shady about it—it’s a straightforward economic benefit. Unfortunately, once you get several thousand people together pooling their health care savings, those same individuals lose whatever incentive they may once have had to seek the most affordable care or to use it only when necessary (after all, “insurance is paying”). Furthermore, people have an incentive to forgo insurance while they are young and healthy and then sign up for it when they get older or anticipate needing costly care. Thus insurance companies—regardless of whether they are trying to make profits or not—need to have a lot of rules and regulations in order to prevent people from taking advantage of the risk-pooling scheme.
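To make the arithmetic of risk pooling concrete, here is a minimal simulation sketch in Python. The claim probability and claim cost are made-up, purely illustrative figures, not actuarial data: the point is simply that the expected cost per person is the same no matter how many people pool together, but the year-to-year swing that any individual faces shrinks dramatically as the pool grows, which is what allows premiums to be set with confidence.

    import random

    # Illustrative, made-up numbers: each person has a 5 percent chance of a
    # $50,000 claim in a given year, so the expected cost per person is $2,500.
    CLAIM_PROB = 0.05
    CLAIM_COST = 50_000

    def per_person_cost(pool_size, trials=10_000):
        """Simulate many years for a pool of a given size and report the average
        per-person cost along with how much it varies from year to year."""
        averages = []
        for _ in range(trials):
            total = sum(CLAIM_COST for _ in range(pool_size)
                        if random.random() < CLAIM_PROB)
            averages.append(total / pool_size)
        mean = sum(averages) / trials
        spread = (sum((a - mean) ** 2 for a in averages) / trials) ** 0.5
        return mean, spread

    for size in (1, 10, 100, 2_000):
        mean, spread = per_person_cost(size)
        print(f"pool of {size:>5}: average cost ~ ${mean:,.0f}, "
              f"year-to-year swing ~ ${spread:,.0f}")

The larger pool does not make care any cheaper on average; what it buys is predictability, and the rules and regulations described above are what keep that predictability from being gamed.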
The primary benefit of public health care systems comes from the fact that government has the power to do this much more effectively than private insurers can. It is therefore able to deliver health insurance—not health care—at lower cost than a market system does. The important point is that the way socialized medicine works, in countries that have it, is not by abolishing insurance, but rather by having government control the insurance market. It does not consist of governments giving away free health care, as Lakoff and Moore suggest. The way that public health care systems work in Canada, in Australia, and throughout Europe is through control of the (supposedly “life-denying”) insurance market.6 Furthermore, any conceivable transformation of the American system in the direction of universal care would occur through government control of insurance. The ideal system for the United States would be a “single-payer” system like Canada’s, where health care delivery is left almost entirely to the private sector, and government simply exercises a monopoly in the health insurance sector. And yet if one were to take Lakoff’s “frame” seriously—and treat the central function of insurance as the denial of care—then this would be precisely the nightmare that Sarah Palin envisaged, in which a panel of bureaucrats gets together to withhold care from those deemed insufficiently productive members of society.
In Lakoff’s defense, one might say that the advantages of public over private insurance are pretty esoteric stuff, not suitable for framing in any sort of viscerally compelling way. It is inevitable that in trying to put a “human face” on the issue, things will get simplified a bit. The problem is that the way Lakoff and Moore choose to frame the issue is not just simplistic, it is simplistic in a way that undermines the ability to do what ultimately needs to be done to fix the underlying social problem. It is a case where good rhetoric makes for bad policy. This is a dilemma that is often faced by progressive reformers, precisely because the policies that are actually needed to resolve any pressing social problem are almost always very complex, and so cannot be presented in a rhetorically effective way. They can be communicated only rationally, to an audience willing to give them their full attention and to engage in a rational assessment of their merits. If you try that, however, you wind up confronting the Frasier problem, of quoting de la Rochefoucauld to a shock jock.
This situation is one that clearly tempts many people to adopt a more cynical strategy, which is to run a campaign that appeals to the heart but then, once elected, rule from the head. All political parties do this type of bait and switch to some degree, and it carries obvious risks of voter backlash if done ham-handedly. The greater danger, however, is that the switch will never be made—that policies adopted for rhetorical reasons or because they poll well will actually become the governing policies of the party. The tail will start to wag the dog.
I think by now it is safe to say that Americans overreacted to the terrorist attacks of September 11, 2001. In fact, with the passage of time, one begins to suspect that the devastating success of these attacks—in particular, the number of casualties that the terrorists were able to inflict—will be seen as a one-off, a fluke. After all, even Osama bin Laden didn’t expect the towers to collapse. That’s why he interpreted the outcome of the attacks as a sure sign of the direct intervention of God (on the side of Al-Qaeda, of course).
Terrorism turns out to be a bit harder to pull off than our untutored intuitions might lead us to expect. The Boston Marathon bombing in 2013 may have attracted a huge amount of attention and injured many people, but it actually resulted in only three deaths. This is about the same as the average American school shooting. Inflicting mass casualties is much more difficult than one might imagine.
This is just one of the many correctives to popular perception that Dan Gardner tries to impress upon the reader in his book Risk: The Science and Politics of Fear.7 In general, Gardner thinks that most of the things we’re afraid of are vastly overrated. He makes the case with respect to terrorism by pointing to the experience of the Aum Shinrikyo “doomsday” cult in Japan. Here was an organization that had an extraordinary amount of money at its disposal, several well-equipped labs, and as many as one hundred highly trained scientists, working full-time, dedicated exclusively to the task of figuring out how to inflict mass casualties upon the Japanese population, in an attempt to provoke an apocalyptic war. Nevertheless, over the course of seventeen different attacks, using a range of biological and chemical weapons, they never managed to kill more than a few dozen people. The most “successful” was the 1995 sarin nerve gas attack in the Tokyo subway, which killed twelve people and severely injured another forty-two.
Many people found this attack terrifying, yet for Gardner, it was also strangely reassuring. After all, it would be difficult to imagine circumstances more propitious for the success of a terrorist plot—“a fanatical cult with a burning desire to inflict mass slaughter has heaps of money, international connections, excellent equipment and laboratories, scientists trained at top-flight universities, and years of near-total freedom to pursue its operations”8—yet they came nowhere near accomplishing their ends.
All of this goes to show that even terrorists who get their hands on biological or chemical weapons (or nuclear material, for that matter) are still a long way away from being able to hurt large numbers of people. Yet in August 2006, 44 percent of Americans told Gallup that they were “very” or “somewhat” worried that they or someone in their family would be a victim of a terrorist attack. This is a phenomenal overestimation of the actual risk. Even before the 9/11 attacks, the thought of terrorism evoked irrational responses. In one particularly famous experiment, published in 1993, a group of psychologists showed that subjects offered flight insurance were willing to pay more for a policy that covered death due to terrorist attack than for a policy that covered death from any cause (including, of course, terrorist attack).9
Gardner’s book is one of many published in recent years that try to show how overblown most popular fears and anxieties are. So what is the solution, according to Gardner? Ultimately, he argues, we simply have to buck up and become more rational: “To protect ourselves against unreasoning fear, we must wake up Head and tell it to do its job. We must learn to think hard.”10 In Gardner’s view, the problem is one of reason versus passion, and if reason seems to be losing, then we simply have to try harder to be rational. (Writing books haranguing people, telling them to be more rational, seems to be an important part of the plan.) What he is calling for, in effect, is a revival of the old Enlightenment strategy (“Once more folks, this time with a bit more effort!”) of evaluating all received opinion at the tribunal of reason and consigning our old biases and superstitions to the dustbin of history.
Compared to Lakoff’s “fight fire with fire” strategy, Gardner’s view has much to recommend it. Most importantly, he recognizes the fundamental antagonism between what he calls “Head” and “Gut,” and sees that in many cases Gut is a source of nothing but bad ideas and confused impulses, which only Head is able to sort out. For example, if Americans ever want to work out a non-crazy, non-paranoid response to terrorism, they are simply going to have to start thinking about the risks in a more rational way. The United States federal government was once a global leader when it came to the use of cost-benefit analysis to evaluate new public initiatives, but all of this seems to have gone out the window when the issue of security arose. It is difficult to believe that any of the delays and inconveniences imposed on American airline passengers since 2001 would survive a cost-benefit analysis. The decision was made to exempt these security measures from the usual tests—to let the strategy be dictated by fear rather than by any sort of calculation. (Conservatives seem to have no problem with the idea of assigning a dollar value to the lives of workers and consumers when it comes to workplace safety and environmental regulation. Yet they suddenly lose their nerve when it comes to assigning a dollar value to the lives of victims of terrorism.)
Unfortunately, a lot of people are afraid of things that, in the grand scheme, are not very dangerous—terrorism being one of them. This is an area where our intuitions are simply wrong. There is no good way of framing that, because we have no intuitive sense of the weakness of intuition. The only way to get to the conclusion is by looking at the statistics in a dispassionate way and saying to people—even the families of the victims—“Your personal experiences and impressions are wrong; terrorism is not really that big a problem.” This is why the British slogan “Keep calm and carry on” is so admirable. It’s not because it constitutes an adequate policy response to terrorist violence; it doesn’t. It’s because it recommends an emotional response—not getting agitated and hysterical—that makes it possible to formulate a rational policy response. It encourages a public culture in which the “cool passion” of reason has a chance of being heard.
Unfortunately, it’s difficult to imagine that a more rational public culture could be created simply through concerted individual effort to be more rational. We do enjoy the benefits of modern psychology, which gives us a much better sense of where the major pitfalls lie so that we can, in principle, be much more intelligent and self-aware in our quest to eliminate biases in our own thinking. And people certainly have become more disciplined as part of “the civilizing process,” not to mention more sophisticated in their reasoning. Yet there are clearly limits to how much we can enhance our onboard resources, not to mention diminishing returns in our attempts at improvement.
In the end, it is a huge exercise in wishful thinking to imagine that just knowing about our biases will make us less likely to fall victim to them. Gardner’s book is only one of several that have appeared in recent years by people alarmed by the current climate of irrationalism. Yet they all seem to subscribe to a variant of the view that knowledge will set us free—that just reading books about irrationality will somehow make us less irrational. Christopher Chabris and Daniel Simons, for example, at the end of their lengthy book on the subject of “how our intuitions deceive us,” conclude with a little pep talk, encouraging the reader to “think twice before you decide to trust intuition over rational analysis.” They go on to make a surprising series of claims for the curative powers of their work:
There may be important things right in front of you that you aren’t noticing due to the illusion of attention. Now that you know about this illusion, you’ll be less apt to assume you’re seeing everything there is to see. You may think you remember some things much better than you really do, because of the illusion of memory. Now that you understand this illusion, you’ll trust your own memory, and that of others, a bit less, and you’ll try to corroborate your memory in important situations. You’ll recognize that the confidence people express often reflects their personalities rather than their knowledge, memory, or ability. You’ll be wary of thinking you know more about a topic than you really do, and you will test your own understanding before mistaking familiarity for knowledge.11
What is strange about this passage is that the authors are making an empirical claim—that knowing about illusions and biases will make people less likely to fall victim to them—and yet offer no empirical evidence in support of it. This is in striking contrast to the rest of the book, which is a meticulous presentation of the empirical research that has established the existence of the major cognitive biases. For some reason, when it comes to the question of how to cure these biases, the reader is offered nothing but a profession of faith.
The big question then is whether this claim is true. Does knowledge set us free? Unfortunately, when it comes to our own mind, the answer appears to be no. Just knowing about our biases does not make us less susceptible to them. Indeed, knowledge can easily have the opposite effect, encouraging people to think that they are immune to bias precisely because they know so much about it. (There’s a strong current of this in Gardner’s work.) They fail to appreciate the truly insidious nature of cognitive bias, which is that it’s your own brain that’s doing it to you, so you can’t tell by introspection when it’s happening.
To the extent that the question has been studied, the results are not encouraging. Brian Wansink, for example, who has conducted extensive research on the contribution that cognitive biases and illusions make to overeating, specifically investigated the question of whether learning about biases makes people less likely to suffer from them. He found that it did not. One of his studies involved the phenomenon that he refers to as size bias—an anchoring effect generated by the size of the bowl people are given to serve themselves food. He would bring people into a room with a table laden with snacks, presented in large serving bowls. Participants would each be given an individual bowl, which they could fill with snack food to carry over to their seats. The trick is that some bowls would be bigger than others. Predictably, the people given the larger bowls would eat significantly more.
Wansink decided to try the same experiment on his own students but to warn them in advance about the underlying bias. Here is how he described the plan: “We’ll devote a full 90-minute class session just before Christmas vacation to talking about the size bias. We’ll lecture to them, show them videos, have them go through a demonstration, and even break them into small groups to discuss how people could prevent themselves from ‘being tricked’ by bigger serving bowls. We’ll use just about every educational method short of doing an interpretive dance.”12 One month later, he invited these same students to a Super Bowl party. They entered a room containing a table laden with snacks. He gave them each a bowl in order to serve themselves. Some of them were given small bowls, some were given larger bowls. The students then went on to exhibit exactly the same bias that they had just learned about in class and had spent time thinking and talking about how to avoid.
Part of what made Wansink’s experiment work so nicely was a bit of misdirection that he employed. At the end of the snack table, he had students fill out a questionnaire about Super Bowl commercials. (In order to fill it out, they had to set down their snack bowls, which gave him the opportunity to secretly weigh the bowls to see how much food the person had taken.) It was this bit of misdirection that was undoubtedly crucial, because it prevented students from recognizing the setup, and therefore allowed their rational faculties to remain asleep at the switch. This is consistent with Stanovich’s observation that intelligence offers no protection against bias, for much the same reason. Once a cognitive override is triggered, both knowledge of our biases and general intelligence are important for helping us to overcome them. The problem is that neither knowledge nor intelligence makes the override more likely to be triggered. People simply can’t be expected to be suspicious of everything all the time. Apart from cognitive miserliness—our unwillingness to expend the effort—the limits of human attention do not permit constant vigilance. Furthermore, even if we are in a suspicious frame of mind or likely to be paying attention, all it takes is a bit of distraction to slip one past us.
Thus there is something deeply unrealistic about the old Enlightenment strategy of simply telling people to think harder, to be more critical, to “question authority,” and so on. Chabris and Simons end their book by encouraging readers to “take any opportunity you find to pause and observe human behavior through the lenses we’ve given you. Try to track your own thoughts and actions as well, to make sure your intuitions and gut-level decisions are justified. Try your best to slow down, relax, and examine your assumptions before you jump to conclusions.”13 Now obviously, none of these are bad suggestions, in the sense that it wouldn’t hurt most people to take a bit more time and think a bit more critically. But we need to be serious about how much can be achieved in this way. There is no basis for optimism that this strategy can do anything to reverse the tide of irrationalism that confronts us every time we walk out the front door.
When reason steps in to override an incorrect or maladaptive intuitive response, it is an exercise in self-control. The evidence suggests that it is the same system of self-control that we exercise when we prevent ourselves from acting in ways that are irrational or contrary to our long-term interests.14 As such, it is both limited and subject to depletion. (In other words, most people have a self-control “budget,” and once they’ve exceeded it, they become more impulsive until they’ve had a chance to rest and recharge.) Meanwhile, our environment is constantly evolving in such a way as to put increased demands upon us, requiring us to override our intuitive responses more and more often. In this situation, what kind of a solution is it to tell someone who suffers from cognitive biases simply to think harder? It’s like telling someone who is overweight to eat less, or telling an alcoholic to drink less. If he could do that, then he wouldn’t have the problem in the first place. The advice is not so much wrong as it is simply unhelpful.
The current environment, particularly in the media, creates a genuine dilemma for those who would like to see reason prevail over passion in politics. How do you deal with a political opponent who responds to truth with truthiness, or a social environment in which no one seems to know the difference? The “fight fire with fire” strategy recommended by Lakoff makes sense only under the assumption that all political positions can be translated into pithy, emotionally resonant slogans. In practice, it merely favors extremists, and distorts policy away from what would be best for society toward what can be most easily sold to the electorate. There is no getting around the fact that most moderate, progressive positions are frankly difficult to explain. This is partly because they involve trade-offs between multiple considerations, require collective action, and rest on a tacit recognition of the limitations of different institutional structures (markets, the state, corporations, legal regulation, etc.).
On the other hand, the “just try harder” strategy recommended by Gardner seems to be no more promising. Trying to engage in rational debate with an opponent who doesn’t have any interest in the basic norms of truth is pointless. You can’t argue with a Rush Limbaugh or a Bill O’Reilly, and if you do, you come off looking like a fool. (“Never wrestle with a pig,” as the old saying goes. “You both get dirty, but the pig enjoys it.”) Furthermore, the suggestion that the public or the media might respond to exhortations encouraging them to be more rational underestimates the seriousness of the problem. The human capacity to reason is extremely frail and easily exhausted. Our susceptibility to bias and false belief is not an aberration, something that can be blamed on the media. It is certainly not being helped by the media at the moment, but the media is only part of a much larger set of forces in our society that are conspiring to produce an environment in which it has become more difficult to carry on a rational debate.
One can see here why a third alternative, which has emerged in recent years in the United States, has considerable merit. The way to respond to right-wing demagogues is not with left-wing demagogues, but with comedians. Trying to engage in serious argument with the demagogues is a mug’s game: all you wind up doing is elevating their views and debasing your own. Trying to hit back using the same tactics is either ineffective or self-defeating. The solution? Just make fun of them. Reason cannot win in a head-to-head contest against unreason. So when someone is being unreasonable, the best you can do is point out how unreasonable they are, often to comedic effect. This is why satire has always occupied a prominent role in Enlightenment polemic (with Voltaire being the best-known practitioner).
The success of this formula is what explains the most peculiar feature of American politics at the moment, which is that the national political debate is dominated by an exchange between right-wing talk-radio hosts and left-wing comedians. When right-wing demagoguery first started to gather steam, the initial impulse among Democrats was to clone Michael Moore, to have liberal “attack dogs,” and to create a liberal talk-radio station (Air America) where liberal political ideas could be presented in more forceful, aggressive ways. The problem is that universal health care, racial tolerance, gender equality, gays in the military, and liberalized immigration don’t really lend themselves to hard-hitting, in-your-face presentation. Over time, it became apparent that the more effective response was coming from comedians—from Janeane Garofalo, Al Franken, Jon Stewart, and Stephen Colbert. The reason it worked is that for the most part they would leave liberal political positions out of it and simply focus on the irrationality of the positions being advanced by the right.
Jon Stewart is the most consistent and effective in this regard. Although his show is all about politics, he refuses to engage in political debate with demagogues. He will criticize and make fun of them, but when they try to turn it around and say “Well, what do you propose, smart guy?” he refuses to play along. What they want, of course, is for him to lay out some kind of liberal policy that they can proceed to attack using one of a dozen prepared sound-bites. But Stewart’s answer is always “I’m just a comedian. My show is a comedy show. You’re the one who claims to be reporting the news or having a serious political debate.” The subtext is “If you wanted to have a serious discussion, we could have a serious discussion. But what you do is fundamentally not serious.”
The result is a division of labor. The job of wrestling with the pig is taken over by the comedians, which frees up more serious political actors from having to “respond” to every insane allegation made by the right. (For example, when Republican congresswoman Michele Bachmann accuses President Obama of wanting to abandon the U.S. dollar in favor of a “global currency” and therefore introduces legislation in Congress to “ensure that the U.S. dollar remains the currency of the United States,”15 you don’t really want to waste the valuable time of serious people unpacking all the different layers of crazy that went into this particular confection. Just let the comedians handle it.)
This solution, however, remains something of a stopgap. It’s better than the sort of deer-in-the-headlights paralysis that dominated the Democratic Party during the Dukakis period, but it does have the effect of turning most of the public debate on political issues in America into a circus sideshow. Furthermore, it is impossible to “restore sanity” through comedic jibes alone (at most one can stem the tide of insanity). The only real solution is to change the environment. A more effective response lies in the recognition that the rapid-fire pace of modern politics, the hypnotic repetition of daily news items, even the preponderance of visual sources of information, are all inimical to the exercise of reason. If it is indeed our objective to “restore sanity,” we need to pursue a higher-level strategy, one of restructuring the environment in such a way that the voice of reason has a chance of being heard.