CHAPTER 3

--------

IF “MISFEARING” IS THE PROBLEM, IS COST-BENEFIT ANALYSIS THE SOLUTION?

Many people have argued for cost-benefit analysis on economic grounds. In their view, a primary goal of regulation is to promote economic efficiency, and cost-benefit analysis is admirably well suited to that goal. Societies should not waste resources, and cost-benefit analysis reduces the risk of waste. Arguments of this kind have been met with sharp criticism from those who are not so enthusiastic about economic efficiency or who believe that, in practice, cost-benefit analysis is likely to produce a kind of regulatory paralysis (“paralysis by analysis”) or to represent a victory for business interests.

In this chapter, I offer support for cost-benefit analysis not from the standpoint of conventional economics but on grounds associated with cognitive psychology and behavioral economics. My basic suggestion is that cost-benefit analysis is best defended as a means of responding to the general problem of “misfearing.” That problem arises when people are afraid of trivial risks and neglectful of serious ones.

For purposes of law and policy, the central points are twofold. First, misfearing is part of the human condition, and both ordinary people and public officials are subject to it. Second, misfearing plays a role in public policy, in part because of how human beings think, in part because of the power of self-interested private groups, and in part because of ordinary political dynamics. Misfearing can produce unfortunate misallocations of public resources, as when nations devote a great deal of money to small problems and much less to big ones. Because cost-benefit analysis draws attention to the actual consequences of the various options, it reduces that risk. So understood, cost-benefit analysis is a way to insure better priority setting and to overcome predictable obstacles to desirable regulation.

Of course, much of the controversy over cost-benefit analysis stems from the difficulty of specifying what that form of analysis entails. An account of misfearing cannot, by itself, settle on any particular specification of cost-benefit analysis, and this is not the place to offer one. My goal is to provide a general defense of cost-benefit analysis, rooted in behavioral economics, that should be able to attract support from people who have diverse theoretical commitments or who are uncertain about the appropriate theoretical commitments. (See the discussion of minimalism in chapter 10.)

MISFEARING AND THE PUBLIC DEMAND FOR REGULATION

Why, exactly, do people fall prey to misfearing? I shall offer several answers, but it will be helpful to organize them within a simple framework. A great deal of recent work in psychology has explored two families of cognitive operations in the human mind, sometimes described as System 1 and System 2, through which people evaluate many things, including risky activities and processes.1

System 1 is fast, associative, and intuitive. System 1 tends to be frightened by loud noises and big animals, and it does not care much about the abstract idea of air pollution. (When it objects to something someone has done, System 1 tends to say, “I hate you!”) By contrast, System 2 is deliberative, calculative, slow, and analytic. (When it objects to something someone has done, System 2 offers a constructive suggestion.) System 2 engages in some kind of assessment of whether loud noises or big animals pose a genuine threat. Though aspects of the two systems may well have different locations in the human brain, the distinction is useful whether or not identifiable brain sectors are involved; it should be understood as an effort to capture, in simple form, the difference between effortless, automatic processing and effortful, slower processing.

Because of the operation of System 1, people have immediate and often visceral reactions to persons, activities, and processes, and their immediate reactions operate as mental shortcuts for a more deliberative or analytic assessment of the underlying issues. Sometimes the shortcut can be overridden or corrected by System 2. For example, System 1 might lead people to be terrified of flying in airplanes, but System 2 might create a deliberative check, leading people to recognize that the risks are minimal.

Misfearing is often a product of System 1, and cost-benefit analysis can operate as a kind of System 2 corrective, giving people a better sense of what is actually at stake. People might think that some activity creates serious risks, but cost-benefit analysis can establish that the risks are quite low. To be sure, System 2 itself can go badly wrong, and the analysis of costs and benefits may be erroneous. Perhaps the underlying scientific judgments are incorrect; perhaps the scientists are unduly optimistic. In addition, the translation of risks into monetary equivalents creates many challenges.2 It would be foolish to contend that System 2 is infallible, even if it is competent and working extremely hard. The only claims are that System 1 is prone to making systematic errors, that those errors produce misfearing, and that an effort to assess the costs and benefits of risk reduction, if done properly, will operate as a helpful constraint. If, for example, Jones is afraid of flying and Smith is unafraid of smoking cigarettes, some kind of cost-benefit analysis—formal or informal—might help them both.

These points work most naturally for individual judgments. Remarkable as the human brain is, it evolved for particular purposes, and it can blunder, especially in unfamiliar contexts. But the political process is also influenced by System 1, certainly in the most responsive democracies. Of course, it is also true that when the public demand for law threatens to produce excessive reactions to minor risks, there are constraints. Any such reactions must overcome a series of barriers to ill-considered public action. And when a legislature or administrative agency is moved to act, a number of factors are responsible, perhaps including the activities of self-interested private groups with strong incentives to move government in their preferred directions.

Nonetheless, it is clear that public misfearing helps to produce significant misallocations of public resources. Misfearing can be a result of social interactions, as fear or complacency spreads rapidly from one person to another, and also of political influences, as sophisticated political actors try to stir people up or calm them down. To the extent that misallocations are a product of these and related factors, the argument for cost-benefit analysis is strengthened rather than weakened.

These claims raise an immediate question: What, exactly, is cost-benefit analysis? For present purposes, let us understand that approach to require regulators to identify—and to make relevant for purposes of decision—the positive effects and the negative effects of regulation, and to quantify these as much as possible in terms of monetary equivalents, capturing everything that matters, including lives saved, hospital admissions prevented, workdays gained, and so forth. For purposes of illustration, here is a representative effort at valuation, from the Environmental Protection Agency in 2013 (with blank spaces for unavailable estimates):

Unit Values for Economic Valuation of Health Endpoints3

                                          CENTRAL ESTIMATE OF VALUE PER STATISTICAL LIFE
HEALTH ENDPOINT                           1990 INCOME LEVEL       2020 INCOME LEVEL
Premature Mortality
(Value of a Statistical Life)             $8,000,000              $9,600,000

Nonfatal Myocardial Infarction (heart attack)
3 percent discount rate
  Age 0–24                                $87,000                 $87,000
  Age 25–44                               $110,000                $110,000
  Age 45–54                               $120,000                $120,000
  Age 55–64                               $200,000                $200,000
  Age 65 and over                         $98,000                 $98,000
7 percent discount rate
  Age 0–24                                $97,000                 $97,000
  Age 25–44                               $110,000                $110,000
  Age 45–54                               $110,000                $110,000
  Age 55–64                               $190,000                $190,000
  Age 65 and over                         $97,000                 $97,000

HOSPITAL ADMISSIONS                       2000 INCOME LEVEL       2020 INCOME LEVEL
Chronic Lung Disease (Age 18–64)          $21,000                 $21,000
Asthma Admissions (Age 0–64)              $21,000                 $21,000
All Cardiovascular
  Age 18–64                               $42,000                 $42,000
  Age 65–99                               $41,000                 $41,000
All Respiratory (Age 65 and over)         $36,000                 $36,000
Emergency Department Visits for Asthma    $430                    $430

RESPIRATORY AILMENTS NOT
REQUIRING HOSPITALIZATION                 2000 INCOME LEVEL       2020 INCOME LEVEL
Upper Respiratory Symptoms                $31                     $33
Lower Respiratory Symptoms                $20                     $21
Asthma Exacerbations                      $54                     $58
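
How are figures of this sort used? As a purely illustrative sketch, the following Python fragment tallies the monetized benefits of a hypothetical air quality rule against a hypothetical compliance cost. Only the unit values come from the table above (2020 income level, with the 3 percent discount rate for nonfatal heart attacks); the rule, the incidence reductions, and the cost figure are invented for illustration.

```python
# A minimal, hypothetical cost-benefit tally using the EPA unit values above.
# The rule, the incidence reductions, and the compliance cost are invented.

UNIT_VALUES = {
    "premature_mortality": 9_600_000,    # value of a statistical life
    "nonfatal_mi_age_55_64": 200_000,    # nonfatal heart attack, age 55-64
    "asthma_er_visit": 430,              # emergency department visit for asthma
    "asthma_exacerbation": 58,
}

# Hypothetical annual incidence reductions attributed to the rule.
REDUCTIONS = {
    "premature_mortality": 12,
    "nonfatal_mi_age_55_64": 40,
    "asthma_er_visit": 2_500,
    "asthma_exacerbation": 30_000,
}

ANNUAL_COST = 90_000_000  # hypothetical compliance cost, dollars per year

benefits = sum(UNIT_VALUES[k] * REDUCTIONS[k] for k in REDUCTIONS)
net = benefits - ANNUAL_COST

print(f"Monetized benefits: ${benefits:,.0f}")  # about $126 million
print(f"Compliance cost:    ${ANNUAL_COST:,.0f}")
print(f"Net benefits:       ${net:,.0f}")
```

On these invented numbers, monetized benefits of about $126 million exceed the $90 million cost. A rule aimed at a vividly feared but trivial risk would fail the same test, and that is the sense in which the tally disciplines misfearing.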

Let us assume that government uses figures of this sort and that those figures have adequate technical foundations. Let us also assume that cost-benefit analysis can accommodate, or be supplemented by, factors that are not easy to monetize, giving special weight to human dignity or to adverse effects on disadvantaged social groups.4 How might cost-benefit analysis help to correct the problem of misfearing?

THE AVAILABILITY HEURISTIC

It is well known that people use mental shortcuts, or heuristics, in thinking about risks. The first problem is purely cognitive: the availability heuristic.5 People tend to think that an event is more probable if they can recall an incident in which it came to fruition.6 Ease of recall greatly affects our judgments about probability. In a famous paper, Amos Tversky and Daniel Kahneman found that people think that, on any given page, more words will end with the letters ing than will have n as the second-to-last letter (though a moment’s reflection shows that this is not possible).7 With respect to risks, judgments are typically affected by the availability heuristic, so that people overestimate the number of deaths from highly publicized events (motor vehicle accidents, tornados, floods, botulism) but underestimate the number from less publicized sources (stroke, heart disease, stomach cancer).8 It is in part for this reason that direct personal experience can play a large role in perceptions of risk.9
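
The impossibility is a simple subset relation: every word ending in ing necessarily has n as its second-to-last letter. A few lines of Python make the point; the word list here is an arbitrary stand-in, and any corpus would do:

```python
# Every word ending in "ing" also has "n" as its second-to-last letter,
# so the second class can never be smaller than the first.
words = ["running", "thing", "sing", "land", "want", "kind", "on", "tree"]

ing_words = {w for w in words if w.endswith("ing")}
n_second_to_last = {w for w in words if len(w) >= 2 and w[-2] == "n"}

assert ing_words <= n_second_to_last  # the subset relation always holds
print(len(ing_words), len(n_second_to_last))  # 3 6 for this list
```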

Consider in this regard a 2004 study of perceptions of risk associated with terrorism and severe acute respiratory syndrome (SARS).10 The study involved Americans and Canadians. In the aftermath of the 9/11 attacks, Americans perceived terrorism to be a far greater threat to themselves and to others than SARS—whereas in the aftermath of an outbreak of SARS, Canadians perceived SARS to be a greater threat to themselves and to others than terrorism. Americans estimated their chance of serious harm from terrorism as 8.27 percent, about four times higher than their estimate of their chance of serious harm from SARS (2.18 percent). Canadians estimated their chance of serious harm from SARS as 7.43 percent, significantly higher than their estimate for terrorism (6.04 percent). The estimated figures for SARS were unrealistically high, especially for Canadians; the best estimate of the risk of contracting SARS, based on Canadian figures, was just .0008 percent (and the chance of dying as a result, less than .0002 percent). For obvious reasons, the objective risks from terrorism are much harder to calculate, but if it is assumed that the United States will suffer one terrorist attack each year with the same number of deaths as on September 11, 2001, the risk of death from terrorism is about .001 percent—a highly speculative number under the circumstances, but one suggesting that Americans were exaggerating the risk.
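
The .001 percent figure is back-of-the-envelope arithmetic. Assuming roughly 3,000 deaths per attack (the approximate 9/11 toll) and a US population of about 300 million, one such attack per year gives

\[
p \approx \frac{3{,}000\ \text{deaths per year}}{300{,}000{,}000\ \text{people}} = 10^{-5} = .001\ \text{percent per year.}
\]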

What accounts for such large differences between two neighboring nations? The availability heuristic provides a large part of the answer. In the United States, risks of terrorism have (to say the least) received a great deal of attention, producing a continuing sense of threat, especially in the years immediately following 9/11. But the United States had seen no incidents of SARS, and media coverage was limited to events elsewhere—producing a modest degree of salience, far lower than that associated with terrorism. In Canada, the opposite was the case in the relevant period. The high degree of public discussion of SARS cases, accompanied by readily available instances, produced an inflated sense of the numbers—sufficiently inflated to exceed the corresponding numbers for terrorism (certainly a salient risk in Canada, as in most nations, following 9/11).

To the extent that people lack information or base their judgments on mental shortcuts that produce errors, there is a risk that a highly responsive government will blunder. Indeed, private groups often exploit the availability heuristic, emphasizing a particular incident that is supposed to be taken as representative of a much larger problem. Cost-benefit analysis is a natural corrective, above all because it focuses attention on the actual effects of regulation, including, in some cases, the existence of surprisingly small benefits from regulatory controls. When cost-benefit analysis is working well, it can counteract both “availability bias,” in the form of an inflated sense of risk, and “unavailability bias” (excessive complacency), stemming from an absence of available incidents.

To this extent, cost-benefit analysis should not be taken as undemocratic. On the contrary, it should be seen as a means of fortifying democratic goals by insuring that government decisions are responsive not to temporary fears but to well-informed public judgments.

AGGRAVATING SOCIAL INFLUENCES: INFORMATIONAL AND REPUTATIONAL CASCADES

The availability heuristic does not, of course, operate in a social vacuum. It interacts with emphatically social processes.11 The first of those processes involves the spread of information within social networks and societies in general. The second process involves the role of reputation, and, in particular, people’s desire to protect their own.

Especially in the modern era, risk perceptions can go viral. One reason is that when individuals do not have information of their own, initial signals by a few people may initiate an informational cascade, with significant consequences for private and public behavior, and potentially with distorting effects on regulatory policy. When the public is fearful, it may be because of cascade effects, leading people to rely on what other people think, and thus lending their voices to an increasingly loud chorus—even if there is little or no reason for fear.

In chapter 1, we saw that cascade effects play a role in the spread of conspiracy theories. Now turn to environmental risks and imagine that Anita says that abandoned hazardous waste sites are dangerous, or that she initiates protest activity because such a site is located nearby. Benjamin, otherwise skeptical or unsure, may go along with Anita. Charles, otherwise unsure, may be convinced that if Anita and Benjamin share the same belief, the belief must be true; and it will take a confident Declan to resist the shared judgments of Anita, Benjamin, and Charles. The result of this set of influences can be informational cascades, as hundreds, thousands, or millions of people come to accept a certain belief simply because of what they think other people think.12
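
The Anita-Benjamin-Charles dynamic can be simulated. The following Python sketch implements a standard stylized model of sequential social learning (a textbook toy model, not anything offered in this chapter): each person receives a noisy private signal about whether the site is dangerous, observes all earlier choices, and defers to the crowd once the balance of prior choices outweighs any single signal.

```python
import random

def cascade(n_people=20, p_correct=0.6, truth=True, seed=1):
    """Simulate a simple informational cascade about a binary question."""
    random.seed(seed)
    choices = []
    for _ in range(n_people):
        # Noisy private signal: correct with probability p_correct.
        signal = truth if random.random() < p_correct else not truth
        # Net weight of earlier public choices (+1 for "dangerous", -1 against).
        lead = sum(1 if c else -1 for c in choices)
        if lead > 1:          # prior choices outweigh any single private signal
            choice = True
        elif lead < -1:
            choice = False
        else:
            choice = signal   # otherwise, follow one's own signal
        choices.append(choice)
    return choices

print(cascade())  # after a few early moves, everyone imitates
```

Once the lead exceeds one, every later choice merely copies its predecessors, so the run locks in; whether it locks in on the truth depends largely on the accident of the first few signals.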

There is nothing fanciful in the idea. Cascade effects help account for widespread public concern about abandoned hazardous waste dumps (a problem to be sure, but not the most serious environmental hazard), and they spurred excessive public fears of the pesticide Alar in the late 1980s. Such effects helped produce massive declines in beef production in Europe in connection with bovine spongiform encephalopathy, sometimes known as mad cow disease; they have also spurred fear of genetically engineered food in Europe.

Now turn to the reputational side. If many people are alarmed about some risk, you might not voice your doubts about whether the alarm is justified, simply in order not to seem obtuse, cruel, or indifferent. You might be especially likely to silence yourself if the alarm is felt by people who are your friends or in your social network. And if many people believe that a certain risk is trivial, you might not disagree through words or deeds, lest you appear cowardly or confused. The result of these forces can also be cascade effects, and those effects can produce a public demand for regulation even if the risks are small. At the same time, there may be little or no demand for regulation of risks that are, in fact, quite large. Self-interested private groups can exploit these forces. For instance, European companies have tried to play up fears of genetically engineered food as a way of fending off American competition.

As we saw in chapter 1, there are interactions between the availability heuristic and cascade effects. A particular incident may spread rapidly from some people to others, giving rise to availability cascades.13 When such an incident is the source of the informational and reputational pressures just described, an availability cascade is at work, perhaps leading people to think that a rare or isolated event reflects some terrible social risk.

Cost-benefit analysis has a natural role here. If agencies are disciplined by that form of analysis, they will have a degree of insulation from cascade effects produced by informational and reputational forces, even when the availability heuristic is at work. The effect of cost-benefit analysis is to subject misfearing to a kind of technocratic scrutiny—to insure that the public demand for regulation is not rooted in myth and to insure as well that government is regulating real risks even when the public demand is low. And here, too, there is no democratic problem with the inquiry into consequences. If people’s concern is fueled by the information spread by others, and if such information is unreliable, a technocratic constraint on “hot” popular reactions is hardly inconsistent with democratic ideals. Similarly, there is nothing undemocratic about a governmental effort to shift resources to serious, life-threatening problems that have not gotten public attention as a result of cascade effects.

EMOTIONS AND PROBABILITY NEGLECT

Because of the availability heuristic, people can have an inaccurate assessment of probability. But sometimes people focus on bad outcomes and pay little attention to the question of probability, especially when strong emotions are involved. What affects thought and behavior is the worst case, not the likelihood that it will occur. Here is another source of misfearing.

The phenomenon of probability neglect received its clearest empirical confirmation in a striking study of when people pay attention to outcomes alone, and when they focus on probability as well.14 In the relatively emotion-free setting, participants were told that the experiment entailed some chance of a $20 penalty. Some of the subjects were told that there was only a 1 percent chance of receiving the bad outcome (the $20 loss), while others were told that the chance was 99 percent. Not surprisingly, the difference in probability mattered greatly. The difference between the median willingness to pay to avoid a 1 percent chance and the median willingness to pay to avoid a 99 percent chance was large: $1 for the former, and $18 for the latter.

In the strong-emotion setting, subjects were asked to imagine that they were taking part in an experiment involving some chance of a “short, painful, but not dangerous electric shock.” Here again, some of the subjects were told that there was only a 1 percent chance of receiving the bad outcome (the electric shock), while others were told that the chance was 99 percent. In this setting, probability mattered a lot less. The median willingness to pay was $7 to avoid a 1 percent chance of an electric shock—and $10 to avoid a 99 percent chance! The general implication is clear: when people’s emotions are especially strong—when System 1 is activated—people might well focus on bad outcomes and not pay much attention to the likelihood that they will occur.
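
A risk-neutral benchmark makes the contrast vivid. The expected monetary losses in the two conditions differ by a factor of ninety-nine:

\[
\mathbb{E}[\text{loss}\mid 1\%] = .01 \times \$20 = \$0.20,
\qquad
\mathbb{E}[\text{loss}\mid 99\%] = .99 \times \$20 = \$19.80
\]

The stated payments to avoid the monetary penalty ($1 versus $18) track that spread reasonably well; the payments to avoid the shock ($7 versus $10) barely track it at all.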

There is much evidence in the same vein. Consider these findings:

1. When people discuss a low-probability risk, their concern rises even if the discussion consists mostly of apparently trustworthy assurances that the probability of harm is small.15

2. If people are asked how much they will pay for flight insurance for losses resulting from “terrorism,” they will pay more than if they are asked how much they will pay for flight insurance covering losses from all causes.16

3. People show “alarmist bias.” When presented with competing accounts of danger, they tend to gravitate toward, and to accept, the more alarming account.17

4. In experiments designed to test levels of anxiety in anticipation of a painful electric shock of varying intensity, the probability that people would actually receive the shock had no effect. As the study’s authors noted, “Evidently, the mere thought of receiving a shock is enough to arouse individuals, but the precise likelihood of being shocked has little impact on level of arousal.”18

We need not venture into controversial territory in order to observe that some risks seem to produce extremely sharp, largely visceral reactions. The role of cost-benefit analysis is straightforward here. Just as the Senate was designed to have a “cooling effect” on the passions of the House of Representatives, cost-benefit analysis can help insure that policy is driven not by hysteria or alarm but by a full appreciation of the effects of the relevant risks and of efforts to control them. Nor is cost-benefit analysis, in this setting, only a check on unwarranted regulation. It can and should serve as a spur to regulation as well. If risks do not produce visceral reactions, partly because the underlying activities do not provoke vivid mental images, cost-benefit analysis can show that they nonetheless warrant regulatory control. The elimination of lead in gasoline, driven by cost-benefit analysis, is a case in point.

SYSTEMIC EFFECTS AND “HEALTH-HEALTH TRADE-OFFS”

Often regulation has complex systemic effects. A decision to ban asbestos may cause manufacturers to use less safe substitutes. In compliance with the Montreal Protocol, the 1987 treaty that called for phasing out ozone-depleting chemicals, the US government has prohibited certain asthma medications on the grounds that they emit such chemicals. The problem is that such a prohibition could put asthma patients at risk by increasing the price of their medicines and perhaps by making their preferred medicines unavailable. Aggressive regulation of certain forms of air pollution can increase electricity prices, and higher prices for energy are especially hard on poor people.19

These are a few examples of the many situations in which a government agency is inevitably making “health-health trade-offs” in light of the systemic effects of one-shot interventions. Any regulation that imposes high costs may, by virtue of that fact, produce some risks to life and health, since richer people are likely to be safer as well.20 An advantage of cost-benefit analysis is that it tends to overcome people’s tendency to focus on parts of problems, by requiring them to look globally at the consequences of apparently isolated actions.
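
To see what looking globally at consequences involves, consider a toy version of the asbestos example. Every number below is hypothetical; the point is only that the deaths avoided by a ban and the deaths induced by riskier substitutes belong in the same ledger:

```python
# Toy health-health trade-off for a hypothetical product ban.
# All numbers are invented for illustration.

lives_saved_direct = 30.0          # annual deaths avoided by banning the product
substitute_risk_share = 0.6        # fraction of users switching to a substitute
substitute_deaths_per_year = 25.0  # annual deaths if everyone used the substitute

lives_lost_substitution = substitute_risk_share * substitute_deaths_per_year
net_lives_saved = lives_saved_direct - lives_lost_substitution

print(f"Net lives saved per year: {net_lives_saved:.1f}")  # 30 - 15 = 15
```

A one-shot analysis would report thirty lives saved; the global analysis reports fifteen, and with slightly different (equally plausible) numbers the ban could cost lives on net.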

DANGERS ON-SCREEN, BENEFITS OFF-SCREEN

Why do some people believe that minimal risks from pesticides should be regulated, even if they do not much worry about other minimal risks, such as those from X-rays? Why are people so concerned about the risks of genetically modified food when many experts believe that those risks are quite low—lower, in fact, than the risks from high levels of sugar consumption, which may not much trouble people?

Consider this finding: when people think that a product or activity is dangerous, they tend to think that it has low benefits too.21 And when people think that a product or activity is highly beneficial, they tend to think that it is not dangerous. In people’s minds, danger and benefit tend to be bundled, even though it is certainly possible that some activities are harmful in some ways and beneficial in others (consider coal-fired power plants, which emit high levels of pollution, but also produce cheap energy).

The obvious conclusion is that sometimes people favor regulation of some risks because the underlying activities are not seen to have compensating benefits.22 The problem is that in such cases, people do not see that difficult trade-offs are involved. Dangers are effectively on-screen, but benefits are off-screen.

An important factor here is loss aversion, which leads people to regard a loss from the status quo as more undesirable than an equivalent gain is desirable.23 To appreciate the power of loss aversion, consider an ingenious study of teacher incentives.24 Many people have been interested in encouraging teachers to do a better job of improving their students’ achievement. The results of providing economic incentives are decidedly mixed; unfortunately, many of these efforts have failed.25 But the relevant study enlisted loss aversion by giving teachers money in advance and telling them that if students did not show real improvements, the teachers would have to give the money back. The result was a significant increase in math scores—indeed, an increase equivalent to a substantial improvement in teacher quality. The underlying idea is that losses from the status quo are especially unwelcome, and people will work hard to avoid them.26
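
Loss aversion has a standard formalization in prospect theory, which the chapter does not spell out: the value function is steeper for losses than for gains. With the parameter estimates Tversky and Kahneman reported, a loss looms roughly twice as large as an equal gain:

\[
v(x) =
\begin{cases}
x^{\alpha} & \text{if } x \ge 0 \\
-\lambda\,(-x)^{\alpha} & \text{if } x < 0
\end{cases}
\qquad \alpha \approx 0.88,\ \lambda \approx 2.25
\]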

In the context of risk regulation, the consequence of loss aversion is that any newly introduced risk, or any aggravation of existing risks, might well be seen as a serious problem, even if the accompanying benefits are both real and large. When a new risk adds danger, people may focus on the danger itself and not on the benefits that accompany the danger. In these circumstances, the role of cost-benefit analysis should not be obscure. It can be a necessary corrective, by placing all of the various effects on-screen.

FEAR ITSELF

The behavioral argument for cost-benefit analysis is now in place. It is true but obvious to say that a lack of information can lead to an inadequate or excessive demand for regulation, or a form of “paranoia and neglect.” What is less obvious is that predictable features of cognition may lead to a public demand for regulation that is not based on the facts. Self-interested private groups and political actors can exploit those features of cognition, attempting to enlist the availability heuristic and probability neglect so as to propagate misfearing. In many cases, cost-benefit analysis demonstrates that regulatory controls are desirable; consider the problem of ozone-depleting chemicals, where the Reagan administration favored aggressive regulation, in part because cost-benefit analysis demonstrated that it was justified.27 When intuitions and anecdotes are unreliable, when poor priority setting is a problem, or when interest groups press public officials in their preferred directions, cost-benefit analysis serves as an indispensable safeguard.