The Indian Ocean tsunami of December 2004 focused attention on a type of disaster to which policymakers pay too little attention – a disaster that has a very low or unknown probability of occurring, but that if it does occur creates enormous losses. The flooding of New Orleans in the late summer of 2005 was a comparable event, although the probability of the event was known to be high; the Corps of Engineers estimated its annual probability as 0.33% (Schleifstein and McQuaid, 2002), which implies a cumulative probability of almost 10% over a 30-year span. The particular significance of the New Orleans flood for catastrophic-risk analysis lies in showing that an event can inflict enormous loss even if the death toll is small – approximately 1/250 of the death toll from the tsunami.
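The 'almost 10%' figure follows from compounding the annual probability over 30 independent years. A minimal sketch of the arithmetic (the 0.33% figure is the Corps of Engineers estimate cited above):

```python
# Probability that an event with a fixed annual probability occurs at least
# once over a span of years, assuming the years are independent.
def cumulative_probability(annual_p: float, years: int) -> float:
    return 1 - (1 - annual_p) ** years

# The 0.33% annual flood probability for New Orleans, over a 30-year span:
p30 = cumulative_probability(0.0033, 30)
print(f"{p30:.1%}")  # 9.4%, i.e. 'almost 10%'
```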
Great as that toll was, together with the physical and emotional suffering of survivors, and property damage, even greater losses could be inflicted by other disasters of low (but not negligible) or unknown probability. The asteroid that exploded above Siberia in 1908 with the force of a hydrogen bomb might have killed millions of people had it exploded above a major city. Yet that asteroid was only about 200 feet in diameter, and a much larger one (among the thousands of dangerously large asteroids in orbits that intersect the earth’s orbit) could strike the earth and, wherever it struck, cause the total extinction of the human race through a combination of shock waves, fire, tsunamis, and blockage of sunlight. 1 Another catastrophic risk is that of abrupt global warming, discussed later in this chapter.
Oddly, with the exception of global warming (and hence the New Orleans flood, to which global warming may have contributed, along with man-made destruction of wetlands and barrier islands that formerly provided some protection for New Orleans against hurricane winds), none of the catastrophes mentioned above, including the tsunami, is generally considered an ‘environmental’ catastrophe. This is odd, since, for example, abrupt catastrophic global change would be a likely consequence of a major asteroid strike. The reason non-asteroid-induced global warming is classified as an environmental disaster but the other disasters are not is that environmentalists are concerned with human activities that cause environmental harm but not with natural activities that do so. This is an arbitrary separation because the analytical issues presented by natural and human-induced environmental catastrophes are very similar.
To begin the policy analysis, suppose that a tsunami as destructive as the Indian Ocean one occurs on average once a century and kills 250,000 people. That is an average of 2500 deaths per year. Even without attempting a sophisticated estimate of the value of life to the people exposed to the risk, one can say with some confidence that if an annual death toll of 2500 could be substantially reduced at moderate cost, the investment would be worthwhile. A combination of educating the residents of low-lying coastal areas about the warning signs of a tsunami (tremors and a sudden recession in the ocean), establishing a warning system involving emergency broadcasts, telephoned warnings, and air-raid-type sirens, and improving emergency response systems would have saved many of the people killed by the Indian Ocean tsunami, probably at a total cost less than any reasonable estimate of the average losses that can be expected from tsunamis. Relocating people away from coasts would be even more efficacious, but except in the most vulnerable areas or in areas in which residential or commercial uses have only marginal value, the costs would probably exceed the benefits – for annual costs of protection must be matched with annual, not total, expected costs of tsunamis. In contrast, the New Orleans flood might have been prevented by flood-control measures such as strengthening the levees that protect the city from the waters of the Mississippi River and the Gulf of Mexico, and in any event, the costs inflicted by the flood could have been reduced at little cost simply by a better evacuation plan.
The basic tool for analysing efficient policy towards catastrophe is cost-benefit analysis. Where, as in the case of the New Orleans flood, the main costs, both of catastrophe and of avoiding catastrophe, are fairly readily monetizable and the probability of the catastrophe if avoidance measures are not taken is known with reasonable confidence, analysis is straightforward. In the case of the tsunami, however, and of many other possible catastrophes, the main costs are not readily monetizable and the probability of the catastrophe may not be calculable. 2 Regarding the first problem, however, there is now a substantial economic literature inferring the value of life from the costs people are willing to incur to avoid small risks of death; if from behaviour towards risk one infers that a person would pay $70 to avoid a 1 in 100,000 risk of death, his value of life would be estimated at $7 million ($70/.00001), which is in fact the median estimate of the value of life of a ‘prime-aged US worker’ today (Viscusi and Aldy, 2003, pp. 18, 63). 3 Because value of life is positively correlated with income, this figure cannot be used to estimate the value of life of most of the people killed by the Indian Ocean tsunami. A further complication is that the studies may not be robust with respect to risks of death much smaller than the 1 in 10,000 to 1 in 100,000 range of most of the studies (Posner, 2004, pp. 165–171); we do not know what the risk of death from a tsunami was to the people killed. Additional complications come from the fact that the deaths were only a part of the cost inflicted by the disaster – injuries, suffering, and property damage also need to be estimated, along with the efficacy and expense of precautionary measures that would have been feasible. The risks of smaller but still destructive tsunamis that such measures might protect against must also be factored in; nor can there be much confidence about the ‘once a century’ risk estimate.
Nevertheless, it is apparent that the total cost of the recent tsunami was high enough to indicate that precautionary measures would have been cost-justified, even though they would have been of limited benefit because, unlike the New Orleans flood, the tsunami itself could not have been prevented by any feasible measure.
So why, then, were such measures not taken in anticipation of a tsunami on the scale that occurred? Tsunamis are a common consequence of earthquakes, which themselves are common; and tsunamis can have other causes besides earthquakes – a major asteroid strike in an ocean would create a tsunami that could dwarf the Indian Ocean one. A combination of factors provides a plausible answer. First, although a once-in-a-century event is as likely to occur at the beginning of the century as at any other time, it is much less likely to occur in the first decade of the century than later. That is, probability is relative to the span over which it is computed; if the annual probability of some event is 1%, the probability that it will occur within 10 years is just a shade under 10%. Politicians with limited terms of office and thus foreshortened political horizons are likely to discount low-probability disaster possibilities, since the risk of damage to their careers from failing to take precautionary measures is truncated. Second, to the extent that effective precautions require governmental action, the fact that government is a centralized system of control makes it difficult for officials to respond to the full spectrum of possible risks against which cost-justified measures might be taken. The officials, given the variety of matters to which they must attend, are likely to have a high threshold of attention below which risks are simply ignored. Third, where risks are regional or global rather than local, many national governments, especially in the poorer and smaller countries, may drag their heels in the hope of taking a free ride on the larger and richer countries. Knowing this, the latter countries may be reluctant to take precautionary measures and by doing so reward and thus encourage free riding. (Of course, if the large countries are adamant, this tactic will fail.)
Fourth, often countries are poor because of weak, inefficient, or corrupt government, characteristics that may disable poor nations from taking cost-justified precautions. Fifth, because of the positive relation between value of life and per capita income, even well-governed poor countries will spend less per capita on disaster avoidance than rich countries will.
An even more dramatic example of neglect of low-probability/high-cost risks concerns the asteroid menace, which is analytically similar to the menace of tsunamis. NASA, with an annual budget of more than $10 billion, spends only $4 million a year on mapping dangerously close large asteroids, and at that rate may not complete the task for another decade, even though such mapping is the key to an asteroid defence because it may give us years of warning. Deflecting an asteroid from its orbit when it is still millions of miles from the earth appears to be a feasible undertaking. Although asteroid strikes are less frequent than tsunamis, there have been enough of them to enable the annual probabilities of various magnitudes of such strikes to be estimated, and from these estimates, an expected cost of asteroid damage can be calculated (Posner, 2004, pp. 24–29, 180).
As in the case of tsunamis, if there are measures beyond those being taken already that can reduce the expected cost of asteroid damage at a lower cost, thus yielding a net benefit, the measures should be taken, or at least seriously considered.
Often it is not possible to estimate the probability or magnitude of a possible catastrophe, and so the question arises whether or how cost-benefit analysis, or other techniques of economic analysis, can be helpful in devising responses to such a possibility. One answer is what can be called ‘inverse cost-benefit analysis’ (Posner, 2004, pp. 176–184). Analogous to extracting probability estimates from insurance premiums, it involves dividing what the government is spending to prevent a particular catastrophic risk from materializing by what the social cost of the catastrophe would be if it did materialize. The result is an approximation of the implied probability of the catastrophe. Expected cost is the product of probability and consequence (loss): C = PL. If P and L are known, C can be calculated. If instead C and L are known, P can be calculated: if $1 billion (C) is being spent to avert a disaster which, if it occurs, will impose a loss (L) of $100 billion, then P = C/L = .01.
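The division can be written out explicitly; a minimal sketch of inverse cost-benefit analysis, using the $1 billion/$100 billion example above:

```python
# Inverse cost-benefit analysis: since expected cost C = P * L, current
# spending C and catastrophe loss L imply a probability estimate P = C / L.
def implied_probability(spending: float, loss: float) -> float:
    return spending / loss

# $1 billion spent to avert a disaster that would impose a $100 billion loss:
print(implied_probability(1e9, 100e9))  # 0.01
```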
If P so calculated diverges sharply from independent estimates of it, this is a clue that society may be spending too much or too little on avoiding L. It is just a clue, because of the distinction between marginal and total costs and benefits. The optimal expenditure on a measure is the expenditure that equates marginal cost to marginal benefit. Suppose we happen to know that P is not .01 but .1, so that the expected cost of the catastrophe is not $1 billion but $10 billion. It does not follow that we should be spending $10 billion, or indeed anything more than $1 billion, to avert the catastrophe. Maybe spending just $1 billion would reduce the expected cost of catastrophe from $10 billion all the way down to $500 million, and no further expenditure would bring about a further reduction, or at least a cost-justified reduction. For example, if spending another $1 billion would reduce the expected cost from $500 million to zero, that would be a bad investment, at least if risk aversion is ignored.
The federal government is spending about $2 billion a year to prevent a bioterrorist attack (raised to $2.5 billion for 2005, however, under the rubric of ‘Project BioShield’) (U.S. Department of Homeland Security, 2004; U.S. Office of Management and Budget, 2003). The goal is to protect Americans, so in assessing the benefits of this expenditure casualties in other countries can be ignored. Suppose the most destructive biological attack that seems reasonably possible on the basis of what little we now know about terrorist intentions and capabilities would kill 100 million Americans. We know that value-of-life estimates may have to be radically discounted when the probability of death is exceedingly slight. However, there is no convincing reason for supposing the probability of such an attack to be less than, say, one in 100,000; and the value of life that is derived by dividing the cost that Americans will incur to avoid a risk of death of that magnitude by the risk is about $7 million. Then if the attack occurred, the total costs would be $700 trillion – and that is actually too low an estimate, because the death of a third of the population would have all sorts of collateral consequences, mainly negative. Let us, still conservatively, refigure the total costs as $1 quadrillion. The result of dividing the money being spent to prevent such an attack, $2 billion, by $1 quadrillion is 1/500,000. Is there only a 1 in 500,000 probability of a bioterrorist attack of that magnitude in the next year? One does not know, but the figure seems too low.
It does not follow that $2 billion a year is too little to be spending to prevent a bioterrorist attack; one must not forget the distinction between total and marginal costs. Suppose that the $2 billion expenditure reduces the probability of such an attack from .01 to .0001. The expected cost of the attack would still be very high – $1 quadrillion multiplied by .0001 is $100 billion – but spending more than $2 billion might not reduce the residual probability of .0001 at all. For there might be no feasible further measures to take to combat bioterrorism, especially when we remember that increasing the number of people involved in defending against bioterrorism, including not only scientific and technical personnel but also security guards in laboratories where lethal pathogens are stored, also increases the number of people capable, alone or in conjunction with others, of mounting biological attacks. But there are other response measures that should be considered seriously, such as investing in developing and stockpiling broad-spectrum vaccines, establishing international controls over biological research, and limiting publication of bioterror ‘recipes’. One must also bear in mind that expenditures on combating bioterrorism do more than prevent mega-attacks; the lesser attacks, which would still be very costly both singly and cumulatively, would also be prevented.
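The total-versus-marginal point can be made concrete with the hypothetical numbers above; a sketch (the .01 and .0001 probabilities are the chapter's illustrative figures, not estimates):

```python
# Hypothetical bioterror figures from the text: $2 billion of spending cuts
# the attack probability from .01 to .0001, with a $1 quadrillion loss.
loss = 1e15
spending = 2e9

expected_cost_before = 0.01 * loss    # $10 trillion
expected_cost_after = 0.0001 * loss   # $100 billion residual expected cost

# Averaged over the whole expenditure, each dollar averts thousands of
# dollars of expected loss...
benefit = expected_cost_before - expected_cost_after
print(round(benefit / spending))  # 4950

# ...but the marginal calculus is separate: if no feasible further measure
# lowers the residual .0001 probability, additional spending buys nothing.
```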
Costs, moreover, tend to be inverse to time. It would cost a great deal more to build an asteroid defence in 1 year than in 10 years because of the extra costs that would be required for a hasty reallocation of the required labour and capital from the current projects in which they are employed; so would other crash efforts to prevent catastrophes. Placing a lid on current expenditures would have the incidental benefit of enabling additional expenditures to be deferred to a time when, because more will be known about both the catastrophic risks and the optimal responses to them, considerable cost savings may be possible. The case for such a ceiling derives from comparing marginal benefits to marginal costs; the latter may be sharply increasing in the short run. 4
A couple of examples will help to show the utility of cost-benefit analytical techniques even under conditions of profound uncertainty. The first example involves the Relativistic Heavy Ion Collider (RHIC), an advanced research particle accelerator that went into operation at Brookhaven National Laboratory in Long Island in 2000. As explained by the distinguished English physicist Sir Martin Rees (2003, pp. 120–121), the collisions in RHIC might conceivably produce a shower of quarks that would ‘reassemble themselves into a very compressed object called a strangelet. … A strangelet could, by contagion, convert anything else it encountered into a strange new form of matter. … A hypothetical strangelet disaster could transform the entire planet Earth into an inert hyperdense sphere about one hundred metres across’. Rees (2003, p. 125) considers this ‘hypothetical scenario’ exceedingly unlikely, yet points out that even an annual probability of 1 in 500 million is not wholly negligible when the result, should the improbable materialize, would be so total a disaster.
Concern with such a possibility led John Marburger, the director of the Brookhaven National Laboratory and now the President’s science advisor, to commission a risk assessment by a committee of physicists chaired by Robert Jaffe before authorizing RHIC to begin operating. Jaffe’s committee concluded that the risk was slight, but did not conduct a cost-benefit analysis.
RHIC cost $600 million to build and its annual operating costs were expected to be $130 million. No attempt was made to monetize the benefits that the experiments conducted in it were expected to yield but we can get the analysis going by making a wild guess (to be examined critically later) that the benefits can be valued at $250 million per year. An extremely conservative estimate, which biases the analysis in favour of RHIC’s passing a cost-benefit test, of the cost of the extinction of the human race is $600 trillion. 5 The final estimate needed to conduct a cost-benefit analysis is the annual probability of a strangelet disaster in RHIC: here a ‘best guess’ is 1 in 10 million. (See also Chapter 16 in this volume.)
Granted, this really is a guess. The physicist Arnon Dar and his colleagues estimated the probability of a strangelet disaster during RHIC’s planned 10-year life as no more than 1 in 50 million, which on an annual basis would mean roughly 1 in 500 million. Robert Jaffe and his colleagues, the official risk-assessment team for RHIC, offered a series of upper-bound estimates, including a 1 in 500,000 probability of a strangelet disaster over the 10-year period, which translates into an annual probability of such a disaster of approximately 1 in 5 million.
A 1 in 10 million estimate yields an annual expected extinction cost of $60 million for 10 years to add to the $130 million in annual operating costs and the initial investment of $600 million – and with the addition of that expected cost, it is easily shown that the total costs of the project exceed its benefits if the benefits are only $250 million a year. Of course this conclusion could easily be reversed by raising the estimate of the project’s benefits above my ‘wild guess’ figure of $250 million. But probably the estimate should be lowered rather than raised. For, from the standpoint of economic policy, it is unclear whether RHIC could be expected to yield any social benefits and whether, if it did, the federal government should subsidize particle-accelerator research. The purpose of RHIC is not to produce useful products, as earlier such research undoubtedly did, but to yield insights into the earliest history of the universe. In other words, the purpose is to quench scientific curiosity. Obviously, that is a benefit to scientists, or at least to high-energy physicists. But it is unclear why it should be thought a benefit to society as a whole, or in any event why it should be paid for by the taxpayer, rather than financed by the universities that employ the physicists who are interested in conducting such research. The same question can be asked concerning other government subsidies for other types of purely academic research but with less urgency for research that is harmless. If there is no good answer to the general question, the fact that particular research poses even a slight risk of global catastrophe becomes a compelling argument against its continued subsidization.
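These figures can be assembled into a small cost-benefit sketch. Undiscounted, the chapter's numbers happen to net out to roughly zero; with any positive discount rate, the up-front $600 million construction cost tips the balance against the project (the 3% rate below is an illustrative assumption, not from the source):

```python
# Cost-benefit sketch of RHIC using the chapter's figures.
def annuity_factor(r: float, years: int) -> float:
    """Present value of $1 per year for `years` years at discount rate r."""
    return years if r == 0 else (1 - (1 + r) ** -years) / r

extinction_cost = 600e12                             # $600 trillion, conservative
annual_expected_extinction = 1e-7 * extinction_cost  # 1-in-10-million guess -> $60M/yr

def net_benefit(r: float) -> float:
    a = annuity_factor(r, 10)                  # 10-year planned life
    benefits = a * 250e6                       # the 'wild guess' annual benefits
    costs = 600e6 + a * (130e6 + annual_expected_extinction)
    return benefits - costs

print(net_benefit(0.0))   # ~0: undiscounted, costs and benefits roughly balance
print(net_benefit(0.03))  # negative: any positive rate makes costs exceed benefits
```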
The second example, which will occupy much of the remaining part of this chapter, involves global warming. The Kyoto Protocol, which recently came into effect, in accordance with its terms, when Russia ratified it (the United States has not), requires the signatory nations to reduce their carbon dioxide emissions to a level 7–10% below what they were in the late 1990s, but exempts developing countries, such as China, a large and growing emitter, and Brazil, which is destroying large reaches of the Amazon rain forest, much of it by burning. The effect of carbon dioxide emissions on the atmospheric concentration of the gas is cumulative, because carbon dioxide leaves the atmosphere (by being absorbed into the oceans) at a much lower rate than it enters it, and therefore the concentration will continue to grow even if the annual rate of emission is cut substantially. Between this phenomenon and the exemptions, it is feared that the Kyoto Protocol will have only a slight effect in arresting global warming. Yet the tax or other regulatory measures required to reduce emissions below their level of 6 years ago will be very costly.
The Protocol’s supporters are content to slow the rate of global warming by making fossil fuels more expensive to consumers, through heavy taxes (e.g., on gasoline or coal) or other measures (such as quotas), thereby encouraging conservation measures – such as driving less, or driving more fuel-efficient cars – that will reduce the consumption of these fuels. This is either too much or too little. It is too much if, as most scientists believe, global warming will continue to be a gradual process, producing really serious effects – the destruction of tropical agriculture, the spread of tropical diseases such as malaria to currently temperate zones, dramatic increases in violent storm activity (increased atmospheric temperatures, by increasing the amount of water vapour in the atmosphere, increase precipitation), 6 and a rise in sea levels (eventually to the point of inundating most coastal cities) – only towards the end of the century. For by that time science, without prodding by governments, is likely to have developed economical ‘clean’ substitutes for fossil fuels (we already have a clean substitute – nuclear power) and even economical technology for either preventing carbon dioxide from being emitted into the atmosphere by the burning of fossil fuels or for removing it from the atmosphere. 7 However, the Protocol, at least without the participation of the United States and China, the two largest emitters, is too limited a response to global warming if the focus is changed from gradual to abrupt global warming. Because of the cumulative effect of carbon-dioxide emissions on the atmospheric concentration of the gas, a modest reduction in emissions will not reduce that concentration, but merely modestly reduce its rate of growth.
At various times in the earth’s history, drastic temperature changes have occurred in the course of just a few years. In the most recent of these periods, which geologists call the ‘Younger Dryas’ and date to about 11,000 years ago, shortly after the end of the last ice age, global temperatures soared by about 14°F in about a decade (Mithen, 2003). Because the earth was still cool from the ice age, the effect of the increased warmth on the human population was positive. However, a similar increase in a modern decade would have devastating effects on agriculture and on coastal cities, and might even cause a shift in the Gulf Stream that would result in giving all of Europe a Siberian climate. Recent dramatic shrinking of the north polar icecap, ferocious hurricane activity, and a small westward shift of the Gulf Stream are convincing many scientists that global warming is proceeding much more rapidly than expected just a few years ago.
Because of the enormous complexity of the forces that determine climate, and the historically unprecedented magnitude of human effects on the concentration of greenhouse gases, the possibility that continued growth in that concentration could precipitate – and within the near rather than the distant future – a sudden warming similar to that of the Younger Dryas cannot be excluded. Indeed, no probability, high or low, can be assigned to such a catastrophe. But it may be significant that, while dissent continues, many climate scientists are now predicting dramatic effects from global warming within the next 20–40 years, rather than just by the end of the century (Lempinen, 2005). 8 It may be prudent, therefore, to try to stimulate the rate at which economical substitutes for fossil fuels, and technology both for limiting the emission of carbon dioxide by those fuels when they are burned in internal-combustion engines or electrical generating plants, and for removing carbon dioxide from the atmosphere, are developed.
Switching focus from gradual to abrupt global warming has two advantages from the standpoint of analytical tractability. The first is that, given the rapid pace of scientific progress, if disastrous effects from global warming can safely be assumed to lie at least 50 years in the future, it makes sense not to incur heavy costs now but instead to wait for science to offer a low-cost solution of the problem. Second, comparing the costs of remote future harms with the costs of remedial measures taken in the present raises baffling issues concerning the choice of a discount rate. Baffling need not mean insoluble; the ‘time horizons’ approach to discounting offers a possible solution (Fearnside, 2002). A discounted present value can be equated to an undiscounted present value simply by shortening the time horizon for the consideration of costs and benefits. For example, the present value of an infinite stream of costs discounted at 4% is equal to the undiscounted sum of those costs for 25 years, while the present value of an infinite stream of costs discounted at 1% is equal to the undiscounted sum of those costs for 100 years. The formula for the present value of $1 per year forever is $1/r, where r is the discount rate. So if r is 4%, the present value is $25, and this is equal to an undiscounted stream of $1 per year for 25 years. If r is 1%, the undiscounted equivalent is 100 years.
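A quick sketch of the perpetuity arithmetic behind the time-horizons equivalence:

```python
# Present value of $1 per year forever at discount rate r is 1/r; the
# 'time horizons' approach matches it to an undiscounted stream of
# $1 per year lasting 1/r years.
def perpetuity_pv(r: float) -> float:
    return 1.0 / r

def undiscounted_sum(years: int) -> float:
    return 1.0 * years

print(perpetuity_pv(0.04), undiscounted_sum(25))   # 4% rate <-> 25-year horizon
print(perpetuity_pv(0.01), undiscounted_sum(100))  # 1% rate <-> 100-year horizon
```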
One way to argue for the 4% rate (i.e., for truncating our concern for future welfare at 25 years) is to say that people are willing to weight the welfare of the next generation as heavily as their own, but that this is the extent of their regard for the future. One way to argue for the 1% rate is to say that people are willing to give equal weight to the welfare of everyone living in this century, which will include us, our children, and our grandchildren, but that beyond that we do not care. Looking at future welfare in this way, one may be inclined towards the lower rate – which would have dramatic implications for willingness to invest today in limiting gradual global warming. The lower rate could even be regarded as a ceiling. Most people have some regard for human welfare, or at least for the survival of some human civilization, in future centuries. We are grateful that the Romans did not exterminate the human race in chagrin at the impending collapse of their empire.
Another way to bring future consequences into focus without conventional discounting is by aggregating risks over time rather than expressing them in annualized terms. If we are concerned about what may happen over the next century, then instead of asking what the annual probability of a collision with a 10 km asteroid is, we might ask what the probability is that such a collision will occur within the next 100 years. An annual probability of 1 in 75 million translates into a century probability of roughly 1 in 750,000. That may be high enough – considering the consequences if the risk materializes – to justify spending several hundred million dollars, perhaps even several billion dollars to avert it.
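For probabilities this small, the compounding formula barely matters: the century probability is, to excellent approximation, simply 100 times the annual probability. A sketch using the asteroid figures above:

```python
# Aggregating a tiny annual risk over a century: for very small p, the exact
# compound probability 1 - (1 - p)**n is almost exactly n * p.
annual_p = 1 / 75e6                      # 10 km asteroid strike, per year
exact = 1 - (1 - annual_p) ** 100
approx = 100 * annual_p                  # = 1/750,000
print(f"{exact:.4e}  {approx:.4e}")      # both about 1.333e-06
```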
The choice of a discount rate can be elided altogether if the focus of concern is abrupt global warming, which could happen at any time and thus constitutes a present rather than merely a remote future danger. Because it is a present danger, gradual changes in energy use that promise merely to reduce the rate of emissions are not an adequate response. What is needed is some way of accelerating the search for a technological response that will drive annual emissions to zero or even below. The Kyoto Protocol might actually do this by impelling the signatory nations to impose stiff taxes on carbon dioxide emissions in order to bring themselves into compliance with the Protocol. The taxes would give the energy industries, along with their business customers such as airlines and manufacturers of motor vehicles, a strong incentive to finance R&D designed to create economical clean substitutes for fossil fuels and devices to ‘trap’ emissions at the source, before they enter the atmosphere, or even to remove carbon dioxide from the atmosphere. Given the technological predominance of the United States, it is important that these taxes be imposed on US firms, which they would be if the United States ratified the Kyoto Protocol and by doing so became bound by it.
One advantage of the technology-forcing tax approach over public subsidies for R&D is that the government would not be in the business of picking winners – the affected industries would decide what R&D to support – and another is that the brunt of the taxes could be partly offset by reducing other taxes, since emission taxes would raise revenue as well as inducing greater R&D expenditures.
It might seem that subsidies would be necessary for technologies that would have no market, such as technologies for removing carbon dioxide from the atmosphere. There would be no private demand for such technologies because, in contrast to ones that reduce emissions, technologies that remove already emitted carbon dioxide from the atmosphere would not reduce any emitter’s tax burden. This problem is, however, easily solved by making the tax a tax on net emissions. Then an electrical generating plant or other emitter could reduce its tax burden by removing carbon dioxide from the atmosphere as well as by reducing its own emissions of carbon dioxide into the atmosphere.
The conventional assumption about the way that taxes, tradable permits, or other methods of capping emissions of greenhouse gases work is that they induce substitution away from activities that burn fossil fuels and encourage more economical use of such fuels. To examine this assumption, imagine (unrealistically) that the demand for fossil fuels is completely inelastic in the short run. 9 Then even a very heavy tax on carbon dioxide emissions would have no short-run effect on the level of emissions, and one’s first reaction is likely to be that, if so, the tax would be ineffectual. Actually it would be a highly efficient tax from the standpoint of generating government revenues (the basic function of taxation); it would not distort the allocation of resources, and therefore its imposition could be coupled with a reduction in less efficient taxes without reducing government revenues, although the substitution would be unlikely to be complete because, by reducing taxpayer resistance, more efficient taxes facilitate the expansion of government.
More important, such a tax might – paradoxically – have an even greater impact on emissions, precisely because of the inelasticity of short-run demand, than a tax that induced substitution away from activities involving the burning of fossil fuels or that induced a more economical use of such fuels. With immediate substitution of alternative fuels impossible and the price of fossil fuels soaring because of the tax, there would be powerful market pressures both to speed the development of economical alternatives to fossil fuels as energy sources and to reduce emissions, and the atmospheric concentration, of carbon dioxide directly.
From this standpoint a tax on emissions would be superior to a tax on the fossil fuels themselves (e.g., a gasoline tax, or a tax on B.T.U. content). Although an energy tax is cheaper to enforce because there is no need to monitor emissions, only an emissions tax would be effective in inducing carbon sequestration, because sequestration reduces the amount of atmospheric carbon dioxide without curtailing the demand for fossil fuels. A tax on gasoline will reduce the demand for gasoline but will not induce efforts to prevent the carbon dioxide emitted by the burning of the gasoline that continues to be produced from entering the atmosphere.
Dramatic long-run declines in emissions are likely to result only from technological breakthroughs that steeply reduce the cost of both clean fuels and carbon sequestration, rather than from insulation, less driving, lower thermostat settings, and other energy-economizing moves; and it is dramatic declines that we need. Even if the short-run elasticity of demand for activities that produce carbon dioxide emissions were −1 (i.e., if a small increase in the price of the activity resulted in a proportionately equal reduction in the scale of the activity), a 20% tax on emissions would reduce their amount by only 20% (this is on the assumption that emissions are produced in fixed proportions with the activities generating them). Because of the cumulative effect of emissions on atmospheric concentrations of greenhouse gases, those concentrations would continue to grow, albeit at a 20% lower rate; thus although emissions might be elastic with respect to the tax, the actual atmospheric concentrations, which are the ultimate concern, would not be. In contrast, a stiff emissions tax might precipitate within a decade or two technological breakthroughs that would enable a drastic reduction of emissions, perhaps to zero. If so, the effect of the tax would be much greater than would be implied by estimates of the elasticity of demand that ignored such possibilities. The possibilities are masked by the fact that because greenhouse-gas emissions are not taxed (or classified as pollutants), the private incentives to reduce them are meagre.
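The stock–flow point can be illustrated with a toy calculation (the units and figures below are hypothetical, chosen only for arithmetic clarity): a tax that cuts the annual flow of emissions by 20% still leaves the atmospheric stock growing.

```python
# Toy illustration (hypothetical units): a 20% cut in the emission *rate*
# only slows the growth of the atmospheric *stock*, which is what matters.
def concentration_path(initial_stock, annual_emissions, years):
    """Accumulate annual emissions into the atmospheric stock, year by year."""
    stock = initial_stock
    path = []
    for _ in range(years):
        stock += annual_emissions
        path.append(stock)
    return path

untaxed = concentration_path(800.0, 10.0, 30)  # business as usual
taxed = concentration_path(800.0, 8.0, 30)     # 20% lower emission rate

# The stock rises in both cases; the tax merely flattens the slope.
print(untaxed[-1], taxed[-1])  # 1100.0 1040.0
```

Only a technology that drives the emission rate toward zero (or, with sequestration, below zero) stops the stock from growing at all.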
Subsidizing research on measures to control global warming might seem more efficient than a technology-forcing tax because it would create a direct rather than merely an indirect incentive to develop new technology. But the money to finance the subsidy would have to come out of tax revenues, and the tax (whether an explicit tax, or inflation, which is a tax on cash balances) that generated these revenues might be less efficient than a tax on emissions if the latter taxed less elastic activities, as it might. A subsidy, moreover, might induce overinvestment. A problem may be serious and amenable to solution through an expenditure of resources, but above a certain level additional expenditures may contribute less to the solution than they cost. An emissions tax set equal to the social cost of emissions will not induce overinvestment, as industry will have no incentive to incur a greater cost to avoid the tax. If the social cost of emitting a specified quantity of carbon dioxide is $1 and the tax therefore is $1, industry will spend up to $1, but not more, to avoid the tax. If it can avoid the tax only by spending $1.01 on emission-reduction measures, it will forgo the expenditure and pay the tax.
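The no-overinvestment logic of the last two sentences reduces to a simple decision rule, sketched here for a hypothetical firm using the dollar figures from the text:

```python
# A firm facing an emissions tax abates only when abatement is cheaper than
# the tax; it will never spend more than the tax to avoid it.
def firm_choice(abatement_cost, tax):
    """Return the cheaper option and its cost."""
    if abatement_cost <= tax:
        return ("abate", abatement_cost)
    return ("pay tax", tax)

print(firm_choice(0.99, 1.00))  # ('abate', 0.99)
print(firm_choice(1.01, 1.00))  # ('pay tax', 1.0) -- the $1.01 measure is forgone
```

Because the tax caps what any firm will spend on abatement at the social cost of the emissions, spending on emission reduction cannot exceed the harm avoided.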
Furthermore, although new technology is likely to be the ultimate solution to the problem of global warming, methods for reducing carbon dioxide emissions that do not depend on new technology, such as switching to more fuel-efficient cars, may have a significant role to play, and the use of such methods would be encouraged by a tax on emissions but not by a subsidy for novel technologies, at least until those technologies yielded cheap clean fuels.
The case for subsidy would be compelling only if inventors of new technologies for combating global emissions could not appropriate the benefits of the technologies and therefore lacked incentives to develop them. But given patents, trade secrets, trademarks, the learning curve (which implies that the first firm in a new market will have lower production costs than latecomers), and other methods of internalizing the benefits of inventions, appropriability should not be a serious problem, with the exception of basic research, including research in climate science.
A superficially appealing alternative to the Kyoto Protocol would be to adopt a ‘wait and see’ approach – the approach of doing nothing at all about greenhouse-gas emissions in the hope that a few more years of normal (as distinct from tax-impelled) research in climatology will clarify the true nature and dimensions of the threat of global warming, and then we can decide what if any measures to take to reduce emissions. This probably would be the right approach were it not for the practically irreversible effect of greenhouse-gas emissions on the atmospheric concentration of those gases. Because of that irreversibility, stabilizing the atmospheric concentration of greenhouse gases at some future date might require far deeper cuts in emissions at that future date than if the process of stabilization begins now. Making shallower cuts now can be thought of as purchasing an option to enable global warming to be stopped or slowed at some future time at a lower cost. Should further research show that the problem of global warming is not a serious one, the option would not be exercised.
To illustrate, suppose there is a 70% probability that in 2024 global warming will cause a social loss of $1 trillion (present value) and a 30% probability that it will cause no loss, and that the possible loss can be averted by imposing emission controls now that will cost society $500 billion (for simplicity’s sake, the entire cost is assumed to be borne this year). In the simplest form of cost-benefit analysis, since the expected loss from global warming in 2024 is $700 billion, imposing the emission controls now is cost-justified. But suppose that in 2014 we will learn for certain whether there is going to be the bad ($1 trillion) outcome in 2024. Suppose further that if we postpone imposing the emission controls until 2014, we can still avert the $1 trillion loss. Then clearly we should wait, not only for the obvious reason that the present value of $500 billion to be spent in 10 years is less than $500 billion (at a discount rate of 3% it is approximately $372 billion) but also and more interestingly because there is a 30% chance that we will not have to incur any cost of emission controls. As a result, the expected cost of the postponed controls is not $372 billion, but only 70% of that amount, or about $260 billion, which is a lot less than $500 billion. The difference is the value of waiting.
Now suppose that if today emission controls are imposed that cost society $100 billion, this will, by forcing the pace of technological advance (assume for simplicity that this is their only effect – that there is no effect in reducing emissions), reduce the cost of averting in 2014 the global-warming loss of $1 trillion in 2024 from $500 billion to $250 billion. After discounting to present value at 3% and weighting by 70% to reflect the 30% probability that we will learn in 2014 that emission controls are not needed, the $250 billion figure shrinks to about $130 billion. This is roughly $130 billion less than the expected cost of the superficially attractive pure wait-and-see approach ($260 billion minus $130 billion). Of course, there is a price for the modified wait-and-see option – $100 billion. But the value is greater than the price.
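The arithmetic of the two strategies can be checked directly: 3% annual compounding over the 10-year wait, weighted by the 70% probability that the controls turn out to be needed.

```python
# Worked check of the wait-and-see example in the text.
def pv(amount, rate, years):
    """Present value of a cost incurred `years` from now."""
    return amount / (1 + rate) ** years

# Pure wait-and-see: a 70% chance of spending $500 billion in 10 years.
wait_and_see = 0.7 * pv(500e9, 0.03, 10)

# Modified option: spend $100 billion now so the later cost falls to $250 billion.
modified = 100e9 + 0.7 * pv(250e9, 0.03, 10)

print(round(wait_and_see / 1e9, 1))  # ~260.4 (billions)
print(round(modified / 1e9, 1))      # ~230.2 (billions)
# Acting now is cheaper overall: the option is worth more than its $100 billion price.
```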
This is an example of how imposing emission limits today that are more modest than those of the Kyoto Protocol might be a cost-justified measure even if the limits had no direct effect on atmospheric concentrations of greenhouse gases. Global warming could be abrupt without being catastrophic and catastrophic without being abrupt. But abrupt global warming is more likely to be catastrophic than gradual global warming, because it would deny or curtail opportunities for adaptive responses, such as switching to heat-resistant agriculture or relocating population away from coastal regions. The numerical example shows that the option approach is attractive even if the possibility of abrupt global warming is ignored; in the example, we know that we are safe until 2024. However, the possibility of abrupt warming should not be ignored. Suppose there is some unknown but not wholly negligible probability that the $1 trillion global-warming loss will hit in 2014 and that it will be too late then to do anything to avert it. That would be a ground for imposing stringent emission controls earlier even though by doing so we would lose the opportunity to avoid their cost by waiting to see whether they would actually be needed. Since we do not know the point at which atmospheric concentrations of greenhouse gases would trigger abrupt global warming, the imposition of emission limits now may, given risk aversion, be an attractive insurance policy. An emissions tax that did not bring about an immediate reduction in the level of emissions might still be beneficial by accelerating technological breakthroughs that would result in zero emissions before the trigger point was reached.
The risk of abrupt global warming is not only an important consideration in deciding what to do about global warming; unless it is given significant weight, the political prospects for strong controls on greenhouse-gas emissions are poor. The reason can be seen in a graph that has been used without much success to galvanize public concern about global warming (IPCC, 2001; Fig. 9.1). The shaded area is the distribution of predictions of global temperature changes over the course of the century, and is at first glance alarming. However, a closer look reveals that the highest curve, which is based on the assumption that nothing will be done to curb global warming, shows a temperature increase of only about 10° Fahrenheit over the course of the century. Such an increase would be catastrophic if it occurred in a decade, but it is much less alarming when spread out over a century, as that is plenty of time for a combination of clean fuels and cheap carbon-sequestration methods to reduce carbon dioxide emissions to zero or even (through carbon sequestration) below zero without prodding by governments. Given such an outlook, convincing governments to incur heavy costs now to reduce the century increase from 10 to say 5 degrees is distinctly an uphill fight. There is also a natural scepticism about any attempt to predict what is going to happen a hundred years in the future, and a belief that since future generations will be wealthier than our generation they will find it less burdensome to incur large costs to deal with serious environmental problems.
Nevertheless, once abrupt global warming is brought into the picture, any complacency induced by the graph is quickly dispelled. For we then understand that the band of curves in the graph is arbitrarily truncated; we could have a vertical takeoff, say in 2020, that within a decade would bring us to the highest point in the graph. Moreover, against that risk, a technology-forcing tax on emissions might well be effective even if only the major emitting countries imposed substantial emission taxes. If manufacturers of automobiles sold in North America, the European Union, and Japan were hit with a heavy tax on carbon dioxide emissions from their automobiles, the fact that China was not taxing automobiles sold in its country would not substantially erode the incentive of the worldwide automobile industry to develop effective methods for reducing the carbon dioxide produced by its automobiles.
It is tempting to suppose that measures to deal with long-run catastrophic threats can safely be deferred to the future because the world will be richer and therefore better able to afford costly measures to deal with catastrophe. However, such complacency is unwarranted. Catastrophes can strike at any time and, if they are major, could make the world significantly poorer. Abrupt climate change is a perfect example. Change on the order of the Younger Dryas might make future generations markedly poorer than we are rather than wealthier, as might nuclear or biological attacks, cosmic impacts, or super-volcanic eruptions. These possibilities might actually argue for using a negative rather than a positive discount rate to determine the present-value cost of a future climate disaster. 10
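The point about the sign of the discount rate can be made concrete (the figures here are hypothetical): reversing the sign makes a distant loss loom larger than its nominal value instead of smaller.

```python
def pv(amount, rate, years):
    """Present value at a constant annual discount rate (may be negative)."""
    return amount / (1 + rate) ** years

loss = 1e12  # a hypothetical $1 trillion climate loss a century from now
print(round(pv(loss, 0.03, 100) / 1e9))   # ~52 billion: shrinks at +3%
print(round(pv(loss, -0.01, 100) / 1e9))  # ~2732 billion: grows at -1%
```

With a positive rate the loss nearly vanishes in present-value terms; with a negative rate, reflecting the possibility that future generations will be poorer, it dominates the calculation.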
Fig. 9.1 The global climate of the twenty-first century will depend on natural changes and the response of the climate system to human activities.
Credit: IPCC, 2001: Climate Change 2001: Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change [Houghton, J.T., Y. Ding, D.J. Griggs, M. Noguer, P.J. van der Linden, X. Dai, K. Maskell, and C.A. Johnson (eds.)]. Figure 5, p 14. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA.
I thank Meghan Maloney for her very helpful research assistance and Nick Bostrom and Milan Cirkovic for their very helpful comments on a previous draft.
Chesley, S.R. and Ward, S.N. (2006). A quantitative assessment of the human hazard from impact-generated tsunami. J. Nat. Haz., 38, 355–374.
Emanuel, K. (2005). Increasing destructiveness of tropical cyclones over the past 30 years. Nature, 436, 686–688.
Fearnside, P.M. (2002). Time preference in global warming calculations: a proposal for a unified index. Ecol. Econ., 41, 21–31.
Hassol, S.J. (2004). Impacts of a Warming Arctic: Arctic Climate Impact Assessment (Cambridge: Cambridge University Press). Available online at http://amap.no/acia/
IPCC (Houghton, J.T., Ding, Y., Griggs, D.J., Noguer, M., van der Linden, P.J., and Xiaosu, D. (eds.)) (2001). Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) (Cambridge: Cambridge University Press).
Lempinen, E.W. (2005). Scientists on AAAS panel warn that ocean warming is having dramatic impact (AAAS news release 17 Feb 2005) http://www.aaas.org/news/releases/2005/0217warmingwarning.shtml
Mithen, S. (2003). After the Ice: A Global Human History, 20,000–5,000 BC (Cambridge, MA: Harvard University Press).
Posner, R.A. (2004). Catastrophe: Risk and Response (New York: Oxford University Press).
Rees, M.J. (2003). Our Final Hour: A Scientist’s Warning; How Terror, Error, and Environmental Disaster Threaten Humankind’s Future in this Century – on Earth and Beyond (New York: Basic Books).
Schleifstein, M. and McQuaid, J. (3 July 2002). The big easy is unprepared for the big one, experts say. Newhouse News Service. http://www.newhouse.com/archive/story1b070502.html
Socolow, R.H. (July 2005). Can we bury global warming? Scientific Am., 293, 49–55.
Trenberth, K. (2005). Uncertainty in hurricanes and global warming. Science, 308, 1753–1754.
U.S. Department of Homeland Security (2004). Fact sheet: Department of Homeland Security Appropriations Act of 2005 (Press release 18 October 2004) http://www.dhs.gov/dhspublic/interapp/press_release/press_release_0541
U.S. Office of Management and Budget (2003). 2003 report to Congress on combating terrorism (O.M.B. report Sept 2003). http://www.whitehouse.gov/omb/inforeg/2003_combat_terr.pdf
Viscusi, W.K. and Aldy, J.E. (2003). The value of a statistical life: a critical review of market estimates throughout the world. J. Risk Uncertainty, 27, 5–76.
Ward, S.N. and Asphaug, E. (2000). Asteroid impact tsunami: a probabilistic hazard assessment. Icarus, 145, 64–78.