4
Climate in the Future
As described in the discussion of ENSO forecasting in chapter 3, climate predictions are achieved using computer models. These models represent the behavior of natural systems, including projected future behavior, using highly simplified mathematical descriptions of natural processes. For climate prediction, the model is computer code, sometimes referred to as a numerical simulation. Almost all of the familiar weather forecasts provided by media are made using computer models. One of the great advantages a computer has is a vast and unfailing memory.
Deep Blue, IBM’s chess-playing computer, was able to beat Garry Kasparov because it stored thousands of games in memory, compared the current game to its database, and tried millions of next moves to determine which would provide the greatest advantage. The computer simulated new games that might follow a suite of next moves, in effect producing model chess games. Kasparov simply did not have the memory, or speed of thought, to simulate millions of possible moves, even though his IQ has been measured at 190.
Climate Models
Global climate models are different from weather forecasts in very important ways. One difference is that the past provides only a limited guide to the future into which climate predictions are made. The models need to project into a future in which greenhouse gas (GHG) concentrations are much greater than they have been in the past few thousand years, so they enter a territory for which there is little precedent. To state that difference more clearly, weather forecasting is an initial value problem—present conditions are constantly updated. Climate predictions are a boundary value problem—after initialization, the model outcome is governed by changes in prescribed conditions such as GHG concentration, and the model outcome becomes largely independent of the starting position.
A comprehensive climate model includes a large and complex set of mathematical expressions (comprising sets of simultaneous differential equations in continuous form solved by standard numerical methods). These expressions nevertheless describe idealizations. For instance, the Navier-Stokes equations (four simultaneous differential equations) accurately describe how convection operates and gives rise to Hadley cells and so forth, but it is not possible to account for the development of clouds, precipitation, or chemical reactions in the atmosphere using these equations alone.
In fact, a global climate model does not solve the full form of the Navier-Stokes equations but a reduced form known as the “primitive equations.” These equations accurately describe an idealized, unrealistic situation in which no clouds form and it never rains. In effect, they accurately describe a low-level approximation of the world, just as the simple mathematics we used to explain why Earth is about 15°C on average accurately describes a very simplified Earth. The basic equations can be modified to include many more processes that affect climate, but these processes are generally introduced as add-ons and can be very cumbersome. Technically this is known as parametrization; that is, some processes are represented in the model but not calculated by the model itself. Adding features means that the depiction of Earth in the model becomes a better approximation of the real Earth, but it remains an approximation.
Global climate models have become more comprehensive over time, incorporating more and more aspects of Earth’s systems that are important in climate processes. The earliest models from the 1970s didn’t even include the oceans. When oceans were first added, they effectively had no depth and were referred to as swamp oceans. It is only in the most recent models that such features as ocean circulation, response of vegetation to changing climate, clouds, and chemical reactions in the atmosphere have been incorporated. Even so, these models remain approximate and incomplete depictions of the planet.
Box 4.1 The Intergovernmental Panel on Climate Change (IPCC)
The IPCC, founded in 1988 by the World Meteorological Organization and the United Nations Environment Program, is an international body assessing the state of science related to climate change. The panel regularly assesses developments, future risks, and improvement efforts in the realm of climate change research. It currently has 195 member countries.
Its purpose is to promote a consensus for all countries involved on the implications of climate change, so governments may develop informed policy for climate action:
The IPCC embodies a unique opportunity to provide rigorous and balanced scientific information to decision-makers because of its scientific and intergovernmental nature.
IPCC work is shared between three working groups:
Working Group I: assesses the physical scientific aspects of the climate system and climate change.
Working Group II: assesses the vulnerability of socioeconomic and natural systems and options for adaptation to negative impacts.
Working Group III: assesses options for mitigating climate change through limiting or preventing greenhouse gas emissions and enhancing activities to remove them from the atmosphere.
The IPCC has thus far issued five “Assessment Reports” and the sixth is underway. All reports are available at www.ipcc.ch/report/ar5/.
Along with the equations that describe the mathematical physics of climate in computer code, Earth itself has to have a representation in the computer. To do this, Earth is divided into grid cells (figure 4-1). The finer the grid cells, the more realistically Earth can be represented. The figure shows the improvement in resolution obtained in successive IPCC models. This is somewhat like the pixel density of a computer screen or television, although no climate model can be thought of as having high definition. No model can derive any aspect of climate on a spatial scale smaller than the grid size. The grids shown represent Earth’s surface as an example, but the atmosphere and oceans must also be represented in gridded form.

Figure 4.1 IPCC climate models used for prediction must include a representation of the Earth’s topography and numerous other parameters. These are input in grid form in which parameter values are constant in each grid cell. Over time the representation has become more detailed. In the 1990 report the shape of Europe is barely recognizable.
Source: Intergovernmental Panel on Climate Change.
The grid size in current models is much finer than in earlier generations of models, but it is still too large to adequately represent a number of features important in the climate system. Cloud formations, for instance, are often much smaller than the 110 × 110 km grid size of current models. Rivers are also much smaller than this grid. Vegetation type varies considerably at scales much smaller than the finest model grids. The effects of these, and a variety of other processes, are approximated and introduced as an average over the grid cell.
Why not use even smaller grid sizes? The reason is purely practical. A computer calculation is made using equations that describe a simplified system for each cell. More cells require more calculations, and each one takes a finite, though tiny, amount of time. The more complete the model, the more terms and parameters are required in the equations. More cells, combined with more complete representations of the physics and chemistry of climate processes, require more time to execute a model simulation. The most important reasons climate models have become higher in resolution, and more complete, are that computers are constantly getting faster and memory is getting larger.
For example, if a model simulates the future climate with one calculation every ten years for one hundred years, it consumes one-tenth the computer time of a simulation with one calculation every year over the same century. Current models run at time resolutions as fine as a few minutes, so the number of calculations needed to project to 2100, for example, is enormous.
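To see how time resolution drives the cost of a simulation, here is a back-of-the-envelope sketch in Python. It is purely illustrative: the 100-year span and the step sizes are assumptions, and real models use different step sizes for different components (atmosphere, ocean, land).

```python
# Rough count of model time steps needed to simulate one century
# at different time resolutions. Illustrative only.

MINUTES_PER_YEAR = 365 * 24 * 60

def steps_needed(years, step_minutes):
    """Number of model time steps to cover a span of years."""
    return years * MINUTES_PER_YEAR // step_minutes

# Yearly, daily, and 30-minute time steps over 100 years
for step in (MINUTES_PER_YEAR, 60 * 24, 30):
    print(f"step = {step:>6} min -> {steps_needed(100, step):,} steps")
```

Going from yearly to half-hourly steps multiplies the number of calculations by more than ten thousand, before even counting the extra work per grid cell.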
All the major climate research institutions—NASA, NOAA, and NCAR in the United States and the Hadley Center in the United Kingdom—have very large computer facilities dedicated to making predictions. Significant efforts have been made in model intercomparison, running suites of models under specified conditions and diagnosing differences in output, so models can be more consistent in their performance. Nevertheless, climate model outputs are destined to be approximations that predict changes in an idealized world, and idealizations will differ from one model to another.
Surely there is only one correct climate model. That would be ideal, but no agreement exists today on what that model would be. This is somewhat akin to evaluating search engines. We are all familiar with searching on keywords. But if you use the same keyword with different search engines, you will get different answers, at least after a page or two of results. Why? The search algorithm associated with each tool is different, maybe not hugely different but different enough to give varied answers. And those answers become more different as the search progresses. The first ten results of a keyword search by different tools may be quite similar, but by the fiftieth result the lists will look quite different—not unlike the way climate models diverge more as they predict farther into the future. Is there one correct way to search? Probably not.
How Predictions Are Made
A typical model scheme can be depicted as a cascade, or sequence, of steps as shown in figure 4-2. The model itself is shown shaded.

Figure 4.2 A climate model makes calculations of Earth’s future temperature using as input the projected change in emissions of GHGs. The model calculation proceeds in several steps explained in the text.
Source: “Global Climate Projections,” chap. 10 in Intergovernmental Panel on Climate Change, Climate Change 2007: The Physical Science Basis, contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, ed. Susan Solomon, Dahe Qin, Martin Manning, Zhenlin Chen, Melinda Marquis, Kristen B. Averyt, Melinda M. B. Tignor, and Henry LeRoy Miller Jr. (Cambridge: Cambridge University Press, 2007).
Model Inputs: Emission Scenarios
The heart of a climate model is a calculation that takes prescribed changes in the concentration of numerous constituents of the atmosphere and determines the temperature change that would result, together with other derived features of climate such as precipitation. The model requires concentrations of these atmospheric components both at the present time and into the future. Models do not calculate the future concentrations of anthropogenic GHGs but require them as an input. The most recent models, which include biogeochemistry and atmospheric chemistry, do calculate changes in natural GHG concentrations, but the anthropogenic component must be specified as a boundary condition.
Climate model inputs are described in terms of emissions scenarios that describe how GHG emissions might evolve in the future. They rely on different projections of factors such as population growth, the adoption of new cleaner technologies for energy production, changes in development status around the world, and associated changes in consumption and energy use. The IPCC uses a suite of scenarios and publishes a special report describing how it arrived at each one. Each scenario is given an acronym and represents a different conception of what the future might be like. Box 4.2 outlines the types of components that go into each scenario. Figure 4-3 illustrates the importance of scenario choice.

Figure 4.3 Temperature predictions made using different scenarios and the same suite of models.
Source: “Global Climate Projections,” chap. 10 in Intergovernmental Panel on Climate Change, Climate Change 2007: The Physical Science Basis, contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, ed. Susan Solomon, Dahe Qin, Martin Manning, Zhenlin Chen, Melinda Marquis, Kristen B. Averyt, Melinda M. B. Tignor, and Henry LeRoy Miller Jr. (Cambridge: Cambridge University Press, 2007).
Some scenarios are quite pessimistic about the future, suggesting that emission rates will remain high throughout the twenty-first century and population will continue to increase. Other scenarios are more optimistic, imagining a peak in emissions at midcentury and large-scale adoption of new cleaner energy technologies. For instance, the second panel in figure 4-3 predicts a 6°C temperature change, whereas the fifth panel model predicts about a 2°C temperature change. The difference in the calculations reflects different inputs to the model for calculation. Critical to the construction of these global scenarios are assumptions about how the poorer economies will develop and the emissions consequences that might arise from that.
Box 4.2 Emissions Scenarios from the Special Report on Emissions Scenarios (SRES) AR4
A1. The A1 storyline and scenario family describes a future world of very rapid economic growth, global population that peaks in midcentury and declines thereafter, and the rapid introduction of new and more efficient technologies. Major underlying themes are convergence among regions, capacity building and increased cultural and social interactions, with a substantial reduction in regional differences in per capita income. The A1 scenario family develops into three groups that describe alternative directions of technological change in the energy system. The three A1 groups are distinguished by their technological emphasis: fossil intensive (A1FI), non-fossil-energy sources (A1T), or a balance across all sources (A1B) (where balanced is defined as not relying too heavily on one particular energy source, on the assumption that similar improvement rates apply to all energy supply and end-use technologies).
A2. The A2 storyline and scenario family describes a very heterogeneous world. The underlying theme is self-reliance and preservation of local identities. Fertility patterns across regions converge slowly, which results in continuously increasing population. Economic development is primarily regionally oriented, and per capita economic growth and technological change are more fragmented and slower than other storylines.
B1. The B1 storyline and scenario family describes a convergent world with the same global population, peaking in midcentury and declining thereafter, as in the A1 storyline, but with rapid change in economic structures toward a service and information economy, with reductions in material intensity and the introduction of clean and resource-efficient technologies. The emphasis is on global solutions to economic, social, and environmental sustainability, including improved equity, but without additional climate initiatives.
B2. The B2 storyline and scenario family describes a world in which the emphasis is on local solutions to economic, social, and environmental sustainability. It is a world with a continuously increasing global population at a rate lower than A2, intermediate levels of economic development, and less rapid and more diverse technological change than in the A1 and B1 storylines. Although this scenario is also oriented toward environmental protection and social equity, it focuses on local and regional levels.
Source: Intergovernmental Panel on Climate Change
In highly developed countries like the United States, Canada, Australia, and the EU countries, individuals contribute far more carbon dioxide to the atmosphere than do individuals in poor countries. In large part, this is due to emissions from the power sector and manufacturing, which are typically less advanced in poor countries. So-called mobile power, meaning fuels used in transportation, is an important source of GHGs, and it can be significant even in poorer countries.
An important issue that arises for calculations of future emissions is that poor countries have strong and legitimate aspirations for welfare improvement, and in the past such improvement has depended on development of energy resources based on fossil fuels. If poor countries mimic today’s wealthy countries in their development strategies, and developed countries do not significantly reduce emissions from fossil fuels, the most pessimistic of all emissions scenarios will come about, with GHG concentrations far more than double those of today.
One critical factor in constructing emissions scenarios is population growth. Even if some people are not emitting a great deal, an increasing population will correspond with increasing emissions. Almost a threefold difference in population projections to 2100 is shown between the AR4 scenarios, and this has a profound effect on emissions. A large population in itself does not necessarily imply vastly greater emissions. High fertility rates are typically associated with poor countries today where per capita emissions are very low. More important is the growth in population of people whose lifestyles are associated with high levels of fossil fuel use for energy and transportation.
Figure 4-4 shows AR4 emissions scenarios for carbon dioxide. The range in carbon dioxide emissions across these scenarios is even greater than the range of population estimates, in percentage terms.

Figure 4.4 In the upper left is the suite of projections of GHG emissions used as input to climate models. In the upper right, emissions have been converted to concentrations. Emissions for the four scenarios in the recent past are shown in the lower left.
Source: Based on U.S. Global Change Research Program, Global Climate Change Impacts in the United States 2009 Report, https://nca2009.globalchange.gov/index.html.
Emissions are usually described in gigatons per year (GT/yr); a gigaton is a billion tons. Emissions are a flow rate, like the strength of water running from an open faucet, and therefore the unit is expressed as an amount per unit of time, usually one year. Note that in the figure the axis is in gigatons of total emissions. It is important to note that even in the optimistic scenarios, in which emissions begin to decline at midcentury, concentrations will continue to increase, if more slowly.
From Emissions to Concentrations
There is another popular misunderstanding concerning emissions reduction targets. If emissions rates were reduced to 1990 levels, that would not reduce global average temperature to 1990 values. It would only reduce the rate of temperature rise to 1990 rates. The only way temperature will stabilize is by reducing emissions to very near zero. The atmosphere responds to the total concentration of GHGs, and that is derived from the emissions as a first step (see figure 4-2).
As a first step, the model calculates concentrations from the emissions, one calculation for each scenario. This calculation involves some uncertainty because the exact partitioning of emitted GHGs among the various sinks (atmosphere, biosphere, and ocean) is not precisely known.
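The emissions-to-concentration step can be caricatured with a toy calculation. The fixed 45 percent airborne fraction and the conversion factor of about 2.13 gigatons of carbon per ppm of CO2 are simplifying assumptions made here for illustration; real carbon-cycle models compute the partitioning among sinks dynamically.

```python
# Toy conversion from annual CO2 emissions to atmospheric concentration.
# Assumptions (illustrative, not a real carbon-cycle model):
#   - a fixed "airborne fraction" of emissions stays in the atmosphere
#   - 1 ppm of atmospheric CO2 corresponds to about 2.13 GtC

AIRBORNE_FRACTION = 0.45   # fraction of emitted CO2 remaining in the atmosphere
GTC_PER_PPM = 2.13         # gigatons of carbon per ppm of CO2

def concentration_after(start_ppm, annual_emissions_gtc, years):
    """Project CO2 concentration given constant annual emissions (GtC/yr)."""
    added_gtc = annual_emissions_gtc * years * AIRBORNE_FRACTION
    return start_ppm + added_gtc / GTC_PER_PPM

# e.g. 10 GtC/yr held constant for 50 years, starting from 410 ppm
print(round(concentration_after(410.0, 10.0, 50), 1))   # 515.6
```

Even this crude sketch shows the key point made above: as long as emissions continue, concentrations keep rising.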
From Concentrations to Radiative Forcing
With the concentrations calculated from the differing emissions scenarios, the next step is to estimate the effect this will have on near-surface air temperature. This requires the calculation of radiative forcing. How much does a given change in the concentration of GHGs change the critical balance of incoming and outgoing radiation? The calculation is usually made at the tropopause, the top of the troposphere. If net forcing is positive, Earth’s surface temperature must rise to move the balance back to equilibrium. If it is negative, Earth must cool.
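For CO2, a widely used simplified expression relates concentration to forcing logarithmically; this is an approximation used for quick estimates, not the detailed radiative transfer calculation a full model performs.

```python
# Simplified radiative forcing of CO2 relative to a reference concentration,
# using the common logarithmic approximation dF = 5.35 * ln(C / C0) in w/m2.
# Full climate models compute forcing from radiative transfer, not this formula.
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Approximate CO2 radiative forcing (w/m2) relative to c0_ppm."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Doubling CO2 from the preindustrial 280 ppm gives about 3.7 w/m2
print(round(co2_forcing(560.0), 2))
```

Note the logarithm: each successive doubling of CO2 adds roughly the same forcing, which is why forcing grows more slowly than concentration.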
Figure 4-5 shows the best estimates available of the forcing effects of various components of the atmosphere relative to preindustrial values.

Figure 4.5 The effect of different components of the atmosphere on the balance of incoming and outgoing radiation—radiative forcing. GHGs all have positive forcing so increasing their concentration causes an imbalance that leads to warming. Aerosols have negative forcing effects. The net or aggregate forcing is about 1.5 watts per square meter in a positive sense but with a high range of uncertainty.
Source: Courtesy of Leland McInnes/Wikimedia Commons.
Here another uncertainty enters into model calculations. Attached to each bar that represents the magnitude of forcing of a particular atmospheric component is a thin vertical line with a short horizontal bar at either end. This expresses the uncertainty in the forcing estimate—sometimes called whiskers. The forcing effect of some components, such as CO2, is quite accurately known (the whiskers are short) and has been measured in lab experiments, but the effect of others (ozone, for instance) is much less certain (long whiskers).
Chief among those forcing components about which there is large uncertainty are aerosols, which often result from the polluting effect of incomplete fossil fuel combustion at low temperature, as happens, for instance, in diesel engines. They are known to have negative forcing, meaning that additions of these components will lead to cooling rather than warming. Aerosols are not GHGs; they are solid particles, and their effect is to shield Earth from incoming radiation. GHGs are usually defined as gases that give rise to absorption of infrared radiation emitted by Earth. The size of the cooling effect of atmospheric aerosols could be quite large, possibly as large as the warming effect of CO2. The large uncertainty associated with aerosols comes from the many ways in which they interact with incoming radiation (seven ways in all). Aerosols also have direct and indirect effects on cloud formation and properties, each of which has a different magnitude although all are negative. The effect of aerosols might, for instance, enhance the lifetime of some clouds. Note that the aerosol effect is not a feedback; it is the direct result of fossil fuel combustion.
All forcings have been added together and a “net anthropogenic forcing” determined (rightmost column in figure 4-5). The net effect is positive, so overall warming is implied, but with a large range of possible values, from as little as 0.5 to as much as almost 2.5 w/m2. The larger figure would imply that aerosols have a relatively small effect and that GHGs dominate radiative forcing. The smaller figure would imply that negative forcings from aerosols almost balance positive forcings from GHGs.
The authors of the IPCC fifth assessment report acknowledged the importance of emphasizing radiative forcing and chose to represent scenarios in terms of representative concentration pathways (RCP; figure 4-6). The concentration of GHGs in the atmosphere is the primary concern, and targets should be thought of in those terms rather than emissions. A reduction in emissions does not imply a reduction in concentration—something that is often missed in policy discussions.

Figure 4.6 The most recent IPCC report, Assessment 5, uses Representative Concentration Pathways (RCPs).
Source: Detlef P. van Vuuren, Jae Edmonds, Mikiko Kainuma, et al., “The Representative Concentration Pathways: An Overview,” Climatic Change 109 (2011): 24. DOI: 10.1007/s10584-011-0148-z.
RCPs essentially fold in the first step of a climate model, in which emissions are used to calculate concentrations. The vertical axis in figure 4-6 is now scaled in units of w/m2 and represents radiative forcing, not a concentration. In RCP8.5, for instance, the forcing at the end of the century is 8.5 w/m2. That is, the radiative balance would be altered by 8.5 w/m2, and Earth would have to warm considerably to restore the balance. Another advantage of framing the discussion in terms of RCPs is that temperature is linearly related to the cumulative total of anthropogenic emissions.
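That linear relationship can be sketched as follows. The proportionality constant, known as the transient climate response to cumulative emissions (TCRE), is assumed here to be 1.6°C per 1,000 gigatons of carbon, a value within the IPCC's likely range; it is not taken from any particular model.

```python
# Sketch of the near-linear relation between cumulative CO2 emissions and
# warming. The TCRE value is an assumed illustrative figure (the IPCC likely
# range is roughly 0.8-2.5 degC per 1,000 GtC), not a model output.

TCRE = 1.6   # degC of warming per 1,000 GtC of cumulative emissions (assumed)

def warming_from_cumulative(gtc):
    """Temperature change (degC) implied by cumulative emissions in GtC."""
    return TCRE * gtc / 1000.0

print(round(warming_from_cumulative(1000), 2))   # 1,000 GtC -> 1.6 degC
```

The linearity is what makes "carbon budget" arguments possible: a temperature target translates directly into a total allowable quantity of emissions.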
Comparing the effect of different GHG components of the atmosphere is frequently done using the global warming potential (GWP), which compares the effect of these gases over specified periods of time. Atmospheric components have quite variable lifetimes (sometimes called average residence times). GWP describes the ability of a specified gas to absorb and reradiate heat compared to carbon dioxide over a specified period, from 20 to 500 years. The GWP of carbon dioxide is always set at “1,” and other greenhouse gases are compared to carbon dioxide for the same time frame. For example, methane has a GWP of 56 integrated over 20 years, 21 over 100 years, and 6.5 over 500 years. The most potent of all GHGs is sulfur hexafluoride (SF6), which has an atmospheric lifetime of 3,200 years; its 20, 100, and 500 year GWP values are 16,300, 23,900, and 34,900, respectively. Fortunately, the concentration of SF6 in the atmosphere is very small; there are no natural sources, and human activity is no longer increasing its input to the atmosphere. Although water vapor is a strong GHG, it does not have a calculable GWP because it does not decay in the atmosphere.
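A short calculation shows how GWP converts emissions of different gases to a common CO2-equivalent basis, using the GWP values quoted above.

```python
# CO2-equivalent of an emission, using the GWP values quoted in the text.
# GWP depends on the time horizon: methane is far more potent over 20 years
# than over 500 because it decays relatively quickly in the atmosphere.

GWP = {
    "co2": {20: 1,     100: 1,     500: 1},
    "ch4": {20: 56,    100: 21,    500: 6.5},
    "sf6": {20: 16300, 100: 23900, 500: 34900},
}

def co2_equivalent(gas, tons, horizon=100):
    """Convert an emission (tons of a gas) to tons of CO2-equivalent."""
    return tons * GWP[gas][horizon]

print(co2_equivalent("ch4", 1))       # one ton of methane over 100 years: 21
print(co2_equivalent("ch4", 1, 20))   # the same ton judged over 20 years: 56
```

The choice of horizon matters for policy: judged over 20 years, methane reductions look far more valuable than they do over 500 years.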
Calculating the Temperature Field
The final step is to calculate temperature from estimated forcing. In principle, this should be straightforward, with the caveat that there are considerable associated uncertainties in how to treat various feedback effects. Of the many feedback effects in the climate system that contribute to the uncertain outcome of climate models, one of the most studied and potentially the most important is the effect of clouds—cloud radiative feedback (figure 4-7).

Figure 4.7 Cloud radiative feedback. Low clouds are generally very reflective and hence have a cooling effect on the Earth’s surface, while high clouds have the opposite effect.
Source: NASA/Visible Earth, “Cloud Effects on Earth’s Radiation,” https://visibleearth.nasa.gov/view.php?id=54219.
Clouds are composed of water vapor, water, and ice (H2O in gas, liquid, and solid form). Water vapor condenses on cloud condensation nuclei to form water droplets and ice. Clouds have a major effect on maintaining the temperature of the planet at 15°C in two ways. Water vapor is a GHG, so it has a positive feedback effect on near-surface temperature. But clouds also may have high albedo and reflect a considerable amount of incoming short-wavelength radiation (see chapter 2), so clouds can have positive or negative forcing. Which effect dominates depends on the cloud type and its altitude (see figure 1-13). In general, low cumulus clouds are quite strongly reflective (high albedo), and their dominant effect in the climate system is negative feedback cooling. High clouds are the opposite. They are very thin with low albedo—the Sun may be visible through high clouds—and their dominant effect is positive feedback warming.
As the world warms, more clouds will form because more ocean water will evaporate. Setting aside clouds for the moment, adding water vapor to the atmosphere will itself amplify any temperature increase because water vapor is a GHG—a positive feedback. With more clouds, there will be more of both types of cloud feedback effect. If the cooling effect wins out over warming, clouds will counter warming, pushing temperatures back down as cloudiness increases. If high clouds dominate, they will have a net warming effect, leading to even more warming. The first is a negative feedback and the second a positive feedback, and both are close to instantaneous. Of twenty commonly used models, fourteen have negative aggregate cloud feedback effects and six have positive cloud radiative feedback effects. The sign and strength of cloud radiative feedback is the source of the greatest uncertainty in climate change model predictions today.
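The competing feedbacks can be illustrated with the standard gain formula, in which an initial, no-feedback warming is amplified or damped by an aggregate feedback factor f. The numbers below are illustrative assumptions, not model outputs.

```python
# Toy illustration of how feedbacks amplify or damp an initial warming,
# using the standard gain formula dT = dT0 / (1 - f), where f is the
# aggregate feedback factor. All numbers here are illustrative.

def warming_with_feedback(dt0, f):
    """Equilibrium warming given no-feedback warming dt0 and feedback factor f."""
    if f >= 1:
        raise ValueError("f >= 1 implies a runaway response")
    return dt0 / (1.0 - f)

dt0 = 1.2   # assumed no-feedback warming for doubled CO2 (degC, illustrative)
for f in (-0.5, 0.0, 0.5):   # net negative, zero, and net positive feedback
    print(f"f = {f:+.1f} -> dT = {warming_with_feedback(dt0, f):.1f} degC")
```

The same initial perturbation yields very different outcomes depending on the sign and size of f, which is why the sign of the aggregate cloud feedback matters so much.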
The reflectivity of low-level clouds has inspired one of many propositions to intervene in Earth’s radiative balance (with particles introduced into the atmosphere via balloon in one suggestion) to shield Earth from solar radiation. This is known as solar radiation management (SRM). Many other ideas have been proposed, including stripping carbon dioxide directly from the atmosphere. Collectively these ideas are described as geoengineering, a topic too large to cover in this book.
Different models give different results, even when run with the same input scenario (figure 4-8). These model runs begin by “predicting” the past. The model simulations start in 1850 and display the slow variations that characterize the preindustrial period. They also capture the rise in temperature in the immediate postindustrial period, but they diverge soon after that time.

Figure 4.8 Different computer models provide different temperature predictions even given the same parameters as input. They diverge more with time into the future. Those that predict in the higher range are described as “sensitive” and in the lower range “insensitive” to GHG forcing.
Source: New Zealand National Institute of Water and Atmospheric Research, “Climate Change Scenarios for New Zealand,” https://www.niwa.co.nz/our-science/climate/information-and-resources/clivar/scenarios.
Because both scenario choice and model play important roles in climate predictions, it can be useful to illustrate both in the same diagram. Figure 4-9 shows the results of six models using two different scenarios: the most optimistic and the most pessimistic of the AR4 suite of scenarios. There is a roughly threefold difference in the derived year 2100 temperature prediction, from less than 2°C to almost 6°C.

Figure 4.9 A typical way in which temperature projections are illustrated combines a suite of scenarios with a suite of models to give an aggregate estimate of temperature change. Here two scenarios are shown, A2 and B1, and the spread of projections is that arising from different models.
Source: Reto Knutti, “Should We Believe Model Predictions of Future Climate Change?,” Philosophical Transactions of the Royal Society A, September 25, 2008, https://doi.org/10.1098/rsta.2008.0169.
Temperature change as expressed in figure 4-9 is the average air temperature near the surface of Earth, but the change will not be uniform throughout Earth. All models show that the high northern latitudes will experience much greater changes in temperature than low to middle latitudes because they are currently ice covered; the ice albedo feedback described previously enhances the warming effect. In the more pessimistic scenario/model combinations, temperatures rise by almost 10°C in the Arctic. There is ample evidence that this is indeed happening.
Figure 4-10 illustrates a different representation that includes a rendering of the changes across the globe for two different RCPs: one very optimistic (2.6), one quite pessimistic (8.5).

Figure 4.10 Projected change in average annual temperature for two RCP scenarios.
Source: Jerry M. Melillo, Terese Richmond, and Gary W. Yohe, eds., Climate Change Impacts in the United States: The Third National Climate Assessment (Washington, D.C.: U.S. Government Printing Office, 2014), doi:10.7930/J0Z31WJ2.
Figure 4-10 is a common way to illustrate future temperature projections. Rather than select a midrange scenario, a suite of scenarios is chosen that express the range of plausible outcomes. In this way, the projections can be thought of as best-case and worst-case scenarios. What this type of representation cannot do is provide any sense of uncertainty in the projections that could be assessed from figures 4-8 and 4-9.
Possible Futures
In summary, two sources of uncertainty give rise to predictions that range from 1°C to more than 6°C by 2100. The first uncertainty is the very large range in emissions scenarios (RCPs), and the second is inherent uncertainties in model outputs. Scenario uncertainty is sometimes referred to as boundary value uncertainty because the emissions described by a scenario provide the boundary values for the model calculation. The two uncertainties combine additively. The great uncertainty in human behavior implicit in the scenarios/RCPs introduces by far the greater uncertainty.
If it were somehow possible to decide on a “correct” scenario, the range of model-predicted temperatures would be reduced substantially. For instance, if it were decided that RCP 4.5 was “correct”—or the one used for planning purposes—the range of predictions for 2100 would be reduced to 2°C to 3.5°C, less than half of the total range found across all scenarios and models. Similarly, if it could be decided that one model is best, that too would shrink overall uncertainties.
One simple way to think about the outcomes of model calculations is in terms of climate sensitivity. We can write this:
ΔT = λΔF
ΔT is the change in temperature that would result from a change of ΔF in the forcing. The factor λ is described as climate sensitivity (this is not the same λ as that used for wavelength in describing electromagnetic waves).
If λ is a large number, relatively small changes to the properties of the atmosphere will have large effects on surface air temperature—the climate is sensitive to small changes in GHGs. If the value is small, the atmosphere can change a lot with little effect on surface temperatures—the climate is relatively insensitive to GHGs. The value of λ implicit in model calculations is therefore extremely important in making projections of climate into the future.
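As a minimal sketch, the sensitivity relation is just a multiplication; the numerical values below are illustrative, not estimates from the text:

```python
def warming(lam, delta_f):
    """ΔT = λ·ΔF: surface temperature change for a given radiative forcing.

    lam:     climate sensitivity, in °C per W/m²
    delta_f: radiative forcing, in W/m²
    """
    return lam * delta_f

# The same hypothetical 2 W/m² forcing under a sensitive vs. an insensitive climate:
print(warming(0.75, 2.0))  # 1.5 °C
print(warming(0.25, 2.0))  # 0.5 °C
```

The same forcing produces three times the warming when λ is three times larger, which is why the value of λ dominates the spread in projections.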
How is the value of λ obtained? One way is to examine the ancient record. The best archive of ancient climate is found in ice cores. They record temperature in the oxygen isotope ratios of the ice crystals, and they trap tiny bubbles of the ancient atmosphere, which can be analyzed to determine the composition of the atmosphere at the time the ice formed. The two upper graphs in figure 4-11, obtained from analysis of ice cores taken in Antarctica, show the concentrations of carbon dioxide and methane, two very important GHGs. The lower graph is the temperature derived from oxygen isotope analysis. The front cover of this book displays one such sample of ice; the dark spots are bubbles of the ancient atmosphere trapped inside.
image

Figure 4.11 A comparison of temperature, carbon dioxide (CO2), and methane (CH4) concentrations obtained from ice core data in Antarctica.
Source: NOAA, http://www.ncdc.noaa.gov/paleo/icecore/antarctica/vostok/vostok.html.
The high degree of correlation between these curves is striking, and it can be used to estimate climate sensitivity, assuming that the direction of causation runs from GHG concentration to temperature change rather than the other way around. Using this approach, sensitivity can be estimated to be 0.75 ± 0.25°C per W/m². The forcing (ΔF) is expressed in W/m², the same unit as the energy flux discussed in chapter 2. The energy flux coming into the top of the atmosphere is 350 W/m² averaged over the whole Earth, and climate forcing is described as the change relative to that background level.
Sensitivity also can be estimated by running models and matching their output to records of past climate obtained from proxy data such as tree rings. One way to express sensitivity is to ask what temperature change at the surface would be expected for an instantaneous doubling in carbon dioxide concentration. The range of sensitivities estimated from various approaches is 1.5°C to 4.5°C. A commonly used value is 3°C, meaning that were the carbon dioxide content of the atmosphere to double, the expected change in temperature after equilibrium has been reached would be 3°C.
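Putting these numbers together: if we adopt the widely used logarithmic approximation for CO2 forcing, ΔF = 5.35 ln(C/C0) W/m² (an assumption not stated in the text), the ice core sensitivity of about 0.75°C per W/m² reproduces the familiar doubling figure:

```python
import math

LAMBDA = 0.75  # climate sensitivity, °C per W/m², from the ice core estimate
C0 = 280.0     # assumed preindustrial CO2 concentration, ppm

def co2_forcing(c, c0=C0):
    # Widely used logarithmic approximation: ΔF = 5.35 ln(C/C0), in W/m²
    return 5.35 * math.log(c / c0)

delta_f = co2_forcing(2 * C0)  # forcing from an instantaneous doubling of CO2
delta_t = LAMBDA * delta_f     # ΔT = λ·ΔF
print(f"ΔF = {delta_f:.2f} W/m², ΔT = {delta_t:.1f} °C")  # ≈ 3.7 W/m², ≈ 2.8 °C
```

The result, about 2.8°C, is consistent with the commonly used equilibrium value of 3°C quoted above.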
There are a few important caveats to consider. One is that the sensitivity equation assumes a linear relationship between forcing and temperature, an assumption that cannot be fully substantiated. Second, much of what determines the value of the sensitivity parameter λ is feedback effects that operate on very different time scales: some very short, others much longer. It is therefore common to consider two sensitivities. The first, referred to as instantaneous climate sensitivity, is the immediate change that would come about from a sudden doubling of CO2, before slow feedbacks have acted. The second, equilibrium climate sensitivity, is the response after all feedback effects have come into play and the climate has reached equilibrium. Estimates of sensitivity from ice core records provide this latter measure.
Another measure of sensitivity is the transient climate response to cumulative carbon emissions, which is the mean surface temperature change that would come about per 1,000 GtC. It allows the discussion to be framed in terms of total accumulated carbon in the atmosphere, and it has the advantage of a linear relationship to near-surface temperature.
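The linearity of this measure makes back-of-envelope calculations easy. The sketch below assumes a transient response of 1.65°C per 1,000 GtC, an illustrative midpoint rather than a value from the text:

```python
TCRE = 1.65  # assumed transient response, °C per 1,000 GtC (illustrative)

def warming_from_emissions(cumulative_gtc):
    # Linear relation: ΔT = TCRE × cumulative carbon / 1,000 GtC
    return TCRE * cumulative_gtc / 1000.0

def carbon_budget(target_deg_c):
    # Invert the linear relation to get an allowable cumulative emission
    return 1000.0 * target_deg_c / TCRE

print(warming_from_emissions(1000))  # 1.65 °C after 1,000 GtC
print(round(carbon_budget(2.0)))     # about 1212 GtC compatible with a 2 °C target
```

Because the relation is linear, it can be inverted directly: a temperature target translates into a fixed cumulative "budget" of carbon, regardless of the emission pathway.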
One way to summarize the interacting roles of climate sensitivity and scenario outlook is shown in figure 4-12. Two extreme outcomes are considered. In the lower left is the most desirable outcome. It uses an optimistic emission scenario in which GHGs are steadily reduced and economic growth is equitable. It involves modest population growth combined with an insensitive climate. In the upper right is a pessimistic scenario in which emissions remain high combined with a very sensitive climate system.

Figure 4.12 The interacting roles of climate sensitivity and scenario outlook.
Any location in the space of this sketch is possible, depending on the choice of model and scenario. More important, it is not reasonable to assume that a midway point between the two extremes would be the most likely future.
Past experience does not provide clear guidance on how to predict the future using models of this sort. The past record is, of course, useful in providing guidance on factors that lead to different climate conditions, and examination of past climate is very important in estimating equilibrium sensitivity, but this information does not easily lead to improved model prediction strategies. Statistical predictions of ENSO (and daily weather), for instance, benefit from examination of a time series of many previous El Niño and La Niña events, and weather system models are improved and updated with every new occurrence. One of the defining characteristics of the global climate prediction problem is that Earth is entering a state in which GHG concentrations are higher than any levels experienced for millions of years, so past experience cannot be used directly to tune climate models.
In addition, there is no target time for prediction. Unlike El Niño, which typically peaks in December, climate change has no natural peak, and defining one would be arbitrary. Many predictions are made for the year 2100, but that does not mean climate change will have peaked at that time.
As a final comment on uncertainties, consider figure 4-13, which shows a compilation of estimates of equilibrium climate sensitivity. The horizontal axis is the expected temperature, and the vertical axis is the probability of that expectation. The compilation is derived from the analysis of many papers in the scientific literature, each of which makes an assessment of sensitivity. What is of great importance is the overall non-Gaussian shape of the distribution (box 4.3).
image

Figure 4.13 Climate sensitivity based on the analysis of twenty literature studies.
Source: NASA Earth Observatory.
Box 4.3 Skewed Gaussian Distribution
A skewed Gaussian distribution is compared here to an unskewed Gaussian, or normal, distribution. The normal distribution is symmetric about its peak value, as shown in the example to the left. The skewed distribution is asymmetric, with one side much more extended than the other. The example shown would be said to be right-skewed because it is extended to the right side. This is sometimes called a fat-tailed distribution.

Box Figure 4.3.1 Gaussian distributions.
The reason the distribution is fat tailed comes entirely from the effects of feedbacks, as explained in this illustration:

Box Figure 4.3.2 Gaussian uncertainty and aggregate feedback.
Source: Gerard H. Roe and Marcia B. Baker, “Why Is Climate Sensitivity So Unpredictable?,” Science 318, no. 5850 (2007): 629–632.
Shown on the horizontal axis is the aggregate system feedback, f, assumed Gaussian in its distribution and with a positive mean as in the upper panel. The curve in the middle of the graph is the change in temperature from a starting state T0 for a given feedback, f. The vertical axis then maps the Gaussian uncertainty in feedback factors into a right-skewed, or fat-tailed, probability distribution of temperature outcomes. A fat-tailed distribution will always result if the net feedback is positive and Gaussian. This mapping applies to any system with positive aggregate feedbacks and is not unique to the climate system.
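The mapping can be illustrated with a short Monte Carlo sketch of the feedback relation ΔT = ΔT0/(1 − f) used by Roe and Baker. The numerical values below (ΔT0 = 1.2°C, f drawn from a Gaussian with mean 0.65 and standard deviation 0.13) are illustrative assumptions in the spirit of their example:

```python
import random
import statistics

random.seed(42)

DT0 = 1.2                  # no-feedback warming for a CO2 doubling, °C (assumed)
F_MEAN, F_SD = 0.65, 0.13  # aggregate feedback f, assumed Gaussian (illustrative)

# Map symmetric (Gaussian) uncertainty in f into a distribution of temperature
# outcomes via the feedback relation ΔT = ΔT0 / (1 - f).
samples = []
while len(samples) < 100_000:
    f = random.gauss(F_MEAN, F_SD)
    if f < 0.95:  # discard near-runaway draws where 1 - f approaches zero
        samples.append(DT0 / (1.0 - f))

mean = statistics.mean(samples)
median = statistics.median(samples)
print(f"mean = {mean:.2f} °C, median = {median:.2f} °C")
```

Because the mapping is nonlinear, the mean exceeds the median: symmetric uncertainty in f produces a right-skewed, fat-tailed distribution of ΔT.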
Gaussian distributions, or so-called bell curves, are perfectly symmetrical and describe random processes. A “fat-tailed” distribution1 is asymmetric—one side of the distribution trails off slowly, and the other side drops quickly, as in the distributions in figure 4-13. The important thing to recognize is that the tail of the expected temperature is “fat” on the side of the distribution you would rather it weren’t—it’s fat on the hot side. Technically, this is known as a right-skewed Gaussian. When many model realizations are grouped together, their peaks average at around 3°C, which would indicate that Earth’s average temperature would rise 3°C if CO2 concentration were doubled.
On the low side, the distribution cuts off at zero, and many realizations cut off before reaching zero. That means that no model predicts cooling; it is only a matter of how much warming we should expect. A distribution like this shows a finite probability that temperature could rise much more than 3°C but very little chance that it could rise much less than 3°C. The probability of very high temperature changes does diminish, but very slowly. Although 3°C is the mean value, the shape of the distribution tells us that outcomes far above the mean are much more likely than outcomes equally far below it. Technically, for a regular Gaussian the mean and median are identical, but for a skewed Gaussian they separate: both move away from the mode (the most likely value) in the direction of the tail, with the mean moving farther than the median. The usual intuition is that the future will lie about midway between the two extremes, as in figure 4-12, but that is not correct—it is more likely to be closer to the darkened dot on the figure.