5
When asked late in life about how he had solved the “confluent hypergeometric equation,” Jule Charney had a stunning answer. In late 1944 or early 1945, he recalled, he’d gone for a walk in the Brentwood Hills of Los Angeles, performing mathematical transformations in his head as he went along. And then suddenly he “saw”—not derived—the solution. “The equation was so simple,” Charney remembered, “that it had to be right.”
A similar quest for sublimely simple explanation pervades the research of Kerry Emanuel. In one 2004 essay on the relationship between hurricanes and climate, he wrote of his search for a “satisfying” understanding of the problem, which he described as a “more fulfilling aesthetic under which to perform science.” It’s clear that Gray’s approach—detecting correlations and seeing patterns, never mind the why—could never fully satisfy Emanuel in this sense. According to Gray, we’ll never get the atmosphere to reveal all its secrets: It’s too damn complicated. “Our philosophy is not to try to understand the physics of hurricane formation,” he has stated. Yet achieving such understanding is the driving force behind Emanuel’s work.
In an interview in his sixteenth-floor MIT office, which sits next door to that of “butterfly effect” discoverer Edward Lorenz and offers a spectacular view of the Charles River, Emanuel also provided a sharp personal contrast to Gray. He’s short and professorial-looking, with wavy graying hair. Rather than loud and unguarded, he’s nuanced and sophisticated. Where Gray swears and occasionally stutters, Emanuel talks in complete sentences, his speech as cadenced as carefully honed prose. Once in our conversation he even said “e.g.”
In his science, Emanuel clearly works as a theoretician, but he isn’t opposed to data-gathering or even above doing it himself. Scientists who want to achieve the fullest understanding, he believes, must be able to wear different hats. In 1985 he flew on several NOAA missions into Hurricane Gloria, a Cape Verde-type storm and a Category 4 at its peak, when it was located off Puerto Rico. Gloria later moved up the East Coast, hit the Outer Banks, and then, as Emanuel remembers, “she followed me home.” He experienced the storm’s weakened remnants when it passed over Massachusetts, inspiring hurricane parties at Harvard.
Gray seemed to have unlimited time to talk when I met with him; Emanuel was busier. Since publishing a now-famous 2005 paper linking observed hurricane intensification to ongoing global warming, he’d been constantly traveling for lectures and fielding media calls. He’d received some press for previous theoretical work on hurricanes—for example, his suggestion that the asteroid impact that killed the dinosaurs may have generated “hypercanes,” runaway super-hurricanes capable of destroying the planet’s ozone layer—but nothing compared to the present feeding frenzy. On the day of our meeting, Emanuel gave me over an hour of his time before taking his leave to join a media conference call: It was May 22, 2006, and NOAA was set to release its first forecast for the coming hurricane season (another active year, NOAA predicted, with thirteen to sixteen named storms). Emanuel’s conference call was designed to counter the position that NOAA was taking—like Gray, the agency was ascribing the recent spate of active hurricane seasons in the Atlantic to a natural up-and-down cycle.
Emanuel has another hypothesis, one he claims better satisfies the demands of “Occam’s razor,” the criterion of parsimony or simplicity of explanation that scientists sometimes invoke to argue why one account trumps another. Emanuel thinks the attempt to explain the twentieth century’s Atlantic hurricane “cycles” by invoking a so-called Atlantic Multidecadal Oscillation or “mode” (presumably tied to the thermohaline circulation) derives from a misreading of data. He argues instead that the mid-century cooling of the Northern Hemisphere, driven by sulfate aerosols produced through air pollution (here Emanuel accepts the climate scientists’ explanation for “global cooling”), also helped to chill the Atlantic enough to suppress many hurricanes. Then, in the mid- to late 1980s, as aerosol cooling diminished, the Atlantic started to warm again under the influence of carbon dioxide, and has been doing so ever since. And the hurricanes have just kept coming.
It’s a grim viewpoint because, contrary to the AMO theory or Gray’s view of a natural balance that ultimately rights itself, it suggests there may not be any reprieve from strong hurricanes a decade or more down the line. Instead, as we keep heating the planet, the Atlantic storm seasons could steadily worsen.
Mentors matter a great deal in science, but the researchers they train don’t always follow in their exact footsteps. Emanuel has been publishing on the relationship between hurricanes and climate for twenty years now, but his initial turn toward the subject came when he decided to criticize the central work on hurricanes published by the man who’d advised him at MIT in the late 1970s—Charney.
As we’ve seen, Charney proposed an influential but flawed model of hurricane formation generally referred to as “Conditional Instability of the Second Kind,” or CISK. More than anyone else, Emanuel is responsible for dismantling it. The debunking occurred after he returned to MIT in the early 1980s, following a three-year stint at UCLA. At MIT, Emanuel occasionally had to teach classes and seminars that involved hurricanes. Just as any other meteorologist at the time would have done, he taught the CISK theory and even published on it. But then something happened: “I realized it couldn’t be right after a while,” Emanuel remembers, “and that really got my attention.”
Although Emanuel’s doctoral thesis had focused on winter storms—like a typical dynamically trained meteorologist, he’d begun his career by largely ignoring the tropics—now he started working on hurricanes. “It often happens this way in science,” he says. “First I developed my own ideas, and then somewhat later discovered that a lot of those ideas really had already been published many, many years earlier.” Emanuel breathed new life into the heat-engine theory that had been propounded by Riehl, Malkus, and Kleinschmidt in the 1950s.
Charney, who had died in 1981, wasn’t around to read Emanuel’s back-to-back papers in 1986 and 1987 that set out to unseat CISK. In its place, Emanuel introduced his own account of hurricanes, according to which air-sea interaction, rather than the spatial organization of thunderstorm growth, constituted the storms’ driving force. In essence, Emanuel presented a dynamically trained meteorologist’s reinterpretation of Riehl’s heat-engine theory. Then he and his coauthor used a numerical model to test whether the new theory produced a realistic hurricane. It did.
At the outset, Emanuel noted that prior modeling studies of hurricanes had, following CISK, presumed an atmosphere characterized by conditional instability. But Emanuel contended that if that’s truly what drives hurricanes, then they ought to form at least weakly over land. On the contrary, Emanuel argued, hurricanes rely upon fluxes of ocean heat for their strength. That doesn’t make their deep cumulonimbus thunderclouds unimportant—but they’re merely the instrument by which hurricanes transport heat from the ocean up into the atmosphere. The release of latent heat in clouds is “important but it’s not causal,” as Emanuel puts it.
In distinguishing between these two conceptions of what drives storms, Emanuel found he could construct realistic hurricanes without initially postulating atmospheric instability (not that it hurt). In this interpretation, hurricanes instead require a “starter,” like an easterly wave, to get going. After that trigger, Emanuel found, the storm’s intensity depended upon the difference in temperature between the sea-surface boundary layer on the one hand, and the freezing outflow region high in the atmosphere on the other. That’s precisely what might be expected in a heat engine, which cycles heat from a warm reservoir to a cooler exhaust area while using it to do work along the way. The greater the temperature differential between these two regions, the more work can be done, and—as “work” in hurricanes means maintaining the storm against friction by driving winds and consequently lowering central pressure—the more powerful the storm can get.
At this point, Emanuel saw his findings had a strong corollary. If you significantly increase the temperature of the initial heat reservoir (the ocean), or if you significantly decrease the temperature of the outflow region (in intense hurricanes, the tropopause or lower stratosphere), you can get much more out of your heat engine. Hurricane intensity will increase. And that quite literally brought global warming into the equation—in this case, an equation Emanuel had derived to describe the “maximum potential intensity” that a hurricane can achieve under various climatic conditions. Within months of the appearance of his second theoretical paper describing hurricanes as heat engines, Emanuel’s first paper on hurricanes and climate appeared in Nature.
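Emanuel’s heat-engine reasoning can be compressed into Carnot-like terms. What follows is a simplified, commonly quoted form of the potential-intensity relation, offered as a sketch rather than the exact expression from Emanuel’s papers:

\[
\varepsilon = \frac{T_s - T_o}{T_s}, \qquad
V_{\max}^{2} \;\approx\; \frac{C_k}{C_D}\,\frac{T_s - T_o}{T_o}\,\bigl(k_s^{*} - k_b\bigr),
\]

where \(T_s\) is the sea-surface temperature and \(T_o\) the outflow temperature (both in kelvins), \(C_k/C_D\) is the ratio of the surface exchange coefficients for enthalpy and momentum, \(k_s^{*}\) is the saturation enthalpy of air in contact with the sea surface, and \(k_b\) is the actual enthalpy of boundary-layer air. Raising \(T_s\) or lowering \(T_o\) increases \(V_{\max}\): the corollary that carried Emanuel into his first Nature study.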
The study used results from the climate model run by James Hansen and colleagues at the NASA-Goddard Institute for Space Studies. For a doubling of CO2, Hansen’s model projected sea-surface temperature increases on the order of 2.3 to 4.8 degrees Celsius in the tropics in August. “The prediction of maximum cyclone intensity [is] crucially dependent on estimates of sea water temperature,” Emanuel noted, in part because rising air will carry more water vapor, and thus more latent heat, if it is warmer. So Emanuel proceeded to calculate that in the world represented by Hansen’s model, the maximum potential wind speeds attainable by hurricanes should increase by about 5 percent for every degree Celsius of ocean warming, with larger corresponding pressure falls and still larger increases in a storm’s “destructive potential” (by as much as 40 to 50 percent at the extreme). The study also suggested the theoretical possibility of staggering storms with central pressures of 800 millibars in the Gulf of Mexico and Bay of Bengal—both regions where land-falling hurricanes had wrought tremendous damage in the past.
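To get a feel for how wind increases of this size compound into much larger jumps in “destructive potential,” assume, as a back-of-envelope gloss rather than Emanuel’s own calculation, that destructive potential scales with the cube of the wind speed, as dissipated power does:

\[
\frac{D'}{D} \;\approx\; \left(\frac{V'}{V}\right)^{3}, \qquad
(1.12)^{3} \approx 1.40, \qquad (1.15)^{3} \approx 1.52 .
\]

On that assumption, wind increases in the low teens of percent, of the sort implied by two or three degrees Celsius of ocean warming at 5 percent per degree, land squarely in the 40 to 50 percent range quoted above.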
In 1970, in what is easily the greatest hurricane tragedy of modern times, a tropical cyclone of unknown intensity plowed up the Bay of Bengal and made landfall at high tide in what was then East Pakistan (now Bangladesh), killing between 300,000 and 500,000 people in a twenty-foot or higher storm surge that flooded many miles inland in the low-lying country and swept up everything in its path. In 1991 came an almost-as-awful repeat. An extremely intense cyclone—known only as “02B”—made landfall in Bangladesh packing 155-mile-per-hour winds and again driving a twenty-foot surge. This storm killed well over 100,000 people and left ten million displaced. Clearly, hurricane vulnerability in much of the world greatly exceeds anything found in the wealthy United States. Emanuel’s hypothesis had particularly large implications for coastally situated developing countries.
Like any responsible scientist, Emanuel included numerous caveats in his 1987 analysis. Climate projections were uncertain, and a hurricane’s maximum potential intensity isn’t the same thing as intensity in the real world, where any number of dynamic factors can squelch a storm’s growth. The study also did not address how global warming might affect the total number of storms—that was a different issue. Emanuel did observe, however, that “there is no obvious reason . . . to suppose that frequencies would be substantially diminished in a climate with doubled CO2.”
This seminal publication on the hurricane-climate connection had a distinctly theoretical rather than empirical character. Emanuel had provided seemingly persuasive reasons to think that hurricanes would intensify under enhanced greenhouse conditions, but he had not produced any data suggesting an actual trend. Not yet. Nevertheless, Emanuel’s work drew modest media coverage when it came out and much additional attention in the years that followed. The paper’s appearance preceded the extremely intense Atlantic hurricanes Gilbert of 1988 and Hugo of 1989, and journalists soon interpreted both storms in the context of Emanuel’s analysis. As the subject of global warming itself became increasingly prominent, they would do the same for other strong Atlantic hurricanes, especially those making landfall.
It didn’t hurt that Emanuel had given journalists an irresistible catchphrase that telegraphed alarm about strong hurricanes: “hypercanes.” Emanuel introduced the concept in a 1988 paper in which he refined the equation he had developed to describe the maximum potential intensity that a hurricane can achieve. For certain very high sea-surface temperatures and/or very low upper-atmospheric temperatures (conditions far exceeding what exists on the planet at the current time), Emanuel found that runaway super-hurricanes might occur. They would be characterized by extremely tall storm columns penetrating high into the stratosphere, finely concentrated eyes, mind-boggling pressure drops, and surface winds of 500 miles per hour or perhaps more.
To get some sense of how calamitous a hypercane would be, consider tornadoes, which can have stronger winds than hurricanes (albeit much more briefly and on a far smaller scale). The colorfully worded Fujita Scale describes an “incredible tornado”—classification F5—as one whose fastest winds reach 261 to 318 miles per hour. Such storms are capable of tossing cars through the air like tennis balls. Yet even those winds would pale in comparison to the supposed 500-mile-per-hour winds of a hypercane.
Later, Emanuel and a group of fellow scientists used a numerical model to simulate this theoretical cataclysm, and suggested that hypercanes might emerge if a large object from space struck the ocean and warmed it dramatically. Perhaps they had even played a role in the extinction of the dinosaurs. After all, the site of the Yucatán asteroid strike 65 million years ago, thought to have triggered that extinction, had been underwater at the time. So Emanuel speculated that the asteroid might have set off hypercanes that in turn would have damaged the Earth’s protective ozone layer by pumping water high into the stratosphere—thus allowing more deadly ultraviolet radiation to reach the planet’s surface and kill off many or most living organisms. It was hard to get more cataclysmic than that. The press loved it.
That journalists made much of Emanuel’s theoretical work on hurricane intensity and climate during the late 1980s reflected the political and social tenor of the times. Along with the atmosphere and the oceans, a battle over climate science was heating up.
During the 1980s, Congressman Al Gore—who’d been impressed when one of his professors at Harvard, Roger Revelle, showed him Charles David Keeling’s curve of ever-increasing greenhouse gas concentrations in the atmosphere—began to hold hearings on the possibility of climate change. Gore sought to translate this esoteric scientific subject into mainstream politics, and to embarrass the anti-environmental Reagan administration in the process. Yet not until the scorching summer of 1988, amid heat waves and droughts, did global warming have its breakout moment in the United States.
During that summer, NASA’s James Hansen came before Congress to testify about climate change. And in a blockbuster statement that Emanuel and many other scientists would later criticize for going too far, Hansen announced his 99 percent certainty that a significant global warming trend had begun, caused by human activities. “It is time to stop waffling so much and say that the evidence is pretty strong that the greenhouse effect is here,” he declared in his testimony. To be sure, Hansen cautioned that no specific heat wave could be directly blamed on global warming. But he added that these were the kinds of events that we might see more frequently in a globally warmed world.
Accurate or otherwise, the apparent linkage between Hansen’s testimony and present-day events affecting real people—including hurricanes Gilbert and Hugo—helped the issue of global warming reach a tipping point. That’s the paradox of public communication on this subject. It often seems as if the only way journalists and advocates can draw attention to climate change is in the context of individual disasters and weather events, such as very intense hurricanes. Yet specific weather events can never be “caused” by a statistically averaged change in global climate over time, even if they’re precisely the kind of events that should grow more common as global warming sets in.
The attempt to explain and translate such nuances to the public and elected leaders only became more fraught in the years following Hansen’s testimony, as politics quickly and predictably infected the science. Those who’d been drumming up concern about global warming also talked about ways of dealing with it, including mandatory restrictions on industrial greenhouse gas emissions. This idea implicated the fortunes of some of the most powerful special interests in the United States: the coal, oil, and automotive industries. Before long, these and other industries organized to combat the global warming forecasters. One of their core weapons involved raising doubts about the validity of the science itself—the same “manufacturing uncertainty” tactic previously employed by many other American industries, ranging from tobacco to lead, when scientific information pointed to adverse consequences from their economic activities.
The first Bush administration, sympathetic to these interests, abetted their attacks on the science. In 1989, Hansen again testified before Congress—but this time, the Bush administration’s Office of Management and Budget altered his testimony, without Hansen’s consent, to weaken his conclusions. It wouldn’t be the last time Hansen, who has been a federal government employee throughout his career, would find a political filter interfering with his ability to state his opinions on climate change.
To prosecute their fight over the emerging science of climate, the fossil fuel interests needed their own scientific arguments, and scientific experts to make them. A group of global warming “skeptics” emerged on the scene, proceeding to dispute either the rising temperature trend itself, the contention that humans had been causing it, or both. Often these “skeptics” aligned themselves, just as Gray had done throughout his career, with a strict adherence to scientific empiricism. After all, much of the concern about global warming derived from projections by global climate models like Hansen’s. So the “skeptics” attacked the models and found recourse in the “data,” arguing that even if temperatures seemed on the rise, a linear extrapolation of the current trend did not show anything like the model-predicted warming (which included amplifying factors such as the water vapor feedback). The skeptics also focused on what they claimed were anomalous satellite and radiosonde data sets, and used them to argue that warming wasn’t happening in the troposphere at the rate predicted by climate models.
These two questions—whether temperatures were actually rising and what might be causing that trend, referred to in the scientific literature as “detection” and “attribution”—became the main battlegrounds in the escalating climate wars of the late 1980s and 1990s. Yet even then, hurricane intensification and other projected consequences of global warming, such as sea-level rise and changing precipitation, constituted a kind of third front (referred to as “impacts”). Emanuel’s calculations on greenhouse hurricanes were regularly cited within the climate community and regarded by many as a plausible outcome of climate change. In 1988, the American Meteorological Society and the University Corporation for Atmospheric Research, a nonprofit research organization in Boulder, Colorado, jointly released a major statement discussing potential climate risks, which included “a higher frequency and greater intensity of hurricanes.” With mainstream scientific organizations making such pronouncements, the contrarians, or “skeptics,” were quick to attack the evidence for any number of hypothesized global-warming impacts, including hurricane intensification. Once again, their strategy involved raising doubts about speculative models or theories and prizing hard data.
The fight burst into the open following 1992’s Hurricane Andrew, the small but extremely powerful storm that devastated southeastern Florida, destroying 25,000 homes and setting a then-record for economic losses from a single hurricane in the United States, on the order of $26.5 billion (in 1992 dollars). Delivering the worst of its punishment to Dade County, Andrew tore through the backyards of many of the nation’s hurricane experts, at least one of whom lived through a hellacious night of shrieking winds and later told the tale. The National Hurricane Center, then located in Coral Gables, measured wind gusts of over 160 miles per hour from the roof of the building, and that wasn’t even in the eye wall. The storm knocked out the radar antenna on the center’s roof and tossed around cars in its parking lot, but that’s nothing compared to what it did to neighborhoods to the south. For years Andrew was classified as a Category 4 hurricane at landfall, but a 2004 reanalysis led by Gray’s student Chris Landsea determined that maximum sustained winds when the storm hit southeastern Florida near Fender Point had been nearly 170 miles per hour. Andrew thus officially became the first Category 5 hurricane to strike the United States since Camille.
Following this incredible storm, a prominent article in Newsweek asked, “Was Andrew a Freak—or a Preview of Things to Come?” It quoted Emanuel. Such murmurings drew out Gray with one of his characteristic denunciations: “People who say, whenever there is an intense storm, that this indicates global warming, they don’t know what they’re talking about.” Gray was right: No one, certainly not Emanuel, could argue that one strong storm proves the influence of global warming. Individual storms respond to their immediate environments, and strong storms had been observed in the past. However, Emanuel certainly did contend that over time, the average hurricane would grow more intense due to average changes in those environments.
Gray was hardly the only critic, and he personally wasn’t linked to private industry. Other “skeptics,” however, appeared more closely entangled. In an opinion article published in the Washington Times following Andrew, the global warming contrarian Patrick Michaels of the University of Virginia—who would later be shown to have received substantial funding from energy interests—also took issue with Emanuel’s work, claiming it “flies in the face of what has been observed in the twentieth century.” Like Gray, Michaels often criticized models and prized “data,” so it came as no surprise that he called Emanuel’s calculations “of more theoretical importance than practical significance.” Later, in another commentary article for the Washington Times, Michaels dubbed Emanuel’s work “merely an exercise in hurricane vortex mathematics.”
When the Atlantic kicked back into an active phase in 1995, generating nineteen named storms and eleven hurricanes, the hurricane-climate battle intensified. Fittingly, for the peak months of August through October that year, sea surface temperatures in the Atlantic’s main hurricane development region—from 10 to 20 degrees North latitude and from 20 to 60 degrees West longitude, or stretching from the coast of Africa to the eastern edge of the Caribbean—had been at their warmest since records began in 1865.
Once again, the scientific debate wound up being amplified and sharpened by media coverage—which, in turn, responded to the 1995 hurricane season and then folded global warming into the story line. Gray quickly sounded a skeptical note. “I don’t believe that this hurricane season is a result of anything to do with global warming,” he told the Associated Press. But Gray wasn’t the only scientist talking to the media. In an interview with the Houston Chronicle, climatologist Kevin Trenberth of the National Center for Atmospheric Research delivered a partial rebuttal to Gray’s arguments. In particular, Trenberth cited the effect that global warming could have on hurricane rainfall. Because warmer air can hold more water vapor, stronger precipitation theoretically should occur in storms as the planet heats up. And that, of course, could worsen one key cause of hurricane damage: extensive flooding and destructive landslides. In Trenberth, Emanuel seemed to have found an ally.
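Trenberth’s water-vapor point is usually quantified with the Clausius-Clapeyron relation; the figure below is a standard textbook value rather than anything attributed to him here. Near typical surface temperatures, saturation vapor pressure rises roughly 6 to 7 percent for each degree Celsius of warming:

\[
\frac{1}{e_s}\frac{de_s}{dT} \;=\; \frac{L_v}{R_v T^{2}}
\;\approx\; \frac{2.5\times10^{6}}{461 \times (288)^{2}}
\;\approx\; 0.065\ \mathrm{K}^{-1},
\]

where \(e_s\) is the saturation vapor pressure, \(L_v\) the latent heat of vaporization, and \(R_v\) the gas constant for water vapor.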
Meanwhile, Gray and Michaels were joined by conservative think tanks, which helped battle suggestions of a hurricane-climate linkage. In 1997 the Competitive Enterprise Institute—a recipient of considerable funding from oil giant ExxonMobil over the years, although that funding has apparently ceased more recently—published a report entitled Calmer Weather: The Spin on Greenhouse Hurricanes, by climate contrarian Robert C. Balling, Jr., of Arizona State University. “Blaming hurricanes on recent warming is flawed on all fronts—not only is there little to no linkage between global warming and hurricane activity, but there seems to have been no warming in recent decades either,” Balling wrote.
Traditional weather forecasters also joined the global warming fight during the 1990s, and denounced the new climate science as it continued to build momentum. In 1995, the UN’s Intergovernmental Panel on Climate Change famously declared that the “balance of evidence” suggested that the human impact on the climate system had already become discernible in rising temperatures. Then, in 1997, came the Kyoto Protocol, a treaty designed to put the world on the path to mandatory restrictions on greenhouse gas emissions. To promote Kyoto, the Clinton administration appealed to TV weathercasters to help explain global warming to their audiences. The entreaty backfired: Many forecasters publicly signed a global warming “skeptic” document called the Leipzig Declaration, which stated, “There does not exist today a general scientific consensus about the importance of greenhouse warming from rising levels of carbon dioxide. In fact, many climate specialists now agree that actual observations from weather satellites show no global warming whatsoever—in direct contradiction to computer model results.”
Among the Leipzig signatories was Neil Frank, former director of NOAA’s National Hurricane Center (a post he held for thirteen years) and now chief meteorologist for Houston’s KHOU television station. A leading light of the hurricane community as well as a strong empiricist like Gray, Frank has often denounced fears of human-caused global warming and what he regards as the flawed computer models upon which those fears are based. Along with Gray, Frank attacked global warming in a speech delivered at the 1998 National Hurricane Conference meeting held in Norfolk, Virginia, contradicting Clinton administration scientists on hand who supported the scientific consensus on climate change. “On numerical models that I can’t put faith in for a three-day forecast, we’re being asked to simplify and run out [predictions] for two centuries,” Frank charged. Gray, meanwhile, went after the Clinton administration’s NOAA for not funding his research: “They don’t want to give money to people who have a different approach besides climate modeling,” he declared.
Why did so many weather forecasters so distrust the theory of global warming? First, TV meteorologists don’t necessarily have much theoretical training; their job is to communicate to the public, not to solve equations or conduct cutting-edge research in an academic context. They build their careers around the practical problem of predicting weather day in and day out. They’re focused on the immediate, not the long term. And all they’ve seen from the weather is change, change, and more change, making them understandably suspicious of claims about long-term trends.
While the influence of fossil-fuel interests may have had much to do with the chorus of global-warming “skepticism” that emerged in the 1990s, then, it hardly constituted the only factor contributing to a gathering political and scientific storm. Inertia among members of the more traditional weather establishment also drove a disciplinary divide among meteorologists. Correspondingly, an anti-global warming constituency emerged in the hurricane-forecasting community. “Tropical meteorologists were probably the last informed people in the atmospheric sciences to take global warming seriously,” observes Hugh Willoughby. And some, like Gray, still don’t. No wonder Neil Frank’s attacks on global warming at the 1998 National Hurricane Conference were well received by the audience and triggered considerable applause.
The media debates over hurricanes and global warming during the 1990s were highly episodic in nature. A big storm appeared somewhere in the Atlantic basin, or struck the United States. Or a very active season occurred. So a journalist called up scientists with different views to get their opinions, and that was about it. Until the next significant hurricane event, anyway.
Within the professional scientific literature, however, a more nuanced dialogue developed, one in which Emanuel had his critics as well as his supporters. In this more rarefied debate, it became clear that while Emanuel had not yet won everyone over to his position, he had inarguably (and pretty much single-handedly) put the hurricane-climate issue on the map, and generated considerable follow-up work as researchers attacked the topic from a variety of angles.
The strongest criticisms came from Gray and his supporters. While acknowledging the importance of sea-surface temperatures to hurricane strength, these scientists repeatedly pointed out that hurricanes depend upon a number of other factors as well, many of which had been highlighted by Gray over the course of his career. Thus much of the criticism of Emanuel amounted to citing Gray’s global-genesis parameters, or his findings about regional controls on hurricanes, and arguing that Emanuel’s emphasis on ocean heat represented too simplistic an approach in light of all these other factors.
Such arguments were epitomized by a 1994 paper in which Gray and seven other scientists—including his pupils Landsea and Holland (the latter not yet a hurricane-climate convert)—published their own take on hurricanes and global warming. Calling Emanuel’s approach a “worst case thermodynamic study,” the scientists argued that the postulated effect of increasing sea-surface temperatures upon hurricane intensification (and regions of formation) would be offset in a variety of ways. For example, stronger hurricanes would stir up more cool water from the ocean depths and thereby sap their own strength more than weaker storms. “Even though the possibility of some minor effects of global warming on [tropical cyclone] frequency and intensity cannot be excluded, they must effectively be ‘swamped’ by large natural variability,” the critics concluded.
Gray and coauthors also dismissed entirely the notion that possible changes to hurricanes could be studied in global-climate models. In so doing, they pooh-poohed a trajectory of research dating back to 1970, when Syukuro Manabe of the Geophysical Fluid Dynamics Laboratory had first detected tropical disturbances in such a model. Following Emanuel’s first publication on hurricanes and climate, this strand of research had evolved into a series of GCM studies to see how hurricanes might change as atmospheric concentrations of CO2 increased. That included not only whether the storms would intensify but also whether their total numbers would go up.
The results were decidedly mixed. Perhaps the most revealing analysis came in 1990 from Manabe and fellow GFDL modeler Anthony Broccoli, who started out expecting to see hurricanes sprouting up “like mushrooms” in their model once they cranked up the CO2 levels, as Manabe remembers. Yet instead, Manabe and Broccoli found that depending upon how the model treated clouds, it produced either increases or decreases in a combined measure of the number and duration of storms. Still, the scientists concluded that climate models were “appropriate tools” for further research on the issue, because modeled storms generally appeared in the right ocean basins and looked more like real hurricanes as model resolution increased. So the GCM research continued, although it produced a scattershot of results, ranging from more and stronger storms for doubled CO2 to substantially fewer (although possibly stronger) storms.
The hurricane empiricists, however, would have none of it, questioning whether these computerized eddies could be safely analogized to storms in nature. Each time a new GCM study that purported to detect hurricane-like storms appeared in a journal, their critical letters seemed to follow. The modelers, in turn, struck back with just as cutting a criticism: Gray’s empirically derived genesis parameters certainly weren’t any better than models for studying hurricanes in future climates. In fact, they were probably worse. The empirical relationships Gray had uncovered worked for the present climate, but “there is no a priori way of knowing how well they would govern tropical cyclogenesis in a different climate,” the modelers noted.
By the mid- to late 1990s, then, a modest debate had begun to brew over hurricanes and global warming within the professional scientific literature. Emanuel had started it, but he certainly hadn’t won it. Instead, the scientific process—working just as it so often does—had churned out a small literature of publications that, when surveyed comprehensively, amounted to a collective shoulder shrug from the experts.
The unsettled state of knowledge came across in the 1995 report of the Intergovernmental Panel on Climate Change, which stated: “It is not possible to say whether the frequency, area of occurrence, time of occurrence, mean intensity or maximum intensity of tropical cyclones will change.” Following this report, the World Meteorological Organization pulled together a group of experts to outline in more detail what was known about the subject and what wasn’t, and to analyze new results. They included Emanuel, Gray, Landsea, Holland, and a number of other scientists, among them Australian tropical meteorologist Peter Webster, a longtime friend of Holland who would have much to contribute to the hurricane-climate debate in later years.
Reviewing the science, this group made the following points: First, there did not appear to be any trends in hurricane numbers, intensity, or regions of formation. Second, neither Gray’s empirically derived set of genesis parameters nor the current generation of global-climate models seemed up to the task of studying hurricanes in future climates. Nevertheless, while there was no reason to think the regions of hurricane formation would grow for a doubling of CO2, Emanuel’s maximum potential intensity theory (and a similar theory Holland had just published) suggested that storm intensity would increase by 10 to 20 percent when measured by the fall in central pressure (maximum wind speed increases would be smaller, on the order of 5 to 10 percent). Yet various unknowns—such as ocean spray effects, then considered a negative rather than a positive influence on storm strength, or possible atmospheric stabilization from increased warming at upper levels—seemed likely to cut into that potential intensification by an unknown amount. The upshot: Some change in tropical cyclones might occur, but it wouldn’t be much, and would be far in the future.
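Incidentally, the two ranges are mutually consistent under a standard rule of thumb, noted here as a gloss rather than anything stated in the assessment itself: in a balanced vortex the maximum wind scales roughly with the square root of the central pressure fall, so a 10 to 20 percent increase in the pressure fall translates into about a 5 to 10 percent increase in wind speed.

\[
V_{\max} \propto \sqrt{\Delta p}
\quad\Longrightarrow\quad
\frac{V'_{\max}}{V_{\max}} = \sqrt{\frac{\Delta p'}{\Delta p}}\,;
\qquad \sqrt{1.10} \approx 1.05, \quad \sqrt{1.20} \approx 1.10 .
\]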
So went the consensus as of 1998. It downplayed any dramatic impact of global warming on hurricanes, without ruling out some possible effect. The extent to which each individual scientist agreed with these conclusions remains unclear, however, especially in the case of Gray. While Gray had his name on the study, it took at least one position he rejected outright. It accepted unquestioningly the 1995 IPCC position that according to the “balance of evidence,” humans were causing global warming.
One very important study came out around the time of the consensus analysis, yet apparently not soon enough to be cited by it. Amid all the criticism of low-resolution modeling studies, the Geophysical Fluid Dynamics Laboratory scientists had devised a new angle of approach. Rather than using a global-climate model, this time they used the lab’s much more highly resolved regional hurricane tracking and prediction model—the same one the National Hurricane Center had been employing since 1995 to determine where storms will go. Into the hurricane model the scientists imported 51 Northwest Pacific storms from a GCM, and then 51 more Northwest Pacific storms from a GCM simulation run with increased levels of carbon dioxide, in which sea-surface temperatures were about 2.2 degrees Celsius higher. Upon running the higher-resolution hurricane model, they found that the storms from the CO2-heated climate had 5 to 12 percent faster wind speeds on average. Central pressures dropped by an additional 7 to 20 millibars. This result, the study noted, dovetailed nicely with the maximum potential intensity theories of Emanuel and Holland.
The new GFDL study, which broke ground both by its high resolution and by its sole focus on storm intensity rather than storm numbers, had a lead author named Thomas Knutson. It represented the first of a series of modeling studies by Knutson and his colleagues focused on how hurricanes might change under global warming scenarios. No wonder, then, that Knutson would have much more to say about hurricanes and climate in subsequent years—for some, perhaps too much. In 2006, much like James Hansen, he would allege that officials at NOAA had constrained his ability to participate in media interviews on the subject.
Whatever the role of climate change, understanding hurricane intensification itself represented much more than a matter of idle intellectual interest. By the mid- to late 1990s, dynamical models like that of GFDL had shown impressive skill when it came to projecting the paths that hurricanes would take, and thus where they might make landfall. However, modeling attempts to predict either hurricane intensification or weakening lagged far behind, and that was deeply troubling. The hurricane forecasters had a nightmare scenario, one with a firm basis in the history of certain notorious storms: a Category 1 hurricane suddenly and unexpectedly strengthens into a Category 4 or 5 killer just before making landfall. Then it hits a populated area that knows a storm is coming but has been led to expect a weak one. So most people haven’t evacuated—and now it’s too late.
Accordingly, a better understanding of why hurricanes intensify, whether under global-warming conditions or otherwise, became a key focus of research for many scientists. Emanuel led the way, designing a simple atmosphere-ocean model to attack the problem. The gigantic 1999 Hurricane Floyd also increased the impetus to understand storm intensity. Close to 600 miles in diameter, Floyd very nearly reached Category 5 strength as it approached the Bahamas, which meant that unlike many other storms, it had come extremely close to achieving its full potential.
Why did this happen for Floyd but not for every hurricane? In a 2000 statistical analysis of real-world storm intensities, Emanuel showed that every storm that becomes a hurricane has an equal probability of attaining “any given intensity, up to but not beyond its potential intensity.” So if global warming did indeed increase the maximum potential intensity of hurricanes by 10 to 20 percent, as his theory predicted, Emanuel wrote that “the wind speeds of real events would, on average, rise by the same percentage.” Given enough time, that ought to be detectable in storm statistics.
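A stylized version of that statistical argument, in my notation rather than the paper’s: if each hurricane’s lifetime-maximum wind \(V\), normalized by its potential intensity \(V_p\), is uniformly distributed, then every statistic of real-world intensity scales in direct proportion to \(V_p\).

\[
\frac{V}{V_p} \sim \mathrm{Uniform}(v_0,\,1)
\quad\Longrightarrow\quad
\mathbb{E}[V] = \frac{1 + v_0}{2}\,V_p ,
\]

so a 10 to 20 percent rise in potential intensity lifts the average (and every quantile) of actual storm intensity by the same 10 to 20 percent.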
Gray, meanwhile, criticized Emanuel’s hurricane intensity prediction model, which had a strong thermodynamic emphasis but which Gray called “too simplified.” This, in turn, set the stage for a rollicking 2000 debate between Gray and Emanuel at the American Meteorological Society’s biennial conference on Hurricanes and Tropical Meteorology, held that year in Fort Lauderdale. The question posed by the debate did not explicitly involve global warming. Instead, the two scientists squared off over what controls hurricane intensity, with Emanuel focusing more on thermodynamic factors—particularly ocean temperatures—and Gray upon dynamic ones such as vertical wind shear. As neither scientist fully disputed the importance of the factors highlighted by the other, the debate came down to a matter of emphasis—or at least, it should have.
In the recollection of many scientists present, however, it wound up being something of a circus. Gray called Emanuel’s attention to thermodynamics a “fixation.” He said Emanuel was “playing games.” He even likened his fellow scientist to a salesman. “He could sell ice cubes to the Eskimos and steam heat to the Amazonians,” Gray declared at one point. The audience chuckled, but this is not how scientific debates are generally expected to go down. “It was sort of like a Kerry Emanuel roast, making fun of me personally,” Emanuel remembers. “It was a humorous sort of thing that didn’t bring in any science.”
Over the course of the coming decade, however, Gray’s gags and one-liners would come to seem less entertaining to Emanuel and his supporters, especially in the context of unprecedented hurricane damage in 2004 and 2005, years that dramatically raised the stakes in the hurricane-climate debate and brought to it a new vigor and passion. In the process, the 1998 consensus came unraveled. Numerous scientific and personal realignments quickly followed, even as important new voices dove into the fray.
Gray and Emanuel continued to serve as figureheads for the different sides—but the two would not debate each other publicly again.