3
At the time when Riehl conducted his most important research, understanding hurricanes constituted a major national priority. The 1950s and 1960s continued a very active Atlantic hurricane era that began in 1926. Just a few of the more notable storms to affect the United States during this period included the San Felipe/Okeechobee Hurricane of 1928 (which killed 1,500 people in Puerto Rico and several thousand more during an inland storm surge in southern Florida), the Great New England Hurricane of 1938 (which killed hundreds and flooded Providence, Rhode Island, under twenty feet of water), 1965’s Hurricane Betsy (which hit New Orleans as a Category 3 storm and led to the creation of the levee system that failed in Hurricane Katrina), and finally, 1969’s Camille—one of only three storms in recorded history to strike the United States at full Category 5 intensity.
Camille’s maximum surface winds over the Gulf of Mexico were estimated to have exceeded 200 miles per hour; the day before landfall, a reconnaissance aircraft measured the storm’s minimum central pressure at 905 millibars (26.73 inches). Retrospective analyses suggest Camille’s relentless intensification is explainable only if the storm tracked straight up the Loop Current, a deep pulse of warm water that circles through the Gulf at varying locations and intensities at different times of the year. The storm blitzed the Mississippi coast just before midnight on August 17, killing 150 people and propelling a storm surge of well over twenty feet at Pass Christian. “The old antebellum residences, which had stood in grandeur along the Mississippi coastline from Pass Christian to Biloxi and had withstood the ravages of many hurricanes for more than a hundred years, had been totally or substantially destroyed, with few exceptions,” wrote then National Hurricane Center director Robert Simpson. In the Pass Christian area, “houses had been swept entirely off their foundations and splintered into unrecognizable small pieces, characteristic of the wind damage ordinarily associated with major tornadoes.” Camille then blew inland and dumped rain measured in feet, not inches, over Virginia’s Blue Ridge Mountains, causing flash flooding and landslides that led to more than 150 additional deaths.
In the context of such destruction, society depended heavily upon scientists like Simpson and Riehl, who flew into these killer storms to study and track them. Yet even during an active hurricane era, tropical meteorology remained outside the meteorological mainstream. “It was not that popular, even at the University of Chicago,” remembers Riehl student T. N. Krishnamurti, now a meteorologist at Florida State University. “It was a side issue. Most people were still worried about North American weather.”
And not just weather itself. Even as Riehl and colleagues flew into storms, most meteorologists were moving in a different direction, one rooted much more deeply in theory and mathematics than data-gathering. They aimed to reformulate weather forecasting on the basis of the laws of physics, which in turn required understanding the large-scale dynamics of the atmosphere—the central equations governing its fluid flow. These “dynamical meteorologists” sought to simplify the equations so that they could be plugged into early computers and used to predict weather. They weren’t conducting aircraft reconnaissance; they were far too busy writing code.
The theoreticians who provided the greatest insights into dynamical meteorology became the central leaders of the field, the scientists everyone else admired and wanted to follow. Perhaps the most distinguished of them all was Jule Gregory Charney, who would become one of the most famous meteorologists of the twentieth century and chair the meteorology department at MIT. Not only did Charney train today’s central hurricane-climate theorist, Kerry Emanuel, but he also helped to found the tradition of weather and climate modeling—now centered, among other places, at Princeton’s Geophysical Fluid Dynamics Laboratory, a branch of NOAA—that has bolstered concerns about the effect of global warming on hurricanes and about global warming in general.
Despite dramatic differences in the stature each would eventually attain, Charney’s early career overlapped with Riehl’s. He grew up in Los Angeles and showed a strong mathematical aptitude, teaching himself calculus as a teenager. Then he attended UCLA, another major early hub of American meteorology. Soon Charney, like Riehl, found himself giving military meteorologists a crash course in synoptic forecasting techniques. Ironically, Charney loathed the drudgery of drawing isobars on charts. For a mathematically inclined thinker like him, such exercises were a “chore” and a “total waste of time.” Weather map analysis was the only UCLA meteorology course in which he didn’t get an A, and he would later describe the map-based extrapolation techniques of synoptic meteorology as far too subjective in nature—an art, perhaps, but not a science. But during wartime everything hinged upon the immediate and the practical, so Charney pitched in.
After the war, however, Charney demonstrated what mathematical tools—and a theoretical style of thinking that stripped complex problems down to their essential components—could do for his field. For his doctoral thesis, he ambitiously set out to solve what he called the “haute problème of meteorology”—the longstanding question of how extra-tropical cyclones (not hurricanes) originate. The thesis developed, mathematically, a tremendously influential concept now referred to as “baroclinic instability,” which explains the origins of extra-tropical cyclones on a rotating planet by showing how the westerly winds of the mid-latitudes become a “seat of constant instability” due to their increasing speed with altitude and the temperature differences to their north and south. The latter arise inevitably from the planet’s differential heating by the sun: The equatorial regions receive much more solar energy than the poles. The huge cyclonic eddies of the mid-latitudes are thus the atmosphere’s way of mixing together air of different temperatures and redistributing heat pole-ward. That means they’re critically locked in to the climate system—indeed, fundamental to it.
In this early work, Charney maintained meteorology’s selective emphasis on extra-tropical (specifically North American and European) weather. The bias was embedded in the very analysis of baroclinic instability. In his unending quest to pare down meteorological problems to their core elements, Charney privileged dynamic over thermodynamic thinking in his famous thesis and deliberately ignored latent heat release from condensation. No scientist concerned with the tropics could have safely made a similar simplification—but most meteorologists were not so concerned, and baroclinic instability quickly became a dominant paradigm.
Charney’s explanation of baroclinic instability helped usher in the age of numerical weather prediction, or numerical modeling, in which computers—today they are invariably supercomputers—forecast future weather and climate states by starting with observations from nature and then solving the equations governing motion (which include the Coriolis force), conservation of energy, the behavior of gases including water vapor, and other fundamental attributes of the atmosphere (or, in “coupled” models, the ocean-atmosphere system). Numerical models essentially divide the atmosphere into sections, or cells, of a grid, solve the relevant equations for each section, and calculate how the different parts interact with one another. The higher the model’s “resolution,” the more sections there will be and thus the more calculations, which means that more computer power is necessary to run the model.
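The grid idea can be boiled down to a toy sketch, offered purely as an illustration: a single field (temperature) is carried around a one-dimensional ring of grid cells by a constant wind. The numbers, the field, and the simple numerical scheme are all invented here, and vastly simpler than anything in a real model, which solves the full set of equations in three dimensions.

```python
# A toy "one-field, one-dimensional" model, purely to illustrate the grid idea:
# a warm blob of air is carried around a ring of grid cells by a constant wind,
# using a simple upwind finite-difference scheme. All numbers are invented.
import numpy as np

def step(temp, wind_speed, dx, dt):
    """Advance the temperature field one time step; each cell is updated from
    its own value and the value of its upwind neighbor."""
    upwind = np.roll(temp, 1)
    return temp - wind_speed * dt / dx * (temp - upwind)

n_cells, dx = 100, 10_000.0      # 100 cells, each 10 km wide (higher resolution = more cells)
wind_speed, dt = 10.0, 600.0     # 10 m/s wind, 10-minute time step
cells = np.arange(n_cells)
temp = 15.0 + 10.0 * np.exp(-((cells - 30) ** 2) / 50.0)  # a warm blob near cell 30

for _ in range(144):             # one simulated day of 10-minute steps
    temp = step(temp, wind_speed, dx, dt)

print("warm blob has drifted to around cell", int(np.argmax(temp)))
```

Doubling the number of cells in such a sketch doubles the work per step and, because the time step must usually shrink as well, more than doubles the total computation, which is the basic reason resolution is so expensive.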
There’s a long history to the notion that if scientists had perfect knowledge of the state of the atmosphere and the equations governing it, they could predict weather and climate far into the future. In 1904 the Norwegian Vilhelm Bjerknes proposed precisely this idea: Get together enough observations about the atmosphere’s initial state, and then solve all the equations to get a forecast. It was a bold new vision for meteorology, one that would, if realized, make the field a much more direct extension of physics. Meteorology, Bjerknes wrote, would become an “exact science.” But in the early 1900s that was merely a distant dream. Another scientist, Lewis Fry Richardson, had devised a similar plan and spent six weeks trying to calculate six hours of weather over Europe, only to come out with a result that was dramatically off base. Still, the hope of calculating the weather remained alive. Finally in the early 1950s, two scientists more than any others made it happen: Charney and the mathematician and computing pioneer John von Neumann, both based at Princeton’s Institute for Advanced Study at the time.
In 1946 von Neumann had issued a proposal to launch an “investigation of the theory of dynamic meteorology in order to make it accessible to high-speed, electronic, digital, automatic computing, of a type which is beginning to be available and which is likely to be increasingly available in the future.” Von Neumann had the vision, but Charney’s participation in the project was crucial. With his genius for reducing the atmosphere’s behavior to its most essential processes, he set to work extending the process of dynamic simplification that he had begun in his baroclinic instability paper, filtering out factors, such as fast-moving sound and gravity waves, that were less essential to forecasting the weather. Ultimately, Charney honed his equations down enough to be used in early computers. And so in 1950 the team of Princeton scientists traveled to Maryland to run the first numerical weather prediction using the famous ENIAC (short for Electronic Numerical Integrator and Computer), a hulking machine that used punch cards, malfunctioned frequently, and had vastly less processing power than today’s PCs. The era of numerical modeling had begun, and it would transform meteorology forever, making purely data-driven forms of research and forecasting a thing of the past.
During the heady days of the 1950s, some speculated that it might be possible to perfectly predict weather years into the future. But the modelers would soon be humbled by the 1961 discovery of chaos, or the so-called “butterfly effect,” by Edward Lorenz, Charney’s colleague at MIT. There’s a limit to how accurate future weather prediction can be, Lorenz realized, because tiny differences in a model’s initial description of the state of the atmosphere can have a large impact on the forecast. As a consequence, it would never be possible to reliably predict the weather beyond about a week or two at most. Weather predictions within this range, however, have become increasingly accurate over time, as computers have grown more powerful and the equations contained in the models more comprehensive.
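Lorenz’s point is easy to reproduce with his own drastically simplified system of three equations. The sketch below uses invented starting values and a crude integration scheme, and is meant only to show the qualitative effect: a perturbation of one part in a billion eventually grows to the size of the whole solution.

```python
# Two runs of Lorenz's simplified convection equations, differing by one part
# in a billion in the starting state. The parameters are Lorenz's classic ones;
# a crude Euler integration is enough to show the qualitative behavior.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 20.0])
b = a + np.array([1e-9, 0.0, 0.0])   # the "butterfly": a 0.000000001 nudge

for n in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if n % 1000 == 0:
        print(f"after {n} steps the two runs differ by {np.linalg.norm(a - b):.2e}")
# The difference climbs from about 1e-9 to the size of the solution itself,
# which is why forecast skill runs out after a week or two.
```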
In the late 1950s, scientists began attempting to use numerical models to track the paths of hurricanes. It would not be until the 1990s, however, that the hurricane model run by NOAA’s Geophysical Fluid Dynamics Laboratory could project storm paths more accurately than statistical models, which employed a variety of techniques, including comparing a present storm with previous historical cases to determine the best analogue, and then using that analogue as the basis for the forecast. This was a significant breakthrough: Scientists’ dynamic understanding of hurricanes, based upon equations and executed through a computer, now provided the best means of forecasting where a storm would go.
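The analogue approach lends itself to a minimal sketch. The historical cases, the choice of predictors, and the single-nearest-neighbor rule below are all invented for illustration; operational statistical schemes blended climatology, persistence, and regression on many predictors rather than copying one past case.

```python
# A minimal nearest-neighbor "analogue" forecast. The historical cases, the
# predictors (position, heading, speed), and the single-best-match rule are
# all invented for illustration.
import numpy as np

# Each row: lat, lon, heading (deg), speed (kt), then the observed 24-hour
# change in position (dlat, dlon) that followed.
history = np.array([
    [23.0, -75.0, 300.0, 10.0,  1.8, -3.0],
    [25.5, -80.0, 320.0,  8.0,  2.4, -1.9],
    [27.0, -78.5, 340.0, 12.0,  3.1, -1.0],
])

def analogue_forecast(current, history):
    """Find the most similar past case and reuse its observed 24-hour motion."""
    predictors, outcomes = history[:, :4], history[:, 4:]
    best = np.argmin(np.linalg.norm(predictors - current, axis=1))
    return current[:2] + outcomes[best]   # forecast position = current + analogue's motion

current_storm = np.array([24.8, -79.0, 330.0, 9.0])   # lat, lon, heading, speed
print("24-hour analogue forecast position:", analogue_forecast(current_storm, history))
```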
Today, when you’re watching the Weather Channel during hurricane season, the familiar white cone projecting where a storm may travel within the next seventy-two hours is based upon the combined outputs from a variety of dynamical forecasting models (plus the judgments of National Hurricane Center experts who analyze those model outputs). This is known as an “ensemble” forecast. When forecasters employ multiple models to track the same hurricane, they get a far better sense of the range of possibilities for where it might end up—and accordingly, for which areas should be evacuated.
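How combining models sharpens the picture can be shown with a toy calculation. The three “model” positions below are invented, and real ensemble products involve many more members and far more sophisticated weighting; the point is simply that agreement among models means a tighter cone, and disagreement a wider one.

```python
# Combining several model forecasts for the same storm into a mean position and
# a measure of spread. The three "model" positions are invented.
import numpy as np

# Forecast positions (lat, lon) at +72 hours from three hypothetical models.
model_positions = np.array([
    [29.5, -89.0],
    [30.2, -87.5],
    [28.9, -90.4],
])

ensemble_mean = model_positions.mean(axis=0)
spread = model_positions.std(axis=0)   # larger spread = less agreement = a wider cone
print("ensemble mean position:", ensemble_mean)
print("spread (degrees lat, lon):", spread)
```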
To Charney, computer modeling represented a grand merger of theory and observation. It used equations to explain the behavior of the atmosphere as well as hard data to determine what values to plug into those equations. As Charney wrote in a 1972 essay: “When a computer simulation successfully synthesizes a number of theoretically-predicted phenomena and is in accord with reality, it validates both itself and the theories—just as the birth of a child who resembles a paternal grandfather legitimizes both itself and its father.” In other words, models must be tested against reality, and judged by that standard.
Charney noted in the same essay, however, that in the absence of a complete theoretical understanding, models themselves become the source of experiments: Vary the equations or other aspects of the model, plug the data back in, and see how much closer to reality you can get. But from the perspective of some scientific empiricists, this seemed an ungrounded and even suspicious way of doing things. “Tuning” the models to make them line up better with observations sounded like rigging the game.
Charney envisioned a variety of other uses for computer models. They could be used to study phenomena on different time scales, from short-range weather to long-range climate, as well as on different spatial scales—from the general circulation of the atmosphere down to individual types of storms, such as hurricanes. In fact, a decade or so after unleashing numerical weather prediction upon the world, Charney sought to carry his simplifying approach over into hurricane science, which had theoretical aspects of its own but had fundamentally been driven by the influx of new data from radar, upper-air measurements, and storm-flying. The resultant clash between different views of storms, while less contentious than the earlier American Storm Controversy, has many parallels to it and continued to create friction among meteorologists into the 1990s.
As for so many other scientists, Charney’s interest in hurricanes arose from direct experience—not in the air in his case, but on the ground. In 1954 he spent the summer as an associate lecturer at the Woods Hole Oceanographic Institution on Cape Cod, Massachusetts. It was the same summer that hurricanes Carol and Edna bore down on New England. Both storms gave the Woods Hole area, which is wedged between Buzzards Bay and Nantucket Sound at the southwest corner of the Cape, a thrashing. “A tree fell on our car, electricity was shut off, and I was very impressed by Hurricane Carol,” Charney later recalled. He directly attributed his later work on hurricanes to this experience: “I think, in my life, there have always been incidents which sort of set me off on something.” Later, Charney traveled down to Florida to visit with the scientists involved in the National Hurricane Research Project.
Charney’s heavily mathematical theory of how tropical cyclones originate—known as “Conditional Instability of the Second Kind,” or CISK—was first published in a 1964 paper entitled “On the Growth of the Hurricane Depression,” coauthored by the Norwegian dynamicist Arnt Eliassen, who had also worked on the Princeton meteorology project. Another dynamicist based at New York University named Vic Ooyama had influential early discussions with the two and later came up with his own version of what’s sometimes called CISK, but has criticized Charney’s account.
CISK sought to solve a problem that had arisen during early failed attempts to simulate hurricane formation in simple mathematical models. The theoreticians behind these studies postulated a state of “conditional instability”—in other words, an atmosphere very conducive to convection and thus thunderstorm formation because it cooled steadily with elevation. Because air will rise as long as it remains warmer than its surroundings, such a vertical structure will encourage convective updrafts and the release of latent heat in clouds. And sure enough, in response to the unstable atmosphere, the simple models produced clouds and thunderstorms. But these were merely the building blocks of hurricanes, not hurricanes themselves.
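The “rise while warmer than the surroundings” rule at the heart of conditional instability can be put into a small worked sketch. The lapse rates and the idealized tropopause here are round numbers chosen for illustration, not data from any real sounding.

```python
# "Air rises as long as it stays warmer than its surroundings," in numbers.
# The environment cools 7 degC per km up to an idealized 12-km tropopause and is
# isothermal above; a saturated parcel, warmed by condensation, cools only 6 degC/km.
def env_temperature(z_km, surface=28.0, lapse=7.0, tropopause=12.0):
    return surface - lapse * min(z_km, tropopause)

surface, parcel_lapse = 28.0, 6.0
z = 0.0
while surface - parcel_lapse * z >= env_temperature(z) and z < 20.0:
    z += 0.1   # climb in 100-meter increments while the parcel is still buoyant

print(f"parcel remains warmer than its surroundings up to roughly {z:.1f} km")
```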
So CISK sought to explain why the tropical atmosphere sometimes releases energy through the large-scale phenomenon of a hurricane, rather than simply through a random assortment of smaller individual thunderstorms. In other words, CISK sought to account for the organization of thunderclouds into hurricanes. Charney and Eliassen also postulated an unstable atmosphere but added a twist: a positive-feedback relationship between the release of latent heat and rising air in clouds, leading to ever lower surface air pressure, and still more low-level inflowing air, delivering up moisture through frictional processes at the sea surface. As their paper put it: “The cumulus- and cyclone-scale motions are thus to be regarded as cooperating rather than as competing—the clouds supplying latent heat energy to the cyclone, and the cyclone supplying the fuel, in the form of moisture, to the clouds.” It has been observed that in placing latent heat so close to the center of the story, CISK echoes Espy’s old thermal theory.
CISK was a sensation—a highly theoretical account of hurricanes for a theoretical and modeling era. It quickly crowded out Riehl’s heat-engine theory. Attached to the image and reputation of Charney, CISK brought hurricanes to the attention of the dynamicist mainstream of meteorology. Although now considered flawed, it proved influential for a very long time, prompting a great deal of follow-on work and many permutations of the original Charney-Eliassen account (some arguably mislabeled as CISKs of various sorts). “There are fashions in science, and that was a fashion,” recalls University of Oklahoma meteorologist Doug Lilly, a skeptic of CISK who supported a “heat engine” revival in the 1980s.
CISK had a number of key problems, many of which sprang from the attempt, so characteristic of Charney, to strip hurricanes down to mathematical essentials rather than study them in their full-blown reality. As in his doctoral thesis, Charney went searching for a mathematical type of instability to explain hurricanes—and he thought he had found it in an unstable atmosphere. Yet in the tropics, any temporary instability generated by the sun’s heating of the oceans is quickly released through the formation of clouds and thunderstorms, which lift heat upward and tend to restore stability. This squared with what Riehl had learned many years earlier: Tropical cyclones need an independent disturbance, such as an easterly wave, to get churning. Typical thunderstorm formation alone cannot do it, no matter how favorable the atmosphere is to convection and no matter how impressive the thunderstorms of the tropics may become.
Moreover, there was a problem with the way CISK treated the ocean. According to Charney and Eliassen, spiraling winds supply energy to the storm through “frictional convergence”: Rough seas near the storm center slow down the incoming winds, causing them to converge inward toward that center rather than circling it. In such a situation air gets forced upward, carrying water vapor higher so that it can condense and release latent heat—thereby lifting more air, decreasing central pressure, and pulling in still more spiraling winds at low levels. The sea surface figured in the CISK account, but only by providing an environment in which the air at the boundary between sea and sky was suffused with moisture ready to be driven aloft. Yet for simplicity’s sake, the original version of CISK explicitly ignored the importance of fluxes of warmth from the ocean, and generally deemphasized the ocean energy source.
From the 1960s until the 1980s, when it began to fall out of favor, CISK thus distracted attention away from the concept of hurricanes as ocean-driven “heat engines.” In fairness, the theory also prompted a great deal of thinking—wrong ideas can be productive in that way. But CISK has also been characterized as a “setback” for the field, and has incited finger-pointing over who was responsible. “Don’t bury me in the grave of CISK with Charney,” Vic Ooyama, who protests his inclusion among traditional CISK adherents, has written. Apparently, Riehl also argued with Charney over CISK in the 1970s. The two scientists had great respect for one another, but here they diverged. By then, however, CISK was well on its way to becoming yet another dominant paradigm.
By eclipsing Riehl’s heat-engine approach and sending hurricane science off on a bit of a tangent, CISK may have helped delay concern about the influence of global warming upon hurricanes. If he’s truly guilty of this, though, Charney more than atoned with another theoretical and modeling foray—this time into climate science itself. Just like his student Emanuel, Charney combined hurricane and climate research among his many interests. Indeed, Charney played a central role in evaluating whether we ought to trust global climate models—the same models that predict dramatic changes from a doubling of atmospheric carbon dioxide concentrations and that are roundly dismissed by global warming skeptics even today.
Like the study of hurricanes, the trajectory of research that would eventually grow into modern climate science began in the nineteenth century. In mid-century the Irish scientist John Tyndall first discovered the greenhouse effect; by the early 1900s the Swede Svante Arrhenius had calculated that doubling carbon dioxide concentrations in the atmosphere could trigger a dramatic increase in global temperature, warming the Earth by 4 degrees Celsius. But like hurricane research, climate science developed unevenly until the period following World War II. At that point, the two fields were set on a collision course, with the ultimate impact timed for the present moment.
Data gathering helped drive the new concern about climate change. In the 1950s, following decades of rising temperatures between 1910 and 1945, the U.S. Weather Bureau provided funding to begin definitive measurements of atmospheric carbon dioxide at the Mauna Loa Observatory in Hawaii. The scientist responsible was Charles David Keeling of the Scripps Institution of Oceanography, and his “Keeling Curve,” showing ever-rising concentrations over time, has become one of the canonical images of human-caused global warming. Concerns were also spurred, however, by theory and modeling. In 1955, once again at Princeton’s Institute for Advanced Study, Norman Phillips created the first simple but workable simulation of the Earth’s atmosphere. It was called a “general circulation model,” or GCM. Before long, scientists would use this type of model to predict the effects of the increasing atmospheric carbon dioxide concentrations tracked by Keeling. Indeed, today “GCM” more frequently refers to “global climate model” than to “general circulation model.”
With the support of Charney and von Neumann, Phillips’s work became institutionalized through the General Circulation Research Section of the U.S. Weather Bureau in Washington, D.C., headed by Princeton project scientist Joseph Smagorinsky, another pioneer of the data-theory scientific hybrid that is modeling. The General Circulation Research Section later changed its name to the Geophysical Fluid Dynamics Laboratory and moved back to Princeton, where modeling had originated, in 1968. Beginning in the 1960s, GFDL modeler Syukuro (“Suki”) Manabe published several breakthrough papers with his collaborators detailing their attempts to simulate the workings of the atmosphere. Manabe headed up one prominent climate-modeling group; James Hansen, at the NASA Goddard Institute for Space Studies in New York City, led the other. Their models, and others, would gradually get faster, better at representing all three dimensions of the atmosphere, more highly resolved, and coupled to models attempting to simulate the behavior of the oceans. Most important, the different teams began to home in on a key measurement: the so-called climate sensitivity, or the amount of globally averaged warming expected for a doubling of atmospheric concentrations of carbon dioxide. By the late 1970s Manabe’s group had calculated a sensitivity of roughly 2 degrees Celsius, while Hansen’s group got 4 degrees.
When viewed in context of the later hurricane-climate debate, one particularly intriguing result emerged from these early modeling studies. In a 1970 paper on GCM simulations of the tropics, Manabe and colleagues noted that disturbances similar to tropical cyclones appeared in the model, with low central pressures and warm core structures. Many of the computerized storms developed in the same regions of the globe where real hurricanes form, although others developed over land. In general, the modeled storms were unrealistically large and far weaker than real hurricanes. But scientists like Manabe suspected these discrepancies sprang from the fact that the GCM, with its very coarse resolution, simply could not capture the details of such small-scale phenomena as the hurricane eye, eye wall, or spiral rainbands. Manabe and his fellow scientists did not try to see what happened to the modeled storms if they doubled the carbon dioxide concentrations—not yet. But they had come up with a way of studying hurricanes radically different from the approach of the empiricists, who would heavily criticize them for it in later years.
It was around this time that Charney, the godfather of numerical modeling, got involved in climate science. By then he occupied an endowed chair and had recently stepped down from heading MIT’s meteorology department. He was the establishment. Shortly before his death from cancer in 1981, Charney chaired a highly influential 1979 National Academy of Sciences panel charged with evaluating the models showing a substantial rise in global temperatures should atmospheric concentrations of carbon dioxide continue to increase. The Charney Report, as it came to be called, had been requested by President Carter’s science adviser Frank Press and was the first of many studies from the hallowed National Academies to address the possibility of human-induced global warming. It carefully examined whether any strong reasons existed for calling into question the latest projections of substantial warming for doubled CO2 concentrations, but couldn’t find any. Instead, Charney’s group noted that while several positive feedbacks seemed likely to increase the amount of warming—for example, the melting of reflective snow and ice, which would lead to more absorption of solar radiation by the Earth’s surface, which would lead to more melting of snow and ice—no negative feedbacks seemed capable of significantly offsetting that warming.
The “most important and obvious” positive feedback identified in the Charney report involved atmospheric water vapor. Due to a physical law known as the Clausius-Clapeyron equation, the amount of moisture that can be carried by the air increases along a steeply sloping curve as temperature rises. This means an atmosphere containing more carbon dioxide will also contain more evaporated water, another greenhouse gas sure to cause additional warming by absorbing and emitting still more infrared radiation.
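The steepness of that curve is easy to put in numbers using a standard approximation to the Clausius-Clapeyron relation (the Magnus form for saturation vapor pressure); the sample temperatures below are arbitrary.

```python
# Saturation vapor pressure versus temperature, using the widely used Magnus-form
# approximation to the Clausius-Clapeyron relation (pressure in hectopascals,
# temperature in degrees Celsius). The sample temperatures are arbitrary.
import math

def saturation_vapor_pressure_hpa(temp_c):
    return 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))

for t in (10, 20, 30):
    now, warmer = saturation_vapor_pressure_hpa(t), saturation_vapor_pressure_hpa(t + 1)
    print(f"{t} degC: {now:6.2f} hPa; one degree of warming raises the "
          f"moisture-holding capacity by about {100 * (warmer / now - 1):.1f}%")
```

Run it, and each degree of warming raises the air’s capacity to hold water vapor by roughly 6 to 7 percent, which is why the feedback looms so large.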
Due in part to the magnitude of the water vapor feedback, Charney and his fellow panelists proclaimed that the two leading climate models of the time—those of Manabe and Hansen—were more or less reliable. Combining their most recent estimates, the Charney Report therefore concluded that the climate sensitivity for a doubling of CO2 concentrations lay in the range of 1.5 to 4.5 degrees Celsius, with the most likely value falling smack in the middle: 3 degrees Celsius. That’s roughly in line with Arrhenius’s calculations made almost a century earlier. It’s also extremely similar to the range—2 to 4.5 degrees Celsius, with a “best estimate” of 3 degrees—offered in 2007 by the Intergovernmental Panel on Climate Change, a United Nations body created to inform policymakers about the state of knowledge about global warming and its impacts, and today considered the gold standard of climate science.
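Because warming scales roughly with the logarithm of the CO2 ratio, a sensitivity-per-doubling figure like the Charney Report’s 3 degrees can be turned into back-of-the-envelope estimates for other concentration changes. The sketch below is illustrative arithmetic only: the 3-degree figure comes from the text, and the concentration ratios are assumed examples.

```python
# Illustrative arithmetic only: warming scales roughly with the logarithm of the
# CO2 ratio, so a sensitivity-per-doubling number can be applied to other changes.
# The 3.0 degC figure is the Charney Report's central estimate; the ratios below
# are assumed examples, not observations.
import math

def equilibrium_warming(co2_ratio, sensitivity_per_doubling=3.0):
    """Long-run warming (degC) implied by a given ratio of CO2 concentrations."""
    return sensitivity_per_doubling * math.log(co2_ratio) / math.log(2.0)

print(equilibrium_warming(2.0))              # a full doubling: 3.0 degC by construction
print(round(equilibrium_warming(1.5), 2))    # a hypothetical 50% rise: about 1.75 degC
print(round(equilibrium_warming(4.0), 2))    # two doublings: 6.0 degC
```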
At the time of the Charney Report in 1979, global warming had not yet become the battleground that it is today. The politicization of climate science and climate policy during the late 1980s and 1990s occurred for reasons that will be discussed later. But if we seek to understand why so many of today’s hurricane specialists, as well as members of the broader hurricane preparedness and response community, became so skeptical of global warming and its effect on the storms in which they specialize, at least one of those reasons hinges upon a single very influential personality.
During the 1970s and 1980s, the Atlantic basin went into a relative lull period for hurricanes—especially “major” storms of Category 3 strength or higher—and interest in studying them correspondingly declined, at least in the United States. There were plenty of other exciting areas of disaster research—including, increasingly, climate change. But throughout this lull, one of the few American scientists who stuck with intensive hurricane research was Riehl’s student William Gray. At Colorado State University, Gray was training a formidable team of students of his own. Some hailed from nations such as China and Australia, which also have to deal with tropical cyclones regularly. For all who were up for it, part of Gray’s training regimen involved taking a flight into a hurricane. And part of it involved learning his empirical approach to meteorology.
In his research, Gray applied a data-crunching methodology to further elucidate the structure of hurricanes and uncover a wide range of factors associated with their formation. Much more than Riehl, he distrusted research reliant on complicated equations. Despite his skepticism of modeling, however, Gray excelled at detecting patterns in nature, much as Redfield had, long before him. Over the years, Gray’s work made him a widely recognized leader of American and global hurricane science. After unveiling the first Atlantic seasonal hurricane-forecasting system in 1984, he became a hurricane superstar and a darling of the media. But he had absolutely no use for the notion of global warming, much less the idea that it might seriously affect the storms he’d spent a lifetime studying. And he had no problem saying so—loudly and often.