With more powerful satellites being launched and returning ever-larger volumes of data, there was a growing need for faster computers that could handle the load. Once supercomputers arrived on the scene, forecasters finally had a tool that could assemble and make sense of the huge volume of data that could now be gathered.
Forecasting is really educated guesswork. Countless variables go into a forecast, and they’re always changing. To better understand these changes, meteorologists in the early days of computing developed atmospheric models built on a limited set of data. These models attempted to describe the present state of temperature, moisture, and pressure in the atmosphere and how those conditions change with the passage of time.
These days, a forecast is the product of six to eight mathematical equations solved for a given point. Information on air pressure, wind speed, humidity, and air density, along with surface and upper-air measurements, is loaded into a supercomputer, which runs a program that describes the conditions expected a small unit of time into the future. The program analyzes data for a large number of “grid points,” or imaginary squares of various sizes, both at the surface and through as many as eighteen layers of the atmosphere.
Now the forecaster has a prediction of conditions for the next ten minutes or so. The computer uses that output as the starting point for the next step, feeding the information back in and predicting the next few minutes, and it repeats the process until it reaches a desired time in the future, such as twelve, twenty-four, or thirty-six hours from the start. The computer can then draw a map called a prognostic chart that shows how all of the lows, highs, and other weather features will appear at that future time.
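To get a feel for this step-by-step process, here is a minimal sketch in Python. The numbers are made up, and a single smoothing rule stands in for the six to eight governing equations a real model solves; the point is only to show a field being advanced one short time step at a time, with each result feeding the next step.

```python
import numpy as np

def step(field, diffusion=0.1):
    """Advance the field by one short time step by nudging each grid point
    toward the average of its neighbors (a stand-in for the governing
    equations a real model solves)."""
    neighbors = (np.roll(field, 1, axis=0) + np.roll(field, -1, axis=0) +
                 np.roll(field, 1, axis=1) + np.roll(field, -1, axis=1)) / 4.0
    return field + diffusion * (neighbors - field)

# Hypothetical starting temperatures (degrees F) on a tiny 5 x 5 grid.
field = np.full((5, 5), 60.0)
field[2, 2] = 80.0                   # a warm spot in the middle

minutes_per_step = 10
target_minutes = 24 * 60             # march forward to a 24-hour "forecast"
for _ in range(target_minutes // minutes_per_step):
    field = step(field)              # each result becomes the next starting point

print(field.round(1))
```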
As computer models are developed, their accuracy is tracked and small adjustments are made to improve them. Because one model may evolve to be better at forecasting surface low-pressure systems, while another excels at predicting the movement of upper-level air, meteorologists can pick and choose among the many models available, selecting the ones that are more likely to result in a correct forecast.
Because computer modeling is accomplished using grid points, and a forecast is drawn from the conditions within each grid box, it follows that the smaller the grid, the more accurate the forecast will be. Global climate models work with a grid that’s about 300 miles square (larger than the state of Iowa). Because each grid box is a three-dimensional volume that extends into the upper levels of the atmosphere, with data plotted at each level, the output of just one grid computation can total hundreds of megabytes. As grids become smaller, the data analysis and storage requirements rise steeply: halving the box size quadruples the number of boxes in every layer, and the model must also take shorter time steps. Grids much smaller than 300 miles square therefore contain too much data for practical global modeling. As a result, global models are still much less accurate than regional ones, although they’re useful for determining large-scale climate changes over time.
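As a rough illustration of why shrinking the boxes gets expensive, the back-of-the-envelope arithmetic below (approximate values only) counts the grid boxes needed to cover the earth at a few box sizes, using the eighteen layers mentioned above.

```python
# Back-of-the-envelope arithmetic: halving the box size quadruples the number
# of boxes in every layer of the model. All values are approximate.
earth_area_sq_miles = 197_000_000    # rough surface area of the earth
layers = 18                          # vertical levels, as mentioned above

for box_miles in (300, 150, 75):
    boxes_per_layer = earth_area_sq_miles / box_miles ** 2
    total_boxes = boxes_per_layer * layers
    print(f"{box_miles:>3}-mile boxes: about {total_boxes:,.0f} grid boxes")
```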
Regional grids produce more accurate forecasts than the global variety, but because they’re smaller, they’re dependable only for a short period; weather from adjacent grids always intrudes before too long.
Although computers are great tools for aiding in the creation of weather forecasts, human interpreters still need to analyze the output from computer models, compare it with data gathered over a long period, and determine what changes will improve a model’s accuracy. There is simply no substitute for human experience and wisdom.
Climate models depend on a detailed description of each grid point for their accuracy, so each area is modeled differently depending on whether it’s over land, ice, or ocean. Land specialists help build models that incorporate topographic features like mountains and rivers, as well as water runoff on the surface and the amount of water in the soil. The models also include forests and other areas of vegetation, because plants reflect less sunlight than bare land, and the carbon dioxide they take up and release can affect the composition of the local air.
Oceanographers are called on to use their knowledge of the sea to input factors such as salt content, freshwater runoff, sea ice, ocean temperature, and density into numerical models. Atmospheric scientists input information on the distribution of gases in the air, how solar radiation is affecting air temperatures, and the amount of pollutants like industrial smoke and automobile emissions.
With high-tech teamwork and speedy supercomputers combining to create forecasts, why aren’t at least short-term forecasts more dependable? The answer lies in the tendency of small atmospheric disturbances to be greatly magnified over time. Meteorologist Edward Lorenz discovered this sensitivity, the foundation of what is now called chaos theory, and published his findings in 1963. Lorenz had been running a weather-modeling program that carried its numbers to six decimal places, but after running into a problem, he restarted the run by reentering data from a printout, which rounded the values to three decimal places. To his great surprise, that extremely small difference in the last three decimal places produced a dramatically different run.
Lorenz described chaos theory as “a system that has two states that look the same on separate occasions, but can develop into states that are noticeably different.” A golf ball dropped from the same height above a fixed point would always land on the same spot, he noted, but a piece of paper would not because during its fall it would be acted on by chaotic forces like air movement. Because those forces changed constantly, the path of the paper to the ground could not be predicted with any degree of accuracy.
Lorenz illustrated chaos theory by concocting the “butterfly effect,” which states that the flapping of a butterfly’s wings in China could cause tiny atmospheric changes that over a period of time could affect weather patterns in New York City.
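The sensitivity Lorenz stumbled onto with his rounded printout is easy to reproduce with the three-equation system from his 1963 paper. The sketch below is a crude integration with assumed starting values, not Lorenz’s original program; it runs the equations twice, once with a starting value rounded to three decimal places, and watches the two runs drift apart.

```python
# Two runs of the three-variable system from Lorenz's 1963 paper, identical
# except that one starting value is rounded to three decimal places, much
# like the printout Lorenz reentered. The tiny gap eventually explodes.
def lorenz_step(x, y, z, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One crude forward-Euler step of the Lorenz equations."""
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.000001, 1.0, 1.0)              # "full precision" starting point
b = (round(a[0], 3), 1.0, 1.0)        # same start, rounded to three decimals

for n in range(1, 5001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if n % 1000 == 0:
        print(n, round(abs(a[0] - b[0]), 4))   # the two runs drift apart
```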
Computer models are only a simulation of the atmosphere; they make assumptions about weather conditions that may or may not be accurate. Even though great pains are taken to eliminate models that don’t perform well, they are not now, nor will they ever be, perfect. Another problem is inherent in regional models: errors creep in along their boundaries as weather from nearby grids sneaks in.
With the advent of satellites and radiosondes, many more observation points are now available than in the past, and forecasts have improved as a result. But because of the computational requirements of smaller grids, most models still use data points that are too far apart to accurately predict the movement of small-scale weather systems like thunderstorms. In addition, many models don’t take land features like hills and lakes into account, thereby introducing that first small error that Lorenz showed can be magnified over time into one giant boo-boo.
When you take the effects of chaos into account, is there any real hope that forecasts, especially long-range ones, can be improved? Actually, it’s already happening. Ensemble forecasting combines several model runs into a single forecast using a weighted average. Varying the starting conditions slightly from run to run mimics the effects of chaos, and the averaged result is often more accurate than any single run. Repeating the process with slightly different weights each time increases the chances of at least one combination being correct, and by weeding out the ones that don’t work, forecasts can become much more dependable.
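As a toy example of the weighted-average idea (the forecasts and weights here are hypothetical, not drawn from any real ensemble), consider four model predictions of tomorrow’s high temperature:

```python
# Hypothetical forecasts of tomorrow's high (degrees F) from four models,
# combined with a weighted average; the weights are assumed, not official.
member_forecasts = [71.0, 74.0, 68.5, 72.5]
weights = [0.4, 0.3, 0.2, 0.1]        # more trusted members count for more

ensemble = sum(w * f for w, f in zip(weights, member_forecasts)) / sum(weights)
print(f"ensemble forecast: {ensemble:.1f} F")
```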
Another way that scientists are improving forecasts is by filling in data gaps that have long existed in certain remote parts of the globe. In much of the Southern Hemisphere, for example, which is mostly covered by vast oceans, gathering atmospheric information in real time has been a challenge. Now a NASA scatterometer, which is able to measure wind speeds from orbit, passes over 90 percent of the world’s oceans each day, greatly improving marine forecasts.
Forecasters compare the output of different models and assign a degree of confidence to each forecast based on how well the models agree. In general, the more the models disagree, the less predictable the weather is.
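One simple way to turn that disagreement into a confidence label (a made-up rule of thumb for illustration, not an official scheme) is to look at the spread among the model forecasts:

```python
# A made-up rule of thumb: the wider the spread among model forecasts of the
# same quantity (here, degrees F), the lower the confidence label.
def confidence_from_spread(forecasts, tight=2.0, loose=6.0):
    spread = max(forecasts) - min(forecasts)
    if spread <= tight:
        return "high"
    if spread <= loose:
        return "medium"
    return "low"

print(confidence_from_spread([71.0, 72.0, 71.5]))   # models agree -> "high"
print(confidence_from_spread([65.0, 74.0, 70.0]))   # models disagree -> "low"
```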
Recently, scientists discovered that some areas of the earth’s surface are responsible for more chaos-related error than others; these areas have been dubbed “chaos hot spots.” Covering about 20 percent of the earth’s surface, they are now the target of intense observation, since they seem to cause most of the inaccuracies in current global forecasts. As scientists move from the global to the regional and even local scale, more hot spots will be identified, and forecasts for those areas will improve.
Researchers are now comparing computer models with historical conditions. They feed climatic information for a certain past day into the system and then run a projection of the weather for the next fifteen or thirty days. Comparing the results of the projection with the actual conditions that occurred in the past can demonstrate the accuracy of a model.
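A bare-bones version of this kind of comparison, with invented numbers standing in for a fifteen-day run, simply scores the projection against what actually happened:

```python
# Invented daily high temperatures (degrees F) for a fifteen-day hindcast:
# what the model projected from a past starting day versus what occurred.
predicted = [70, 72, 69, 75, 78, 74, 71, 68, 66, 70, 73, 77, 79, 75, 72]
observed  = [71, 70, 68, 77, 80, 73, 70, 69, 64, 71, 75, 80, 83, 78, 76]

# Mean absolute error: the average size of the miss, one common accuracy score.
mae = sum(abs(p - o) for p, o in zip(predicted, observed)) / len(observed)
print(f"mean absolute error over 15 days: {mae:.1f} F")
```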
With the realization that a small atmospheric eddy in one country can affect the weather in another, in 1992 the World Meteorological Organization (WMO) began a program called the Global Climate Observing System (GCOS). GCOS was designed to improve forecasts by coordinating weather data from all over the globe.
Of course, one way to improve forecasts is to throw more computing power at them, and that’s just what IBM is doing with its Deep Thunder project. Using the same type of supercomputer as Deep Blue, the system that defeated Russian chess grandmaster Garry Kasparov in May 1997, the company hopes to substantially reduce the size of a weather-modeling grid and produce a much more accurate local forecast. IBM is currently issuing a daily forecast for New York City using Deep Thunder, creating complex 3-D images that give forecasters a snapshot of future weather conditions at a glance.
While improved forecasts will help you fine-tune everyday activities, such as avoiding a cloudy day at the beach or knowing when to bring an umbrella to work, they also hold enormous implications for businesses. It’s been estimated that in the airline industry alone, weather-related problems cost up to $269 million a year, and better forecasting can help carriers reduce these costs. Even power companies lose money when bad forecasts cause them to overproduce electricity. The better the forecasts, the more efficiently they can produce the power we depend on.
Even more important, accurate and timely forecasts can save lives and money. In 2016, fifteen weather-related disasters cost more than $1 billion each. Clearly, every small step in improving weather prediction is welcome, especially when that weather turns violent.