Notes
The story of how scientists came to understand solar activity and its geophysical effects is a long and complicated one. Here are a few short essays that provide a bit more insight into some of the issues covered in this book. For more details, visit the Astronomy Cafe web site at http://www.theastronomycafe.net
Chapter 3. “Hello: Is Anyone There?”
While Benjamin Franklin was flying a kite hoping to entice a lightning bolt into a jar, Sir William Watson in England sent another kind of electrical discharge from a battery down a wire some two miles long. It would not have amounted to more than a laboratory curiosity had it not been for a Frenchman named Lesage who, some twenty-five years later, found a rather odd application for it. He arranged a set of wires and batteries, one for each letter of the alphabet, and a distant reader could tell which letter was being sent by noting which wire was charged. It was a comically strange way to send a message, but it was the first attempt at sending information that didn’t use the centuries-old methods of smoke, mirrors, lanterns, or flags.
Marconi fully expected that radio broadcasting would be resilient to solar disturbances compared to telegraphy and telephony, because it used a very different medium to transmit its signals. While disturbances from the September 1909 Great Aurora were recorded worldwide in a variety of telegraph and telephone systems, he considered this storm and its impacts a lesson to be learned, not by wireless telegraphy, but by the competing communications technologies. In January 1926, another Great Aurora lit up the skies, bringing this twenty-five-year sense of security to an abrupt end. International wireless tests were under way in which U.S. shortwave radio operators attempted to pick up stations in Wales, Argentina, and Peru. Electrical disturbances interfered with both broadcasting and telegraph services. Scientists blamed all of this on an unusually large sunspot visible on the Sun. Exactly one solar rotation later, on February 23, the same sunspot group was positioned as it had been for the January storm, and again problems erupted on the telegraph wires and in the ether. This time, shortwave radio reception of stations to the north of Ames, Iowa, was blocked. Stations to the south came through clear as a bell. The third and final storm of this series arrived a week later from a different group of sunspots near the center of the Sun’s disk. Again, voltage surges in the telephone lines were recorded, and shortwave reception only improved after the surges ended.
Chapter 4. Between a Rock and a Hard Place
On November 9, 1965, the largest blackout in history erupted in the United States in what became known as the Great Northeast Blackout. The event started at the Niagara generating station when a single transmission line tripped. Within 2.5 seconds, five other lines became overloaded and caused generators to become unstable and go off-line. Within four seconds, thirty million people in New York, Vermont, Massachusetts, and Connecticut lost power for up to thirteen hours. Later that day, President Lyndon Johnson directed the Federal Power Commission to investigate the failure. The full resources of the federal government, the Federal Bureau of Investigation, and the Department of Defense were called upon to support this investigation. There were many lessons learned from this non-space-weather blackout, and today’s electrical utility network is the result. The power grid of the 1960s had very few built-in safeguards that could have stifled this failure; it was also much less networked and interconnected. Paradoxically, Maine did not suffer the blackout because it was much less connected to the rest of the grid then than it is now.
More recently, on April 29, 1991, a transformer at the Maine Yankee Nuclear Plant catastrophically failed a few hours after a severe geomagnetic storm. The plant has been closed since the mid-1990s because it was among the oldest nuclear power plants and could no longer operate economically. Although the words “nuclear plant” and “transformer failure” appear together in this account, there was never any danger to the safety of the plant.
Engineers eventually figured out that, for the power lines themselves, the only way to harden them against geomagnetically induced currents (GICs) was to build them over rock strata of the right conductivity. Beyond that, you could make transformer cores forty times larger, increase your operating margins, and stop using the “Wye-type” open-ground coupling scheme. Constructing transformers forty times larger is out of the question, because the transportation of these 100-ton devices is already at the limits of what can be accomplished with conventional trains and trucks. It is also not feasible to use another coupling scheme, because Wye-type grounding is the least expensive, and any change would cost many billions of dollars to implement. Because of a lack of genuine interest in such sporadic events, the recognition that they could be a significant new problem for large power grids came very slowly. Even by 1968 these types of disruptions were still so infrequent that most utilities could not muster more than a modest interest in them. None of the alterations that could reduce GIC impacts were practical options by 1995, so the only recourse was to attempt forecasting, if you felt so inclined.
In July 1998, electrical power transmission congestion in Wisconsin and Illinois blocked the transport of power from northern supplies to consumers in southern states who were sweltering in the heat. Power lines can only transport a fixed amount of power, and the essential transmission lines along the electrical superhighway were experiencing the equivalent of gridlock. Not enough volunteers could be found in the south to stop using their air conditioners, so the local electrical utilities had to go to local energy suppliers to purchase temporary “makeup” energy. Within a few hours, the price per megawatt-hour soared from $20 to $7,000 and wiped out the yearly profits of several southern utilities. A similar problem occurred in 1999 during hot weather in the Midwest and Northeast. Deregulation has forced utilities into a wait-and-see mode in which investments in infrastructure are postponed and new capacity is not planned. All of these factors work together to make even minor geomagnetic storms a potential “straw” that can break the back of regional electric power pools.
Blackouts and power interruptions of one kind or another are actually more common than you might suppose. Some last only fractions of a second, while others can last nearly a week. During a severe ice storm in the winter of 1999, seventy thousand people in the Washington, D.C., and Maryland region lost power for up to five days. New York City lost power for several days on a hot Wednesday in July 1977; ten million people were without electricity, and two thousand were arrested for looting. In July–August 1996, ten states and six million people were without power for sixteen hours. The Electric Power Research Institute in Palo Alto, California, has estimated that even short interruptions, known as sags, lasting less than half a second can cost U.S. companies $12 to $26 billion each year. Some paper companies lose $50,000 every week or so, each time one of these dips stops five-ton spools of paper dead in their tracks, shredding hundreds of yards of paper and jamming presses.
The electrical power industry is slowly beginning to take GICs seriously because of several factors that have come to light in recent years. Power grid performance has become so optimized that any source of inefficiency has become intolerable. GICs damage expensive equipment and generate reactive power (VARs), which robs power companies of millions of dollars of potential revenue. Like a tax auditor, GICs can arrive at any time and affect hundreds of transformers in one stroke. Margins have also diminished, so there is less surplus power to cover emergency situations.
Developers of GIC monitors and forecasting technology are currently on the stump trying to sell their systems to power companies, but the process is hampered by power company managers who do not fully accept that GICs are a financial liability they can do something about. Reports of distant and sporadic blackouts (Quebec) have not fully impressed U.S. power managers, because those events do not affect their customers directly. In a $214-billion-a-year industry involving 700,000 miles of high-voltage power lines, losing a transformer every few years, or dealing with a solar storm-induced blackout against the background of other blackouts and sags, seems an unwarranted financial concern. Besides, although electrical utility customers reasonably expect their local power company to do all it can to keep the power running during an ice storm, solar storms are not part of this expectation. They are rare and “cosmic” events that most of us will excuse power companies for not worrying about. But sometimes rare events can make a difference in tragic ways.
Chapter 6. They Call Them “Satellite Anomalies”
The Marecs-1 satellite also suffered a complete failure, on March 25, 1991. This satellite had a history of space environment problems. Its predecessor, Marecs-A, launched in December 1981, had already been disabled nine years earlier by the strong electrical currents flowing during a week of intense auroral activity in February 1982.
Chapter 7. Business as Usual
The loss of Intelsat 708 during a launch in the People’s Republic of China triggered a congressional investigation into the role of commercial space insurance in technology transfer to the PRC. The August 1999 Cox Report was the outcome of this investigation, and it publicly revealed many of the details of how satellite insurance operates. To get insurance, a satellite owner selects an insurance broker, who acts as an intermediary between the owner and the insurance underwriters. The broker writes the policy, manages transactions, and settles claims. Brokers do not lose money in the event of an accident but are paid a commission based on the size of the insurance package they write. The satellite owner prepares a technical document giving a detailed assessment of the satellite and launch vehicle and any risks associated with the technology. This is presented to the broker, who then presents it to the various underwriters during the negotiation phase. This information is confidential and cannot be divulged to the public. Brokers and underwriters often retain their own staffs of independent technical experts, space scientists, and engineers to advise on the risk factors and to decide upon appropriate premium rates. The policy is then negotiated, with the broker serving as the intermediary between owner and underwriter. This can take place up to three years prior to launch for major satellite systems. A 10–20 percent deposit is paid to the underwriters no later than thirty days before launch. Typically, the premiums are 8–15 percent for the launch itself. In-orbit policies tend to be about 1.2 to 1.5 percent per year over a planned ten- to fifteen-year life span once a satellite survives its shakeout period.
According to Michael Vinter, vice president of International Space Brokers in Virginia, this period was once as short as one year but has now grown to as long as five years, depending on the perceived riskiness of the satellite. If a satellite experiences environmental or technological problems in orbit during the initial shakeout period, the insurance premium paid by the satellite owner can jump to 3.5–3.7 percent for the duration of the satellite’s lifetime. This is currently the only avenue insurers have agreed upon to protect themselves against the possibility of a complete satellite failure. Once an insurance policy is negotiated, the only way an insurer can avoid paying out the full cost of the satellite is in the event of war, nuclear detonation, confiscation, electromagnetic interference, or willful acts by the satellite owner that jeopardize the satellite. There is no provision for “acts of God” such as solar storms or other environmental problems. Insurers assume that if a satellite is sensitive to space weather effects, this will show up in the reliability of the satellite, which would then cause the insurer to invoke the higher premium rates during the remaining life of the satellite. Insurers currently pay no attention to the solar cycle.
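To make the arithmetic concrete, here is a minimal sketch in Python of what coverage might cost over the life of a hypothetical satellite, using only the rate ranges quoted above. The satellite value, the particular rates chosen within those ranges, and the twelve-year term are illustrative assumptions, not figures from any actual policy.

    # Hypothetical illustration of the premium structure described above.
    # Rate ranges come from the text; the satellite value and the specific
    # rates chosen within those ranges are made-up assumptions.
    def total_premium(insured_value,
                      launch_rate=0.10,     # launch premium: 8-15 percent
                      in_orbit_rate=0.013,  # in-orbit: 1.2-1.5 percent per year
                      years_in_orbit=12):   # planned ten- to fifteen-year life
        launch_cost = insured_value * launch_rate
        in_orbit_cost = insured_value * in_orbit_rate * years_in_orbit
        return launch_cost + in_orbit_cost

    # A hypothetical $250 million satellite:
    value = 250e6
    print(f"launch premium:   ${value * 0.10:,.0f}")          # $25,000,000
    print(f"in-orbit premium: ${value * 0.013 * 12:,.0f}")    # $39,000,000
    print(f"total:            ${total_premium(value):,.0f}")  # $64,000,000

Under these assumptions, the in-orbit coverage over a dozen years ends up costing more than the launch coverage itself, which suggests why a jump in premium rates after early in-orbit problems weighs so heavily on a satellite owner.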
Chapter 10. Through a Crystal Ball
The Great Aurora of September 1–2, 1859, lit up the skies around the world, and the solar eruption behind it caught astronomer Richard Carrington’s eye just as he was about to end his observing session at the telescope. Carrington was an avid watcher of sunspots, and during the preceding few days he had been following a spectacular sunspot group as it rotated toward the western limb of the Sun. Within minutes, a powerful optical flare burst into light and then vanished. Meanwhile, miles away at the Kew Observatory outside London, the local magnetic field went haywire. This flare did much more than merely tilt compass needles and make a few astronomers sit upright. In France, telegraphic connections were disrupted as sparks literally flew from the long transmission lines. Huge auroras also blazed in the sky as far south as Hawaii, Cuba, and Chile. People spoke about this now long-forgotten event much as we have obsessed about “killer asteroids” in recent years.
Despite the coincidence of flare and aurora, Carrington’s observation was actually a fluke. Astronomers now know that such brightenings, visible to the eye through a telescope, are literally a once-in-a-lifetime event and require especially titanic releases of energy on the Sun. For the next fifty years after Carrington’s sighting, many careful studies were made of the solar surface and of magnetic storm records, but no other sudden brightening of the solar surface was seen. It wasn’t until the invention of the spectroheliograph and its successor, the visible-light spectrohelioscope, between 1892 and 1910, that many more sudden brightenings were captured and their geomagnetic impacts could be properly assessed. Ultimately, the only proven way to anticipate solar flares, and the geomagnetic and ionospheric effects that might follow, is to watch the solar surface itself. Constantly.
Since the 1960s, solar physicists have known that sunspots with opposite-polarity cores (called umbrae) within the same envelope (called a penumbra) are a potent spawning ground for flare activity. If a flare has been spotted near an active region, the odds are excellent that more flares will follow from the same region over the course of the next few weeks. It doesn’t matter how big the sunspot group might be. What counts is how tangled up the magnetic field is in a small region of the solar surface. In the 1970s, new magnetic imaging technologies allowed flaring regions to be correlated with areas of strong shearing, where magnetic fields of opposite polarity are dragged past one another by the motions of very small parcels of gas. This seemed to be the crucial observational clue to anticipating when a flare is likely to break out.
The BearAlert program eventually established an “eight-fold way” for evaluating whether conditions were ripe for a flare event. Current official techniques used by NOAA’s Space Environment Center rely on images of the entire Sun, rather than detailed studies of individual active regions, and tend to be accurate only about 25 percent of the time. The BearAlerts, with their much more detailed assessments of individual sunspot groups, scored correct predictions for M- and X-class flares about 72 percent of the time. What is also encouraging is that the method developed by Zirin and Marquette rarely misses the really big M-class flares that can do astronauts and satellites serious harm. The amount of lead time we have for solar flares has now expanded from literally a few minutes to several days. There is some indication, however, that a perfect record of correct calls may be forever out of reach. Solar activity, at the scales that trigger flare events, is largely a random process, much like the pattern of lightning strikes during a thunderstorm.
The number of geomagnetically disturbed days rises and falls with the sunspot cycle. The largest number seems to peak a year or so before, and a year or so after, sunspot maximum. The reason for this is not known. These disturbances also seem to be more intense in the March–April and September–October periods. Here we think we understand the pattern a little better. The Earth’s orbit is tilted about seven degrees to the equator of the Sun. This means that there are two “seasons” during the Earth year, around the equinoxes, when the Earth passes through the equatorial plane of the Sun, and the Sun likewise crosses the equatorial plane of the Earth. Under these conditions, the southward-directed field of the Sun has its maximum strength, making it very effective at stimulating geomagnetic storms. If you want the best chance of seeing a dramatic aurora, wait until sunspot maximum conditions prevail and visit northern latitudes during the March and September equinoxes.
As useful as Kp is, it does little to give you a meaningful advance warning of what will soon be happening where you are located. By the time you see the Kp index growing to major-storm levels, the damage to your technology has already been done. Historical records of past storms tell the unhappy tale that, by the time Kp reaches the level of a medium-sized storm at Kp = 6, there is roughly a one-in-five chance it will continue to grow into a large storm with Kp = 7, and roughly a one-in-fifteen chance it will become a major storm with Kp = 9. It takes only a few hours for these changes to play themselves out. More troubling still, geomagnetic conditions can look fairly normal for hours and then, within minutes, deteriorate into a severe storm. Despite its limitations for advance warning, Kp is in many ways the only indicator that is readily available each day, so a variety of groups and industries, the electrical power industry for instance, find even this kind of information better than none at all.
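As a back-of-the-envelope aid, the rough escalation odds quoted above can be written down directly. The Python sketch below only restates the historical fractions mentioned in the text; it is not an operational forecasting tool, and the names are invented for this example.

    # Rough historical odds, as quoted in the text, that a storm already at
    # Kp = 6 will escalate further.  Illustrative fractions only.
    ESCALATION_ODDS_FROM_KP6 = {
        7: 1 / 5,    # grows into a large storm (Kp = 7)
        9: 1 / 15,   # grows into a major storm (Kp = 9)
    }

    def escalation_chance(target_kp):
        """Rough chance that a storm now at Kp = 6 reaches target_kp."""
        return ESCALATION_ODDS_FROM_KP6.get(target_kp)

    print(f"Chance of reaching Kp 7: {escalation_chance(7):.0%}")   # ~20%
    print(f"Chance of reaching Kp 9: {escalation_chance(9):.0%}")   # ~7%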
Although plasmas, fields, and currents form systems of staggering complexity, there are still consistent patterns of cause and effect that can be traced with considerable mathematical precision. There is nothing ad hoc about how a current of particles will generate a specific amount of magnetic field strength. It doesn’t matter whether the current is one ampere of electrons in a wire or a dilute five-hundred-thousand-ampere river of plasma orbiting the Earth. Maxwell’s famous equations, combined with suitable “equations of motion,” are in principle all that you need to describe the essential features of any “magneto-hydrodynamic” system such as the Earth and Sun. But even with the theoretical game plan clearly defined, there is still a lot that is left unspecified. Theorists have a large number of mathematical choices to make in deciding which ingredients to keep and which to throw out. The more sources and interactions you add to your equations, the messier they become, and the harder it is to wrest a concrete mathematical prediction from them. High-quality data is the only looking glass that lets space scientists hit upon the right clues to guide them. Like learning how to dance, it is important to start with the correct foot forward, and only a careful study of Nature gives us the right choreography. Eventually, space scientists managed to win their way to a rather firm set of procedures for tackling questions about the Sun-Earth system. These “arrivals” were not in the form of some single, monolithic, comprehensive theory of how the whole shebang worked, but a series of minor victories, each a separate piece of a larger puzzle.
For example, astronomers, armed with telescope and spectroscope, investigated the solar surface and pieced together the physical structure of the photosphere-chromosphere-corona region. They dissected the chemical compositions of the solar gases, measured their temperature, density, and speed, and crafted a working model of the solar atmosphere. They used powerful new “Zeeman-splitting” techniques to measure surface magnetic fields. With Maxwell’s equations, the magnetic data helped theorists build models of the geometry of this field around sunspots and extend them deep into the corona. By 1960, a preliminary theory of why there is a sunspot cycle, and why sunspots occur, had been hammered out by Eugene Parker at the University of Chicago and Horace Babcock at the Hale Observatories. Parker also went on to craft a groundbreaking theory, and mathematical description, of the solar wind as it leaves the coronal regions and flows throughout the solar system. Solar physics was, essentially, described by the complex mathematics of magneto-hydrodynamics. The particular phenomena we observed were “only” the working out by the Sun of specific mathematical solutions, driven by its complex convecting surface. What remained to be understood were the details of just how the solar magnetic field was generated, how the corona was heated, and why solar flares and other impulsive events are spawned. The missing link seemed to be the various gyrations of the magnetic field itself, but only new instruments in space would let scientists chase the magnetic forces down the rabbit hole of decreasing size.
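For readers who want to see what this “theoretical game plan” looks like on paper, the equations below are a standard textbook statement of ideal magneto-hydrodynamics, the marriage of Maxwell’s equations with the fluid equations of motion referred to above. This is a generic form, not the particular formulation used by Parker, Babcock, or any other group mentioned here.

\[
\begin{aligned}
&\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{v}) = 0 && \text{(conservation of mass)}\\
&\rho\Big(\frac{\partial \mathbf{v}}{\partial t} + \mathbf{v}\cdot\nabla\mathbf{v}\Big) = -\nabla p + \frac{1}{\mu_0}(\nabla\times\mathbf{B})\times\mathbf{B} + \rho\,\mathbf{g} && \text{(equation of motion)}\\
&\frac{\partial \mathbf{B}}{\partial t} = \nabla\times(\mathbf{v}\times\mathbf{B}) && \text{(induction equation, from Maxwell’s equations and Ohm’s law)}\\
&\nabla\cdot\mathbf{B} = 0 && \text{(no magnetic monopoles)}
\end{aligned}
\]

Every extra ingredient (finite resistivity, heat conduction, additional particle populations) adds more terms to this set, which is exactly the kind of mathematical messiness described above.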
By the way, you should always keep in mind that things could be far worse for us than they are! For decades, astronomers have been studying stars that are close cousins to our Sun, a middle-aged G2-class star. At Mount Wilson Observatory, careful measurements of some of these stars show a distinct rise and fall in certain spectral lines that, on our own Sun, are indicators of solar activity. These stars also show periodic “sunspot cycles” with periods from a few years up to thirty years per cycle. Others show a constant level of activity, as our own Sun would have during the Maunder Minimum between 1645 and 1715. So solar activity is not unusual among stars similar to our Sun. Rather alarming is that some kindred stars belt out superflares from time to time. In fact, according to Yale astronomer Bradley Schaefer, sunlike stars normally produce one of these superflares every century: “One of these cases I have is a star, S-Fornax, where for a 40-minute period it was seen to be three magnitudes brighter than usual.” The power from the flare made the star appear nearly twenty times brighter than usual. One of these superflares would be about ten thousand times more powerful than the solar storm that caused the 1989 Quebec blackout! According to Schaefer, portions of the surfaces of the outer ice moons of the solar system might be melted, much of the ozone layer would be destroyed, and the entire satellite fleet would be permanently disabled. It is believed that the reason the Sun doesn’t have these flares is that it doesn’t have a close companion star or planet that is magnetically active and able to tangle up our Sun’s magnetic field.
Meanwhile, back at the Earth, the challenges were nearly as daunting. The shape of the Earth’s magnetic field was eventually defined by numerous ground-level measurements and, with Maxwell’s equations, extended thousands of miles into space. Although the general shape was still much like that of a simple bar magnet, there were noticeable lumps in it that followed geological changes in surface rock conductivity and subsurface irregularities reaching all the way down to the core of the Earth itself. By the 1930s, physicists Sydney Chapman and Vincenzo Ferraro had mathematically described the impact that an “intermittent” solar wind would have upon the Earth’s magnetic field. It was a staggering tour de force, linking together many separate geophysical systems and phenomena. The compression of the sunward side of the field would eventually lead to the amplification of a powerful ring of current flowing in the equatorial zone. Auroras had been studied meticulously since the nineteenth century and eventually gave up their quantum ghosts once the spectroscope was invented. Something was kicking the atmospheric atoms of oxygen and nitrogen so that they glowed in a handful of specific wavelengths of light. Through the rather contentious technical debates that began with Kristian Birkeland in 1896 and ended with Hannes Alfvén in the 1950s, the general details of how auroras are produced came into clearer view. Some process in the distant geotail region was accelerating currents of electrons and protons along polar magnetic field lines. Within minutes, the currents dashed against the atmosphere and gave up their billions of watts of energy. There was, however, no detailed mathematical model that could recover all the specific shapes and forms so characteristic of these displays. This much was certain, however: we were living inside the equivalent of a TV picture tube, and the electron beams from the distant geotail region were drawing magical shapes on the phosphor screen of the sky.
The dawn of the Space Age brought with it an appreciation of most of the main ingredients of the complete geospace environment. All that seemed to be lacking in moving the frontier forward was more data to describe the geospace system in ever more detail. New rounds of complex equations needed to be fed still more detailed data to keep them in harmony with the real world. Space physics had reached a watershed moment where mathematically precise theories were sorely in need of specific types of data to help them further evolve. One small step along this way was to create a series of “average” models of the particles and fields in geospace.
NASA became a leader in developing and refining models of the Earth’s environment through the Trapped Radiation Environment Modeling Program (TREMP) in preparation for the Apollo moon landings. The models combined measurements made by dozens of satellites, such as Telstar and Explorer, and even instruments carried aboard the Gemini spacecraft. They didn’t attempt to explain why conditions were what they were, or how they got that way. Unlike the specific theories of the Sun-Earth system and its various components, TREMP models such as AE-8 and AP-8 were merely statistical averages of conditions measured in space at different localities, and only during solar maximum and solar minimum. They could not predict conditions that were not already captured in the smoothed averages. The models did not include solar flares or other short-term, unpredictable events that can substantially increase accumulated radiation dosages. This was the best that could be done by the 1970s, and it is amazing that these models are still in wide use over thirty years later. Although they are adequate for designing satellite radiation shielding, they are useless for forecasting when the next storm will arrive. Some researchers don’t even think they are all that useful for high-accuracy satellite shielding design.
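In spirit, a statistical model of this kind amounts to little more than a lookup table of long-term averages with interpolation in between. The Python sketch below is meant only to illustrate that idea: the flux numbers are invented placeholders, not values from AE-8 or AP-8, and the interface is made up for this example.

    # A toy illustration of what a statistical trapped-radiation model amounts
    # to: a table of long-term average fluxes indexed by location (here just
    # the McIlwain L-shell) and by solar-cycle phase, with interpolation in
    # between.  All numbers are invented placeholders, NOT AE-8/AP-8 values.
    from bisect import bisect_left

    TABLE = {
        "solar_min": [(1.5, 1e6), (3.0, 1e7), (4.5, 5e6), (6.6, 1e6)],
        "solar_max": [(1.5, 2e6), (3.0, 8e6), (4.5, 4e6), (6.6, 8e5)],
    }  # (L-shell, placeholder average flux) pairs for one energy threshold

    def average_flux(l_shell, phase="solar_min"):
        """Linearly interpolate the tabulated long-term average flux."""
        table = TABLE[phase]
        l_values = [l for l, _ in table]
        i = bisect_left(l_values, l_shell)
        if i == 0:
            return table[0][1]
        if i == len(table):
            return table[-1][1]
        (l0, f0), (l1, f1) = table[i - 1], table[i]
        return f0 + (l_shell - l0) / (l1 - l0) * (f1 - f0)

    # The model can only return a smoothed average for the two tabulated
    # solar-cycle phases; short-lived storm enhancements are invisible to it.
    print(average_flux(4.0, "solar_max"))

Because such a table holds only smoothed averages for two phases of the solar cycle, a flare that briefly multiplies the real flux simply does not exist as far as the model is concerned, which is exactly the limitation described above.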