6
Convergence: 1995–2015

During the first decade after the Global Positioning System achieved its full operational capability, from 1995 to 2005, it found increasing use among the branches of the military, scientists, surveyors, and recreational hikers and boaters. In the second decade, from 2005 to 2015, it burst into public consciousness and found uses hardly imagined by its creators. That second decade also saw an effort by Russia to improve its Soviet-era GLONASS system and by the European Union, China, India, and Japan to accelerate their own satellite navigation systems. To understand why that happened, we need to go back some 40 years to the early 1970s, when GPS was first conceived, and examine the technological context of that invention.

The Internet

The basic architecture of what became GPS arose from a series of meetings in the Pentagon and neighboring Arlington, Virginia, in the summer and fall of 1973. At the same time, in another office in the Pentagon, another advanced technological system was being designed—the Advanced Research Projects Agency Network, or ARPANET. Sponsored by the Defense Department’s Defense Advanced Research Projects Agency (DARPA), the ARPANET was intended to network computers to one another.1 Messages were transmitted over a rudimentary network by 1969, and in October 1972, DARPA gave a dramatic demonstration of ARPANET’s capabilities at the first International Conference on Computer Communication, held in a Washington, DC, hotel.2 The network had been in operation for a few years, but many in the computer and telecommunications fields regarded it as an interesting research project and not much more. The October demonstration convinced the attendees that the ARPANET was in fact a functioning and potentially useful system. By the summer of 1973, the ARPANET’s designers were working on ways to interconnect disparate networks of computers—in effect, to create an “internet.” A lot of this work was carried out at DARPA’s Information Processing Techniques Office in Arlington, Virginia, just north of the Pentagon. In September 1973, Stanford professor Vint Cerf and Robert Kahn of the Information Processing Techniques Office convened a meeting in Palo Alto, California, where they and their colleagues developed a set of protocols that replaced the more restricted protocol of the original ARPANET.3 (Protocols are an accepted set of standards that govern the transmission, reception, and verification of messages sent over a network.) The result, eventually called the Transmission Control Protocol/Internet Protocol, or TCP/IP, was accepted by DARPA. In 1983, the same year that President Reagan announced the availability of GPS to civil users, the ARPANET replaced its initial protocol with TCP/IP. TCP/IP remains the foundation of today’s Internet.

Mirroring the Defense Department’s concerns about civilian access to GPS, the military sponsors of the ARPANET were concerned about its numerous civilian uses. In 1983, the network was split, with a separate and tightly controlled “MILNET” spun off. The original ARPANET continued as a research-oriented network with fewer controls.4 The basic concepts of packet switching and routing were applied to a variety of military needs, including systems used in the Persian Gulf War of 1991.

The Microprocessor

In 1965, Gordon Moore, an engineer working at Fairchild Semiconductor in Mountain View, California, produced a simple plot of the increasing density of active digital devices that one could place on a single integrated circuit and predicted that such density would double approximately every year into the foreseeable future.5 In the decade that followed, many computer engineers kept a piece of semi-log graph paper at hand, on which they reproduced this curve, predicting the number of transistors one could expect a chip to contain at any given time. The doubling time later stretched to about 18 months, where it remained well into the twenty-first century. By about 1971, Moore’s Law predicted that a single computer chip could contain about 3,000 active circuits, the same number of vacuum tubes used in the earliest commercial computers. Placing all the functions of a digital computer on a single chip of silicon was not easy, but it was done, by engineers at the Intel Corporation, at Texas Instruments, and by engineers working on avionics for the Grumman F-14 jet fighter. The consensus among historians is that Marcian E. Hoff, along with Federico Faggin and Stan Mazor of Intel, made the key breakthrough of translating the implications of Moore’s Law into a practical product, known as the microprocessor. The public knows this invention as the heart of the personal computer revolution of the 1980s, which was followed by further advances in computing and communications leading up to the now-ubiquitous smartphone.

The microprocessor is also at the heart of satellites and other space-based systems, including the user segment of GPS. A GPS receiver has to carry out a myriad of calculations to obtain even a simple fix. It has to determine which satellites are in view, determine their positions and orbits, receive the time signals, decode and slew the pseudo-random number sequence, iterate the solution of the time-distance equation, calculate accurate time, translate the results into graphical form, integrate that with mapping software, and so on—all in “real time.” The microprocessor allows a designer to carry out those functions in software. That does not mean the equations are easier to solve, but it does mean that one does not need to design customized electronic circuits for each of the functions mentioned above. The jargon favored in technical circles is “software-defined radio”: the concept of having a microprocessor do nearly all of the work that previously required dedicated circuits to receive and demodulate a signal. Software-defined radios consist of an antenna, a radio-frequency amplifier, and an analog/digital converter. No more dedicated circuits: almost everything else is handled by a microprocessor.
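To make that computation concrete, the sketch below shows, in simplified form, the kind of calculation a receiver’s microprocessor performs in software: an iterative least-squares solution for position and receiver clock bias from four or more pseudoranges. The satellite coordinates and pseudoranges here are synthetic, chosen only to illustrate the iteration; a real receiver derives them from the decoded navigation message.

```python
# A minimal sketch (not flight code) of the iterative least-squares fix a GPS
# receiver's processor computes in software: position plus receiver clock bias
# from four or more pseudoranges. Satellite data are synthetic.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_fix(sat_pos, pseudoranges, iterations=10):
    """Gauss-Newton iteration on the linearized pseudorange equations.
    Returns receiver ECEF position (m) and clock bias (s)."""
    x = np.zeros(3)   # crude initial guess: the center of the Earth
    b = 0.0           # receiver clock bias, seconds
    for _ in range(iterations):
        ranges = np.linalg.norm(sat_pos - x, axis=1)
        residuals = pseudoranges - (ranges + C * b)
        # Jacobian: d(rho)/dx = -(unit vector toward satellite), d(rho)/db = c
        H = np.hstack([-(sat_pos - x) / ranges[:, None],
                       np.full((len(ranges), 1), C)])
        delta, *_ = np.linalg.lstsq(H, residuals, rcond=None)
        x += delta[:3]
        b += delta[3]
        if np.linalg.norm(delta[:3]) < 1e-4:   # position update below 0.1 mm
            break
    return x, b

if __name__ == "__main__":
    # Synthetic constellation: six satellites at roughly GPS orbital radius,
    # all above the horizon for a receiver near 39 degrees north latitude.
    dirs = np.array([[0.2, -0.7, 0.7], [0.7, -0.5, 0.5], [-0.4, -0.6, 0.7],
                     [0.1, -0.9, 0.4], [0.5, -0.8, 0.3], [-0.2, -0.5, 0.8]])
    sat_pos = 26_560e3 * dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    truth = np.array([1113e3, -4843e3, 3983e3])   # point near Earth's surface, ECEF (m)
    bias = 1e-3                                   # 1 millisecond receiver clock error
    rho = np.linalg.norm(sat_pos - truth, axis=1) + C * bias
    position, clock_bias = solve_fix(sat_pos, rho)
    print("position error (m):", np.linalg.norm(position - truth))
    print("recovered clock bias (ms):", clock_bias * 1e3)
```

Starting from a crude initial guess, the iteration converges in a handful of steps, which is one reason a modest processor can deliver fixes in real time.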

The Cell Phone

A final event took place at this time, one that, in retrospect, completed the ensemble of technical developments associated with civilian access to positioning data. In April 1973, a Motorola employee named Martin Cooper made what has been called the first cell phone call from a handheld phone in midtown Manhattan.6 His phone was heavy and bulky—commercial versions were called “the brick”—but the call went through. Mobile communication, previously restricted to the “walkie-talkie” used by soldiers, was soon available to civilians worldwide. Scholars have studied the social phenomenon of how the cell phone has become a constant companion. What is seldom noted is that cell phone service requires handing calls off from one cell to another as the user moves about. That cannot be done without accurate time to synchronize the handoff. Martin Cooper did not need that for his first cellular call, but now that so many people carry and use cell phones every day, the handoff is critical. In what may be one of GPS’s most valuable consumer applications, the time is supplied by GPS receivers mounted on the cell towers.

Thus the early 1970s meetings in Washington, DC, and northern Virginia, which established the architecture of GPS, took place in the context of a series of other technological developments, the sum of which defines social, political, economic, and cultural life in the twenty-first century. Of those events, only the development of GPS was directly space-related, but all four are integral parts of today’s technoscientific world. It is impossible to pull out the individual threads without unraveling the entire fabric.7 These events were not well publicized outside of engineering circles. For the public, the early 1970s was hardly a time of optimism about technology. The technological euphoria that accompanied the Apollo expeditions to the moon quickly evaporated. Beginning in October 1973, the Organization of Petroleum Exporting Countries voted to restrict oil exports to the United States, leading to long lines of angry Americans waiting to buy gasoline for their automobiles. But even as drivers waited in those lines, changes were in the air.8

Drones

It would be impractical to enumerate all the ways this ensemble of technological advances has found its way into modern life. Any attempt is likely to be rendered obsolete by the time this book is placed in the hands of a reader.9 Yet there are examples: applications of GPS that seem to usher in a new era. The first of these is the resurrection of an old idea that had languished for decades in the aviation world and has now found itself front and center in current discussions of military and commercial aviation, as well as transportation in general: the rise of unmanned aerial vehicles (UAVs), or drones. (The preferred gender-neutral term is unpiloted aerial systems, a term that also acknowledges the complex electro-optical devices installed on them.) Rather than attempt to follow the daily progression of news about these devices, we focus on the UAV that triggered the renewed interest, namely the General Atomics MQ-1 Predator.

The Predator had a difficult time being accepted by the armed services, in part because of the disappointing performance of its immediate predecessor, the Lockheed MQM-105 Aquila. The Aquila had a 3.9-meter wingspan, was launched from a catapult, and was recovered by flying into a net. Its mission was to locate and designate a target with a laser, allowing a laser-guided bomb to destroy it.10 DARPA began development of the Aquila in the early 1970s. As the project transitioned into a production version, it performed poorly. The project was canceled in 1987, and its failure poisoned discussion of UAVs in the armed forces for years afterward.11 The Aquila did not have GPS guidance, but that was not the sole reason for its shortcomings. Likewise, the inclusion of GPS in the Predator was not the sole reason for its success, but the Predator’s guidance system was a major improvement. In particular, an onboard GPS receiver, coupled with satellite links to ground controllers, either nearby or across the globe at CIA headquarters in Langley or at air force bases in the United States, allowed the Predator to fly beyond the visual line of sight, to photograph areas of interest, and, later on, to deliver Hellfire missiles to precise targets.

Like the Aquila, the Predator had a long period of development, testing, and refinement. It was larger and heavier—the Aquila resembled a model airplane, but the Predator had dimensions comparable to the popular Cessna 150 light aircraft. It used composite structures that significantly reduced weight and increased its endurance. The powerplant, a four-cylinder engine that had been used on ultralight aircraft, “buzzed like a big mosquito.”12 That sound may be the origin of the term “drone” for UAVs in general, although there is some disagreement about that. Its most distinctive feature was a bulbous nose, housing a variety of electro-optical systems and a satellite communications antenna.

Just as the half-finished GPS system was celebrated during Operation Desert Storm, so too were the just-acquired Predators celebrated during their use over the skies of Bosnia in 1995. They were able to loiter over targets in Bosnia and relay video imagery in real time to a ground base in Albania. James Woolsey, Director of Central Intelligence, monitored the video feed from CIA headquarters in Virginia. Two Predators were lost in the early phase of deployment, but no lives were lost and no crew members captured. In 2002, a Predator was equipped with Hellfire missiles, which the CIA used in an offensive against Al Qaeda in the aftermath of the September 11, 2001, terrorist attacks. The remote targeting inaugurated a new—and controversial—era of remote-control warfare.13

The print and television news media covered these events in detail. Media coverage of the GPS component of the weapons implied that GPS was crucial to the Predator’s success where so many previous UAVs had failed. The remote targeting of suspected terrorists by later-generation Predators led to debates over the ethical uses and abuses of GPS, debates which continue to the present day.14 Since 2005, drones of all shapes and sizes have appeared, with much of the current emphasis on civilian drones used to deliver packages, mail, and even passengers—reviving the decades-old search for a “flying car,” which this time will succeed, its creators claim, because it will be flown autonomously and not require a skilled pilot.

The Smartphone

On January 9, 2007, Apple Computer CEO Steve Jobs announced the iPhone, a device he correctly predicted would revolutionize mobile computing and communicating. Jobs had negotiated an agreement with AT&T to be the sole provider of cellular service for the phone through 2009. AT&T’s service operated on the GSM standard. Initially the phone did not have a GPS receiver installed, although it did have a number of other features, the combination of which gave the iPhone its revolutionary place in the history of modern technology. Later models did have a GPS chip installed and could access CDMA as well as GSM carriers.

Jobs did not invent the smartphone. Apple’s product had many antecedents. Some, like the Research In Motion BlackBerry and the Palm Pilot, were successful products with a large and enthusiastic customer base. Other products, including Apple’s own Newton, were failures.15 Yet Jobs did indeed make a major contribution. It was not just that the iPhone integrated a number of computing and communication features into a compact package; it was also the way Apple managed that integration in a way that made it seamless to the consumer. Apple learned from the failure of the Newton, as well as the successes of other devices, and got everything right with the iPhone. The iPhone tapped into a deep social need for a device to manage the complexities of modern life.

The strongest case for the emergence of that social need came from a device that was not even electronic: the Filofax personal paper organizer. The Filofax consisted of a six-ring loose-leaf binder, often covered in expensive leather, which contained pages for a calendar, maps, addresses, and other data customizable by its user. It had been invented in the early twentieth century, but it suddenly became a must-have accessory in the 1980s, around the time when the news media took note of the “young urban professional” phenomenon.16 The Filofax was a British product, and its main competitor in the United States was the more prosaic Day-Timer, which had a similar following. In 1985, social critic Ed Tenner observed, “Americans will buy $300 million worth of datebooks this year.”17 He noted the availability of “more than 400” inserts, including specialized maps, “species checklists for birdwatchers … golf scores, and horses’ stud records.” The parallels with the apps now available on a smartphone are obvious. As with smartphones today, people became obsessed with them.18 Tenner observed the obvious: people carried lists of Parisian five-star hotels and restaurants in their organizers whether they could afford to visit them or not. They enjoyed the psychological high of knowing that all of this information was in their hands. Among the inserts one could buy were maps of the major cities of the world, with accompanying restaurant guides, airline information, and other travel data. That foreshadowed the day when social sites like Yelp would integrate satellite positioning and mapping software with reviews of businesses, especially restaurants. By the end of the 1990s, the Filofax fad had peaked. The later integration of satellite positioning into electronic devices built on what was already a strong social need.

If the only way that GPS receivers displayed data were latitude and longitude (never mind Universal Transverse Mercator ticks), the average person would hardly know of the system’s existence. The difference is that those data are combined with information on traffic conditions, weather, location, and reviews of nearby restaurants and businesses; graphical advice on how to get to one of those places from wherever one is; the location of, and the ability to call, a re-engineered taxi service (Uber, Lyft); and so on. Unlike taxi drivers of the past, who had to memorize the street patterns of the cities they operated in, an Uber driver simply places the smartphone on a cradle on the dashboard and lets the phone’s mapping software plot a route to the customer’s destination. Before long, the driver will no longer be needed. A smartphone or a navigation device installed in an automobile not only integrates GPS with those other data; it will also select Russian, European, and Chinese positioning signals if they are stronger. By nature, GPS signals are not readable in most buildings or under dense tree cover, but one’s location can be derived from other signals also present on the phone: from cell phone towers, Wi-Fi, inertial sensors, and even an onboard magnetic compass.
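As a toy illustration of that blending (not a description of any particular phone’s firmware), the sketch below combines position estimates from several sources, weighting each by its estimated uncertainty. The source labels and numbers are invented.

```python
# A toy sketch of blending position sources by inverse-variance weighting.
# The sources and their uncertainties below are invented for illustration.
import numpy as np

def fuse_positions(estimates):
    """estimates: list of (position_xy_m, sigma_m). Returns the weighted mean,
    where each source is weighted by 1/sigma^2."""
    positions = np.array([p for p, _ in estimates], dtype=float)
    weights = np.array([1.0 / s**2 for _, s in estimates])
    return (weights[:, None] * positions).sum(axis=0) / weights.sum()

if __name__ == "__main__":
    sources = [
        ((10.0, -4.0),    5.0),   # GNSS fix, roughly 5 m uncertainty
        ((25.0, 12.0),   30.0),   # Wi-Fi positioning, roughly 30 m
        ((-40.0, 60.0), 300.0),   # cell-tower estimate, roughly 300 m
    ]
    print("fused position (m):", fuse_positions(sources).round(1))
```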

Tracking and Privacy

Smartphones know where their users are, where they are going, and what is nearby: businesses, traffic, restaurants, and so on. They do that by a combination of techniques. One of the basic requirements of GPS was that it be passive: users do not transmit any signals to use it, allowing soldiers on a battlefield to know their location without revealing that location to an enemy.19 A device that transmits signals also needs more electrical power and therefore larger and heavier batteries. Finally, a system that transmits, such as the distance-measuring equipment used by aircraft, implies an upper bound on the number of users, beyond which the radio spectrum becomes clogged. We see this effect at gatherings of large crowds for sporting events or political demonstrations, where cell phones stop working. By contrast, the number of GPS receivers is unlimited, just as there is no limit to how many people can tune in to a local radio or television station. Note also that systems like Spotify or Netflix, which stream audio and video over the Internet, do require additional infrastructure as more subscribers log on.

A cell phone’s position is found from the strength of its signals as measured against nearby cell towers, which contain GPS receivers and whose positions are known. By design, the phones have to transmit, revealing their location, so that the carrier can route a call to them (note how the area code may no longer indicate the geographical location of the phone). Cell phone technology is thus an active system that uses battery power and bandwidth. Since GPS signals are too weak to work indoors, GPS alone cannot provide complete location information. Cell phones work in most of the populated areas of the United States and along interstate highways, but they do not provide total global coverage. Cell phone triangulation is further augmented by triangulation with Wi-Fi networks, which are increasingly common inside buildings and homes, although they offer little coverage out in the open. These two techniques do not provide the high accuracy or global coverage of a GPS system augmented by differential stations, but they are adequate for many purposes. And when one ventures outdoors, GPS takes over.
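The sketch below illustrates the principle with invented tower coordinates and a generic log-distance path-loss model: received signal strength is converted to a rough range from each tower, and the ranges are intersected by least squares. Real networks use more refined timing and calibration methods, so this is only a schematic of the idea.

```python
# A minimal sketch of coarse positioning from cell-tower signal strength.
# Tower coordinates, RSSI values, and the path-loss parameters are invented.
import numpy as np

def rssi_to_range(rssi_dbm, ref_dbm_at_1m=10.0, path_loss_exp=3.0):
    """Convert RSSI to an approximate distance (m) with a log-distance
    path-loss model; ref_dbm_at_1m is the assumed signal level at 1 m."""
    return 10 ** ((ref_dbm_at_1m - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(towers, ranges):
    """Linear least-squares multilateration in a local 2-D plane,
    obtained by subtracting the first circle equation from the others."""
    p1, d1 = towers[0], ranges[0]
    A = 2 * (towers[1:] - p1)
    b = (d1**2 - ranges[1:]**2
         + np.sum(towers[1:]**2, axis=1) - np.sum(p1**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

if __name__ == "__main__":
    towers = np.array([[0.0, 0.0], [2000.0, 0.0], [1000.0, 1800.0]])  # meters
    rssi = np.array([-80.0, -83.8, -82.6])   # consistent with a phone near (800, 600) m
    ranges = rssi_to_range(rssi)
    print("estimated ranges (m):", ranges.round(1))
    print("estimated position (m):", trilaterate(towers, ranges).round(1))
```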

In the early 1990s, Vic Hayes, an engineer working for the National Cash Register Company, developed Wi-Fi to allow sales terminals to be flexibly moved around a store or shopping mall without having to install or remove cabling. It has become a common method of connecting to the Internet for many consumers. Wi-Fi uses the same direct-sequence spread-spectrum coding that GPS satellites use, for many of the same reasons.20 Like GPS, Wi-Fi is free or available for a modest charge, or is advertiser-supported.21 Cell phone contracts can be expensive, especially if one uses the phone outside of Wi-Fi range for Internet access.

By their nature, Wi-Fi and cell phone systems allow one to be tracked by local or federal law enforcement, corporations, governments, or anyone who has the necessary equipment. That has led to studies among civil liberties groups and constitutional law experts over the ethics of tracking people, especially with regard to the Fourth Amendment prohibition against “unreasonable searches and seizures.” These issues are important and worthy of debate. They have technical as well as social and legal dimensions. Prior to the advent of cell phones, the United States expended a great deal of effort to implement a 911 emergency calling system. It has worked well and saved many lives. In recent decades, however, the number of people who have eschewed a wired phone for a mobile device has grown. The traditional 911 systems were unable to track those calls, and someone calling from, say, a car accident along a remote highway could not be expected to know their location. That led the Federal Communications Commission (FCC) to implement requirements that allow a 911 dispatcher to locate a caller who is not calling from a wired phone.22 Those rules were first adopted in 1996, when the issue first began to surface, and have since been revised and extended. The FCC discussion references an ability to locate a phone indoors or out in the open, as well as the planned use of barometric data to determine altitude—if a person is calling about, say, a fire in a high-rise building. The FCC recognized the difficulty of implementing full tracking ability, but it set a timetable for implementation, with a goal of providing horizontal accuracy to within 50 meters by 2021.23 Determining a caller’s altitude is still under development.
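The barometric idea the FCC references can be illustrated with the standard-atmosphere formula: a pressure reading from the phone is converted to an approximate altitude, and the difference from street-level pressure suggests how high in a building a caller is. The pressure values below are invented.

```python
# A minimal sketch of barometric height estimation using the standard
# atmosphere approximation. Pressure readings here are illustrative only.
def pressure_to_altitude_m(p_hpa, p0_hpa=1013.25):
    """Approximate altitude (m) above the p0 reference pressure."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** 0.1903)

if __name__ == "__main__":
    ground = pressure_to_altitude_m(1008.0)   # assumed pressure at street level
    caller = pressure_to_altitude_m(1002.5)   # assumed pressure reported by the phone
    height = caller - ground
    print(f"height above street: {height:.0f} m, roughly floor {height / 3.0:.0f}")
```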

One of the initial objections to GPS from the aviation community was the sometimes-long time it took for a GPS receiver to locate the satellites in view and obtain a fix. Under optimum conditions that could take a few minutes, and if one brought a receiver to a distant location—say, across the United States—and did not use it en route, it could take much longer. Cell phone designers recognized this issue and responded with a hybrid circuit that took location data triangulated from the nearby cell towers and used that information to tell the onboard GPS receiver what satellites to look for in its current location. Beginning with the iPhone 4S and Samsung Galaxy Note, in 2011, phones were supplied with what the manufacturers called assisted GPS, which reduced the time-to-first-fix to a few seconds.24 That enhancement was to be welcomed, but note that it overrides the passive nature of GPS use. Users who want to reduce the time-to-first-fix but do not wish to be tracked can use a dedicated GPS unit in place of a phone and input an estimate of latitude and longitude manually from a local paper map.
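The sketch below illustrates the assisted-GPS idea in simplified form: given a coarse position from the cellular network and satellite positions computed from a stored almanac (assumed here to be already available as Earth-centered coordinates), the receiver can limit its search to satellites above the local horizon. The coordinates are synthetic.

```python
# A minimal sketch of the assisted-GPS idea: use a coarse network-derived
# position to decide which satellites are worth searching for. Satellite
# positions are synthetic stand-ins for almanac-derived values.
import numpy as np

def visible_satellites(receiver_ecef, sats_ecef, mask_deg=10.0):
    """Return indices of satellites above the elevation mask, using a
    spherical-Earth approximation for the local 'up' direction."""
    up = receiver_ecef / np.linalg.norm(receiver_ecef)
    los = sats_ecef - receiver_ecef
    los /= np.linalg.norm(los, axis=1, keepdims=True)
    elevation = np.degrees(np.arcsin(los @ up))
    return np.where(elevation >= mask_deg)[0], elevation

if __name__ == "__main__":
    coarse_pos = np.array([1113e3, -4843e3, 3983e3])   # rough fix from cell towers, m
    dirs = np.array([[0.2, -0.7, 0.7],
                     [0.0, 0.9, -0.4],
                     [0.5, -0.8, 0.3],
                     [-0.9, 0.1, -0.4]])
    sats = 26_560e3 * dirs / np.linalg.norm(dirs, axis=1, keepdims=True)
    in_view, elev = visible_satellites(coarse_pos, sats)
    print("satellites worth searching for:", in_view)
    print("elevations (deg):", elev.round(1))
```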

The issue of tracking is a complex one. The popular press does not fully understand the nature of satellite positioning and its relation to other electronic devices that pervade modern life. One often hears stories of a person getting lost or attempting to drive across an abandoned highway or bridge because the driver was following GPS. One example, amusing in retrospect, described a driver who turned onto a popular Arlington, Virginia, rail-trail, scattering cyclists and hikers. The driver was apprehended by a police officer and cited for reckless driving. The headline in the local newspaper reported, “EXCLUSIVE: GPS Sends Florida Driver Down W&OD Trail.”25 Similar stories appear in the newspapers frequently. This one is ironic, as the incident happened midway between the Pentagon and the Spring Hill Motor Lodge, where the architecture of GPS was forged 40 years earlier. That error was not due to a problem with GPS, but that is typically the way such events are reported. For the public, GPS has come to symbolize all that has been happening with surveillance and the loss of privacy in the twenty-first century. In other words, there is the Global Positioning System—a constellation of satellites that gives position and timing information, and nothing else. And there is “GPS”: a term that symbolizes the loss of privacy, loss of spatial awareness, loss of map-reading and related skills, and fears of government and corporate intrusion into one’s private life. Citizens are understandably concerned about these issues. It is worth informing the public of how the Global Positioning System is but one of many factors that have brought this situation about.

Threats to Satellite Positioning

“Put all your eggs in the one basket and—WATCH THAT BASKET.”

—Mark Twain, Pudd’nhead Wilson’s Calendar (1894)

As satellite positioning systems mature and produce better and better accuracies, pressure mounts to reduce or eliminate competing systems that are expensive, hard to use, or less accurate. That raises the question of what to do if GPS or its European and Asian counterparts stop working. Is it a good idea to follow Pudd’nhead Wilson’s dictum and put all our navigation eggs in the GPS basket? Table 6.1 lists various techniques of navigation that the introduction of satellite positioning systems has either rendered obsolete, caused to be shut down, or relegated to secondary status.

Table 6.1 Status of Existing Positioning, Timing, and Navigating Systems
Technique | Use | Current Status
Celestial navigation, with sextant, chronometer, and astronomical tables | Traditional seafaring; transoceanic air navigation. | Navies retain the skills. Military and space systems use automated star-trackers. Secondary status for commercial and military ships.
Orienteering: finding one’s way on land using maps, a barometric altimeter, and a compass | Classic use by armies. In the United States, drivers once relied on accurate and free road maps issued by states and oil companies. | Armies retain the skills. Recreational use. Automobile drivers no longer use road maps.
Transit | Once-widespread worldwide military and civil use. | Shut down.
Omega | Past worldwide use by ships and transcontinental aircraft. | Shut down.
LORAN-C | Major air and shipping routes, mainly across oceans. | US system shut down.
VHF omnidirectional range (VOR); distance-measuring equipment (DME) | Local air navigation over land. | Still in widespread use.
Microwave landing system (MLS) | Precise landings in all weather. | Future is uncertain.
Nondirectional beacon | Simple radio beacons for air navigation. | Being shut down.
Radar, onboard or ground-based | Harbors, inland waterways, airports. Military uses. | In extensive use. Its capabilities cannot be replicated by GPS.
Inertial navigation | Ballistic missiles, other weapons systems. Commercial long-range navigation. | Extensive military use. Resistance to jamming ensures future applications.
Ground-based air traffic control, combining radar, radio, visual sighting, database software, etc. | Backbone of commercial and general aviation in the United States. | Automatic dependent surveillance–broadcast (ADS-B), a.k.a. NextGen, planned for replacement.
Maintaining and transmitting precise time and frequency standards | Vital for banking, commerce, transportation, radio and television broadcasting, etc. | GPS now standard. WWV radio station in secondary use.

Of the major navigation systems listed above, the biggest debate has concerned the future of LORAN. The United States decommissioned LORAN-C in 2010, although several foreign systems remain in use. LORAN’s advocates have pressed for an enhanced, evolved “eLORAN” to serve as a backup to GPS. It uses some of the advanced timing and data structures of GPS, but it is an extension of the existing LORAN system, with its chains of ground-based, high-powered, low-frequency transmitters. Opponents of eLORAN cite its high cost, imperfect accuracy, and lack of three-dimensional positioning. More recently, opponents argue that GLONASS, Galileo, and other satellite-based systems would serve as a backup to GPS if the latter were disrupted. However, all satellite systems share a common basic architecture, so the possibility remains of a “common-mode” failure, such as a major solar storm that affects the signals from all the satellites, which use frequencies in a common band. As of this writing, it is not clear which side will prevail.26

What are the major threats to these systems? There are several, some serious. NASA has long classified problems into several major categories, especially these two: “known-unknown” and “unknown-unknown.” In the first case, you know you have a problem, but you do not know how to fix it. In the second case (sometimes pronounced “unk-unk”), you have a problem but you do not even know you have a problem. The following threats to GPS are known. They are serious, but at least we know what they are. We do not know the solutions to these problems yet, but we can develop plans to solve them. (The “unk-unks” are the more serious ones; by nature, we cannot know what they are until they manifest themselves.)

Physical Attacks on the System

The American and Chinese militaries have both demonstrated an ability to destroy or disable a satellite in low-earth orbit by hitting it with another spacecraft. Destroying a satellite at a 20,000 km altitude is more difficult, and because there are backups already in those orbits, an enemy would have to destroy several to disable the entire system. That would attract international attention and could not be done in secret. Based on the experience thus far of kinetic collisions between satellites, it is likely that no spacefaring nation would take that route. An enemy might disable a GPS satellite by focusing a laser or other beam on it, but again that would be hard to do undetected, and the system retains a robust number of backup satellites in those orbits.

The control segment for GPS, located at Schriever Air Force Base in Colorado Springs, Colorado, is secure and well defended. A physical attack on it, say by an intercontinental ballistic missile, is possible but would not go unanswered. The United States maintains backup control stations, and new-generation GPS satellites can transmit accurate orbital and timing data for long periods without needing input from the ground as frequently as before.

Solar Radiation

The sun follows an 11-year cycle of sunspots. The last solar maximum was in 2014, and the next is predicted for 2025. During a peak, radio communications on Earth are affected. The cycles are irregular, and the last peak was historically low, with fewer sunspots than had been recorded in other cycles. Satellite designers have enjoyed the luxury of orbiting their equipment in a quiet space environment for the past 15 years. Nevertheless, they must protect the electronics in satellites even during a quiet period of solar activity. That is especially true of GPS satellites, whose orbits lie within the outer Van Allen radiation belt. Will the next cycle bring more intense radiation? That may be classified by NASA as an “unknown-known”: we do not know how intense the next cycle will be, but we do know how to deal with it. The threat to the physical satellites is low, but the sunspots’ effect on the ionosphere can be large, and we have seen that ionospheric effects are a major issue in maintaining the accuracy of GPS timing signals.27

More serious than the sunspot cycle are the occasional eruptions of energetic particles from the sun, known as coronal mass ejections. These consist of not only intense radiation but also particles, both traveling at high velocity from the Sun, typically around 300 km/sec. They cannot be predicted far in advance, but NASA and the European Space Agency have satellites in orbit to detect their occurrence and strength and give advance warning to the GPS control center at Schriever Air Force Base in Colorado to take action to protect the satellites.28 A severe solar storm could cause a lot of damage, not only to GPS but also to the industrial world’s electric power grid, telephone lines, and Internet switching, all of which rely on GPS for timing information. Society’s increasing dependence on complex networks has generated renewed interest in the effects of a solar storm that took place in 1859, the so-called “Carrington Event,” which disrupted what was by today’s standards a modest network of telegraph lines.29 More recent solar flares in 1972 and 1989 caused localized damage to power lines. In October 2016, President Barack Obama issued an executive order entitled “Coordinating Efforts to Prepare the Nation for Space Weather Events.” The order directed the Office of Science and Technology Policy to coordinate efforts with a host of other federal agencies and departments to develop and implement plans to ensure continuity of GPS in the event of extreme solar weather events.30 We do not know when, or whether, these efforts will be sufficient when the next major solar event happens.

Jamming

The most serious threat to the daily operation and use of satellite positioning systems is jamming, whether intentional or inadvertent. By design, spread-spectrum signals are weak. An ordinary radio tuned to the L1 frequency of 1575.42 MHz will play only background noise; the GPS signal is below the noise threshold. In a combat environment, such as that found during the 1991 Gulf War, the GPS signals received by soldiers were competing with signals from their own communication radios, airborne radar, radio transmissions from friendly and hostile vehicles and aircraft, as well as ordinary background electromagnetic noise. Iraqi forces installed jammers at various sites to interfere with the GPS signals.31 The low power of GPS signals is the main factor cited by advocates of eLORAN, whose high-power signals are resistant to jamming.
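A back-of-the-envelope calculation, using typical published values for the civil C/A signal, shows why: the signal arrives well below the thermal noise floor and is recovered only by the processing gain of despreading, which is also why even a weak jammer can be disruptive.

```python
# A back-of-the-envelope sketch, using typical textbook values for the GPS
# C/A signal, of why the signal is invisible to an ordinary radio: it sits
# below the thermal noise floor until the spreading code is stripped off.
import math

received_power_dbm = -128.5      # specified minimum C/A power at Earth's surface
noise_density_dbm_hz = -174.0    # thermal noise density near 290 K
bandwidth_hz = 2.046e6           # C/A-code null-to-null bandwidth
chip_rate_hz = 1.023e6           # C/A spreading-code chipping rate
data_rate_hz = 50.0              # navigation-message data rate

noise_power_dbm = noise_density_dbm_hz + 10 * math.log10(bandwidth_hz)
snr_before_db = received_power_dbm - noise_power_dbm
processing_gain_db = 10 * math.log10(chip_rate_hz / data_rate_hz)
snr_after_db = snr_before_db + processing_gain_db

print(f"noise floor in C/A bandwidth: {noise_power_dbm:.1f} dBm")
print(f"signal-to-noise before despreading: {snr_before_db:.1f} dB")   # about -18 dB
print(f"spread-spectrum processing gain: {processing_gain_db:.1f} dB") # about +43 dB
print(f"signal-to-noise after despreading: {snr_after_db:.1f} dB")     # about +25 dB
```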

A major consequence of jamming has been a renewed interest in inertial navigation, which by nature cannot be jammed and which, like GPS, does not broadcast its user’s position. The hand-tuned, precision gyroscopes of the Carousel and nuclear submarine era have given way to compact and rugged gyroscopes and accelerometers that integrate seamlessly with the silicon integrated circuits of modern electronic equipment. The jargon for this trend is MEMS (short for microelectromechanical systems).32 Research into MEMS began as early as 1975 and subsequently followed the progress of Moore’s Law, although with less publicity. Among those responsible for its development were engineers at Draper Laboratory in Cambridge, Massachusetts—the lab that had perfected handcrafted mechanical gyroscopes for navigation. In 1991, Ken Gabriel provided support for the Draper Lab research and established a MEMS program at DARPA, which eventually brought the technology into the mainstream.33

A typical smartphone has a three-axis gyroscope and accelerometer installed, at a wholesale cost to the manufacturer of a few dollars. Proponents of inertial navigation are quick to point out that these devices cannot match the accuracy of the mechanical systems that, for example, helped guide Apollo astronauts to the moon and back. One cannot use the accelerometers in a smartphone to guide an intercontinental ballistic missile—yet. In theory, when an automobile goes through a tunnel where there are no GPS signals, the onboard inertial system should take over and continue navigating. In practice, this integration for civilian users has not been fully implemented.
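In outline, such a takeover is straightforward; the sketch below dead-reckons a car’s position through a 30-second tunnel from gyroscope and speed data. It is illustrative only, and it ignores the sensor biases and drift that make the real engineering problem hard.

```python
# A minimal sketch (illustrative only) of 2-D dead reckoning of the kind an
# automobile could use in a tunnel: starting from the last GPS fix, integrate
# a MEMS gyroscope's yaw rate for heading and the vehicle's speed for distance.
import numpy as np

def dead_reckon(start_xy, start_heading, yaw_rates, speeds, dt):
    """Propagate position by simple Euler integration.
    yaw_rates in rad/s, speeds in m/s, dt in seconds."""
    x, y = start_xy
    heading = start_heading
    track = [(x, y)]
    for w, v in zip(yaw_rates, speeds):
        heading += w * dt
        x += v * np.cos(heading) * dt
        y += v * np.sin(heading) * dt
        track.append((x, y))
    return np.array(track)

if __name__ == "__main__":
    dt = 0.1                                  # 10 Hz sensor updates
    n = 300                                   # 30 seconds inside the tunnel
    yaw_rates = np.full(n, np.radians(0.5))   # gentle curve, 0.5 deg/s
    speeds = np.full(n, 25.0)                 # about 90 km/h
    track = dead_reckon((0.0, 0.0), np.radians(45.0), yaw_rates, speeds, dt)
    print("estimated position at tunnel exit (m):", track[-1].round(1))
```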

Jamming occurs elsewhere as well. Employers who wish to keep track of their fleets of trucks or delivery vehicles may encounter resistance from their drivers. GPS receivers in vehicles are typically linked to transmitters that report where the vehicle is, where it is headed, and how fast it is traveling. An early and well-received system of this type was OnStar, developed and first installed in General Motors products beginning in the mid-1990s.34 With OnStar, a driver can call an operations center by pressing a button; the system also automatically places such a call when one of the vehicle’s airbags deploys or when an onboard accelerometer detects a sudden acceleration indicating a crash. (Accelerometers for airbag deployment are a major consumer use of MEMS technology.)
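A toy version of that accelerometer trigger might look like the following; this illustrates the principle only, not OnStar’s actual logic, and the threshold is invented.

```python
# A toy sketch (not OnStar's actual logic) of a MEMS-accelerometer crash
# trigger: flag a crash when the measured acceleration magnitude far exceeds
# anything seen in normal driving. The threshold is an assumed value.
import math

def crash_detected(accel_xyz_g, threshold_g=8.0):
    """accel_xyz_g: one accelerometer sample in units of g."""
    magnitude = math.sqrt(sum(a * a for a in accel_xyz_g))
    return magnitude > threshold_g

if __name__ == "__main__":
    print(crash_detected((0.2, 0.1, 1.0)))    # cruising: False
    print(crash_detected((35.0, 5.0, 2.0)))   # hard frontal impact: True
```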

One can find numerous Internet sites that sell jammers to consumers.35 Many of them block both GPS and cell phone signals and are marketed as such. Fleet drivers who resent being tracked by their employers are among the customers. The websites typically include a disclaimer that the jammers are to be used only where legal. But jammers are illegal everywhere in the United States. The Federal Communications Commission has issued a statement regarding them, referencing the amended Communications Act of 1934:

Federal law prohibits the marketing, sale, or use of a transmitter (e.g., a jammer) designed to block, jam, or interfere with wireless communications. … No person shall willfully or maliciously interfere with or cause interference to any radio communication of any station licensed or authorized by or under … [the Communications] Act or operated by the United States Government.36

Further language states that it is also illegal to advertise the sale or lease of such equipment. In 2012, the FCC took action against the popular website Craigslist.org to stop it from advertising jammers for sale. It is illegal for someone to purchase a jammer from a foreign website, regardless of whether the purchaser activates the device. The FCC does allow limited exceptions for authorized users, or for manufacturers who sell only to those users. Local governments do not qualify. Nor do hospitals, schools, and other public places where cell phone use is prohibited. In spite of this clear language and the severe penalties levied against those caught selling or using them, domestic jammers are common.37 Enforcement by the FCC is vigorous, but as the many websites show, jamming will remain. One major response to this environment is to adjust the parameters of the receiver’s iterative onboard filters: first, to recognize the presence of a jamming signal, and second, to ignore signals that are not coming from the satellites.
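One common detection cue, offered here as a simplified assumption rather than a description of any particular receiver, is that a broadband jammer raises the noise floor on every tracked channel at once, unlike a single weak satellite or a blocked antenna. The sketch below flags that pattern; the thresholds are invented.

```python
# A minimal sketch of flagging jamming from carrier-to-noise density (C/N0):
# a broadband jammer degrades all channels at once. Thresholds are assumed
# values, not those of any real receiver.
import numpy as np

def jamming_suspected(cn0_dbhz, baseline_dbhz=45.0, drop_db=10.0, fraction=0.8):
    """Flag jamming when most channels fall well below their usual C/N0."""
    degraded = cn0_dbhz < (baseline_dbhz - drop_db)
    return np.mean(degraded) >= fraction

if __name__ == "__main__":
    normal = np.array([44.0, 47.5, 42.0, 46.0, 45.5, 43.0])   # dB-Hz, open sky
    jammed = normal - 18.0          # broadband interference hits every channel
    print("normal sky:", jamming_suspected(normal))   # False
    print("jammed sky:", jamming_suspected(jammed))   # True
```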

Spoofing

When the US Coast Guard was developing differential methods of augmenting satellite signals in the early 1990s, the question of spoofing came up. Could a hostile force set up a transmitter to act as if it were a differential unit but deliberately transmit false readings and fool the receivers? Whereas control of the satellites is exercised from a secure Air Force base in Colorado, with numerous checks on the integrity of the signals, differential GPS (DGPS) units are scattered near waterways and harbors. It was impractical to secure them with round-the-clock human monitoring. In early reports on the design of DGPS, the Coast Guard emphasized the system’s integrity as much as its accuracy. That meant having confidence that the received signals were correct. Receivers on ships employed special error-detection and error-correcting codes, as well as enhanced Kalman filters, to achieve this goal. The coding of the signals was designed to be different from the GPS signals, which made them resistant to spoofing. DGPS units near the waterways were monitored by two central control stations, on the East and West Coasts, which would alert users if there was any malfunction or erroneous positioning signal being sent.38 As DGPS spread from waterways to more general use, these security measures were extended and are now in common use. But as with jamming, spoofing techniques have advanced in step with advances in countering them.
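A flavor of such integrity checking can be conveyed with a simple consistency test, illustrative only and not the Coast Guard’s algorithm: reject a new fix that implies an impossible jump from the previous one.

```python
# An illustrative plausibility check, not the Coast Guard's actual algorithm:
# flag a fix as suspect if it implies a speed far beyond what the vessel or
# vehicle could achieve since the last accepted fix.
import math

def plausible_fix(prev_xy, new_xy, dt_s, max_speed_mps=30.0):
    """prev_xy, new_xy: positions in meters in a local plane; dt_s: seconds."""
    dx = new_xy[0] - prev_xy[0]
    dy = new_xy[1] - prev_xy[1]
    implied_speed = math.hypot(dx, dy) / dt_s
    return implied_speed <= max_speed_mps

if __name__ == "__main__":
    last_fix = (0.0, 0.0)
    print(plausible_fix(last_fix, (25.0, 10.0), dt_s=1.0))     # True: about 27 m/s
    print(plausible_fix(last_fix, (900.0, 400.0), dt_s=1.0))   # False: spoofed or faulty
```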

What Comes Next: GPS III, M-Code

The integration of Galileo, GLONASS, and the Chinese BeiDou into modern receivers will help counter these threats. But even this comes with a cost: these systems all operate in the L-band, and their transmissions raise the noise floor, out of which one has to find and decode the navigation signals. Third-generation GPS III satellites, now being deployed, are addressing this issue. They will have additional codes, including a so-called M-code, which will further protect GPS from jammers.

The issue of the cost of the system, which placed the program in peril in the late 1970s, has arisen again. It is a threat as serious as the technical threats to GPS mentioned above. By 2011, the Defense Department estimated that it had spent “more than $24 billion to develop and purchase components of the GPS, including satellites, ground control systems, and receivers.”39 The Congressional Budget Office noted that the costs of implementing the new codes, along with the need to replace military receivers with ones that could access them, plus the costs of modernizing the ground control facilities, were spiraling out of control. An unsigned estimate from DARPA in 2013 stated that “each GPS III spacecraft will cost $500 million for the satellite and $300 million for the launch, compared with $43 million and $55 million, respectively, for the first GPS in 1978” (see figure 18).40 DARPA’s estimates were given in the context of its promotion of a reusable, unmanned space plane, which would lower costs but not have the drawbacks of the Shuttle with its human crew. DARPA’s figures (and the date of the first GPS) do not agree with other figures cited above, but they are close. A subsequent note in Aviation Week printed a disclaimer stating that the estimates were not exact. Part of the reason for the disclaimer has already been mentioned—commercial launch vehicle suppliers do not want cost data to be publicized.


Figure 18 A Delta rocket launching a Block IIR GPS satellite from Cape Canaveral, ca. 1990. The 55° inclination of GPS orbits was dictated in part by the launches from the Cape, for safety reasons. (Photo: US Air Force)

Notes