X-EVENT 1

DIGITAL DARKNESS

A LONG-TERM, WIDESPREAD OUTAGE OF THE INTERNET

BAD SIGNALS

IN THE SUMMER OF 2005, COMPUTER SECURITY CONSULTANT DAN Kaminsky was at home recovering from an accident. While recuperating in a painkiller-induced haze, he began thinking about some Internet security issues he’d pondered earlier, specifically focusing his deliberations on the Domain Name System (DNS) component of the Net. This is the part that serves as a dictionary for translating everyday-language domain names like oecd.org or amazon.com into the numerical Internet Protocol (IP) addresses that the system actually understands and uses to route traffic from one server to another. For some time Kaminsky had felt that there was something not quite right about the DNS system, that somewhere in it there was a lurking security hole that had existed since the system was put in place in 1983, a hole that could be exploited by a clever hacker to gain access to virtually every computer in the entire network. But he could never quite put his finger on what, exactly, the problem might be.

Then in January 2008, Kaminsky hit on the answer. He tricked the DNS server of his Internet provider into thinking he knew the location of some nonexistent pages at a major US corporation. Once the server accepted the bogus page made up by Kaminsky as being legitimate, it was ready to accept general information about the company’s Internet domain from him. In effect, Kaminsky had found a way to “hypnotize” the DNS system into regarding him as an authoritative source for general information about any domain name on the entire Internet. The system was now ready to accept any information he wanted to supply about the location of any server on the Net.
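The mechanics of the trick are easier to see in a toy model. The sketch below is a deliberate caricature, not real DNS: the class, names, and addresses are all invented for illustration, and the resolver's sole authenticity check is the single 16-bit transaction ID that classic DNS relied on. An attacker asks about made-up pages and floods forged answers until one guessed ID happens to match.

```python
import random

TXID_SPACE = 65536  # classic DNS identified replies by one 16-bit transaction ID


class ToyResolver:
    """A drastically simplified caching resolver; real DNS is far more involved."""

    def __init__(self):
        self.cache = {}    # domain -> IP address we now treat as authoritative
        self.pending = {}  # outstanding query name -> transaction ID we expect back

    def send_query(self, name):
        txid = random.randrange(TXID_SPACE)
        self.pending[name] = txid
        return txid

    def receive_response(self, name, txid, claimed_ns_ip):
        # The resolver's only authenticity check: does the transaction ID match?
        if self.pending.get(name) == txid:
            # Accept the bundled "authority" record for the parent domain --
            # the step that poisons the cache for the whole domain at once.
            parent = name.split(".", 1)[-1]
            self.cache[parent] = claimed_ns_ip
            del self.pending[name]
            return True
        return False


def kaminsky_style_attack(resolver, domain, evil_ip, tries):
    """Query random nonexistent pages, flooding a forged guess at each one."""
    for i in range(tries):
        victim = "page%d.%s" % (i, domain)  # a made-up page, as in Kaminsky's trick
        resolver.send_query(victim)
        guess = random.randrange(TXID_SPACE)  # the attacker guesses the ID
        if resolver.receive_response(victim, guess, evil_ip):
            return True  # the cache now routes the entire domain to the attacker
    return False
```

Each miss costs the attacker nothing, since a fresh nonexistent name yields a fresh query to race. That is why the emergency fix of 2008 (randomizing the resolver's source ports to enlarge the guessing space) only postpones the attack rather than eliminating it.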

Kaminsky immediately recognized that he had just entered hacker’s heaven. What he found was not simply a security gap in Windows or a bug in some particular server. Instead his discovery was an error built into the very core of the Internet itself. He could reassign any web address, reroute anyone’s e-mails, take over bank accounts, or even scramble the entire Internet. What to do? Should he try it? Should he drain billions out of bank accounts and run off to Brazil? It’s difficult to imagine being faced with this kind of power over the lives of billions of people worldwide. Maybe he should just turn off his computer and forget it. If what he found was reported in even one blog or website, within seconds unscrupulous hackers around the world would pounce upon it and possibly do irreparable damage to the global economy and to everyone’s way of life.

What Kaminsky actually did was to contact a handful of web security gurus, who then arranged to meet in emergency session as a secret geek A-team. At this meeting, they created a temporary fix for the hole Kaminsky had found to punch his way into the DNS system. But as he put it in concluding an address about the problem given on August 6, 2008, at a hackers’ convention in Las Vegas, “There is no saving the Internet. There is [only] postponing the inevitable for a little longer.”

So it remains to this day. And this is not a Hollywood fantasy scenario either, as a single individual “playing around” in his or her garage has as good a chance to take down a chunk of the Internet as a team of computer specialists at a government security agency. In this game, inspiration and ingenuity are at least as likely to strike a lone individual as to descend upon a group.

Kaminsky’s discovery of a hidden flaw at the very heart of the Internet brings into focus the threat posed by a massive Internet failure to our twenty-first-century lifestyle. From e-banking to e-mail to e-books to iPads and iPods, and on to the supply of electricity, food, water, air and surface transport, and communication—every element of life as we know it today in the industrialized world is critically dependent on the communication functions provided by the Internet. When it goes down, so does our way of life. So when we start talking about a massive failure of the Internet, the stakes are about as high as they can get. And as Kaminsky’s hack showed vividly, this system is far from being immune to a catastrophic breakdown.

Since Kaminsky’s discovery struck at the very core of the Internet, perhaps this is a good time to say a few words about how it was created, as well as a bit about what people were thinking in those days more than half a century ago. Ironically, the system in question was developed to help some of us survive an X-event.

The origin of the Internet dates back to the 1960s when the US government began collaboration with private industry to create a robust, fault-tolerant distributed computer network. What the government wanted was a network that was not rooted at one spatial location and thus would be able to continue functioning even when many of its nodes and/or links were destroyed, temporarily broken, or otherwise out of service. It should come as no surprise to learn that the Cold War mentality of the time was the big motivator for creation of what became the Internet, as the US defense establishment needed a command-and-control system that would remain operational even in the face of an X-event: a full-scale nuclear attack by the USSR.

The communication system originally put into service was termed “ARPAnet,” named for the Advanced Research Projects Agency (ARPA), a blue-sky research arm of the US Defense Department. Commercialization took place in the 1980s, along with the appellation “Internet” replacing the ARPAnet. Since then the capabilities of our communication systems have defined our business structures. Information you can quickly accumulate and process supports the economy by facilitating faster decision making, increased productivity, and thus faster economic growth. Speed and access to information determines today’s customer-business relationship.

The distributed nature of the Internet is reflected in the fact that there is no centralized governing structure that “owns” the Internet; only the two “name spaces” of the system, the Internet Protocol (IP) address space and the Domain Name System (DNS), are governed by a central body.

In essence, then, we have ended up with a communication system, used currently by approximately one-quarter of the people on the planet, that rests on 1970s-style notions of computer networking and computer hardware. Today, the Internet is being used to support services that it was never designed for, as we are converging to a situation in which all data types—voice, video, and verbal information—are being loaded onto it. With this fact as background, it’s no wonder that the technological and lifestyle changes of the last fifty years are putting an ever-increasing strain on the system’s ability to serve the needs of its users. A few eclectic examples will hammer home this point.

Item: In mid-October 2009, a seemingly routine maintenance of the Swedish top-level domain .se went badly off track when all domain names began to fail. Swedish websites could not be reached, Swedish e-mails failed, and even several days later not all systems were fully functional again. The entire Swedish Internet was broken. What went wrong? Investigations suggest that during the maintenance process, an incorrectly configured script, intended to update the .se zone, introduced an error into every .se domain name. But this was only a theory. Another possibility floated at the time was that the Swedish Internet may have crashed when millions of Japanese and Chinese men disrupted the country’s Internet service providers searching for Chako Paul, a mythical lesbian village somewhere in Sweden! So by this hypothesis, the entire network was taken down by Asian men googling a seemingly nonexistent Swedish “village.”

Item: In November 2009, the US television newsmagazine 60 Minutes claimed that a two-day power outage in the Brazilian state of Espirito Santo in 2007 was triggered by hackers. The report, citing unnamed sources, said the hackers targeted a utility company’s computer system. The outage affected three million people, a precursor to a huge blackout in 2009 that took out Brazil’s two largest cities, São Paulo and Rio de Janeiro, along with most of the country of Paraguay. As it turned out, neither of these disruptions appears to have had anything to do with infiltration of any computer system. Rather, the 2007 failure was due to simple human error: faulty maintenance of electrical insulators that allowed them to accumulate so much soot they eventually shorted out. Explanations offered for the much larger power failure in late 2009 are far more interesting, ranging from a major storm destroying power lines at Itaipu Dam, the source of 20 percent of Brazilian electricity (meteorological records show no storm in the vicinity of the dam during the period in question), to renegade Mossad agents hacking into the national grid (the explanation favored by Brazilian president Lula da Silva), to a “butterfly effect” stemming from the shutdown at CERN in Geneva of the Large Hadron Collider during approximately the same period as the blackout, and even to UFOs in the form of an alien mother-ship harvesting electricity from the generating station. In short, nobody knew anything!

Item: On May 17, 2007, the Estonian Defense Ministry claimed the Russian government was the most likely cause of hacker attacks targeting Estonian websites. They said more than one million computers worldwide had been used in recent weeks to attack Estonian sites following removal of a disputed Soviet statue from downtown Tallinn, the Estonian capital. Defense Ministry spokesman Madis Mikko stated, “If let’s say an airport or bank or state infrastructure is attacked by a missile it’s clear war. But if the same result is done by computers, then what do you call it?”

Item: China Telecom reported that according to the China Institute of Earthquake Monitoring, on December 26, 2006, between 8:26 P.M. and 8:34 P.M., Beijing time, earthquakes of magnitude 7.2 and 6.7 occurred in the South China Sea. The undersea communication cables Sino-US, Asia-Pacific Cable 1, Asia-Pacific Cable 2, FLAG Cable, Asia-Euro Cable, and FNAL cable were all severed. The breaks occurred about 15 kilometers south of Taiwan and severely affected international and national telecommunication in neighboring regions for weeks until repairs could be made.

Other reports at the time stated that communications directed to the Chinese mainland, Taiwan, the USA, and Europe were all seriously interrupted, and that Internet connections to countries and regions outside the Chinese mainland were very difficult. In addition, voice communication and telephone services were also affected.

These news reports were a dramatic understatement. China and Southeast Asia saw their communication capacity fall by over 90 percent, in what mainland Chinese began referring to as the “World Wide Wait.” What this outage underscored was the dilapidated state of China’s telecommunication technology. As the global news service AFP put it, “China is relying on 19th-century technology to fix a 21st-century problem.”

Finally, a few paragraphs about something nobody believed was possible: the total and complete disappearance of the Internet in a major region of the world.

Item: At 12:30 A.M. on the morning of Friday, January 28, 2011, the Internet died in Egypt. At that moment, all Internet links connecting Egypt to the rest of the world went dark, not so coincidentally, as demonstrators protesting against the thirty-year rule of President Hosni Mubarak’s brutal regime were gearing up for a further round of marches and speeches. Egypt had apparently done what many technologists thought was unthinkable for a country with a major Internet-based economy: it unplugged itself entirely from the Internet in an effort to stifle dissent. Leaving aside the issue of why this took place, which is not at all hard to understand, the technical aspects of how it happened are worth a quick look.

In a country like the United States, there are numerous Internet providers and enormous numbers of ways of connecting. In Egypt, four providers actually control almost all links to the system, and each of these operates under strict control and strong licensing from the central government. In contrast to the United States, where it might be necessary to call hundreds or even thousands of service providers in order to try to coordinate them all to throw the “kill switch” at the same moment, in Egypt this coordination problem could be solved with a few phone calls. So the reason this total blackout could happen in Egypt is that it is one of the few countries where all the central Internet connections rest in so few hands that they can all be cut at the same time. Here we see an obvious complexity mismatch between the control system of the Egyptian Internet and its users.

Experts say that what sets Egypt’s action apart from those of countries like China and Iran, which have also restricted segments of the Internet to close off dissent, is that the entire country was disconnected in a coordinated effort, and that every type of device was affected, ranging from cell phones to mainframe computers. One might wonder why this hasn’t happened more often in places like Iran or even the Ivory Coast, where political dissent is an ongoing irritant to the ruling authorities. The reason is largely economic. In today’s world, a country’s economy and markets are just too dependent on the Internet to shut it down over such an ephemeral matter as a possible regime change. Dictators come and go, but money never sleeps.

Human error (or design) is certainly the primary means by which the Internet can be compromised. But, as always, the joy is in the details. And the specifics can run through a spectrum of methods ranging from Kaminsky-style assaults on the DNS system to those aimed at the end user. Even attacks aimed at the Internet’s social fabric have been suggested, such as spamming people with death threats to convince them that the Internet is unsafe to encouraging webmasters to unionize and go on strike. In short, there are many ways to bring down the system, or at least huge segments of it. The most amazing fact of all is that it hasn’t happened more frequently.

These stories could easily have been multiplied severalfold. But none of them represent the kind of event that would send our global society tumbling into an abyss, even though all are disasters to one degree or another. More ominously, all could have easily been scaled up to a genuine worldwide catastrophe if events had fallen just a bit differently. The most important fact, however, is that none of the countries involved were really prepared to deal with this kind of attack on their infrastructure. As the old Viennese saying goes, these situations were desperate—but not serious. The take-home message is clear: the infrastructures humans most rely upon for just about every aspect of modern life are totally dependent on computer communication systems, the vast majority of which are linked via the Internet. So whenever an infrastructure fails, whatever the actual reason, the first finger often points at anonymous hackers taking down the system for fun and perhaps for profit. Sometimes these claims are even correct. But “sometimes,” or even “occasionally,” is too often for systems so critical to the functioning of modern industrialized society. Consequently, we have to understand how such cybershocks can happen and what we might do to minimize the damage they would cause.

As a starting point in “deconstructing” the problem of cybershock, it’s useful to first understand how big the Internet really is in order to get a feel for what might be involved in totally shutting it down.

WHEN THE MUSIC STOPS

THE INTERNET IS ALMOST UNIMAGINABLY LARGE BY WHATEVER MEASURE you care to employ. Here are a few statistical facts to ponder.

With these staggering statistics at hand, we see graphically the huge complexity of the Internet as a network of billions of nodes linked by many more billions of connections, all of which are dynamically coming and going every moment of each day.

 

COMEDIAN LOUIS C.K. HAS A STAND-UP ROUTINE ABOUT FLYING IN A plane equipped with high-speed Wi-Fi. Suddenly, the man sitting next to him breaks into an outburst against the airlines when he loses the connection. Louis C.K. asks, “How quickly does the world owe him something that he knew existed only ten seconds ago?” We humans indeed become accustomed very quickly to new technological gizmos and build them into our way of life almost overnight, particularly when they facilitate communication. Be it telephones, jet airplanes, or e-mail, we are hardwired for connecting with others—and the quicker, the better.

To find out how dependent today’s man and woman are on the Internet, the chip maker Intel commissioned a survey of the matter a couple of years ago. The company asked more than two thousand men and women of all ages and walks of life whether they would rather forgo sex for two weeks or give up access to the Internet for the same period. Result? An amazing 46 percent of the women surveyed and 30 percent of the men opted to give up the sheets. Even more broadly, among all discretionary expenditure items—cable TV, eating out, fitness club membership, and even shopping for clothes (this one’s really hard to believe)—the Internet ranked as the highest-priority item on the list. In total, almost two-thirds of the adults questioned said they simply could not live without the Internet.

Interestingly, in a similar survey by Dynamic Markets in 2003 of corporate employees and information technology managers in Europe and North America about the stress of being cut off from e-mail, it turned out that e-mail deprivation ranked as a higher stress inducer than…divorce. Or getting married. Or moving to a new residence. These people were then asked how long it would take after the e-mail went down before they would become angry. A full 20 percent said “Instantly!” And a whopping 82 percent of the group said they would be very angry by the end of one hour. In October 2010, Avanti Communications reported in a survey of companies worldwide that nearly 30 percent of the firms claimed they could not function without the Internet, while only a paltry 1 percent said that they could operate normally without it. The bottom line is clear: we not only love our Internet, but life as we know it literally cannot go on without it. Talk about a life-changing technology!

But web surfing and e-mails are conveniences, something that’s usually not really a matter of life and death. How important is the Internet when it comes to more elemental existential matters like eating, drinking, earning a living, and staying healthy? Answer: Way beyond just very important; in fact, it borders on being life-threateningly important. To underscore this fact, here are but a few of the infrastructures we rely on every day that will vanish from our lives if the Internet goes down.

Personal and Commercial Financial Transactions: Whether you pay by credit card, check, or bank transfer, your money moves over the Internet. Of course, financial institutions have backups. But those require humans to process paperwork, which takes time, a lot of time compared with the rapidity needed to carry out a transaction via ATMs, e-banking, or Internet shopping.

At the “big money” level, the situation is far worse. While it’s difficult to get a handle on the total volume of worldwide financial transactions processed daily through the Internet, a glimpse of the magnitude of these transactions is available by looking at the volume of daily foreign exchange trades. In 2007, the amount of money flowing through the system each day was nearly $4 trillion; by now, that amount must be pushing $10 trillion or more. And this is every trading day. What would happen if the Internet crashed and those transactions had to be done by fax, telephone, or even snail mail, as in the past? I shudder to think. One thing’s for sure, and that is that life would be a mess worldwide for weeks, months, and possibly years after such a crash, even if the outages lasted only a few days. Companies would fail, governments might collapse, and in general, chaos would reign supreme.

Retail Commerce: Almost all retail stores and supermarkets rely upon automated inventory control to keep the shelves stocked and ready for your shopping pleasure. For example, every time you buy an item at a chain store like H&M or at a bookshop like Barnes and Noble, the cash register immediately notifies a central computer of the item you purchased and the store’s location, signaling the warehouse that a replacement item should be sent. That system—along with almost all retail commerce—would vanish within nanoseconds of the collapse of the Internet. The same applies to other retail outlets like gasoline stations, pharmacies, and food shops that provide the wherewithal for daily life.

To get a feel for the size of the problem, nearly $14 billion is spent each day in the USA alone in the course of a billion individual transactions—just for food and associated retail items. Only a minuscule fraction of these transactions could take place if the Internet communication system employed to log the transaction, update inventories, and the like were to fail.

Health Care: Almost all patient records are stored online. So doctors, hospitals, and pharmacies would have a difficult time accessing patients’ records if the Internet went down. This, in turn, would lead to a major degradation in the immediate availability of health-care services. While you could probably still get health care in the absence of your records, what about without your health insurance card and/or the records standing behind it? Will the doctor or hospital welcome you when they have no way of verifying whether or not you can pay? No problem, you say. I’ll pay in cash. Really? Where are you going to get those funds when the ATMs are frozen and bank records inaccessible? You may have a tough time getting your hands on the kind of funds that health-care facilities will demand. So getting sick in an Internet-less environment will be far worse than it is today.

Transportation: Airlines and trains depend on the Internet for scheduling and monitoring their services. It’s safe to say that a shutdown of the Internet would entail a shutdown of airports around the world, as well as huge scheduling problems for ground transport, including the trucks and trains that deliver the necessities of daily life to food stores and other retail shops.

This list could be considerably extended to encompass infrastructure failures of all types—communication, electric power, government services, corporate activity, and the like. But that would be overkill. This abbreviated list already makes the point transparently clear that every aspect of the lives we now take for granted would be seriously, and probably dramatically, imperiled by a major Internet failure. With these facts in mind, let’s start looking at how such a failure might take place.

INTO THE HEART OF THE PROBLEM

POSSIBLE CRASHES OF THE INTERNET CAN BE CRUDELY SEPARATED into two categories: (1) systemic crashes due to the intrinsic limitations of the Internet structure itself and stress imposed by the exponentially increasing volume of traffic that the system is called upon to serve, and (2) deliberate attacks on the system by hackers, terrorists, or other agencies intent on holding the Internet hostage to their wishes and goals. I’ll address the second category in the next section. Into the first category fall both hardware and software failures. Here are a few not-so-well-chronicled examples that outline some of the possibilities.

Black Holes: When you can’t reach a specific website at any given time, the usual reasons are that the site has been abandoned, the server is down, the site is being maintained, or another easily explainable cause. But sometimes the site will simply not load. Occasionally, there does exist a path between your computer and the one hosting the site you’re trying to reach, but the message gets lost along the way and falls into a “black hole” of information never to be seen again. Researchers have discovered that more than 7 percent of computers worldwide experienced this sort of error at least once during a three-week test in 2007. Some researchers estimate that more than two million of these temporary black holes come and go every day.

One of the reasons for these information sinks is routing difficulties stemming from the billion or so users of the Internet sending and receiving messages each day. As this traffic increases, routers responsible for matching up the message source with its intended recipient suffer a serious case of complexity overload, like a human brain that’s called upon to process too many incoming requests and responses in too short a period of time. In the human case, ongoing stress of this sort may lead to a nervous breakdown. The Internet equivalent is a type of breakdown that some computer scientists, like Dmitri Krioukov from the University of California, San Diego, worry about, namely that the Internet in toto may collapse into a black hole.

This is a good juncture to mention another fine example of a complexity mismatch. When the Internet was initially formed, people believed that the network (the links) would be dumb, but the end points (the nodes) would be smart. But maintaining security at the end points is proving to be a challenge, and we are beginning to see complexity overloads as each new type of security attack arrives. A Tainter-style collapse may well occur if people begin to lose faith in the Internet. They will not want to buy online, they will avoid social networking, and so forth. In essence, the Internet would fade into irrelevancy.

Power Consumption: The power consumed each day in carrying out the more than two billion Google searches adds up to electrical power usage in excess of what’s consumed by three thousand households in Google’s hometown of Mountain View, California. Now consider that YouTube, a Google subsidiary, accounts for over 10 percent of total Internet bandwidth. Add in social-networking sites like Facebook and Twitter, throw in video-streaming hosts such as Netflix, and you begin to get a feel for who are the bandwidth hogs of the Internet. Each of these services requires vast data centers, or “server farms,” to handle this flood of bits and bytes that has to be moved through the network 24/7.

The heat produced by these data centers must be vented from the facilities housing the servers, in order to keep them at normal room temperatures of around 20 degrees (Celsius). This heat is generally just pumped outdoors rather than reused, making its own contribution to global warming. Moreover, the power consumed in cooling the data centers approximates the power consumption of the servers themselves. Ominously, this situation is growing by leaps and bounds, not declining. So if technological advances don’t step in and put a halt to this “heat death,” we could easily end up with data centers unable to cool themselves, in which case they would effectively melt down when the server CPUs or other hardware simply burn out. The end result of that process is clear: when the data centers disappear, so does the Internet.
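The cooling claim can be put into rough numbers. In the sketch below all the figures are hypothetical: if cooling draws about as much power as the servers it protects, a facility's total draw is roughly twice its computing load (data-center engineers call this ratio the PUE, power usage effectiveness).

```python
def facility_power_kw(it_load_kw, pue=2.0):
    """Total power drawn by a data center, given its computing (IT) load.

    A PUE of 2.0 mirrors the claim that cooling consumes about as much
    power as the servers themselves; all numbers here are illustrative.
    """
    return it_load_kw * pue


# A hypothetical server hall with a 5 MW computing load:
it_load = 5_000                     # kW of servers
total = facility_power_kw(it_load)  # kW drawn at the utility meter
waste_heat = total - it_load        # kW of heat that must be vented outdoors
```

On these assumptions the hall draws 10 MW in total and must continuously dump 5 MW of heat, which is why a failure of the cooling plant burns out the hardware long before anyone can react.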

Cable Fragility: The optical fiber cables on the ocean floor that carry phone calls and Internet traffic around the world are less than one inch thick. This is a very thin line, literally and figuratively, upon which to base the foundation for a globally connected world. Interestingly, these cables break regularly. But there is generally no disruption of service when this happens, as the telecom companies have backup systems in place and simply switch to alternative routes while the main lines are being repaired. But not always!

A good example of what can happen occurred in 2008 when two of the three cables passing through the Suez Canal were cut on the ocean floor near Alexandria, Egypt. This seriously disrupted phone and Internet service from the Middle East and India headed for Europe, forcing that traffic to take the eastward path around the globe instead.

Through accidents of geography and geopolitics, there are several choke points in the worldwide communication networks, Egypt being one of them. Since the cheapest way to carry the traffic over long distances is to put the cables underwater, a place like Egypt bordering on both the Mediterranean and the Red Sea, which in turn connects to the Indian Ocean, is an attractive choice. As a result, cables carrying information from Europe to India follow the route through the Suez Canal—just like ships. But Egypt is not the only choke point. The ocean floor off the coast of Taiwan is another, which accounts for why the December 2006 earthquake that severed seven of the eight cables in that region slowed down communication in Hong Kong and elsewhere in Asia for months until the cables could be repaired. Hawaii is still a third choke point for traffic connecting the United States to Australia and New Zealand. Any or all of these choke points form juicy targets of opportunity for slowing down the Internet in large areas of the world.
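The choke-point idea is, at bottom, a question of graph connectivity. The cable map in the sketch below is invented and far sparser than the real submarine-cable network, but it shows how removing a single well-placed node can sever whole regions from one another:

```python
from collections import deque

# An invented cable map: region -> directly cabled neighbors (not real routes).
cables = {
    "Europe":    {"Egypt"},
    "Egypt":     {"Europe", "India"},
    "India":     {"Egypt", "Taiwan"},
    "Taiwan":    {"India", "HongKong", "USA"},
    "HongKong":  {"Taiwan"},
    "USA":       {"Taiwan", "Hawaii"},
    "Hawaii":    {"USA", "Australia"},
    "Australia": {"Hawaii"},
}


def reachable(graph, start, cut=frozenset()):
    """Breadth-first search over the cable map, skipping any cut choke points."""
    seen, queue = {start}, deque([start])
    while queue:
        for nbr in graph[queue.popleft()]:
            if nbr not in seen and nbr not in cut:
                seen.add(nbr)
                queue.append(nbr)
    return seen


# With Egypt intact, Europe can reach India; cut that one node and it cannot.
europe_normally = reachable(cables, "Europe")
europe_after_cut = reachable(cables, "Europe", cut={"Egypt"})
```

In this toy topology, cutting "Egypt" strands Europe entirely, and cutting "Hawaii" separates the United States from Australia, mirroring the choke points the text describes. Real networks have more redundancy, but the 2008 Suez cuts showed the effect is not merely theoretical.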

Router Scalability: Each minute, hundreds of Internet connection points drop offline. We don’t notice since the network simply fences off the dropped connection links and creates a new route going around them. This reconfiguration takes place because the subnetworks making up the entire Internet communicate with one another through what are termed “routers.” When a communication link changes, nearby routers inform their neighbors, who then transmit the knowledge to the entire network.
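That neighbor-to-neighbor update process can be sketched in a few lines. The five-router topology below is made up; the point is only that news of a link change ripples outward one hop per round until every router has heard it:

```python
def flood_announcement(adjacency, origin):
    """Spread a link-change announcement hop by hop, as routers do: the router
    that sees the change tells its neighbors, who tell theirs, and so on.
    Returns the number of hops the news needed to reach each router."""
    hops = {origin: 0}
    frontier = [origin]
    step = 0
    while frontier:
        step += 1
        nxt = []
        for router in frontier:
            for nbr in adjacency[router]:
                if nbr not in hops:
                    hops[nbr] = step
                    nxt.append(nbr)
        frontier = nxt
    return hops


# A five-router toy network (names are arbitrary):
net = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
       "D": ["B", "C", "E"], "E": ["D"]}
news = flood_announcement(net, "A")  # router A notices a link change
```

The same propagation that heals the network is what an attacker can exploit: a forged "this link is down" announcement spreads just as faithfully as a genuine one.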

A few years ago, researchers in the United States came up with a method to interfere with the connection between two routers by disrupting the protocol they use to communicate, making it appear that the link between the routers is offline instead of active. Note that this disruption is local, disrupting just the connection link between a router and its immediate neighbors. Recently, though, Max Schuchard at the University of Minnesota and his colleagues discovered how to spread this disruption to the entire Internet.

Schuchard’s technique is based on a denial-of-service (DoS)-style attack. This involves bombarding a particular website or sites with so much incoming traffic that the servers at the target site cannot handle the volume and shut down. Schuchard’s experiment had a technical twist that would allow it to take down the entire Internet using a network of about a quarter of a million “slave” computers dedicated to the task. Details are beyond the scope of this book, but the general idea is to create more and more holes in the router network so that eventually communication becomes impossible. Schuchard says, “Once this attack got launched, it wouldn’t be solved by technical means, but by network operators actually talking to each other.” Restoration of Internet service would involve each subsystem being shut down and rebooted to clear the jam created by the DoS attack, a process that would take several days, if not longer. Is this procedure a viable way to take down the Internet?
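The flooding half of the idea reduces to simple arithmetic. In the toy model below (capacities and rates are invented, and Schuchard's actual router-level twist is ignored), a server treats all incoming requests alike, so once the flood exceeds capacity, legitimate users are crowded out in proportion:

```python
def served_fraction(capacity, legitimate, attack):
    """Fraction of legitimate requests a fixed-capacity server can answer
    when attack traffic competes for the same capacity. All requests are
    assumed indistinguishable, which is what makes DoS floods effective."""
    total = legitimate + attack
    if total <= capacity:
        return 1.0
    return capacity / total


# A server handling 10,000 requests/second is fine at normal load, but a
# quarter-million-machine botnet sending just 4 requests/second apiece
# squeezes real users down to roughly 1 percent service.
normal = served_fraction(10_000, 8_000, 0)
under_attack = served_fraction(10_000, 8_000, 250_000 * 4)
```

The numbers make clear why a quarter of a million slave machines suffice: each one contributes only a trickle, yet together they dwarf any single target's capacity.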

An attacker who commands a quarter of a million “zombie” computers generally isn’t going to use them to crash the Internet, but will instead employ that network for nefarious commercial purposes. But that rule doesn’t apply to governments. One such scenario would be for a country to simply cut itself off from the Internet, like Egypt did during the uprising against the Mubarak regime in early 2011. That country could then launch an attack against an enemy, or for that matter, the remainder of the Internet, while keeping its own internal network in place.

In either case, Schuchard’s work shows that regardless of who perpetrates such an attack there’s not much that can be done at present to combat it. So far, nothing of this sort has come close to taking place. But, then, that’s just what the study of X-events is about: surprising, damaging things that have yet to take place.

Router scalability serves as a lead-in to our second major Internet failure category, human error and/or malicious intent.

NOT BY ACCIDENT, BUT BY DESIGN

ONE MORNING IN APRIL 2009, MANY PEOPLE IN SILICON VALLEY woke up to discover that they no longer had phone, Internet, mobile, or cable television services. AT&T issued a public announcement that fiber-optic cables had been severed in many places. Speculation immediately began running rampant that these cuts had been perpetrated by workers who actually service the cables, since their union contract had expired just a few days before the service outage. Moreover, the cuts were sharp, not ragged, seeming to have been done with a hacksaw, and were within easy driving distance of each other. Paradoxically, what made the cuts even scarier was that they were easy to fix. It makes one wonder what might have happened to service if the perpetrators, whoever they were, had poured gasoline onto the cables and melted them. Or if a group of malcontents had banded together to coordinate such an attack, destroying fiber connections in areas where the density of cables is high.

All in all, this malicious hardware attack on the entire telecommunication infrastructure, including sawing through the Internet cables, gave new meaning to the term “hacker.” Of course, what’s usually meant when this expression comes up is an attack based on meddling with software on the Internet, not physically destroying underground or underwater hardware installations. So let’s have a quick tour of a few common ways to kill the Internet softly by disinformation and destructive program alterations rather than by the destruction of matter.

Certainly the best-publicized type of software attack is a virus of some sort. Just like their biological namesakes, computer viruses take over the operating system of their host computers and force them to carry out the instructions coded into the virus rather than the instructions from the machine’s own operating system. In late 2009, a vicious little devil called the Stuxnet virus infected forty-five thousand computers worldwide, displaying a pronounced fondness for industrial control machines built by Siemens AG of Germany that were being used mostly in Iran. Since the Iranians were using this equipment as part of their nuclear power program (and probably nuclear weapons development, as well), most knowledgeable people felt the attack was created by individuals working for a country or a well-financed private organization focusing on the disruption of Iranian nuclear research. (For aficionados, Stuxnet was what is technically termed a “worm” rather than a virus. But for our purposes it makes no difference.)
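What makes a worm, as opposed to a plain virus, so explosive is that every newly infected machine immediately joins the hunt for further victims, so the infection compounds on itself. The toy simulation below illustrates that dynamic under made-up assumptions: the population size echoes the forty-five thousand figure above, but the contact rate and time steps are arbitrary, and nothing here resembles Stuxnet's actual propagation code.

```python
import random

def worm_spread(population=45_000, contacts_per_step=3, steps=12, seed=42):
    """Toy model of worm propagation: each infected host probes a few
    random hosts per time step and infects any that are still clean.
    Returns the cumulative infected count after each step."""
    rng = random.Random(seed)
    infected = {0}  # patient zero
    history = []
    for _ in range(steps):
        newly_infected = set()
        for _host in infected:
            for _ in range(contacts_per_step):
                target = rng.randrange(population)
                if target not in infected:
                    newly_infected.add(target)
        infected |= newly_infected
        history.append(len(infected))
    return history

# Infections roughly quadruple each step until clean machines run dry.
print(worm_spread())
```

The printed curve traces the classic S-shape of an epidemic: a slow start, an explosive middle, then saturation, which is why early detection matters so much.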

Stuxnet was a malicious piece of computer code that heralded a new form of warfare: killing by false information instead of by guns and bombs. Why send troops to knock out critical infrastructure (i.e., power plants or water-treatment facilities) when you can do it remotely from halfway around the world by bits and bytes? As more and more military operations are conducted virtually by drones like the US Predator aircraft, it’s possible such weapons could be compromised and used on friendly troops instead. This is to say nothing of the conceivable breaches of national security systems and intelligence networks. Or, for that matter, the command and control of nuclear weapons, about which I’ll say more later.

Also, we cannot rule out the existence of another Kaminsky-style glitch lurking somewhere in the deep structure of the Internet, something different from the DNS problem he uncovered but equally dangerous. Of course, this sort of “Kaminskyesque” flaw falls into the category of an “unknown unknown,” vaguely analogous to the threat of an alien invasion. Like dark invaders from outer space, a design flaw deep in the Internet may or may not exist. And even if it does, it may never turn up on our doorstep. There is simply no way to rationally evaluate this possibility. So for now I file it with the other unknown unknowns and move on to a second way of disabling parts of the Internet: a far-reaching denial-of-service (DOS) attack.

 

ON JULY 4, 2009, SEVERAL US GOVERNMENT AGENCY COMPUTERS were subjected to massive DOS attacks that went on for several days. Systems affected included those at the Treasury Department, the Secret Service, the Federal Trade Commission, and the Department of Transportation. According to private website monitoring firms, the DOT website was 100 percent down for two full days, so that no users could get through to it during one of the country’s heaviest travel weekends. According to Ben Rushlo, director of Internet technologies at Keynote Systems, a firm that keeps track of website outages, “This is very strange. You don’t see this. Having something 100 percent down for a 24-hour-plus period is a pretty significant event. The fact that it lasted for so long and that it was so significant in its ability to bring the site down says something about the site’s ability to fend off [an attack] or about the severity of the attack.”

In fact, DOS attacks are not at all uncommon, even though it’s notoriously difficult to measure just how many such attacks regularly occur. In 2005, Jelena Mirkovic and her colleagues estimated the number at about twelve thousand per week. And surely this level has not decreased since that time. Moreover, DOS attacks are relatively easy to mount using widely available hacking programs. They can be made even more damaging if thousands of computers are coordinated for the attack, so that each of the computers is sending messages to the target. This is the very sort of attack I mentioned earlier that knocked out computer systems in Estonia. A similar assault took place in Georgia during the weeks leading up to the war between Russia and Georgia, when the Georgian government and corporate sites experienced outages that were again attributed to attacks by the Russian government. The Kremlin, of course, denied responsibility. But independent Western experts traced the incoming traffic to specific domain names and web registration data, concluding that the Russian security and military agencies were indeed the perpetrators of this particular attack.

To bring our understanding of DOS attacks down to daily life, the social networking service Twitter was totally knocked out for several hours in 2009 by such an attack, one aimed at a lone blogger who was, coincidentally, also located in the Republic of Georgia. The target’s handle, “Cyxymu,” is a Latin-alphabet rendering of the Cyrillic spelling of Sukhumi, a town in the breakaway territory of Abkhazia. According to Ray Dickenson, chief technology officer at Authentium, a computer security firm, “It’s as if a viewer who didn’t like one show on a television channel decided to knock out the whole station.”

Viruses/worms and DOS attacks are headline grabbers, probably because they target the Internet at the level at which users actually interface with the system—their own computers and/or the service providers. Attacks at this level are good for exciting the media and are easy for just about any Internet user to understand and relate to in his or her daily life. But while not impossible, it’s unlikely that the Internet as a whole is threatened by such “surface” attacks. To bring down the entire Internet, or even a major chunk of it, you have to dig a lot deeper into the system, as described in the story I told earlier about Dan Kaminsky and the DNS hole in the network. Or perhaps you have to assemble a global team of hackers of the type that broke into the networks of Citibank, the IRS, PBS television, and other major financial and media organizations following the WikiLeaks events in 2011.

ADDING IT ALL UP

IN 2006, COMPUTER SECURITY EXPERT NOAM EPPEL PUBLISHED AN article on the Internet titled “Security Absurdity: The Complete, Unquestionable, and Total Failure of Information Security.” As one might imagine from the title, this piece attracted a lot of attention from professionals and firms dealing with Internet security. (As an amusing footnote to underscore the problems Eppel identified, during the writing of this chapter I tried finding that original article to have a look at recent comments that may have been added since I downloaded the piece in late 2007. To my astonishment, I discovered that every Google hit in my search for the article sent me to a website called www.securityabsurdity.com, which appears to be a site that’s hijacked whatever the original site was that contained the actual article. Needless to say, Eppel’s article was nowhere to be found.)

Eppel identifies sixteen different categories of security failures that infest the Internet. Included among the items on his hit list are spyware, viruses/worms, spam, and DOS attacks. As far as I can tell, very little, if anything, has been done to effectively address the problems arising from even one of the sixteen categories on Eppel’s list. As he noted, the situation is very much like the story of the frog in a pot of boiling water. If you place the frog in the pot when the water is cold and then gradually bring it to boiling, the frog sinks into a torpid state as the water heats up, and ultimately sits quietly as it boils to death. According to Eppel’s story, the computer security industry is analogous to that frog. The system is dying. But the death is tolerated simply because we’re accustomed to it. In short, the security industry is failing in every possible way because it is being out-innovated. And who is doing that innovating? Answer: a vast community of suppliers of so-called security systems, computer criminals, spammers, and others of this ilk, not to mention the willing complicity of computer users who buy into the flimflam peddled by the “professionals.”

Just to get a feel for the severity of the problem of Internet security for the everyday user, studies have been carried out to determine the time it takes for a brand-new “virgin” computer to become infected with spyware, a virus, identity-stealing software, or some other sort of malware from the moment it’s plugged in, booted up, and connected to the Internet. The average time to infection turns out to be about four minutes! It’s reported that in some cases the time before someone else takes complete control of the computer, turning it into a “zombie,” is as short as thirty seconds! There is little doubt that what we’re facing is not a security epidemic, but a full-fledged pandemic.
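Those figures are easier to believe after a back-of-envelope calculation. If compromised machines collectively probe a block of addresses at random, the expected wait before any particular new address gets hit is simply the block size divided by the probe rate. The numbers below are assumptions chosen for illustration, not measured scan rates, and `expected_seconds_to_first_probe` is a hypothetical helper.

```python
def expected_seconds_to_first_probe(probes_per_second, addresses_in_block):
    """Expected wait before one specific address in a block is probed,
    assuming probes land uniformly at random across the block."""
    return addresses_in_block / probes_per_second

# Illustrative assumption: scanners collectively fire 100,000 probes per
# second at a block of 65,536 addresses (a /16 network).
wait = expected_seconds_to_first_probe(100_000, 65_536)
print(f"{wait:.2f} seconds")  # well under a second before the first probe
```

At rates like these, a fresh machine is discovered almost instantly; the few minutes measured in the studies is the additional time it takes for an exploit to land and install itself.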

Even in the face of these sorts of results (and you can do the experiment yourself if you don’t believe them), a quick look at sites that report Internet security breaches in real time will show there’s nothing amiss. For instance, I’ve just finished looking at the real-time threat-monitoring sites from firms selling antivirus packages (to keep the lawyers away, I’ll refrain from mentioning names so as to protect the guilty). Looking at their threat maps for security problems around the world, you’ll see the odd blip here and there. But in each case, the overall threat level for the Internet as a whole is reported in at most the yellow zone, and for most regions it’s solidly in the green. Yet on a casual web surf using the term “Internet security threat,” I turned up article after article saying that the number of threats is dramatically increasing from the previous year’s level. What’s amusing, if not worrying, is that some of these articles have been prepared by the very same firms whose threat maps never show the Internet under assault. If this isn’t a living example of a frog in the pot of water, I don’t know what is.

It’s worth noting that the needs and wishes of the computer security business are not the only element acting to lock us into the existing Internet. Technology companies, too, are trapped. They have to sell today’s products, and there is high uncertainty in making investments in new technology. Corporate information administrators have to defend past purchasing decisions. So how do we carry out a “makeover,” or introduce an entirely new Internet? The US National Science Foundation’s GENI Project has created a virtual laboratory for exploring future Internets at full scale and aims to create major opportunities for understanding, innovating, and transforming global networks and their interactions with society. Other private groups are exploring the same territory, with the goal of figuring out how to transition smoothly from the existing Internet to a much safer, more user-friendly version without throwing out the baby with the bathwater.

The bottom line here is that there is no such thing as true security on the Internet. At some degree of everyday usage, the Internet functions without any obvious holes. But that doesn’t mean the holes aren’t there. And it doesn’t mean they’re not getting bigger. The question is when will they get so big that too many people, corporations, or governments fall into them and can’t crawl back out. When that day comes, the Internet’s days are numbered, at least the Internet as we know it today. The current system is using a 1970s architecture to try to serve twenty-first-century needs that were never envisioned in those halcyon days of a bipolar world. (Try using a 1970s computer today to access the modern Internet!) The two systems in interaction have created a huge complexity gap that’s widening by the day. Soon it will have to be narrowed—by hook or by crash.