10
MANAGING THE MESS
EVERY PRESIDENT IN the last two decades has known that our networks are vulnerable to exploitation. The first President Bush told us in 1990, “Telecommunications and information processing systems are highly susceptible to interception, unauthorized electronic access, and related forms of technical exploitation.”1 Nineteen years later the Obama White House was delivering essentially the same message. “Without major advances in the security of these systems or significant change in how they are constructed or operated,” its Cyberspace Policy Review warned, “it is doubtful that the United States can protect itself from the growing threat of cybercrime and state-sponsored intrusions and operations.”2 But this was no longer news.
In between, President Clinton warned in 1998 of the insecurities created by cyberbased systems and directed that “no later than five years from today the United States shall have achieved and shall maintain the ability to protect the nation’s critical infrastructures from intentional acts that would significantly diminish” our security.3 Five years later would have been 2003.
In 2003, as if in a repeat performance of a bad play, the second President Bush stated that his cybersecurity objectives were to prevent cyberattacks against our critical infrastructure, reduce our vulnerability to such attacks, and “[m]inimize damage and recovery time from cyber attacks that do occur.”4 Bush’s objectives were essentially a restatement of Clinton’s but with a new and welcome emphasis on resilience and recovery. However, none of these objectives has been met.
Such pronouncements have emerged like clockwork from the White House echo chamber, along with study groups, working groups, and “public-private partnerships” that produce recommendations that gather dust on some shelf. In August 2009, a presidential advisory committee issued a report on persistent insecurities in the public network and noted that previous assessments had reached similar conclusions in 1993, 1995, 1999, 2002, 2005, and 2007! This chronicle of executive inaction will be of interest chiefly to historians and to members of Congress who, in the aftermath of a disaster that we can only hope will be relatively minor, will be shocked to learn that the nation was electronically unprepared to deal with cyberespionage or -attacks. Yet the situation is growing worse.5 In 2010 the commander of USCYBERCOM, the NSA’s General Keith Alexander, acknowledged for the first time that even our classified networks have been penetrated.6
Most people in government are neither stupid nor lazy, and the civilian and military personnel responsible for our networks are painfully conscious of our vulnerabilities and work hard to minimize them. Some current efforts in Washington to deal with cyberinsecurity are promising—but so was Sisyphus’s fourth or fifth trip up the hill. The higher cybersecurity recommendations rise in the bureaucracy, the greater the chance they’ll be watered down to achieve consensus, or sidelined. This is why we get continual declarations of urgency but little real progress. Translating repeated diagnoses of insecurity into effective treatment requires the political will to marshal the financial and organizational resources necessary to do something about it. The recent Bush administration came by that will too late in the game. After his inauguration, President Obama dithered for nine months over the package of excellent recommendations from a nonpolitical team of civil servants,7 but the administration’s lack of interest was palpable. Unfortunately, Obama’s cybersecurity proposal of May 2011, coming after a two-year delay, will not move the security needle far. The proposal is not without merit. In place of a patchwork of state laws dealing with notifications following data breaches, for example, it would create a single nationwide standard, and it would strengthen criminal penalties on cybercrime. But these changes deal with the consequences of insecurity; they will not make us more secure. The proposal would strengthen the Department of Homeland Security’s cybersecurity authorities, but not by much. It breaks no new ground and would do little to raise security standards.8 Obama’s budget reflects a higher priority for cybersecurity than ever before, though proposals for the increase originated under his predecessor. And the president deserves credit for creating a new joint military organization called U.S. Cyber Command housed with the NSA at Fort Meade, Maryland, and led by the NSA director. But except in the Pentagon, progress on cybersecurity remains poor.
CYBERCOM integrates the defense of most of the Defense Department’s networks and other national security networks, like those in the intelligence agencies. When directed, it also conducts military cyberspace operations. Many issues remain to be worked out regarding CYBERCOM’s operations, but it is a robust command9 with meaningful authorities over Defense Department and national security networks, and it is supported by the nation’s most advanced capabilities.10 CYBERCOM also brings under one roof the historically separate offensive authorities of the Defense Department and the defensive and intelligence-gathering authorities of the NSA. This is important for two reasons. First, as we have seen, it is impossible in most cases to tell the difference between a foreign penetration designed merely to gather intelligence and one to preposition a cyberattack weapon. Calling a meeting of lawyers to determine who can deal with that kind of situation and what tools they can use doesn’t work when facing threats at network speed.
Second, we need offensive tools for strategic defense. When an aircraft carrier group goes to sea, for example, the admiral in charge of the group flies an air patrol with a radius of perhaps a thousand miles. Fighter aircraft have offensive capabilities, but in this case their mission is to defend ships, and the admiral needs to know what’s coming before it arrives. Holding fire till you see the whites of their eyes may have worked at the Battle of Bunker Hill, but not anymore. If you wait for the incoming danger to reach you, you won’t be able to defend against it. CYBERCOM solves this problem by letting the general in charge of defending national security networks use offensive tools outside his networks in order to know what’s coming.11 To be blunt, espionage is an essential aspect of defense. To know what’s coming, we must be living inside our adversaries’ networks before they launch attacks against us.12
The side of government that doesn’t deal with national security matters (and that’s most of it) presents a different and sadder picture. If an adversary launched a slow-motion, coordinated attack to corrupt the operations of the Treasury Department and of our key companies and infrastructure, how many hours or days would it take us to figure it out? We don’t even have a mechanism in place to know what would be happening, let alone to do something about it. The departments of the executive branch—State, Treasury, Justice, Homeland Security, and so on—are isolated silos that in most circumstances are incapable of coordinated action. To understand why most government departments work so poorly together while the military coordinates its activities so well, it helps to look back at a time when the army and navy didn’t work well together at all.
CONGRESS CREATED THE War Department in 1789 and the Navy Department nine years later, in 1798, and the two remained rigorously and jealously independent until after World War II. The secretary of war was the civilian head of the army, the navy had its own secretary, and each reigned supreme in his earthen or watery realm.13 As a result, joint operations in wartime were hazardous affairs that produced as much friction as cooperation. Relations between the army and navy were so bad in Cuba during the Spanish-American War that “the army commander refused to turn captured Spanish ships over to the navy or allow a navy representative to sign the surrender document.”14 In theory the president could command them both, but by the twentieth century the task of presiding over the government had become too complex for the president to concern himself with the details of government operations, civilian or military. Unfortunately, this did not become clear until after the Pearl Harbor disaster.15 During the war and for years afterward, the two services used different management systems, ensuring that logistical coordination was all but impossible.16 “The whole organization belongs to the days of George Washington,” reported Britain’s senior liaison officer in Washington, Sir John Dill. His colleague, Air Marshal Sir John Slessor, was even more scathing: “The violence of interservice rivalry in the United States has to be seen to be believed and was an appreciable handicap to their war effort.”17
In 1947 Congress split the War Department into the Department of the Army and the Department of the Air Force. (Land-based aircraft had previously been part of the army.) In 1949, Congress forced those two departments and the Navy Department into the newly created Department of Defense—but it did not abolish the separate military departments. DoD is still the only department of the U.S. government that contains departments within it, albeit at the subcabinet level, and the secretary of defense must still occasionally struggle to assert his control over the subordinate civilian military chiefs.18 The National Security Act of 1947 created the Joint Chiefs of Staff, but neither that act nor the reforms of 1949 altered the separate command structures at the service level or the clubby and ineffective organization of the Joint Chiefs. We had a commander of the U.S. Army in Europe, for example, and we had a commander of the Atlantic Fleet, so the fleet and the army were commanded separately even when operating in the same theater. This dysfunctional organization contributed significantly to the vulnerabilities that led to the bombing of the marine barracks in Beirut in 1983, killing 299 American and French servicemen, and to the utter failure of the mission to rescue the hostages in Iran in 1980.19
In 1986 Congress shook this creaky system of military fiefdoms to its foundations when it passed the Goldwater-Nichols Act.20 Goldwater-Nichols demoted the separate military departments into organizations that merely recruit, train, and equip soldiers, sailors, and airmen. It stripped the army, navy, and air force of all operational command authority and for the first time vested effective command authority in joint commands, each headed by a four-star officer from one of the services and today called a combatant commander, or COCOM. The COCOM’s staff comes from all the services, and all units from all services within his theater fall under his command. At this writing (mid-2011), for instance, the commander of Pacific command in Honolulu is a navy admiral, but his deputy is an air force general and his senior enlisted officer is an army sergeant major. This pattern, or something like it, is repeated in all U.S. combatant commands, though the mix of forces of course varies depending on the mission. Pacific command is heavily naval for obvious reasons. Central command, which runs the wars in Iraq and Afghanistan, comprises members of the army, navy, air force, and marines working side by side from headquarters to the boots on the ground.
Joint organization has paid extraordinary operational dividends, but it was a struggle to make it work. Overcoming intense service-level loyalties took time. To accomplish that, the law made it effectively impossible to achieve high rank in any of the services without spending a “joint duty” tour in one of the other services. As Admiral Mike McConnell told me when he was the director of National Intelligence, “When I was a young naval officer, if I had said I was interested in a tour with one of the other services, my career would have been finished. After Goldwater-Nichols, I couldn’t get ahead without it.” Nine years after the act was implemented, one of its leading military opponents hailed it as “a major contribution to the nation’s security.”21 This understates the case. This act is one of the most important organizational reforms in the history of the United States government—as important as our technological edge in making our military the most powerful in the world. All our military services are proud of it—and all of them resisted it fiercely at the time.
Why isn’t the rest of the government organized this way?
THIS QUESTION SHOULD be at the forefront of public discussion about our civilian government, but it’s rarely even asked. To be sure, there are great differences between the civilian departments and military services that command their members’ dress and behavior and can send them to war. But it’s simply wrong to assume that the organization of the military can teach us nothing about the organization of our civilian departments. From an organizational point of view, the military side of our government is light years ahead of the civilian side in its ability to attack problems jointly. This picture runs totally against the common perception of the defense establishment as bloated and inefficient, which it is when it comes to acquisition, purchasing, and various other support functions. Everyone has heard the stories of hundred-dollar toilet seats, overpriced hammers, and weapons systems that exist only because they benefit the constituents of powerful members of Congress. Operations, however, are a different matter. The American military’s ability to plan and execute stupendously complex, efficient operations anywhere on the planet is astounding. This could not occur without the seamless integration of the services in the field. We see this not only overseas but also here at home. Hurricane Katrina and the destruction of New Orleans in 2005 were a humiliation for civilian government at all levels. The problem went far deeper than the incompetence of the then director of the Federal Emergency Management Agency and various local officials. President George Bush’s fumbling and hesitant mishandling of the disaster could not explain it either. Civilian government simply lacked the means to coordinate the necessary actions across the various departments of government: food relief, flood relief, law enforcement, housing, public order, compensation of victims, and so on.22 At times it seemed that only the National Guard stood between chaos and some semblance of order. The inability to coordinate across departments hamstrings civilian government every day, not just in emergencies. We have fifteen federal agencies that oversee food safety, eighty-two programs to improve teacher quality spread over ten federal agencies, and eighty different economic development programs. These efforts waste billions and are without unified direction.23
All strategic problems, including cybersecurity, require cross-departmental integration. But it does not exist, and the interagency “coordination” process is clumsy and inefficient—just like military operations before 1986. Even as we begin to spend hundreds of millions of dollars on cybersecurity, we are pouring the money into the usual isolated departmental fiefdoms. Cross-departmental governance is extremely difficult—and not just in the United States. Doing it well requires an office with authority over the departments and the power to muscle entrenched and often parochial bureaucracies, and we don’t have it. I am not suggesting the militarization of civilian departments. I am proposing the creation of a civilian mechanism of directive authority and responsibility above the departmental level, within the executive office of the president. The media, always addicted to the cliché, told us we were getting a cyber “czar” in 2009,24 but the newly created cyber “coordinator” has no directive power and has yet to prove his value in coordinating, let alone governing, the many departments and agencies with an interest in electronic networks.
This lumbering charade of “coordination” rather than directed integration has been the theme of American federal interdepartmental relations since World War II.25 After the war, in 1947, Congress created the National Security Council, but the NSC’s role is restricted to advising the president on national security policy.26 Policies are not operations. Policies are set in the clouds, operations occur on the ground, and the NSC has no power to drive policy from the clouds to the ground. After 9/11, President Bush created a Homeland Security Council along the NSC model. Its role was merely to “coordinate the executive branch’s efforts” in dealing with terrorism.27 Translation: It has the power to arrange meetings—but the real power over budgets, programs, and personnel remains with the departmental secretaries. Their power is written into law; even the president can’t override it. As a result, America’s federal government is run by an awkward compromise among powerful fiefdoms—much like military operations in World War II.28 This is not a viable model for governing a powerful nation in the twenty-first century.
This ineffective arrangement suits Congress, however, because the power of individual executive departments mirrors the power of congressional committee chairmen who control their budgets and oversee their programs and personnel. Fragmentation of executive functions reflects fragmentation on Capitol Hill. The 9/11 Commission saw this clearly. Of the scores of recommendations it made, all were adopted except those regarding Congress’s appallingly fragmented committee system. As the commission noted, “The leaders of the Department of Homeland Security now appear before eighty-eight committees and subcommittees of Congress.” And they all hold hearings that squander vast amounts of valuable executive time. This system is not only irrational, it’s abusive of the people who are trying to make the government work. “So long as oversight is governed by the current congressional rules and resolutions,” the commission said, “we believe the American people will not get the security they want and need.” Ruefully, the commission added that Congress was unlikely to reform itself without sustained public pressure,29 and the last decade has borne out that pessimistic conclusion. Even the 9/11 disaster was insufficient to produce change among our legislative princelings.
CONGRESS ISN’T THE only cause of operational dysfunction in cybersecurity, however, even in defense. For example, the most effective tool we have for testing the security of an information system is “red teaming.” We assemble a red team of professional cyberburglars who are really good guys, and set them to work against one of our own systems. Not surprisingly, the gold standard for white-hat breaking and entering is set by the NSA’s Information Assurance Directorate, whose red teams are virtually impossible to keep out.30 But the ways they get in can teach you volumes about how to tighten your security. Unfortunately, however, the NSA’s red teams require the consent of the system’s owner before they may lawfully test a network, even within the Defense Department. This is like walking into a middle-school cafeteria and asking who wants to take a pop quiz. Not many hands go up. As a result, the Defense Department cannot apply red teaming where, based on risk assessment, it’s most needed. Instead they use it haphazardly, with permission.31 DoD doesn’t need Congress in order to fix this.
On the nondefense side of cybersecurity, the story is worse. The Department of Homeland Security is a confederation of twenty-two agencies that were hurriedly nailed, glued, and stitched together in the wake of 9/11. It includes the Secret Service, the Coast Guard, customs, emergency management, cybersecurity, and a host of other functions from organizations with their own traditions, cultures, and incompatible electronic systems. Melding and governing this confederation has been a work in progress, to say the least. It took four decades (some would say five) to make DoD the effective boss of the military services after World War II, and it will take years before the DHS becomes an integrated department. The DHS has the legal authority and role to protect federal information systems other than “national security systems,” which include those of the intelligence agencies. But the DHS lacks the talent, know-how, tools, and systems to do the job. The department has therefore struck a reluctant bargain with the NSA to use NSA tools and, in some cases, borrow personnel to improve federal cybersecurity in nondefense departments. Americans want more security, but they don’t want our intelligence agencies in charge across the board. Consequently, the job is being done, more or less, with the NSA’s help under the umbrella of the DHS’s limited legal authorities. There is no alternative. Duplicating the NSA’s capabilities would be astronomically expensive, and it could not be done even if money were no object, because there wouldn’t be enough world-class expertise to staff two such agencies.32 Nor should the DHS apologize for an arrangement that, on a limited basis, represents the kind of cross-departmental operations we should be aiming for. Meanwhile, neither the DHS nor the cyberczar has directive power to deal with agencies whose electronic security is poor, and Congress has been unwilling to create a permanent White House office of cybersecurity.33 Progress on cybersecurity on the civilian side of government has therefore been painfully slow and unsatisfactory.
Meanwhile, we have awakened to the fact that our national security depends heavily on privately owned critical infrastructure and our economic might—assets that lie outside the defense-military-intelligence realm. Outside that realm the government isn’t protecting us at all. If nonmilitary targets were attacked from abroad by land, sea, or air, the government would respond. But apparently this is not true when it comes to cyberattacks. This is a little noticed but momentous change in the oldest, most basic function of government, which is protecting the nation. Is this all bad? Maybe not. How could the government be responsible for protecting all our information systems unless we turned over control of all communications to the government? Perish the thought. Besides, the world is flexible and moves fast. Government is rigid and moves slowly.34
In short, the executive branch of the federal government is fragmented and Congress is dysfunctional. But why hasn’t the private sector delivered better security?
The Failure of Private Incentives
Change in the private sector is driven principally by two factors: market demand and liability. Unfortunately, liability has played virtually no role in achieving greater Internet security. This may be surprising until you ask: Liability for what, and who should bear it? When you buy software, it comes shrink-wrapped in transparent plastic, with a warning that if you break the wrapping, you accept the manufacturer’s licensing terms (which usually limit its liability to the price you paid for the program). When you install the software, the installation program also typically requires you to click a button that says you agree to those terms. No click, no install. So right out of the box, you agree to terms that severely limit the manufacturer’s liability for defects. And because software is licensed rather than sold, the implied warranty that might otherwise be available under the Uniform Commercial Code does not apply. Suing the software manufacturer for allegedly lousy security is therefore a game not worth the candle. Besides, how do you put a monetary value on the damages, say, from finding your computer is an enslaved member of a botnet run out of Russia or Ukraine? And how do you prove the problem was caused by the software rather than your own sloppy online behavior?
Asking Congress to create standards for software defects would be asking for trouble: All software is defective, because it’s so astoundingly complicated that even the best of it hides surprises. Deciding what level of imperfection is acceptable is not a task you want your congressman to perform. Any such legislation would probably drive some creative developers out of the market. It would also slow down software development—which might not be all bad if it led to higher security. But the public has little or no understanding of the vulnerabilities inherent in poorly developed applications. On the contrary, people clamor for rapidly developed apps with lots of bells and whistles, so an equipment vendor that wants to control this proliferation of vulnerabilities in the name of security is in a tough spot.
Banks, merchants, and other holders of personal information do face liability for data breaches, and some have paid substantial sums for data losses under state and federal statutes granting liquidated damages for breaches. In one of the best-known cases, Heartland Payment Systems may end up paying about $100 million as a result of a major breach, not to mention millions more in legal fees. But the defendants in those cases are buyers, not makers and designers, of the hardware and software whose deficiencies create so many cyberinsecurities. Liability presumably makes these companies somewhat more vigilant in their business practices, but it doesn’t make hardware and software more secure. This has scary implications. Many major banks and other companies, for example, already know they have been persistently penetrated by highly skilled, stealthy, and anonymous adversaries, very likely including foreign intelligence services and their surrogates. These firms spend millions fending off attacks and cleaning their systems, yet no forensic expert can honestly tell them that all advanced, persistent intrusions have been defeated. (If you have an expert who says he can, fire him immediately.)
Insurers play an important role in an effective liability regime, raising standards because they tie premiums to good practices. Good drivers, for example, pay less for auto insurance. Engineers who follow practices approved by their insurers pay less for professional liability insurance. Without a liability dynamic, however, insurers play virtually no role in raising cybersecurity.
If liability hasn’t made cyberspace more secure, what about market demand? The simple answer is that software consumers buy on price, and they haven’t been willing to pay for more secure software. In some cases the aftermath of identity theft is an ordeal, but in most instances of credit card fraud, U.S. banks absorb 100 percent of the loss, so their customers have little incentive to spend more for security. Most companies also buy on price, especially in the current economic downturn.
Unfortunately, we don’t know whether consumers or corporate customers would pay more for security if they knew the relative insecurities of the products on the market. As J. Alex Halderman of the University of Michigan has noted, “[M]ost customers don’t have enough information to accurately gauge software quality, so secure software and insecure software tend to sell for about the same price.”35 This could be fixed, but doing so would require agreed-upon engineering standards for judging products and either the systematic disclosure of insecurities or a widely accepted testing and evaluation service that enjoys the public’s confidence.
Consumer Reports plays this role for automobiles and other products, and it wields enormous power. The same day that CR issued a “don’t buy” warning on the 2010 Lexus GX 460, Toyota took the vehicle off the market. A software-security rating service along the lines of CR, written in plain English (for consumers rather than computer engineers), would be a public service.
In short, the picture is bleak. But it’s far from hopeless.
What the Government Should Do
Here are seven areas in which federal initiatives would significantly improve cybersecurity by driving change in the private sector without legislating government standards that would inevitably prove clumsy and ineffective. They could be accomplished relatively quickly. They would not create new bureaucracies. They would not require reorganizing the executive branch. And they would enhance both privacy and security.
1. Trade regulation and contracting
• Use the government’s enormous purchasing power to require higher security standards of its vendors. The Federal Acquisition Regulation and its Defense equivalent are the bibles of U.S. government procurement of goods and services. The National Institute of Standards and Technology is the most influential standards-setting body in the United States. Together they could drive higher security into the entire market by ensuring federal demand for better products. These standards would deal with such topics as verifiable software and firmware, means of authentication, fault tolerance, and a uniform vocabulary and taxonomy across the government in purchasing and evaluation. Sound arcane? Maybe so, but we won’t have federal standards for products if we can’t even agree on how to evaluate them.
In support of this effort, the Office of Management and Budget (OMB) should collaborate with the Office of the National Counterintelligence Executive to create a model for acquisition risk analysis that should be applied uniformly throughout the government. And the OMB, which wields the budget hammer, should enforce it.36 Different agencies have different tolerances for risk. What may be acceptable in the Department of Housing and Urban Development may not be acceptable at the NSA, for example, but all our agencies should be measuring risk the same way. At present, that’s not the case.
• Forbid federal agencies from doing business with any Internet service provider that is a hospitable host for botnets, and publicize the list of such companies. The Department of Homeland Security and the FBI know who these ISPs are. Publishing a list of ISPs with which the government refuses to deal would also give other federal entities, state and local agencies, and private businesses a rational basis on which to take the same step. So long as businesses make individual rather than group decisions about bad ISPs, they would have no antitrust liability.
• Direct the Department of Justice and the Federal Trade Commission to definitively remove the antitrust concern when U.S.-based firms collaborate on researching, developing, or implementing security functions. Companies often cite the fear of antitrust law to explain their lack of cooperation on cybersecurity. The fear is overblown, but the government can remove it entirely. The Justice Department and FTC have the ability, without changing any laws, to approve one or more supervised forums in which government and businesses could exchange threat information in near real time without endangering competition. This step should be undertaken in collaboration with the European Union’s antitrust authorities in order to reduce the risk of conflicting international standards.
2. Role of the service providers
• Require Internet service providers to notify customers whose machines have been infected by a botnet. The big ISPs, like AT&T, Verizon, and Comcast, could quickly and easily cause a dramatic drop in Internet crime, but for a combination of legal and commercial reasons they don’t. These firms monitor their networks 24/7, and they hire top people to watch and interpret patterns in the network traffic. They’re not eavesdropping on particular conversations; they’re observing larger trends in order to route traffic surges efficiently and protect their own networks from attack—because they are constantly being attacked. So if thousands of computers suddenly start sending messages to a server in Belarus, the ISPs right away see an anomaly in the traffic pattern. And they don’t have to watch it very long to know they’re facilitating—or perhaps even hosting—a botnet. Whether they can trace the botnet back to its source depends on the cooperation of foreign ISPs, which may not be forthcoming, but they can see which computers are enslaved.
“ISPs are in a unique position to be able to attempt to detect and observe botnets operating in their networks.” That’s the conclusion of a paper submitted by several Comcast officials to the independent Internet Engineering Task Force.
37 There are thousands of ISPs in the world, but fifty of them account for more than half of all spam worldwide.
38 Yet it’s unlikely that you or anyone you know has been warned by their ISP that they’re part of a botnet. Even a behemoth like Microsoft had to get a court order in 2010 to force the ISP VeriSign to take down 227 .com domains that the botnet Waledac was using to spit out 1.5 billion spam messages a day.
39
If you ask an ISP official off the record why they don’t take down botnets as a matter of course, the reason you’re likely to get is “privacy.” This is confusing if you’re not a privacy lawyer, because ordinary people with common sense don’t think the electronic privacy laws were meant to protect criminal behavior on the Internet. But under federal law it is a crime for anyone to intercept any wire, oral, or electronic communication.
40 This applies to you, me, Comcast, AT&T, and federal officials unless they have an order from a judge. Most privacy lawyers have thought this means that an ISP can’t set a filter for botnet traffic on your computer—again, I mean watching traffic flow, not reading your e-mail—without violating the law. Since the law carries minimum damages in the amount of one thousand dollars per violation, a class action would undoubtedly be filed, and settling it would be very expensive. Fortunately, however, the act contains what’s called the “service provider exception.” That means that a provider of Internet services, like an ISP, can monitor traffic if it needs to do so in order to protect its own network—but not to protect your network. When this law was written in 1986, its authors conceived of the Internet as a big bicycle wheel, or a series of connected wheels, and imagined each spoke as a network. Companies were told, in effect, that it’s okay to monitor and protect their own spokes—or if you were an ISP, your own hub. But if you’re a hub, protecting a spoke is none of your business, so don’t do it. In other words, every actor on the Internet, whether the actor was Grandpa or AT&T, was thought to occupy his own little territory that was separate from everyone else’s little territory.
This is no longer a realistic conception of how the Internet works, if it ever was. You may think of yourself as just a spoke in a wheel, but you communicate through a hub. Your communications are part of the hub’s traffic; they occur over the hub’s networks. What’s more, if your machine is corrupted, you threaten to corrupt those of everyone else who communicates with you through that hub and beyond—because the hubs and spokes are all connected. The reverse is also true: Corrupted users who communicate with you will contaminate you, whether they intend to do so or not. An attack on one part of a network is therefore an attack on the network itself—period. If you’re going to defend the network, you’ve got to be able to defend it at every point. We do this in other aspects of civic life, but so far we don’t do it on the Internet. For example, you can’t legally drive a car with no brakes or headlights on a public roadway. If you do, you’re a menace not only to your own safety, but also to the safety of everyone else. But that’s not how privacy law currently works regarding electronic communications.
As presently understood, the service provider exception merely lets an ISP protect its own proprietary system. But who has the responsibility to protect the whole ball of wax? Nobody. Thus we have a legal regime that in the name of privacy and freedom lets fraud thrive. This is perverse. As currently understood (I believe it could be interpreted more expansively), the law does not create more privacy and freedom (which would be good). It just creates more insecurity. At least two ISPs have begun an interesting experiment of notifying their subscribers when they are infected and putting them into a “walled garden.”
41 The walled garden stops short of a quarantine or blockade, but others will know you’re in it and can decline to deal with you. The incentive to get cleaned up is therefore strong, and the ISPs are in a prime position to capture the remediation business.
So far, however, walled gardens have not caught on. The reason is that ISPs compete on market share. Market share drives the price of their stock, which in turn drives their behavior. To drive market share, the ISPs compete on breadth of coverage—we’ve all seen the back-and-forth ads with blue and red maps—but they don’t compete so much on quality, and not at all on security. The plain truth is, they’d rather have you corrupted but on their network than clean but on somebody else’s network. And they fear that if they told you you’re in a botnet, you’d unfairly blame them and switch to one of their competitors.
Reducing botnets would not solve all the problems of our Internet security, but it would be the single biggest step toward cleaning our networks, and one of the easiest. To accomplish it, we should permit—not require—ISPs to block traffic from infected customers according to a subscriber’s wishes. But we should require them to flag all such traffic, so others could refuse to accept it. Rules like this should apply in a neutral way to all technologies and companies and to all components of the media, whether e-mail, Web sites, or proprietary channels such as social networking sites. ISPs could then compete on how well they do this, and customers could decide whether they want the service at all.
These services could be bundled into basic fees or priced separately, like the telephone service that lets you refuse calls from those who block their numbers. If others are put on notice and still want to accept dangerous traffic, let them. You’re free to visit bad neighborhoods at 3:00 A.M. for any encounter you choose—but if you come back from an electronically bad neighborhood with an electronically transmitted disease, you should be tagged, so others can avoid you if they choose. Permitting the government to define the network behaviors that trigger this requirement would be a bad idea, however. Governmental regulation cannot keep up with the technology,
42 and letting the government decide whose communications are dangerous would be a bad precedent. Rather, government’s role should be limited to stating the general requirement and tasking a private consortium of ISPs to create flexible rules to implement it.
The primary enforcement model here is not the police or highway department; it’s public health. Early twentieth-century public health authorities had to deal with highly infectious diseases, such as tuberculosis (alas, they still do). At the extreme they resorted to quarantine measures, but promoting better hygiene was their focus. Cyber systems are becoming like human systems that constantly encounter microbes and viruses but learn to fight them off. One appealing aspect of this approach is that the government doesn’t have to do most of it. The ISPs are communication gateways and uniquely well suited to act as health monitors. Another advantage of the steps I propose is that the market would quickly correlate infections with particular software makers and network platforms. This, in turn, could drive competitive improvements in the way software is engineered. Flawless software engineering is an impossible goal. Rather, like an organic system, software should be designed to contain the consequences of failures and to permit recovery with minimal disruption.
43
The public has no idea how to achieve security, and even if it were willing to be inconvenienced to get it (it isn’t), few people or businesses know how. Most people are not computer scientists, just as they are not auto mechanics. They just want seamless convenience—but without creating unreasonable risk to their personal credit or corporate secrets. That’s where the anarchic Internet has failed. This is why virtual private networks are becoming the norm in the business world, and why the Internet has begun to fragment into a world of proprietary profit-making channels like social networking sites or Apple’s iPhone (which rigorously controls the applications you can run on it).
44 We are watching a classic pattern in which a capitalist culture nurtures innovation, lets it flourish, and finally figures out how to make it pay. There is no point celebrating the first phase while lamenting the last. They go hand in hand.
45
3. Energy standards
•
Direct the Federal Energy Regulatory Commission (FERC) to require the North American Electric Reliability Corporation (NERC) to establish standards that limit the ability of utilities to connect their industrial control systems directly or indirectly to a public network. We saw earlier that our scheme for establishing reliability standards for this critical industry is off the rails. The Energy Department’s inspector general has reached the same conclusion, noting in appropriately bland language that current standards do “not include essential security requirements.”
46 FERC’s ability to create standards has been hamstrung by the legislation that created it, and NERC is heavily influenced by the grid’s owners and operators. Amending the statute would be desirable, but it won’t happen soon. Meanwhile, FERC should direct NERC to begin establishing standards in this area, where none now exist. If we’re serious about protecting our critical infrastructure, we must begin to restrain the connection of the electricity grid to public networks.
4. Tax code
•
The Internal Revenue Code is a powerful driver of corporate behavior. Use it. The Internal Revenue Code is full of incentives and punishments to encourage or discourage behavior according to Congress’s collective preferences. If we want to encourage capital investment, for example, we accelerate the rate at which capital depreciates, creating larger paper losses to write off against current income. If we want to discourage fancy business lunches, we limit the percentage of the expense that can be written off. If we want to help working families with modest incomes, we create a child-care allowance. And so on. We’ll know when Congress has become serious about securing the nation’s networks when it begins using tax incentives to encourage investment in cybersecurity.
5. Research
Congress should increase support for public and private research in the following areas:
•
Attribution techniques and identity standards Reliable, swift attribution of hostile foreign or unlawful behavior on our networks is as essential in cyberspace as it is on the sidewalk. It’s easy to commit espionage and other cybercrimes because nobody knows who you are unless they devote inordinate resources to finding out, and sometimes it can’t be done at all. As we saw in chapter three, this is known as the attribution problem, and it has three levels. First, what machine launched the attack or the malware? Second, who was at the keyboard? Third, whom were they working for? For intelligence and criminal investigation purposes, all three levels are critical, but we’re often stuck on the first level. In the consumer context, authentication techniques should vary according to the value of the transaction. Browsing a catalog could be done anonymously, but buying the furniture should require verification of the buyer’s credentials. In a business context, authentication techniques should vary according to the sensitivity of the information you want to access. In a criminal context, the government should be able to trace and attribute behavior under legally defined circumstances, which ordinarily should include judicial oversight. In the decades before the recent advent of phone number spoofing, no one thought it was an invasion of privacy when a phone number could be conclusively associated with a particular phone. You could block your caller ID—and others could block anonymous calls. In most circumstances, however, your call could be traced. To return to that level of accountability in cyberspace, we need a robust public-private research effort into better attribution techniques.
47
•
Verifiable software and firmware, and the benefits of moving more security functions into hardware The greatest technological challenge to cybersecurity is the near impossibility of reliably evaluating the security of electronic systems, hardware, and code. We’ve known since the mid-1980s that finding subversive code in the much simpler systems of that era was overwhelmingly difficult. Today’s systems are far more complex. We cannot assure security by examining a computer chip with millions of logic gates or a software program with a million or more lines of code.
48 This inability has become a strategic problem for Western countries, because we have mostly offshored our chip manufacturing and software writing to companies in Asia. These companies, with the guiding hand of foreign intelligence and security services, can then plant hooks or backdoors in systems that we depend on but cannot evaluate.
49 This problem cannot be fixed with a mercantilist commitment to buying only your own nation’s goods. “Made in USA” doesn’t tell you much when a product that’s assembled here contains components from several different countries. Or when a U.S. company makes an identical product in three different countries. Or when a product that’s actually made in Texas contains software written in Russia. Or when the software was written in this country—by Chinese programmers.
50 The industrial supply chain is global, and reversing that trend would be an economic disaster. But globalization brings security vulnerabilities. Public support for research in this area would increase our ability to evaluate critical components and encourage products to be designed with evaluation in mind.
•
Feasibility of an alternative Internet architecture When I was a little kid, I met a big kid who knew how to make long-distance phone calls for free on a pay phone. He’d make pinging sounds into the phone, and these sounded to the operator like he was dropping quarters into the slot, so she’d put the call through. He could do this because the phone company had built the system so that data (his voice) and instructions (his payments, which authorized a connection) resided in the same memory. AT&T had a huge investment in that technology, but eventually AT&T abandoned it for an electronic switching system that was far more secure.
The Internet today works like pay phones did when I was a kid. Joe Markowitz, a former director of the CIA’s Community Open Source Program Office, thinks it’s time to move away from that model. “We should get rid of IP”—that’s the current Internet protocol—“and go to a stratified network where we take the control channel out of the subscriber space,” Markowitz says.
51 Advocating a hugely expensive change in Internet architecture now would be premature. But it is not premature to fund research into alternatives that would address this fundamental weakness of the Internet.
6. Securities regulation
•
Electric utilities that issue bonds should be required to disclose in the risk factors section of their prospectuses whether the command-and-control features of their SCADA networks are connected to the Internet or other publicly accessible networks. As we saw in chapter five, companies that expose their SCADA systems to the Internet have assumed the risk of severe disruption. That risk should be disclosed to the holders of their securities and to bond rating agencies. Issuers might rebel, but many of them that follow this risky practice know it creates an “unresolved security issue.”
52 SCADA networks were built for isolated, limited access systems. Allowing them to be controlled via public networks is rash.
•
Toughen public audit standards for cybersecurity. Publicly traded companies with insecure networks jeopardize shareholders’ investments. To be sure, the degree of risk may vary sharply among companies. A company that processes financial transactions, or whose value depends on its trade secrets, will have a substantially higher risk than a company that does not. But business interruption is a material risk for every public company, so bond rating agencies should evaluate that risk, and the Securities and Exchange Commission should audit for it.
7. International relations
•
The United States should engage like-minded democratic governments in a multilateral effort to make Internet communications open and secure. Electronic outlawry is an international phenomenon. No government acting alone can reduce it to tolerable levels. Concerted multinational pressure should be brought to bear on states that do not punish international cybercrime. To some degree even China and Russia can be enlisted in this effort, but accomplishing this goal will require a willingness to bring severe pressure, including financial pressure, on uncooperative governments.
Beyond crime prevention, however, the prospect for a broad strategic consensus among the major powers is limited. China, Russia, Iran, and other authoritarian countries would like nothing better than a return to the days when governments could control what their populations could read, know, and publish. These governments will regrettably succeed to some degree within their own countries.
53 During the Egyptian revolution of 2011 we saw an authoritarian government that owned the Internet “pipes” shut down that country’s electronic communications in a vain effort to keep its citizens isolated from events in Cairo.
54 In Russia, the Soviet Union may have fallen, but the Leninist tactic of stealing and twisting the vocabulary of freedom to oppressive ends has not. The Russian rhetorical foray against “information aggression” is a prime example of the revival of this tactic. Aggression to the Russian government is anyone’s attempt to say anything it doesn’t want its population to hear. The effort by Russia, China, Zimbabwe, and other authoritarian governments to drive this view into international organizations and international law has become intense, and it must be vigorously opposed by a united coalition of democratic nations with a common program. This effort will require more than concerted opposition to an agenda devised in Moscow or Beijing, however. It will require a clear vision of what we want as well as what we don’t want—and that implies a hardheaded program for Internet communications that includes security as well as liberty.
What the Private Sector Should Do
But why wait for the government to do something?
The immediate vulnerabilities in most systems are not technological or legal, and the government does not create them. They stem from the failure to implement available technology and to manage people and systems intelligently. A report released in early 2011 showed that 73 percent of companies surveyed had been hacked, but 88 percent of them spent more money on coffee than on securing their Web applications.
55 In most public and private organizations I’m familiar with, the biggest contributors to information insecurity are managerial indifference and erratic human behavior. Your employees, your partner, and your spouse and kids don’t practice good cybersecurity. You probably don’t either. A recent survey of IT professionals in Europe showed that half of them failed to follow basic security practices for mobile devices.
56 Posting cybersecurity rules and expecting your employees or partners to obey them is a waste of breath and paper. When systems are designed to leave security in the hands of users, you can forget about security. We shouldn’t expect our colleagues—let alone Grandma and Grandpa—to understand and implement security options on their computers, and few people of any age or level of sophistication are willing to tolerate security measures that are even slightly inconvenient. Besides, even well-trained people make mistakes that can compromise entire systems. These facts will not change. Systems must therefore be implemented and managed to take them into account.
57
Here are eight steps that every organization should take to enhance its electronic security. This is not a how-to manual or an information security plan, which would be highly detailed and tailored to a specific organization. But these steps indicate some of the basic components of such a plan.
1. Clean up your act. A confidential survey commissioned in 2009 showed high levels of botnet code on the systems of a large number of household-name U.S. companies. I’m not at liberty to identify the companies—but it’s likely that others will duplicate the survey and make the names public, because the survey was based on data gathered from open sources. It showed that a botnet attack—whether for criminal, political, or military purposes—could be launched from the systems of major U.S. corporations against other U.S. targets. If that occurred, the liability consequences could be devastating. As I noted earlier, cybermilitary operations could be launched against the United States from within the United States. This survey demonstrates that such operations could be launched from the systems of major U.S. companies. When another survey like this is eventually published, the shock wave is likely to be felt on Wall Street. Smart companies will clean up their acts before that happens. The alternative is to behave like the electric utility that had no means to monitor the communications traffic on its networks—but was confident it was secure because it had discovered no intrusions.
2. Control what’s on your system. If you manage a commercial enterprise, you have the ability to know who is running unauthorized hardware or software on your system. The guy running peer-to-peer software may be undressing your company electronically without your knowing it. The other guy who connects his family laptop to your system may be infecting you with malware that could shut you down. You can monitor this and stop it from happening. You can also learn a lesson from the military and require employees to use only the encrypted memory sticks you issue them, and then clean and reissue them periodically.
58
3. Control who’s on your system. This is a matter of both physical and electronic access. Some systems are more sensitive than others and need to be locked up in special rooms accessible to very few. Others can be more open. But physical access to every system should be controlled. Some people have responsibilities that require them to have electronic access to information that others don’t need to see. This sort of role-based access control used to be unusual except in sensitive government agencies; that’s no longer the case. As the WikiLeaks fiasco demonstrated, there was no reason to give an army private in Iraq access to sensitive diplomatic traffic involving meetings with Zimbabwe’s opposition leader or Iceland’s economy. Giving your mailroom clerk access to your proprietary engineering drawings is the same sort of mistake. She doesn’t need it, and she can destroy you with it.
Deciding who gets access is actually the easy part of access control, however. Companies must also remove access when people change jobs or leave the company. Many employees feel entitled to take sensitive information with them when they leave, and repeated surveys show that many plan to do so. They steal whatever’s most sensitive: the customer database, M&A plans, financials, R&D plans.
59 Managing this vulnerability requires close integration between human resources, information security, and physical security.
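The principle of role-based access, and of revoking it the day someone leaves, reduces to a small amount of bookkeeping. The sketch below is purely illustrative: the roles, resources, and class design are invented for the example, not drawn from any real product.

```python
# Minimal role-based access control: users hold roles, roles hold
# permissions, and revoking the role revokes everything at once.
ROLE_PERMISSIONS = {
    "engineer": {"engineering-drawings", "build-servers"},
    "mailroom": {"shipping-records"},
}

class AccessControl:
    def __init__(self):
        self.user_roles = {}  # user -> role

    def assign(self, user, role):
        self.user_roles[user] = role

    def revoke(self, user):
        # Ideally triggered automatically by the HR offboarding process.
        self.user_roles.pop(user, None)

    def can_read(self, user, resource):
        role = self.user_roles.get(user)
        return resource in ROLE_PERMISSIONS.get(role, set())
```

The point of the design is the revoke call: when human resources, information security, and physical security are integrated, offboarding fires it the moment an employee changes jobs or walks out the door.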
4. Protect what’s valuable. You can’t protect everything, and you certainly can’t protect everything equally well. This is a matter of understanding the difference between diamonds and toothbrushes, as McGeorge Bundy put it.
60 Identify the business plans and intellectual property whose loss would cause serious harm to your company. Personally identifiable and sensitive health care information must also be protected carefully. None of this information belongs on your e-mail server. Design your server architecture and access controls accordingly. And make sure the information is encrypted to a high standard.
Most data breaches are caused by carelessness and bad management—like allowing sensitive information to be carried around on portable devices without encryption. In August 2008, a laptop containing the unencrypted personal information of about thirty-three thousand travelers registered by the U.S. Transportation Security Administration’s Fast-Pass program—this is the program that lets trusted, preregistered travelers zip onto airplanes without the usual security hassles—was stolen from the San Francisco airport. A month later Tennessee State University reported missing a thumb drive with the financial records of nine thousand students. That kind of information does not belong on a flash drive, and it should have been encrypted. If you’re a manager, the question is not whether your employees will lose laptops or flash drives. The question is how many will get lost—and what will be on them.
61 Yet most companies have no policies governing mobile media. Even fewer enforce the policies they have.
5. Patch rigorously. Patches are software fixes for newly discovered software vulnerabilities; software vendors issue them regularly. Yet studies have shown that many penetrations of commercial systems take place through unpatched vulnerabilities. In 71 percent of those cases a patch had actually been available but not used for more than a year.
62 Firms that behave this way are like drivers who leave the keys in their car overnight on a city street with the windows open. They shouldn’t be surprised when it’s gone in the morning. The patch regimen you should follow depends on the intricacies of your system. Some firms should automate patching. Others, where patching cannot be centrally controlled, should automatically shut out users who fail to install them. Some firms that do have central servers cannot patch automatically, because they must first test the effect of patches on interrelated systems. In any case, a systematic and rigorous approach to patching is elementary. If you can’t manage it yourself, providers of cloud services can do it for you.
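What a systematic approach means in practice is simply comparing what is installed against what has been fixed, and flagging anything that lingers. The data shapes below are hypothetical (real vendor advisory feeds are far richer), but the underlying arithmetic is this simple:

```python
from datetime import date

def overdue_patches(installed, advisories, today, max_days=30):
    """Flag packages whose fix has been available longer than max_days.

    installed: dict of package name -> installed version tuple,
    e.g. (1, 0, 1). advisories: list of (package, fixed_version,
    release_date) tuples, a hypothetical feed shape for illustration.
    """
    overdue = []
    for pkg, fixed_version, released in advisories:
        current = installed.get(pkg)
        if current is None:
            continue  # not installed, so the advisory doesn't apply
        # Tuple comparison orders versions component by component.
        if current < fixed_version and (today - released).days > max_days:
            overdue.append(pkg)
    return overdue
```

A report like this, run daily, is the difference between patching as policy and patching as paperwork: anything on the overdue list is a door that has been standing open for a month.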
6. Train everybody. If you don’t train and retrain your personnel, don’t be surprised when they do things that horrify you. The organizers of the DEFCON conference in 2010 ran a contest to see who could get the most information from a Fortune 500 company. Lots of charm and a few lies produced appalling results. One contestant, for example, called an employee out of the blue and said he was a KPMG auditor working under a deadline and needed help, fast. He got the employee talking, and before long he had loads of confidential information.
63 This is a social engineering technique, like sending an e-mail that pretends to come from a friend of the recipient. These scams exploit the weakest link in the system—us. In this world, a company that fails to train its employees to recognize social engineering and to practice other basics of computer security is running unnecessary risks.
7. Audit for operational effect. Audits are not merely tortures you suffer at the hands of the government. They are tools of managerial control. If you can’t audit your electronic networks, you have no idea what’s occurring on them. If you don’t want certain activities to happen on your networks, design them to make those activities impossible. If you want to permit a particular activity only under certain circumstances, design the system so that the activity cannot occur unless it is authorized. Then make sure the authorization can be audited and the audit trail can’t be tampered with.
64 Unfortunately, many companies that should be doing this don’t.
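One common way to make an audit trail tamper-evident is to chain each entry to the previous one with a cryptographic hash, so that altering any record breaks every digest after it. Here is a minimal sketch; the record layout is invented for illustration:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder digest preceding the first entry

def append_entry(log, entry):
    """Append an audit record whose digest covers the previous digest."""
    prev = log[-1]["digest"] if log else GENESIS
    payload = json.dumps(entry, sort_keys=True)  # canonical serialization
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"entry": entry, "digest": digest})

def verify_log(log):
    """Recompute the chain; any edited entry breaks verification."""
    prev = GENESIS
    for record in log:
        payload = json.dumps(record["entry"], sort_keys=True)
        if hashlib.sha256((prev + payload).encode()).hexdigest() != record["digest"]:
            return False
        prev = record["digest"]
    return True
```

In production the head of the chain would also be copied somewhere the system's own administrators cannot quietly rewrite (append-only storage, or a separate machine), since whoever controls both the log and its digests can simply re-chain them.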
8. Manage overseas travel behavior. In most countries, you shouldn’t expect any privacy in Internet cafés, hotels, offices, or public places. Hotel business centers and phone networks are regularly monitored in many countries. In some countries hotel rooms are often searched. Transmitting sensitive government, personal, or proprietary information from abroad—or even taking it abroad—is therefore risky. The risk is not limited to the information you take with you, however. Security services and criminals can also insert malicious software into your device through any connection they control—the hotel’s network, for example. They can also do it wirelessly if your device is enabled for wireless. When you connect to your home server, the malware can migrate to your business, agency, or home system, can inventory your system, and can send information back to the security service or freelance culprit. You cannot eliminate this risk, but you can minimize it by following the guidelines published by the Office of the National Counterintelligence Executive.
65
Steps like these must be driven deep into the bones of an organization to become effective, and that requires leadership and follow-through. Whatever money companies spend on expert advice in setting up and managing their systems will be cheap compared to the cost of losing their intellectual property, leaking sensitive information, or rebuilding their systems from scratch after they discover advanced, persistent malware that their best experts can’t clean up.
I BEGAN THIS BOOK with an image of Philip Johnson’s iconic Glass House, whose transparency eventually became intolerable even for its designer. And then I proceeded to show how all of us—including our companies, our government and military, and even our intelligence agencies—are now living in a collective glass house. Unlike the transparency of Philip Johnson’s sleeping arrangements, information transparency now threatens much more than our diminished modesty. The level of Internet crime is staggering. Our companies and government are under relentless cyberassault twenty-four hours a day, and they are bleeding—we are bleeding—military secrets, commercial secrets, and technology that drive our standard of living and create our power as a nation. The astounding advances in the electronic processing and storage of information that have given us so much wealth and pleasure have also left us nearly defenseless against endemic crime and systematic espionage by foreign intelligence services, criminal gangs, and unscrupulous competitors. Much of the crime originates in Eastern Europe and Nigeria. The most persistent espionage—particularly economic espionage—originates in China. Yet as bad as this hemorrhage of vital information continues to be, the impending danger is even greater. As we saw when we examined our electricity grid and other critical infrastructure, electronic systems do not merely create and store information; they keep the lights on and make things work. If you can penetrate electronic networks to steal information, you can also corrupt them or shut them down. And unlike Philip Johnson, we don’t have an electronic version of his Brick House we can move into. Put simply, we have become America the Vulnerable.
The United States cannot defend the electronic networks that control our energy supply, keep aircraft from colliding in midair, clear financial transactions, or make it possible for the president to communicate with his cabinet secretaries. We cannot permit this situation to continue and remain in control of our destiny. For the time being, no nation is likely to risk all-out conflict by attacking our infrastructure, but as I indicated in “June 2017,” our power can be undermined by incursions that do not lead to open war. In any case, it would be profoundly foolish, and weak, to consign our security to the goodwill of other nations, whose intentions may change. It may be unwise for China, Russia, or the United States to upset the fragile limitations on cyberoperations that each now observes, but other powers will achieve parallel capabilities and may prove less restrained. Meanwhile, the vast gap between the capabilities of advanced nation-states and of transnational terrorist organizations and criminal gangs will continue to narrow, and these groups are unlikely to be restrained by the fears of retaliation or risk of economic collapse that create a measure of stability among interdependent nations. In the meantime, the assaults on our systems continue around the clock, and cyberespionage against American and other Western companies has reached an unparalleled level.
This mess can be managed, and the vulnerabilities reduced, but only with an energized commitment from a government willing to bring concerted power to bear on its own departments and from a private sector willing to harden its systems. It will also require Americans to recognize that the common good depends on the common network, which must be defended and strengthened. The steps I’ve proposed are a modest but essential beginning.