DEGRADING DEFENSE
THE AMERICAN MILITARY-INDUSTRIAL complex is the world’s fattest espionage target. While the scope and intensity of economic espionage have assumed startling proportions, the “traditional” espionage assault on our national defense establishment dwarfs anything we have ever before experienced. This assault is constant, it is relentless, and it is coming from all points on the compass in ways both old and new. As I previously indicated, about 108 foreign intelligence services target the United States. The number of Russian intelligence officers in the United States fell sharply at the end of the cold war but has since rebounded; it is now greater than it was when the Soviet Union collapsed. The Russian services are adept at both human and technical intelligence, and they now target both economic and military secrets. The other top-tier intelligence threats come from China and Iran. The Cuban services, trained by the KGB, are also highly skilled and disciplined at human intelligence, but Cuba is not a strategic threat to the United States. Since the late 1990s, however, the overall threat has become both more intense and more complex, for reasons that should now be obvious: Stealing information electronically is cheap, easy, and low risk.
What are foreign services after? They want to know how to get into our systems and remain there undetected. They want information about plans, weapons, and people—the identities, roles, and responsibilities of everyone who works in an organization. That information identifies who has access to the other kinds of information they really want. In the case of the Chinese, we know what they want because they have told us in writing.
Red Flower of North America
In the mid-1960s, when all of China was a poor and chaotic tyranny, a modest, compulsively hardworking engineer named Chi Mak was given permission to leave the mainland and move to Hong Kong, which was British, free, and rich. For a time he worked in a tailor shop, but by the 1970s he was working for the Hongkong Electric Company and, in his off-hours, keeping copious notes on the hull numbers and other details of U.S. Navy ships coming in and out of Hong Kong.1 In 1978 he emigrated to the United States, where the defense contractor Rockwell employed him as a stress analyst for the space shuttle. Air- and spacecraft are subject to enormous stresses in flight. Understanding those stresses, knowing how to test for them, and figuring out how to withstand them are what stress analysts do. We’re good at this, while the Chinese have a lot to learn. By 1979 Chi Mak was supplying stress and testing information to Chinese officials. Meanwhile, living a modest married life in a California suburb, he made friends, worked hard, and earned the respect of his colleagues. He became an American citizen and gained a secret-level clearance. According to the government, he also took masses of classified and unclassified documents home, some of which he shipped back to China, much to the satisfaction of his handlers. No one asked him to “hurt” the United States, of course. His handlers merely wanted him to “help” China. From time to time he was invited to return to China to lecture on specified aspects of his work, such as “F-15 Jet Fighters,” “Helicopter structure design,” and “Fight lift,” and he did—secretly. All the while, he communicated with his handlers in code, using the name Red Flower. When Boeing acquired Rockwell the pattern continued. Chi Mak was reliable—so reliable that his handlers sometimes used him as a conduit for information from other spies, with code names like Chrysanthemum. That was bad tradecraft, because when the FBI rolled up one spy it had a direct link to others. But it was convenient, and, as we’ve already seen, people will often break rules for the sake of convenience.
The Chinese were specific about what they wanted and gave Chi Mak written task lists that went far beyond his original area of expertise. They were after the technology that makes the U.S. Navy the world’s best fleet as well as its biggest. China’s interest in our navy is not surprising. An armed conflict between the United States and China would likely be a naval confrontation, and naval modernization is one of China’s highest priorities.2 Topping the task list was the Quiet Electric Drive propulsion system that would allow our warships to run with less noise than a Lexus sedan—and make them extremely difficult to track. By that time Chi Mak was working for L3 Power Paragon and was the lead engineer on the quiet drive project.3 His task list, which he had shredded but which the FBI had painstakingly put back together, also listed sixteen other topics, among them the DD(X) (the next generation of U.S. Navy destroyers), aircraft carrier electronics, submarine torpedoes, electromagnetic artillery systems, and so on. This technology cost American taxpayers tens of billions of dollars and many years to develop. But the People’s Republic of China didn’t pay us for it. The PRC now has warships that look remarkably like ours. Their new ships also have radars as good as those on the DD(X), but with one difference: They know the characteristics of our radars, but they have modified their own in ways we must guess at.
Chi Mak spied for about twenty-five years. The FBI arrested him and his wife in 2005 just as they were yanking his brother Tai Mak and Tai’s wife, Fuk Li, out of the security line at Los Angeles’s airport before they could board a midnight flight to Hong Kong. A PRC countersurveillance team was at the airport to watch them board. The Chinese knew they were coming, because Tai Mak had left a telephone message: “I work for the Red Flower of North America and will be traveling to the PRC on October 28th and will be bringing the assistant.”
Tai Mak and his wife were arrested carrying a CD-ROM disk stuffed with encrypted secrets in disguised file formats that were further hidden under innocent-looking music files. A typical CD-ROM disk holds about 740 megabytes of data. In paper terms, this is about three fourths of a pickup truck full of printed pages. Not long afterward the FBI rolled up an entire network of spies whose lines of communication ran through Chi Mak.4 He is the first spy (that we know of) through whom we lost critical military secrets and who was not a government employee. He will not be the last. If further proof were required, the case illustrates how thoroughly the functional boundary between the private sector and the government has dissolved.
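That pickup-truck figure is arithmetic, not poetry. Use round numbers of my own choosing: one byte per printed character, about 2,500 characters to a page, and 500 pages to a ream. Then

$$\frac{740\times 10^{6}\ \text{bytes}}{2{,}500\ \text{characters/page}} \approx 296{,}000\ \text{pages} \approx 590\ \text{reams}.$$

Five hundred ninety reams stack up to roughly sixty cases of copy paper, sixty-odd cubic feet, which is indeed in the neighborhood of a pickup truck’s bed.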
Chi Mak is in prison for twenty-four years—but not for espionage. The basic espionage statute criminalizes giving or selling national defense information to someone not entitled to receive it “to the injury of the United States or to the advantage of any foreign nation.”5 The Justice Department will not prosecute under this statute unless the information transmitted is classified, however, and although Chi Mak had access to classified information, the government could not prove that he actually sent classified information to China. He was therefore convicted of economic espionage, under a different statute.6 His brother Tai was sentenced to ten years for conspiracy to commit export violations, and Chi’s wife Rebecca was sentenced to three years for acting as an unregistered agent of the PRC. Their son Billy, also part of the conspiracy, was sentenced to the time he had served while awaiting trial. All three were to be deported upon the conclusion of their sentences.7
I am not proposing that we expand our classification system. If we sometimes classify too little, we often classify too much. But when it comes to research, our classification system creates a rigid dichotomy between basic science and developmental research, with no intermediate category. It assumes a kind of on/off switch, a point at which all of a sudden laboratory work ceases to be pure science and becomes the less glorious business of developing something practical. Identifying technologies with practical applications, military or not, is difficult, especially in the early stages. Venture capitalists spend their fortunes and professional lives trying to do it, and the best of them lose their bets three or four times out of five. As technologies move closer to practical applications, however, the odds shorten. Take lasers, for example. Research on lasers is an inquiry into how certain kinds of light behave. In its early stages we don’t classify that research, even if it’s sponsored by the Defense Department, nor should we. There is a somewhat arbitrary point, however, at which it becomes apparent that certain lasers are likely to be weaponized. That process involves both science and engineering. At a still later stage there are no scientific questions to answer, just engineering challenges. Our classification process is not sophisticated enough to recognize the middle ground, where we can identify technologies that are likely to be classified in the near future. We should not only become more nimble at protecting R&D in that middle ground, we should also be scrupulous about whom we permit to do the work. As a nation we are not good at this.
The Chinese intelligence services meanwhile have made an art of exploiting the seam between the classified and the preclassified, where technologies begin to emerge from basic science but which, under our rules, are not yet military R&D. This is the sweet spot to an espionage strategist as well as to a venture capitalist, and this is an area where the Chinese services spend a great deal of effort. In essence, the PRC is leveraging the Pentagon’s R&D budget in support of its own war-making capability.
The Chinese also specialize in illegally exporting dual-use technology and military equipment that is on our Munitions List. In 2009, for example, an ethnic Chinese U.S. citizen pleaded guilty to violating the Arms Export Control Act by sending China a low-temperature fueling system destined for use by the People’s Liberation Army. Jian Wei Ding and Kok Tong Lim pleaded guilty in the same year to conspiring to illegally export high-modulus carbon fiber to China and other countries.8 Of the twenty-three arrests and convictions for industrial espionage in 2008, eight involved China and nine involved Iran; no other country came close. The list of technologies involved included hard-core military equipment, such as infrared assault-rifle scopes, military aircraft components, and missile technology, as well as items that show the convergence of military and civilian technologies, such as engineering software, telecommunications equipment and technology, and certain kinds of amplifiers.9
Chinese espionage methods are unusually varied. In contrast to the Russians, who are highly professional, the PRC often enlists amateurs from among a huge pool of sympathizers. In 2008 there were about 3.2 million ethnic Chinese living in the United States, heavily concentrated in a few large cities. About half of them were born in China.10 They are among our best-educated and most productive residents. Additionally, there were about 600,000 visa applications from the Chinese mainland alone in 2009, of which 98,500 were for students. These numbers are rising at an annual rate of 16 to 20 percent.11 In the 2009–10 academic year, there were about 128,000 Chinese students studying here.12 About a million Chinese tourists visited the United States in 2010.13 These figures are a sign of American strength, not weakness. The huge presence of ethnic Chinese in North America nevertheless represents a rich recruiting ground for Chinese intelligence services, and they exploit it. In the most startling example, an employee of Rockwell (later Boeing) named Dongfan “Greg” Chung volunteered himself. “I don’t know what I can do for the country,” meaning China, he wrote to a professor in the PRC. “Having been a Chinese compatriot for over thirty years and being proud of the achievements by the people’s efforts for the motherland, I am regretful for not contributing anything.”14 Later he became part of the Chi Mak spy ring and contributed quite a bit before he was arrested and convicted.
The motivations for espionage are varied.15 In some cases they are ideological, as with Chi and Chung.16 The more common motivations, however, are money, ego gratification, and the resentments that come from thwarted ambition. Kuo Tai-shen, a Taiwanese businessman, was recruited recently to milk secrets from a midlevel Pentagon employee named Gregg Bergersen, who knew he was violating the law but thought he was helping Taiwan rather than the PRC. By feeding information to Kuo, Bergersen was greasing the skids for the kind of lucrative postretirement contract he saw others getting but which had eluded him. He thought he was reeling in a Taiwanese high-roller client, when in fact he was a pigeon in a PRC trap. The FBI caught Bergersen on tape exchanging classified information with Kuo for cash.17 Now they’re both in prison.
The rationale for espionage, as the novelist Alan Furst remarked, is a matter of MICE: “the m stands for money, the i for ideology, the c for coercion, and the e for egotism.”18 The Chinese have not omitted coercion. Several years ago a PRC agent in the United States made an offer to an ethnic Chinese executive of a major U.S. corporation. But the target was loyal to his adopted country and his company; he turned it down. Some time passed. Then they approached him again. This time the pitch was different: Look, the agent essentially said, your mother back in China is old and sick. She should be in the hospital. Unfortunately our hospitals are crowded, and she’s not near the front of the queue. Would you like to reconsider our offer?19
Counterintelligence officials work these cases night and day and can talk about very few of them. It would be a mistake to believe we have caught them all. Many red flowers are growing in North America.
Cyberattacks on the Pentagon
I began this book with the story of how Chinese hackers broke into Pentagon systems and carted off ten to twenty terabytes of data—so much information that, had it been on paper, they would have needed miles and miles of moving vans lined up nose to tail to cart it away. That attack, known as Titan Rain, was part of a series that began in 2003. It targeted the Defense Information Systems Agency, the Redstone Arsenal, the Army Space and Strategic Defense Command, and several computer systems critical to military logistics.20 Attacks like these persist, and their intensity is breathtaking. In 2007, there were 43,880 reported incidents of malicious cyberactivity against our Defense networks. In 2009, the figure was increasing at a 60 percent rate and was already at 43,785 by midyear. A large percentage of these attacks originate in China and are part of a series reportedly known as Byzantine Hades.21 Defending against these attacks is extremely expensive. During one recent six-month period the U.S. military alone “spent more than $100 million . . . to remediate attacks on its networks.”22 The Office of the Secretary of Defense had to go off-line for more than a week in 2007 to protect itself from attacks coming from China and to clean up its systems.23 The same kinds of attacks that occurred in 1998 were still happening in 2009, and they continue today.24
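The moving vans survive the same kind of back-of-the-envelope check. Keep my earlier round numbers (2,500 characters to a page, 500 pages to a ream), and suppose, generously, that a big moving van can legally haul about ten thousand pounds, which at five pounds per ream is 2,000 reams per van. Ten terabytes then works out to

$$\frac{10\times 10^{12}\ \text{bytes}}{2{,}500\ \tfrac{\text{chars}}{\text{page}} \times 500\ \tfrac{\text{pages}}{\text{ream}} \times 2{,}000\ \tfrac{\text{reams}}{\text{van}}} \approx 4{,}000\ \text{vans}.$$

At thirty-odd feet of truck apiece, that is a nose-to-tail line more than twenty miles long, and twice that at the twenty-terabyte end of the estimate.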
Penetrations of Defense networks follow the same paths we encountered when we examined commercial networks in the previous chapter. Peer-to-peer software, or P2P, is a major problem. It permits your computer to communicate with my computer directly, bypassing the servers (at providers like AT&T, Comcast, or British Telecom) that normally funnel e-mail and other traffic between us. With P2P there’s no funnel. Just as in a commercial context, unless P2P is configured just right, it drops the drawbridge into an information system. The Washington Post disclosed in 2009 that personal information on “tens of thousands” of U.S. servicemen and -women, including Special Forces members and others working in war zones, had been downloaded and was found floating electronically in China and Pakistan. The army barred the unauthorized use of P2P in 2003, and the entire Defense Department followed suit in 2004.25 But that merely serves to emphasize a point that cannot be stated too often: Policies regarding information systems that are not expressed technically are little more than blather. No one pays attention. If you don’t want people to be able to run unauthorized P2P on your system, you must design and build your system so that such software cannot run, or so that the system pinpoints exactly where it is running.
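To see concretely why there is no funnel, here is a bare-bones sketch of a direct peer-to-peer exchange. Every name and number in it is my own illustration, not anything taken from a Defense network; the point is only that once each peer knows the other’s address, no server in the middle ever holds a copy of what they trade.

```c
/* peer.c: a minimal sketch of a direct peer-to-peer exchange.
 * Build: cc peer.c -o peer
 * On one machine:  ./peer -l 9000
 * On the other:    ./peer <first-machine-ip> 9000
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s -l|<peer-ip> <port>\n", argv[0]);
        return 1;
    }
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(atoi(argv[2]));

    if (strcmp(argv[1], "-l") == 0) {
        /* This peer waits for the other to call. */
        addr.sin_addr.s_addr = INADDR_ANY;
        bind(fd, (struct sockaddr *)&addr, sizeof addr);
        listen(fd, 1);
        int conn = accept(fd, NULL, NULL);
        char buf[256];
        ssize_t n = read(conn, buf, sizeof buf - 1); /* data arrives directly */
        if (n > 0) { buf[n] = '\0'; printf("received: %s\n", buf); }
        close(conn);
    } else {
        /* This peer dials the other by IP address. No intermediary
         * server stores or relays a copy of what is sent. */
        inet_pton(AF_INET, argv[1], &addr.sin_addr);
        connect(fd, (struct sockaddr *)&addr, sizeof addr);
        const char *msg = "contents-of-a-shared-file";
        write(fd, msg, strlen(msg));
    }
    close(fd);
    return 0;
}
```

Real P2P programs add peer discovery on top of this, and that is where the trouble starts: to be found by the peers you want, you open a door that anyone can knock on.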
Other penetrations exploit vulnerabilities in the same commercial software we all use—vulnerabilities that let an attacker inject malicious code right into a system. This is often done with a tactic called buffer overflow. A buffer is a region of computer memory where data is stored temporarily. In a buffer overflow, data too big for the buffer spills past its boundary and overwrites adjacent memory, which may hold other data or executable code. This can be done intentionally; it’s an attack methodology. In my opinion, software companies could eliminate this vulnerability, but they haven’t done it yet. Consumers and commercial buyers don’t demand it, and the Pentagon hasn’t been willing to pay for it.
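For readers who want to see the mechanics, here is the textbook flaw in miniature. The variable names are mine, and the exact layout is compiler-dependent; a modern compiler may reorder the variables or kill the program with its stack protector, which is rather the point. But the unchecked copy is exactly the kind of bug attackers hunt for.

```c
/* overflow.c: a toy stack buffer overflow.
 * Build with the usual defense disabled to observe the effect:
 *   cc -fno-stack-protector overflow.c -o overflow
 */
#include <stdio.h>
#include <string.h>

int main(void) {
    char authorized = 0; /* adjacent data an attacker would like to flip */
    char buffer[8];      /* temporary storage: the "buffer" that overflows */

    /* strcpy never checks length. A string far longer than eight bytes
     * pours into the buffer and spills over whatever the compiler placed
     * next to it. In a real attack the overflowing bytes are crafted
     * machine code or a forged return address, not capital As. */
    strcpy(buffer, "AAAAAAAAAAAAAAAAAAAAAAAA");

    if (authorized)
        printf("access granted: adjacent memory was overwritten\n");
    else
        printf("access denied\n");
    return 0;
}
```

Memory-safe languages and bounds-checked library routines close this hole, which is why it is fair to say the vendors could eliminate it if their buyers insisted.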
Information that used to be top secret now gushes out as if from an open fire hydrant. WikiLeaks gets the most attention in this department; we’ll visit that topic in chapter 8. But WikiLeaks is not the worst offender. Starting in 2007 or before, and extending well into 2008, cyberintruders robbed Pentagon systems of terabytes of information on the electronics of our next-generation Joint Strike Fighter aircraft, the F-35. Electronic systems associated with the project were repeatedly broken into, sometimes through P2P vulnerabilities. While flight control systems have supposedly been isolated from the Internet, the intruders appear to have obtained data about the plane’s design, performance, and other electronic systems. We can’t be sure of what they took, however, because while we didn’t think it was worth the trouble to encrypt the data ourselves, the thieves did so on the way out.26 Most of these penetrations originate in China. As the U.S.–China Economic and Security Review Commission put it, “A large body of both circumstantial and forensic evidence strongly indicates Chinese state involvement in such activities, whether through the direct actions of state entities or through the actions of third-party groups sponsored by the state.”27 Some of these penetrations are technologically shrewd, but often they target the weakest link in any computer system—the user. Defense workers, including in the military, are just as impatient with security practices and just as susceptible to phishing attacks as everybody else. Like workers everywhere, they are also adept at subverting security rules and mechanisms designed to keep their systems healthy. As we’ve seen repeatedly, when convenience butts heads with security, convenience wins—even in war zones.
IN APRIL 2006, an enterprising reporter from the Los Angeles Times paid $40 to an Afghan kid for a secondhand thumb drive in the bazaar outside Bagram Airfield in Afghanistan, home to thousands of U.S. and allied troops. He decided to see what was on it. What he found were the names, photos, and contact information for Afghans willing to inform on the Taliban and al-Qaeda. He also found secret documents detailing “escape routes into Pakistan and the location of a suspected safe house there, and the payment of $50 bounties for each Taliban or Al Qaeda fighter apprehended based on the source’s intelligence.”28 When it falls into the wrong hands, this kind of information gets our friends killed. But there it was—on sale in the bazaar for $40. It was probably stolen; there’s a thriving black market for stolen electronics around Bagram. How the device found its way to the bazaar is a trivial detail, however. These devices aren’t going away, and some of them will always be lost or stolen. In war zones as well as downtown office buildings, information walks around. So of course this incident was hardly a one-off story. British researchers did an experiment in May 2009: They bought up a few hundred hard drives in the UK to see what was on them. Along with banking and medical records, confidential business plans, and so forth, they discovered the test launch procedures for a ground-to-air missile defense system designed by Lockheed Martin.29 All this information had been thrown away like trash. Whether the context is military or civilian, when you replicate secrets and make them portable, they will be duplicated, ported, and lost. (In just this way Apple lost its top-secret next-generation iPhone in 2010 when one of its engineers left it in a bar.30) So while most organizations need to work more diligently to reduce information leakage, that’s merely the easy, tactical lesson. There are also two strategic lessons.
First, organizations must learn to live in a world where less and less information can be kept secret, and where secret information will remain secret for less and less time.
Second, the technologies we require for efficiency not only threaten the security of information we don’t want to share, they also open us up to electronic infections that can kill us. Good stuff flows out, nasty stuff flows in.
To illustrate that point, let’s walk the thumb drive back into Bagram Airfield to understand how an adversary could penetrate our information systems by walking something in instead of sneaking something out. Remember the Afghan boy in the bazaar. He and other merchants and peddlers have lots of thumb drives. Some were stolen from American service personnel right on the base, but not all. Let’s imagine his uncle bought a box of them, brand new, that fell off a truck in Kabul. Maybe Uncle knew a good price when he saw it, didn’t ask any questions, and bought them for his nephew to sell. These little drives happen to be the same brand, even the same color, as those the American heathen have been buying lately on the base, at a higher price. Or perhaps this particular box of thumb drives wasn’t stolen. Perhaps Russian military intelligence (the GRU) or Russian external intelligence (the SVR, successor to the KGB’s First Chief Directorate) has been quietly seeding Kabul and Bagram with thumb drives specially poisoned with malware. And now, let’s suppose the boy doesn’t sell the drive to a reporter from the Los Angeles Times but instead to Corporal John Smith, U.S. Army, 902nd Military Intelligence Group, 910th Military Intelligence Brigade, now forward deployed from Ft. Meade, Maryland, to Bagram.
John’s working the swing shift this week, 4:00 P.M. to midnight, so after amusing himself in the bazaar and eating a meal, he clocks in to the watch floor, where he mans four different computers with four different screens: an Internet connection, a connection to the Pentagon’s unclassified system (the NIPRNET), a connection to the Pentagon’s secret system (the SIPRNET), and a connection to the Pentagon’s top-secret system (JWICS, pronounced “Jaywicks”). These systems are “air gapped.” That means there is supposedly no way to move electronically from one of them to another. (There are in fact some exceptions, but they don’t matter here.) John is connected to each of these systems through a PC under or next to his cramped little metal desk. That night he’s remotely supporting an operation in Paktika Province, and the captain on the phone urgently asks him for any information he has on a certain informant from a particular village. Is the guy for real, or is he a double agent? Should we trust him? And exactly where is that village anyway? Give me five minutes, John tells the captain. I’ll call you back. John runs a database check on the informant, but he has nothing on the village. The map on SIPRNET is old and the scale is too small. He checks the NIPRNET but finds nothing better. He tries variant spellings and still comes up empty. When he goes to the Internet, however, Google Earth has exactly what he needs. But how will he get that map to the captain, whose only connection is through SIPRNET?
By using his new thumb drive, that’s how. Every PC on his desk has a USB port, and those ports make a mockery of the air gap. Corporal Smith isn’t careless, and he isn’t stupid. He knows the prohibition on moving information between air-gapped systems, but he’s working hard and he can hear the anxiety in the captain’s voice through the phone line. He’s got an urgent problem to solve, and he knows how. So he downloads the map onto the thumb drive, then removes the thumb drive from the commercial PC and inserts it into the USB port on the SIPRNET box, uploading the map so he can shoot it out to the captain, who can see it for himself. He calls the captain back: Problem solved. A little later John pops the thumb drive back into his commercial PC to download a snapshot his wife has just sent of their eight-month-old daughter. Having once figured out a way through his thicket of systems, he will repeatedly use the thumb drive to move between them. He wonders how he ever managed without it.
John solved his problem, but here’s what else he did.
Russian intelligence had doctored the thumb drive he bought in the bazaar with poison code called agent.btz.31 They couldn’t have known that he would buy that particular memory stick, but they produced lots of them, and the likelihood that one or more American military personnel would buy some of them was very high. The likelihood that one of the buyers would violate U.S. military policy by moving information between air-gapped systems was also very high. The Russians would have known this for the same reason that every corporate information security officer knows it: People don’t obey rules that add inconvenience to their lives. Anybody whose job requires working on four different information systems will at some point have to move information between those systems in order to do his job responsibly, even if rules prohibit it. So Russian intelligence put together a low-cost operation with a high probability of generating lots of trouble for the American military, not to mention excellent information and high entertainment back in Moscow.
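The “automatic” part of this story required no exotic engineering. A Windows machine of that era, on seeing removable media, would read a plain-text file called autorun.inf at the top of the drive and launch whatever program it named; agent.btz reportedly spread in exactly this way, though its own file pointed at a disguised library rather than anything so obvious. Here is a generic illustration of the format, not the actual agent.btz file:

```
[autorun]
; When the drive is inserted, run the named program without asking.
; On a poisoned drive, "setup.exe" stands in for the malware dropper.
open=setup.exe
action=Open folder to view files
```

Microsoft has since disabled autorun for removable drives, but in 2008 a thumb drive could still carry its own starting gun.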
When John inserted the poisoned thumb drive into his commercial computer in order to download the Google map, it automatically uploaded malicious code that sent a beacon back to a Russian command post through the Internet. The beacon merely said “Here I am, ready to accept instructions.” When he removed it and then inserted it in the USB port on his secret SIPRNET computer, the drive uploaded malicious code that began sniffing its way around. It was programmed to look for specific items of interest to Russian intelligence. It was also programmed to package these items for exfiltration upon command. But how could it get the information back to the clever Russian fellows who devised this trick? The Russians are not connected to SIPRNET.
Here again the key to the operation was not technological brilliance but human failure. It was quite likely that the owner of the thumb drive would repeat the maneuver between systems, and in fact that’s what John did when he reinserted the drive into his commercial system to download the photo of his baby daughter and then later reinserted the drive into his SIPRNET computer. He had become the unwitting agent of Russian exfiltration, because the package of data the Russians wanted to exfiltrate was now downloaded onto the thumb drive, and he was then transferring it to the commercial computer, where it would speed out to the Russians over the Internet. Was John the only American serviceman to fall into this trap? Of course not. Pretty soon SIPRNET and NIPRNET were thoroughly corrupted. But neither Central Command nor any other U.S. joint command can operate without them.32
I invented Corporal Smith and virtually all the details in this account, but something very much like this must have happened.33 I can’t explain how the NSA devised and executed a skillful plan to clean this shrewd garbage out of Pentagon systems, but I can say that it hasn’t been simple. Agent.btz is programmed to think. When it’s threatened, it morphs. When one exfiltration path is thwarted, it finds others. It’s another version of a Whac-A-Mole game. The agent.btz story is one of many reasons why I refer to SIPRNET as IPRNET. The S in SIPRNET stands for secret, and you’d have to be delusional to believe this system isn’t penetrated. You’d also have to be crazy to believe that the Pentagon’s top-secret system, JWICS, isn’t penetrated. Even the NSA now operates on the assumption that its systems are penetrated. “There’s no such thing as ‘secure’ anymore,” said my former colleague Debora Plunkett, the director of the NSA’s Information Assurance Directorate (that’s the defensive side). “The most sophisticated adversaries are going to go unnoticed on our networks,” she said. Systems must be built on that assumption.34
In November 2008, when Strategic Command belatedly became aware of the seriousness of the threat posed by agent.btz, it issued a blanket prohibition against all removable media, including thumb drives, on Defense systems.35 The outcry was immediate and deafening, and STRATCOM reversed itself within about a week. Why? Because they realized what the Russians must also have known when they devised the operation: Our military personnel cannot do their jobs without moving information between systems that are supposedly insulated from one another. So STRATCOM withdrew the prohibition and instead issued a sort of “safe sex” order for removable media. Basically it said that before you let somebody stick their thumb drive into your USB port, you’d better know where that thumb drive came from and where else it has been. This order was better than nothing—but not by much. As one STRATCOM specialist put it, the Defense Department “cannot undo 20-plus years of tacitly utilizing the worst IT security practices in a reasonable amount of time, especially when many of these practices are embedded in enterprisewide processes.” Translation: We can’t do our job without these devices, and it’s going to take time to fix this. The headline in Wired got it right: HACKERS, TROOPS REJOICE: PENTAGON LIFTS THUMB-DRIVE BAN.36
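A ban that exists only on paper is blather, as I said earlier; a ban expressed technically looks more like this. On Windows, one well-known control simply stops the operating system from loading its USB mass-storage driver, so a thumb drive plugged into a port never mounts at all. Here is a sketch of that standard registry setting; an enterprise would push it, along with companion device-control policies, through central management rather than machine by machine.

```
Windows Registry Editor Version 5.00

; Prevent Windows from loading the USB mass-storage driver.
; Start=4 disables the driver; the default of 3 re-enables it.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\USBSTOR]
"Start"=dword:00000004
```

Of course, a control like this recreates exactly the outcry STRATCOM faced, which is why the Pentagon eventually moved toward authorized, inspectable media instead.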
It’s easy to make fun of the Defense Department or any other large organization trying to manage information across worldwide networks. But nobody does it perfectly, and many organizations do it very badly. Still, leaving aside the NSA and the CIA, no government agency has more at stake in protecting its security than the Pentagon and, in fact, no other agency is better at it. As other agencies and the White House dither, the Pentagon actually does things. They have begun to issue authorized removable media, prohibit the use of everything else, and distribute portable kits that can excise viruses from disks, thumb drives, and other removable media.37 How well this program works remains to be seen.
The Russians aren’t the only ones who know how to run operations like the thumb-drive caper. I explained in a previous chapter that the Chinese have been caught spreading infected thumb drives around British trade fairs. And the same trick has been tried in Washington, D.C. The Executive Office for U.S. Attorneys in 2008 advised that “two USB thumb drives” were discovered in one of their buildings. One was found in a men’s restroom, another on a fax machine. “The drives contain malicious code that automatically and silently executes when the drive is plugged into a system. The code captures certain system information and transmits it out of [the Justice Department].”38 How many of these drives were actually picked up and used? We have no idea.
If poisoned thumb drives haven’t yet been found in your agency or company, chances are they will be. USB devices are used to spread about one in four computer worms.39 The convergence of the cyberthreats faced by the military and civilian organs of government, and by public and private organizations, is nearly absolute. Exotic technologies that were once the exclusive province of state-sponsored intelligence agencies with massive resources are now available to everyone, everywhere, and they are cheap. At the same time, social networking sites that a few years ago were the province of hipsters and college kids have become official U.S. government channels of communication.
Our Enemies Are Not Stupid
In 1996 an official evaluation of the U.S. military’s unmanned aerial vehicles pointed out that, while certain UAVs were designed to transmit encrypted video images, the Predator UAV relied on unencrypted data links. That meant that anybody with modest technical skills could intercept the images from the satellite downlink. This could have significant consequences, ranging from signal interception to active jamming.40 Nevertheless, the warning went unheeded. The weakness in the Predator’s communications was now documented, but the military could remain deaf to it because, well, had anybody actually proven it was a problem? Had anybody been embarrassed by it? Besides, were our adversaries in places like Baghdad or Belgrade smart enough to figure this stuff out? Nobody seemed to think so.
In 2002, however, a British engineer whose hobby was scanning satellite downlinks stumbled across NATO surveillance video from Kosovo being transmitted on an open commercial satellite channel. He warned the military. If he could watch NATO soldiers looking for infiltrators across the Kosovo border, so could the infiltrators they were tracking. You can watch it too—on CBS, which ran the story.41 The military’s first response was that the footage wasn’t really sensitive, which was nonsense. Then they explained that our NATO partners didn’t have the equipment to decrypt the signals if we encrypted them. That may have been true, but it simply meant that NATO was operating on antiquated assumptions about how the world works. Did anybody fix it? No.
A 2005 CIA memo reportedly said that officials in Saddam’s government had been able to watch real-time video of U.S. military installations in Turkey, Kuwait, and Qatar before Iraq was invaded. Apparently they had “located and downloaded the unencrypted satellite feed from U.S. military UAVs.”42 Still, nothing changed.
Then, in the summer of 2009, some of our soldiers arrested a Shiite militant, confiscated his laptop, and found intercepted Predator video feeds. Surprise! Shortly afterward they found similar images on other confiscated laptops. Somebody was systematically intercepting and disseminating them from our drones. Militants were being taught to do this—probably by Iranian intelligence. They were watching us watching them. Unfortunately, we couldn’t hear them laughing. When the Wall Street Journal exposed this tragicomedy in late 2009, it noted that the weakness in the Predator link had been understood since the Bosnian campaign (actually, it had been understood even earlier, in 1996), but “the Pentagon assumed local adversaries wouldn’t know how to exploit it.” In fact, local adversaries in Iraq were exploiting it with Russian software you could buy for $25.95 on the Internet.43 We thought they were stupid; they weren’t.
HOW DO WE explain this kind of behavior? The worst wrong answer is to assume it is unique to the U.S. Department of Defense. In fact, it’s typical of people in large organizations who must make difficult budgetary decisions (retrofitting Predators with encryption devices is expensive) and who, like most of us, find it easy to ignore inconvenient facts. The United States may still have more advanced hardware than the rest of the world, but we are no longer years ahead of our major international rivals. Failing to understand this is smug, careless, lazy, and dangerous. Unfortunately, however, there are smug, careless, lazy, and dangerous people in every organization. They can be trained to some degree, but organizational entropy is a fact of life. As the management sage Peter Drucker put it, “The only things that evolve by themselves in an organization are disorder, friction and malperformance.”44 Things fall apart. But in a world where almost everything is connected to almost everything else, the effects of such entropy are far more dire than when Drucker made his observation decades ago, because a gap anywhere in a system can infect information everywhere in the system.
The people who run and use our most vital information systems must take this notion of infection to heart. That thumb drive in Bagram (and untold others like it) passed on malware that weakened the Pentagon’s entire system, just as touching your mouth with a germ-laden finger can infect your entire body. It’s also useful to think of an information system as akin to a hospital, different parts of which require different conditions. The public walks in and out: doctors, nurses, administrators, patients, visitors, and passersby. No one is sterile, and some carry deadly diseases, yet surgeons must nevertheless perform operations. In the operating room we rigorously control who and what can come in and out, what they can wear, and how they must be scrubbed. We maintain a far higher level of hygiene there than we do in the lobby. We apply an intermediate standard in the kitchen—higher than in the lobby, but not as high as in the operating room. The hospital’s data center applies still another standard: There we care less about bacteria but more about temperature. Risk assessment and management occur constantly in all of these places, yet we know that, regardless of our precautions, infection can be greatly reduced but never eliminated.
Our leaders must similarly accept that their information systems are compromised and must plan accordingly. This is harder than you might think. People don’t readily accept inconvenient news if they can avoid it—especially if the danger is invisible. In my experience, convincing leaders that their systems are probably corrupted is like trying to explain the germ theory of disease to an illiterate. If they can’t see it, they don’t believe it. However, the evidence is now clear for anyone who chooses to see it. Military leaders, agency heads, and CEOs must require systems that are both more secure and more robust. They must also learn to fight and manage with corrupted systems, must know how to restore those systems in the regular course of business—and must practice doing it. If they don’t, they not only risk losing the data that makes them powerful, valuable, and intelligent, they also risk having their operations grind to a halt.
As we’ll see in the next chapter, the emerging danger is no longer simply the loss of critical information. The newest threat targets our essential infrastructure—from air traffic control to financial markets to the electricity grid. Losing control over any of these systems would create widespread disruption and loss, and bring our society to a standstill.