In October 1997, a few months before Solar Sunrise, when the Marsh Commission released its report on the nation’s critical infrastructure, few officials were more stunned by its findings than a White House aide named Richard Alan Clarke.
As the counterterrorism adviser to President Clinton, Clarke had been in on the high-level discussions after the Oklahoma City bombing and the subsequent drafting of PDD-39, Clinton’s directive on counterterrorism, which eventually led to the formation of the Marsh Commission. After that, Clarke returned to his usual routines, which mainly involved tracking down a Saudi jihadist named Osama bin Laden.
Then the Marsh Report came out, and most of it dealt with cyber security. It was a subject Clarke had barely heard of, and in any case it wasn’t his to handle. Rand Beers, a good friend and Clinton’s intelligence adviser, had been the point man on the commission and, presumably, would deal with the report as well. But soon after its release, Beers announced that he was moving over to the State Department; he and Sandy Berger, Clinton’s national security adviser, had discussed who should replace him on the cyber beat, and they settled on Clarke.
Clarke resisted; he was busy enough on the bin Laden trail. Then again, he had been the White House point man on the Eligible Receiver exercise; Ken Minihan, the NSA director who’d conceived it, had briefed him thoroughly on its results and implications; cyber security might turn out to be interesting. But Clarke knew little about computers or the Internet. So he gathered a few of his staff and took them on a road trip.
Shortly after the holidays, they flew to the West Coast and visited the top executives of the major computer and software firms. What struck Clarke most was that the heads of Microsoft knew all about operating systems, those at Cisco knew all about routers, those at Intel knew all about chips—but none of them seemed to know much about the gadgets made by the others or the vulnerabilities at the seams in between.
Back in Washington, he asked Minihan for a tour of the NSA. Clarke had been a player in national security policy for more than a decade, since the Reagan administration, but for most of that time, he’d been involved in Soviet-American arms-control talks and Middle East crises: the high-profile issues. He’d never had reason to visit, or think much about, Fort Meade. Minihan told his aides to give Clarke the full dog-and-pony show.
Part of the tour was a demonstration of how easily the SIGINT teams could penetrate any foreign network they set their sights on. None of it reassured Clarke; he came away more shaken than before, for the same reason as many officials who’d witnessed similar displays through the years. If we can do this to other countries, he realized, they’ll soon be able to do the same thing to us—and that meant we were screwed, because nothing on the Internet could be secured, and, as the Marsh Report laid out in great detail, everything in America was going up on the Net.
Clarke wanted to know just how vulnerable America’s networks were right now, and he figured the best way to find out was to talk with some hackers. He didn’t want to deal with criminals, though, so he called a friend at the FBI and asked if he knew any good-guy hackers. (At this point, Clarke didn’t know if such creatures existed.) At first, the agent was reluctant to share sources, but finally he put Clarke in touch with “our Boston group,” as he put it—a team of eccentric computer geniuses who occasionally helped out with law-enforcement investigations and who called themselves “The L0pht” (pronounced “loft”).
The L0pht’s front man—who went by the handle “Mudge”—would meet Clarke at John Harvard’s Brewery, near Harvard Square, in Cambridge, on a certain day at seven p.m. Clarke flew to Boston on the designated day, took a cab to the bar, and settled in at seven on the dot. He waited an hour for someone to approach him; no one did. He was getting up to leave when the man quietly sitting next to him touched his elbow and said, “Hi, I’m Mudge.”
Clarke looked over. The man, who seemed about thirty, wore jeans, a T-shirt, one earring, a goatee, and long golden hair (“like Jesus,” he would later recall).
“How long have you been sitting there?” Clarke asked.
“About an hour,” Mudge replied. He’d been there the whole time.
They chatted casually about the L0pht for a half hour or so, at which point Mudge asked Clarke if he’d like to meet the rest of the group. Sure, Clarke replied. They’re right over there, Mudge said, pointing to a large table in the corner where six guys were sitting, all in their twenties or early thirties, some as unruly as Mudge, others clean-cut.
Mudge introduced them by their tag names: Brian Oblivion, Kingpin, John Tan, Space Rogue, Weld Pond, and Stefan von Neumann.
After some more small talk, Mudge asked Clarke if he’d like to see the L0pht. Of course, he replied. So they took a ten-minute drive to what looked like a deserted warehouse in Watertown, near the Charles River. They went inside, walked upstairs to the second floor, unlocked another door, and turned on the lights, which revealed a high-tech laboratory, crammed with dozens of mainframe computers, desktops, laptops, modems, and a few oscilloscopes, much of it wired—as Mudge pointed out, when they went back outside—to an array of antennas and dishes on the roof.
Clarke asked how they could afford all this equipment. Mudge said it didn’t cost much. They knew when the big computer companies threw out hardware (a few of them worked for these companies under their real names); they’d go to the dumpster that day, retrieve the gear, and refurbish it.
The collective had started, Clarke learned, in the early 1990s, mainly as a place where its members could store their computers and play online games. In 1994, they made a business of it, testing the big tech firms’ new software programs and publishing a bulletin that detailed the security gaps. They also designed, and sold for cheap, their own software, including L0phtCrack, a popular program that let buyers crack most passwords stored on Microsoft Windows. Some executives complained, but others were thankful: someone was going to find those flaws; at least the L0pht was doing it in the open, so the companies could fix them. The NSA, CIA, FBI, and the Air Force Information Warfare Center were also intrigued by this guerrilla operation; some of their agents and officers started talking with Mudge, who’d emerged as the group’s spokesman, and even invited him to give talks at high-level security sessions.
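The kind of cracking L0phtCrack performed is easy to sketch in outline. The snippet below is only an illustration, not L0phtCrack’s code or the real Windows password format (the LAN Manager hashes it targeted were far weaker than a modern hash): it simply hashes each guess from a small wordlist and compares the result against a stored hash.

```python
# Minimal sketch of dictionary-style password cracking, for illustration only.
# This is NOT L0phtCrack's method or the Windows hash format; it assumes a
# simple, unsalted SHA-256 hash just to show the general idea.
import hashlib


def crack(stored_hash: str, candidates: list[str]) -> str | None:
    """Hash each candidate password and return the one that matches."""
    for guess in candidates:
        if hashlib.sha256(guess.encode()).hexdigest() == stored_hash:
            return guess
    return None


# A weak password falls to even a tiny wordlist.
stored = hashlib.sha256(b"letmein").hexdigest()
print(crack(stored, ["password", "123456", "letmein", "qwerty"]))  # -> letmein
```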
Not that the intelligence agencies needed Mudge to tell them about holes in commercial software. The cryptologists in the NSA Information Assurance Directorate spent much of their time probing for these holes; they’d found fifteen hundred points of vulnerability in Microsoft’s first Windows system. And, by an agreement much welcomed by the software industry at the time, they routinely told the firms about their findings—most of the findings, anyway: they always left a few holes for the agency’s SIGINT teams to exploit, since the foreign governments that they spied on had bought this software, too. (Usually, the Silicon Valley firms were complicit in leaving back doors open.) Still, the NSA and the other agencies were interested in how the likes of Mudge were tackling the problem; it gave them insights into ways that other, more malicious, perhaps foreign hackers might be operating, ways that their own security specialists might not have considered.
For his part, Mudge was always happy to give them advice and never charged a fee. He figured that, any day now, the feds could come knocking at the warehouse door—some of the L0pht gang’s projects were of dubious legality—and it would be useful to summon, as character witnesses, the directors of the nation’s intelligence and law enforcement agencies.
For the next few hours on that winter night in Watertown, the L0pht gang held Clarke’s rapt attention, telling him all the things they could do, if they wanted. They could break the passwords stored on any operating system, not just Microsoft Windows. They could decrypt any satellite communications. They had devised software (not yet for sale or distribution) that could hack into someone’s computer and control it remotely, spying on the user’s every keystroke, changing his files, tossing him off the Internet or whisking him away to a site of their choosing. They had special machines that let them reverse-engineer any microchip by de-capping the chip and extracting the silicon die. In hushed tones, they told him about a recent discovery, involving a vulnerability in the Border Gateway Protocol, the system that routes traffic among the Internet’s major networks, which would let them—or some other skilled hackers—shut down the entire Internet in a half hour.
Clarke didn’t know whether to believe everything they said, but he was awed and shaken. Everyone who’d briefed him, during his crash course on the workings and pitfalls of the Internet, had implied or stated outright that only nation-states possessed the resources to do just half the things that Mudge and his chums were saying—and, in some cases, demonstrating—that they could do from this hole in the wall with little money and, as far as he could tell, no outside support. In short, the official threat model seemed to have it all wrong.
And Clarke, the president’s special adviser on counterterrorism, realized that this cyber thing was more than an engrossing diversion; it fit into his bailiwick precisely. If Mudge and his gang used their talents to disrupt American society and security, exploiting the critical vulnerabilities that the Marsh Report had outlined, they would be tagged as terrorists—cyber terrorists. Here, then, was another threat for Clarke to worry about—and to add to his thickening portfolio.
It was two a.m., after a few more drinks, when they decided to call it a night. Clarke asked them if they’d like to come down to Washington for a private tour of the White House, and he offered to pay their way.
Mudge and the others were startled. “Hackers”—which is what they were—was still a nasty term in most official corridors. It was one thing for some spook in a three-letter agency to invite them to brief a roomful of other spooks on a hush-hush basis—quite another to be invited to the White House by a special adviser to the president of the United States.
A month later, they came down, not only to see the West Wing after hours but also to testify before Congress. The Senate Governmental Affairs Committee happened to be holding hearings on cyber security. Through his Hill contacts, Clarke got the L0pht members—all seven of them, together, using their pseudonyms—placed on the witness list.
Clarke had a few more conversations with Mudge during this period. His real name, it turned out, was Peiter Zatko. He’d been a hacker since his early teen years. He hated the movie WarGames because it encouraged too many other people his age, but nowhere near his IQ, to join the field. He’d graduated not from someplace like MIT, as Clarke suspected, but from the Berklee College of Music, as a guitar major, at the top of his class. By day, Zatko was working as a computer security specialist at BBN, a Cambridge-based firm, though his looming public profile accelerated his plans to quit and turn the L0pht into a full-time commercial enterprise.
He and the other L0pht denizens made their Capitol Hill debut on May 19, 1998. Only three senators attended the hearing—the chairman, Fred Thompson, along with John Glenn and Joe Lieberman—but they treated the bizarre witnesses with respect, hailing them as patriots; Lieberman likened them to Paul Revere, alerting the citizenry to danger in the digital age.
Three days after Mudge’s testimony, Clinton signed a Presidential Decision Directive, PDD-63, titled “Critical Infrastructure Protection,” reprising the Marsh Commission’s findings—the nation’s growing dependence on computer networks, the vulnerability of those networks to attack—and outlining ways to mitigate the problem.
A special panel of the NSC, headed by Rand Beers, had cut-and-pasted early drafts of the directive. Then, at one of the meetings, Beers informed the group that he was moving to the State Department and that Dick Clarke—who, for the first time, was seated next to him—would take his place on the project.
Several officials on the panel raised their eyebrows. Clarke was a brash, haughty figure, a spitball player of bureaucratic politics, admired by some as a can-do operator, despised by others as a power-grabbing manipulator. John Hamre, the deputy secretary of defense, particularly distrusted Clarke. Several times Hamre heard complaints from four-star generals, combatant commanders in the field, that Clarke had directly phoned them with orders, rather than going through the secretary of defense, as even the president was supposed to do. Once, Clarke told a general that the president wanted to move a company of soldiers to the Congo during a crisis; Hamre looked into it, and found out the president had asked for no such thing. (Clinton eventually did sign the order, but to Hamre and a number of generals, that didn’t excuse Clarke’s presumptuousness.)
Hamre’s resentment had deeper roots. A few times, when he was the Pentagon’s comptroller, he found Clarke raiding the defense budget for “emergency actions,” purportedly on behalf of the president. Clarke invoked legal authority for this maneuver—an obscure clause that he’d discovered in the Foreign Assistance Act, Section 506, which allowed the president to take up to $200 million from a department’s coffers for urgent, unfunded requirements. Hamre had enough headaches, dealing with post-Cold War budget cuts and pressure from the chiefs, without Clarke swooping down and treating the Pentagon like his piggy bank.
As a result, although they held similar views on several issues, not just cyber security, Hamre hid things from Clarke, sometimes briefing other department deputies in private, rather than in a memo or an NSC meeting, in order to keep Clarke out of the loop.
Around the time of Solar Sunrise and Moonlight Maze, a special prosecutor happened to be investigating charges that President Clinton and the first lady had, years earlier, illegally profited from a land deal in Arkansas. Orders went out from the White House counsel, barring all contact between the White House and the Justice Department, unless it went through him. Clarke ignored the order (he once told an NSA lawyer, “Bureaucrats and lawyers just get in the way”) and kept calling the FBI task force for information on its investigation of the hackings. Louis Freeh, the bureau’s director, who didn’t like Clarke either, told his underlings to ignore the calls.
But Clarke had protectors who valued his advice and gumption. When one agency head urged Sandy Berger, the national security adviser, to fire Clarke, Berger replied, “He’s an asshole, but he’s my asshole.” The president liked that Clarke was watching out for him, too.
Midlevel staffers were simply amazed by the network that Clarke had woven throughout the bureaucracy and by his assertiveness in running it. Once, shortly after coming over from the NSA to be Vice President Gore’s intelligence adviser, Rich Wilhelm sat in on a meeting of the NSC counterterrorism subgroup, which Clarke chaired. High-ranking officers and officials, from all the relevant agencies and departments, were at the table, and there was Clarke, this unelected, unconfirmed civilian, barking out orders to an Air Force general to obtain an unmarked airplane and telling the CIA how many agents should board it, all with unquestioned authority.
An aide to Clarke named John McCarthy, a Coast Guard commander with a background in emergency management, attended a Saturday budget meeting, early on in his tenure, where Clarke, upon hearing that an important program fell $3 million short of its needs, told McCarthy to get the money from a certain person at the General Services Administration, adding, “Do it on Monday because I need it on Tuesday.” The GSA official told McCarthy he’d give him $800,000, at which point the bargaining commenced. Clarke wound up getting nearly the full sum.
When Clarke replaced Rand Beers, the NSC deputies had been drafting the presidential directive on the protection of critical infrastructure, going back and forth on disagreements and compromise language. Clarke took their work, went back to his office, and wrote the draft himself. It was a detailed document, creating several forums for private-public cooperation on cyber security, most notably Information Sharing and Analysis Centers, in which the government would provide its expertise—including, in some cases, classified knowledge—to companies in the various sectors of critical infrastructure (banking, transportation, energy, and so forth), so they could fix their vulnerabilities.
According to the directive, as Clarke wrote it, this entire effort would be chaired by a new, presidentially appointed official—the “National Coordinator for Security, Infrastructure Protection, and Counter-terrorism.” Clarke made sure, in advance, that he would be this national coordinator.
His detractors, and some of his admirers, saw this as a blatant power grab: he already had the counterterrorism portfolio; now he’d be in charge of critical infrastructure, too. Some of his critics, especially in the FBI, saw it as a substantively bad idea, to boot: cyber threats came mainly from nation-states and criminals; tying the issue to counterterrorism would misconstrue the problem and distract attention from serious solutions. (The idea also threatened to sideline the FBI, which, in the Solar Sunrise and Moonlight Maze investigations, had taken a front-and-center role.)
Clarke waved away the accusations. First, as was often the case, he regarded himself as the best person for the job: he knew more about the issues than anyone else in the White House; ever since the problem had arisen, he and Beers had been the only ones to give it more than scant attention. Second, his meetings with Mudge had convinced him—he hadn’t considered the notion before—that a certain kind of terrorist could pull off a devastating cyber attack; it made sense, he coolly explained to anyone who asked, to expand his portfolio in this direction.
As usual, Clarke got his way.
But his directive hit an obstacle with private industry. In Section 5 of PDD-63, laying down “guidelines,” Clarke wrote: “The Federal Government shall serve as a model to the private sector on how infrastructure assurance is best achieved and shall, to the extent feasible, distribute the results of its endeavors.”
This is what the corporate executives most feared: that the government would be running the show; more to the point, that they would be saddled with the nastiest word in their dictionary—regulations. They’d sensed the same threat when they met with the Marsh Commission: here was an Air Force general—and, though retired, he referred to himself as General Marsh—laying down the rules on what they must do, as if they were enlisted men. And now here was Dick Clarke, writing under the president’s signature, trying to lay down the law.
For several months now, these same companies had been working in concert with Washington, under its guidelines, to solve the Y2K crisis. This crisis—also known as the Millennium Bug—emerged when someone realized that some of the government’s most vital computer programs had encoded years (dates of birth, dates of retirement, payroll periods, and the like) by their last two digits: 1995 as “95,” 1996 as “96,” and so forth. When the calendar flipped to 2000, the computers would read it as “00,” and the fear was that they’d interpret it as the year 1900, at which point, all of a sudden, such programs as Social Security and Medicare would screech to a halt: the people who’d been receiving checks would be deemed ineligible because, as far as the computers could tell, they hadn’t yet been born. Paychecks for government employees, including the armed forces, could stall; some critical infrastructure, with time-coded programs, might also break down.
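The arithmetic behind the fear is easy to illustrate. The sketch below is a hypothetical example, not code from any actual government system: it stores years as two digits, the way many legacy programs did, and shows how a payroll-style calculation goes haywire at the rollover.

```python
# Illustrative sketch of the Y2K arithmetic problem (hypothetical example,
# not code from any real government system). Years are kept as two digits,
# as many legacy programs did, so subtraction breaks at the 2000 rollover.
from datetime import date


def two_digit_year(d: date) -> int:
    """Keep only the last two digits of the year."""
    return d.year % 100


def years_of_service(hired: date, today: date) -> int:
    """Naive subtraction of two-digit years, mimicking legacy payroll logic."""
    return two_digit_year(today) - two_digit_year(hired)


hired = date(1970, 6, 1)
print(years_of_service(hired, date(1999, 12, 31)))  # 29  -- looks fine
print(years_of_service(hired, date(2000, 1, 1)))    # -70 -- "not yet born"
```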
To deal with the problem, the White House set up a national information coordination center to develop new guidelines for software and to make sure everyone was on the same page. The major companies, such as AT&T and Microsoft, were brought into the same room with the FBI, the Defense Department, the General Services Administration, the NSA—all the relevant agencies. But the corporate executives made clear that this was a one-time deal; once the Y2K problem was solved, the center would be dismantled.
Clarke wanted to make the arrangement permanent, to turn the Y2K center into the agency that handled cyber threats. Sometime earlier, he’d made no secret of his desire to impose mandatory requirements on cyber security for critical infrastructure, knowing that the private companies wouldn’t voluntarily spend the money to take the necessary actions. But Clinton’s economic advisers strenuously opposed the idea, arguing that regulations would distort the free market and impede innovation. Clinton agreed; Clarke backed down. Now he was carving a back door, seeking to establish government control through a revamped version of the Y2K center. That was his agenda in taking over the drafting of the presidential directive—and the companies weren’t buying it.
Their resistance put Clarke in a bind. Short of imposing strict requirements, which the president had already struck down, he needed private industry onboard to make any cyber security policy work: the vast majority of government data, including a lot of classified data, flowed through privately controlled networks; and, as the Marsh Report had shown, the vulnerability of private entities—the critical infrastructures—had grave implications for national security.
Clarke also knew that, even if the government did take control of Internet traffic, few agencies possessed the resources or the technical talent to do much with it—the exceptions being the Defense Department, which had the authority only to defend its own networks, and the NSA, which had twice been excluded from any role in monitoring civilian computers or telecommunications: first, back in 1984, in the aftermath of Ronald Reagan’s NSDD-145; and, again, early on in the Clinton presidency, during the Clipper Chip controversy.
Clarke spent much of the next year and a half, in between various crises over terrorism, writing a 159-page document called the National Plan for Information Systems Protection: Defending America’s Cyberspace, which President Clinton signed on January 7, 2000.
In an early draft, Clarke had proposed hooking up all civilian government agencies—and, perhaps, eventually critical infrastructure companies—to a Federal Intrusion Detection Network. FIDNET, as he called it, would be a parallel Internet, with sensors wired to some government agency’s monitor (which agency was left unclear). If the sensors detected an intrusion, the monitor would automatically be alerted. FIDNET would unavoidably have a few access points to the regular Internet, but sensors would sit atop those points and alert officials to intrusions there, as well. Clarke modeled the idea on the intrusion-detection systems installed in Defense Department computers in the wake of Solar Sunrise. But that was a case of the military monitoring itself. To have the government—and, given what agencies did this sort of thing, it would probably be the military—monitoring civilian officials, much less private industry, was widely seen, and loathed, as something different.
When someone leaked Clarke’s draft to The New York Times, in July 1999, howls of protest filled the air. Prominent members of Congress and civil-liberties groups denounced the plan as “Orwellian.” Clarke tried to calm these fears, telling reporters that FIDNET wouldn’t infringe on individual networks or privacy rights in the slightest. Fiercer objections still came from the executives and board members of the infrastructure companies, who lambasted the plan as the incarnation of their worst nightmares about government regulation.
The idea was scuttled; the National Plan was rewritten.
When the revision was finished and approved six months later, President Clinton scrawled his signature under a dramatic cover note, a standard practice for such documents. But, in a departure from the norm, Clarke—under his own name—penned a separate introduction, headlined, “Message from the National Coordinator.”
In it, he tried to erase the image of his presumptuousness. “While the President and Congress can order Federal networks to be secured,” he wrote, “they cannot and should not dictate solutions for private sector systems,” nor will they “infringe on civil liberties, privacy rights, or proprietary information.” He added, just to make things clearer, that the government “will eschew regulation.”
Finally, in a gesture so conciliatory that it startled friends and foes alike, Clarke wrote, “This is Version 1.0 of the Plan. We earnestly seek and solicit views about its improvement. As private sector entities make more decisions and plans to reduce their vulnerabilities and improve their protections, future versions of the Plan will reflect that progress.”
Then, one month later, the country’s largest online companies—including eBay, Yahoo, and Amazon—were hit with a massive denial-of-service attack. Someone had hijacked thousands of poorly protected computers across the Internet and used them to flood the companies’ servers with endless requests for data, overloading them to the point where they shut down for several hours, in some cases days.
Here was Clarke’s chance to jump-start national policy—if not to revive FIDNET (that seemed out of the question for now), then at least to impose some rules on wayward bureaucracies and corporations. He strode into the Oval Office, where Clinton had already heard the news, and said, “This is the future of e-commerce, Mr. President.”
Clinton replied, a bit distantly, “Yeah, Gore’s always going on about ‘e-commerce.’ ”
Still, Clarke persuaded the president to hold a summit in the White House Cabinet Room, inviting twenty-one senior executives from the major computer and telecom companies—AT&T, Microsoft, Sun Microsystems, Hewlett-Packard, Intel, Cisco, and others—along with a handful of software luminaries from consulting firms and academia. Among this group was the now-famous Peiter Zatko, who identified himself on the official guest list as “Mudge.”
Zatko came into the meeting starstruck, nearly as much by the likes of Vint Cerf, one of the Internet’s inventors, as by the president of the United States. But after a few minutes of sitting through the discussion, he grew impatient. Clinton was impressive, asking insightful questions, drawing pertinent analogies, grasping the problem at its core. But the corporate execs were faking it, intoning that the attack had been “very sophisticated” without acknowledging that their own passivity had allowed it to happen.
A few weeks earlier, Mudge had gone legit. The L0pht was purchased by an Internet company called @stake, which turned the Watertown warehouse into a research lab for commercial software to block viruses and hackers. Still, he had no personal stake in the piece of theater unfolding before him, so he spoke up.
“Mr. President,” he said, “this attack was not sophisticated. It was trivial.” All the companies should have known that this could happen, but they hadn’t invested in preventive measures—which were readily available—because they had no incentive to do so. He didn’t elaborate on the point, but everyone knew what he meant by “incentives”: if an attack took place, no one would get punished, no stock prices would tank, and it would cost no more to repair the damage than it would have cost to obstruct an attack in the first place.
The room went silent. Finally, Vint Cerf, the Internet pioneer, said, “Mudge is right.” Zatko felt flattered and, under the circumstances, relieved.
As the meeting broke up, with everyone exchanging business cards and chatting, Clarke signaled Zatko to stick around. A few minutes later, the two went into the Oval Office and talked a bit more with the president. Clinton admired Zatko’s cowboy boots, hoisted his own snakeskins onto his desk, and disclosed that he owned boots made of every mammal on the planet. (“Don’t tell the liberals,” he whispered.) Zatko followed the president’s lead, engaging in more small talk. After a few minutes, a handshake, and a photo souvenir, Zatko bid farewell and walked out of the office with Clarke.
Zatko figured the president had enough on his mind, what with the persistent fallout from the Monica Lewinsky scandal (which had nearly led to his ouster), the fast-track Middle East peace talks (which would go nowhere), and the upcoming election (which Vice President Gore, the carrier of Clinton’s legacy, would lose to George W. Bush).
What Zatko didn’t know was that, while Clinton could muster genuine interest in the topic—or any other topic—at a meeting of high-powered executives, he didn’t care much about cyber and, really, never had. Clarke was the source, and usually the only White House source, of any energy and pressure on the issue.
Clarke knew that Zatko’s Cabinet Room diatribe was on the mark. The industry execs would never fix things voluntarily. In this sense, the meeting was almost comical, with several of them imploring the president to take action, then, a moment later, assuring him that they could handle the problem without government fiat.
The toned-down version of his National Plan for Information Systems Protection called for various cooperative ventures between the government and private industry to get under way by the end of 2000 and to be fully in place by May 2003. But the timetable seemed implausible. The banks were game; a number of them had readily agreed to form an industry-wide ISAC—an Information Sharing and Analysis Center—to deal with the challenge. This wasn’t so surprising: banks had been the targets of dozens of hackings, costing them millions of dollars and, potentially, the trust of high-rolling customers; some of the larger financial institutions had already hired computer specialists. But most of the other critical infrastructures—transportation, energy, water supply, emergency services—hadn’t been hacked: executives of those companies saw the threat as hypothetical; and, as Zatko had observed, they saw no incentive in spending money on security.
Even the software industry had few serious takers: they knew that security was a problem, but they also knew that installing truly secure systems would slow down a server’s operations, at a time when customers were paying good money for more speed. Some executives asked security advocates for a cost-benefit analysis: what were the odds of a truly catastrophic event; what would such an event cost them; how much would a security system cost; and what were the chances that the system would actually prevent intrusions? No one could answer these questions; there were no data to support an honest answer.
The Pentagon’s computer network task force was facing similar obstacles. Once, when Art Money, the assistant secretary of defense for command, control, communications, and intelligence, pushed for a 10 percent budget hike for network security, a general asked him whether the program would yield a 10 percent increase in security. Money went around to his technical friends, in the NSA and elsewhere, posing the question. No one could make any such assurance. The fact was, most generals and admirals wanted more tanks, planes, and ships; a billion dollars more for staving off computer attacks—a threat that most regarded as far-fetched, even after Eligible Receiver, Solar Sunrise, and Moonlight Maze (because, after all, they’d done no discernible damage to national security)—meant a billion dollars less for weapons.
But things were changing on the military side: in part because more and more colonels, even a few generals, were starting to take the problem seriously; in part because the flip side of cyber security—cyber warfare—was taking off in spades.