> CHAPTER 8

> MUCH @STAKE

THE BEGINNING OF the year 2000 brought the absolute peak of the dot-com bubble. Though they were by no means a driver of the stock-market insanity or the venture capital greed behind it, digital security companies benefited from being hired by computing and e-commerce firms like AOL and Yahoo, among others. Some of the companies employed a handful of talented penetration testers, who would break into clients with permission and then advise them on how to fix the holes they came through. Giant consulting firms with employees of mixed abilities had a bigger presence. Then there were the major antivirus companies like Symantec and McAfee. Their products were better than nothing, and their business models raked in cash. The companies charged annual fees to consumers and businesses and blocked what viruses they could. When a client got infected anyway, the companies added the new signature to the detection database so the same virus wouldn’t hit the next guy—unless the virus changed slightly between infections, that is. Unfortunately, making minor changes to a virus was trivial for hackers targeting a specific victim, and soon enough such changes became an automated part of broader attacks.
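The weakness of signature matching described above can be sketched in a few lines of Python. This is a toy model, not any vendor’s actual engine: the “signature” here is simply a hash of the sample’s bytes, and the byte strings are invented for illustration.

```python
import hashlib

def make_signature(sample: bytes) -> str:
    """A toy 'signature': just a hash of the sample's bytes."""
    return hashlib.sha256(sample).hexdigest()

def is_known_virus(file_bytes: bytes, signature_db: set) -> bool:
    """Flag a file only if its signature is already in the database."""
    return make_signature(file_bytes) in signature_db

# Made-up sample: once a victim reports it, its signature joins the database.
virus = b"\x90\x90\xeb\x05evil-payload"
signature_db = {make_signature(virus)}

assert is_known_virus(virus, signature_db)      # the original is caught

# Prepend a single do-nothing byte and the signature no longer matches,
# so the trivially modified variant sails through undetected.
variant = b"\x90" + virus
assert not is_known_virus(variant, signature_db)
```

Real engines matched shorter byte patterns rather than whole-file hashes, but the economics were the same: any variant the pattern missed required a fresh signature, and attackers could generate variants faster than vendors could ship updates.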

Companies earned good money with defense, but overall they failed to make customers safe. On the contrary, as businesses bolted together different software, hardware, and networks, security actually got worse. Because every program of any size has critical flaws that can be leveraged by an attacker, greater complexity aided hackers and handicapped defenders. But the software suppliers lacked the incentives that drove ordinary manufacturers to make safer products. The software companies had convinced the courts that product-liability laws did not apply to them. Technically, they licensed their products instead of selling them, and they forced users to waive the right to sue at the moment of installation. The biggest customers could try demanding assistance via service agreements or code audits. But even if customers won the right to examine the code for flaws, they had no right to warn other customers about what they found. More fundamentally, most major products had few good alternatives, and they all had flaws. Not even the biggest companies shopped for software mainly on the basis of security. At best, they encouraged employees to use and contribute to open-source projects like Linux. That helped in the server market but posed little threat to desktop operating systems, let alone the applications that ran on them.

Barring some executive-branch, legislative, or courtroom surprise, the L0pht crew figured the next best way to improve the world’s security would be convincing the biggest software makers to do the right thing, even if they didn’t have to. Public embarrassment, led by the Cult of the Dead Cow, had done more than anything else to persuade Microsoft to take security more seriously. But Microsoft was just one company, and shaming businesses brought the L0pht no cash. With just a little income from selling tools like password crackers, the L0pht couldn’t scale. So Mudge and some of the others wondered if they could somehow get invited inside more software companies to at least make the bad stuff better. They could also consult with big banks and other customers, giving them ammunition to demand better software from suppliers. Enough new business and they could hire more hackers. If the L0pht did it right, they could work with both buyers and sellers and protect hundreds of millions of people.

Mudge wasn’t sure the rest of the crew would see things the same way, but he didn’t want to keep going the way things were. He, Christien Rioux, and Chris Wysopal were writing most of the L0pht programs that earned money—tools to scan networks, crack passwords, and so on. But that meant those three had to keep laboring to improve those programs even if they wanted to research something new. Tired of the burden, Mudge suggested getting outside investment from a venture capital firm, so they could all do what they enjoyed. Though he knew it would offend some purists who hacked for hacking’s sake, not for money, Wysopal reasoned that hackerdom would be much better off with them getting paid to tinker. “Maybe it was something that was impossible to do, and we had wishful thinking that we could figure it out,” he said. To Mudge, it was about distribution in the Kevin Wheeler sense, getting the word out about how unsafe things were and how to improve them. “We were the best garage band in the world at that time. And the only people who know you are the people on your block and maybe their friends,” Mudge said. “So you take money from a record label. It comes with baggage, but the message gets out further.”

The L0pht had no shortage of outside interest. A logical early contender was Cambridge Technology Partners. It was a security consulting group with some credibility that had just been featured on 20/20, in a segment where Cambridge hacker Yobie Benjamin and others broke into a major unnamed bank on camera in a penetration test. When the L0pht met with Cambridge Technology, the members suggested that the company hire them for a penetration test against it. That way, Mudge said, the executives would know the L0pht’s capabilities. In agreeing, Cambridge made a fatal mistake. After the last of the legal authorizations was signed, Joe Grand went straight for the executives’ voice mail and tried the most obvious four-digit codes to listen to the messages: 1234, 1111, 4321. In short order, they knew what Cambridge was going to offer to buy the L0pht, what its best offer would be if the first offer got rejected, and, most awkwardly, what the executives thought of the L0pht’s members. They really only wanted Mudge, Christien, and Wysopal. That was infuriating, but the discovery also gave them a license to have fun. The L0pht went back into negotiations with unusual demands, asking for a Winnebago like the guys in the hacking movie Sneakers. Then they turned over their report on the pen test. They weren’t mean enough to include quotes from the voice mails, but it was obvious what had happened. They never heard from Cambridge again.

It got better with an approach from Battery Ventures, an established venture capital firm. Battery had just backed a fledgling start-up called @stake, which had already hired Luke Benfey’s old housemate Dave Goldsmith away from Cambridge Technology, as well as Window Snyder. They agreed to a $10 million deal that folded the L0pht into @stake when it closed in January 2000. Around then, overexcited public relations people told the media the real names of Mudge, Christien, and Wysopal. They tried, too late, to claw the information back. And yet, the world didn’t end. Professionals were brought in as the top executives, leaving the old L0pht crew free to continue doing their research. Hackers had such admiration for the L0pht that @stake pulled some of the NSA’s best to the private sector, and the new company became an odd marriage of security brains and money.

But the culture of the unkempt rebels in the rank and file clashed with that of the suits making sales pitches and controlling the budget. Sketchy pasts and big personalities abounded. Some employees missed a major customer meeting because they had been up all night doing drugs. Other meetings should have been missed but weren’t: one L0pht veteran was having sex with a prostitute in the office when her rear end knocked into a phone and joined them to a conference call with a customer’s CEO. And later, a former employee was jailed for playing a role in one of the largest thefts of credit card numbers ever detected.

More subtle issues also surfaced. Would @stake continue the L0pht’s practice of issuing advisories about dangerous bugs? Or would it only do that about companies it did not work for as a consultant? If it wouldn’t embarrass a company that was paying it, that could get dangerously close to extortion: “Hire us and we’ll shut up about your product.” Though @stake continued the tradition of coordinated disclosure that the L0pht had pioneered, its policies were impure. A bug found in a noncustomer’s software—or found off the job in a customer’s wares—could be disclosed, but it could also be used for business development. Customer bugs found during an engagement were kept quiet.

@stake needed to sort out its disclosure policies quickly, because none other than Microsoft hired it for major work at the company. Despite the past antagonism, the @stake crew made a huge positive impression at Microsoft. Like a task force of star detectives, they possessed a sixth sense about where problems hid in the code. They followed connections from one product to another, and they looked at work patterns as well. Several versions of Windows had substantially better security because of @stake, and in 2002 Bill Gates released a memo declaring that security was now the company’s top priority.

Microsoft soon hired Snyder and other @stake veterans in-house. Snyder would stay three years. In the beginning, the company had no single person responsible for security issues in upcoming versions of the operating system. Snyder raised her hand. She still had to fight for things that cost money, like delaying a release to fix bugs. Arguing with the managers of a version about to go “gold” for general release, she said Microsoft should first plug two medium-level vulnerabilities, because someone outside would find them and build on those flaws to make something more dangerous. She lost the vote and a few days later was proven right. After that, the other managers stopped arguing with her. Snyder brought in many of the best outside security consultants, and she was responsible for Windows XP Service Pack 2, which dramatically improved the company’s posture. Snyder also helped expose isolated executives to outside researchers by creating the BlueHat security conferences, at which hackers spoke for an audience of Microsoft employees.

@stake staff and veterans entered new territory in other ways as well, including by publishing research that brought unintended consequences. David Litchfield, a Scot on his way to becoming the world’s best-known database security expert, was gone from @stake and testing the security of an SQL database for a German bank when he had a harder time than usual breaking in. Litchfield tried sending various single bytes and found one that crashed the system. That led to more experimenting and then a short program that might be able to take control of the database. More digging found a surefire way to exploit a similar flaw. Litchfield warned Microsoft and asked if he could present a talk on the matter at Black Hat, the more professional version of Def Con that now ran just before it on the calendar. Microsoft had no problem with that; it would have a patch ready by then. Litchfield’s talk included sample code, and he warned everyone to install the patch. Six months later, an unknown coder released SQL Slammer, a self-replicating worm that shut down large parts of the internet in 2003. Only about 10 percent of machines had been patched, Litchfield guessed. Certainly many of the companies would not have been hurt if he had not published actual code. So Litchfield resolved only to describe such dangerous flaws in the future, not release proof-of-concept code, unless he could be sure nearly everyone had patched.

@stake Chief Technology Officer Dan Geer further tested the company’s willingness to speak the truth by cowriting a 2003 paper arguing that Microsoft’s monopoly was bad for security. Geer’s team said that Microsoft’s dominance made it worthwhile for hackers to focus on finding its weaknesses, because they would provide a golden key that would get them in almost everywhere. It was true, but it was also a provocation, and it came just as Microsoft’s court-certified monopoly was finally waning under pressure from a rejuvenated Apple. @stake unceremoniously fired Geer by press release.

The one truly insurmountable problem for @stake was venture capital math. Battery Ventures knew that most of the companies it invested in would fail, so it concentrated on the ones it thought could deliver “100x” returns, the home runs. But @stake’s money came in from consulting, and a consulting business could never have produced those kinds of returns. To satisfy its investors, @stake would have had to grow as big as one of the largest management-consulting firms. @stake limped on through its 2004 sale to Symantec, which gradually absorbed it.

The @stake story was a strange shotgun union of two powerful and growing forces: venture capital and hacking. In its short arc, @stake established an enormously important precedent for security: that outsiders could go into big companies and make the systems and products there safer. Perhaps more importantly, @stake hackers dispersed and founded many more companies in the next few years, and they became security executives at Microsoft, Apple, Google, and Facebook.

But those same years revealed psychological fragmentation in the movement along with the physical diaspora. The cDc of Def Cons 1998 through 2001 had ridden the crest of a wave of hacker sensibility. Each year the crowds grew in number, young, irreverent, and on the cusp of mass recognition, if not big money. That short period was as important for technology culture as the Summer of Love, in 1967 San Francisco, was for the hippies. Laird Brown’s hacktivism panel in the summer of 2001 set a high-water mark for that kind of enthusiasm, for open-source, idealistic efforts to protect people even from their own government.

But any youthful protest ethic faces a challenge when its adherents need to find jobs and pay their bills. That concern increased in 2001, one year into the great bust that followed the dot-com boom. Not everyone could get a job with @stake or other boutiques. But it was a second, more direct blow that scattered young hackers in different directions for many years: the terrorist attacks on the World Trade Center and the Pentagon.

Those driven primarily by money were already paying less attention to ethical quests, such as the fun and games in keeping Microsoft honest. Now, in the months after the 9/11 attacks, those driven largely by causes also had a strong contender for their attention: rallying against the worst attack on American soil since Pearl Harbor. This was true for rank-and-file hackers, who took assignments from the military or intelligence agencies, and even cDc’s top minds, including Mudge.

Mudge had instant credibility, since he had taught government agents and they used his tools. Government red team penetration-test leader Matt Devost, who had covered cDc in a report given to a presidential commission on infrastructure protection, used L0pht tools to break into government networks. Spies loved Back Orifice and BO2k because if they left traces behind, nothing would prove US government responsibility.

Two years before 9/11, an intelligence contractor I will call Rodriguez was in Beijing when NATO forces in the disintegrating state of Yugoslavia dropped five US bombs on the Chinese embassy in Belgrade, killing three. Washington rapidly apologized for what it said had been a mistake in targeting, but the Chinese were furious. In a nationally televised address, then Chinese vice president Hu Jintao condemned the bombing as “barbaric” and criminal. Tens of thousands of protestors flowed into the streets, throwing rocks and pressing up against the gates of the American embassy in Beijing and consulates in other cities.

The US needed to know what the angry crowds would do next, but the embassy staffers were trapped inside their buildings. Rodriguez, working in China as a private citizen, could still move around. He checked with a friend on the China desk of the CIA and asked how he could help. The analyst told Rodriguez to go find out what was happening and then get to an internet café to see if he could file a report from there. Once inside an internet café, Rodriguez called again for advice on transmitting something without it getting caught in China’s dragnet on international communications. The analyst asked for the street address of the café. When Rodriguez told him exactly where he was, the analyst laughed. “No problem, you don’t have to send anything,” he explained. “Back Orifice is on all of those machines.” To signal where he wanted Rodriguez to sit, he remotely ejected the CD tray from one machine. Then he read everything Rodriguez wrote as he typed out the best on-the-ground reporting from Beijing. Rodriguez erased what he had typed and walked out, leaving no record of the writing.

Even before 9/11, Mudge had been talking to Richard Clarke and others at the National Security Council. Often, Mudge argued for privacy. The government had wanted to put location tracking in every cell phone as part of Enhanced 911 services, for example. Mudge told the NSC that the privacy invasion was unnecessary, that information from cell phone towers would be good enough for any serious official need.

One day in February 2000, after a rash of denial-of-service attacks that bombarded big websites with garbage traffic so that regular users couldn’t connect, Richard Clarke brought Mudge into a White House meeting with President Bill Clinton and a bunch of CEOs. “It was, I think, the first meeting in history of a president meeting people over a cyber incident,” said Clarke, who had organized it to show White House responsibility on the issue and build the case internally for more government oversight. After answering Clinton’s questions on what was fixable and what wasn’t, the guests walked out of the office. The CEOs saw the reporters waiting and prepared their most quotable platitudes. Instead, the press swarmed Mudge, as even those who didn’t know him assumed that the guy who resembled a Megadeth guitarist was a hacker meeting with the president for good reason. “Of course Mudge stole the show,” Clarke said.

But in order to be taken seriously, Mudge had to tell the truth. Once, an NSC staffer brought him in and asked what he knew about a long list of terrorists and other threats. What did he know about Osama bin Laden? About the group behind the sarin attack in the Japanese subway? About the Hong Kong Blondes?

At that one, the blood drained from Mudge’s face. “What do you mean?” he asked.

“We’ve been informed it’s a small, subversive group inside China that’s helping dissidents with encrypted communications,” the staffer replied.

“I’ve heard of them,” Mudge offered.

“What can you tell us?” the staffer persisted.

Mudge figured the government hadn’t put a lot of resources into the goose chase because signals intelligence and other sources would have turned up nothing and convinced seasoned professionals that it was a red herring. But he didn’t want the country to waste any energy that could go toward supporting real people in need.

He shrugged and looked straight at the staffer. “We made them up,” Mudge admitted.

After 9/11, Mudge went into overdrive. President Bush was warned that a cyberattack could have been even worse than the planes, and he listened. Mudge then started exploring what a “lone wolf” terrorist hacker could do. “I’m finding ways to take down large swaths of critical infrastructure. The foundation was all sand. That rattled me,” Mudge said. Looking into the abyss exacerbated Mudge’s severe anxiety, his tendencies toward escapist excess, and his post-traumatic stress disorder, which had its roots in a violent pre-L0pht mugging that had injured his brain. He went into a spiral and eventually broke down. “Ultimately, I just cracked a bit,” Mudge said. He spent days in a psychiatric ward. (Anxiety and burnout in the face of the near-impossible, high-stakes task of defending networks were not yet recognized as a major industry problem, as they would be a decade later.) Unfortunately, some of Mudge’s treatment compounded the situation. As is the case with a minority of patients, his antianxiety medications had the opposite of the intended effect. Eventually, Mudge fired his doctors, experimented with different medications and therapy, and worked his way back to strong functionality. But when he returned to @stake after many months, it was too fractious and uninspiring for him to be enthusiastic about reclaiming his post. The dot-com bust had forced layoffs of L0pht originals while managers were drawing huge salaries. The emphasis was on the wrong things.

Outside of @stake, hackers began disappearing from the scene for six months or more. When they came back, they said they couldn’t talk about what they had been doing. Those who went to work for the intelligence agencies or the Pentagon, temporarily or permanently, included many of the very best hackers around, including a few present or former cDc members and many of their friends in the Ninja Strike Force. They wanted to protect their country or to punish Al-Qaeda, and in many cases they got to work on interesting projects. But many of them would not have passed the background investigations required for top secret clearances. To get around that problem, a large number worked for contractors or subcontractors. One way or another, a lot of their work went into play in Afghanistan and Iraq.

Some hackers felt great fulfillment in government service. Serving the government in the wake of the terror attacks gave them a chance to fit in when they hadn’t before, united by a common cause. But for too many of this cohort, what started with moral clarity ended in the realization that morality can fall apart when governments battle governments. That was the case with a cDc Ninja Strike Force member I will call Stevens. As Al-Qaeda gained notoriety and recruits from the destruction, the US Joint Special Operations Command, or JSOC, stepped up the hiring of American hackers like Stevens. Some operatives installed keyloggers in internet cafés in Iraq, allowing supervisors to see when a target signed in to monitored email accounts. Then the squad would track the target physically as he left and kill him.

After 9/11, the military flew Stevens to another country and assigned him to do everything geek, from setting up servers to breaking into the phones of captured terrorism suspects. Though he was a tech specialist, the small teams were close, and members would substitute for each other when needed. Sometimes things went wrong, and decisions made on the ground called for him to do things he had not been trained in or prepared for mentally. “We did bad things to people,” he said years later, still dealing with the trauma.

Others had similar experiences. A longtime presenter at hacking and intelligence community gatherings, former clergyman Richard Thieme, gave talks about the burdens of protecting secrets that should be known and about the guilt suffered by people made to carry out immoral orders. After he asked people to send in their stories, some listeners provided accounts like Stevens’s. “It occurs to me how severely the trajectory of my own career has taken me from idealistic anarchist, to corporate stooge, to ambitious entrepreneur, to military/intelligence/defense/law enforcement adviser,” wrote one. “Many cyber guys started out somewhere completely different and then somehow found themselves in the center of the military-industrial complex in ways they would never have been prepared for.” Once there, the difficulty in keeping secrets is “potentially more extreme because the psychological make-up and life-story of the cyber guy would not have prepared him for it.”

Wrote another:

When one joins an intelligence service at the start of one’s career, one is involved in low level, apprentice-like, tasks and assignments usually far removed from traumatic action or profound moral considerations, much less decisions. In the course of a career such actions/decisions slowly grow into being, almost imperceptibly for many people. One may suddenly “awake” to where one is and realize that he/she had not been prepared for this, and also realize that one is now deeply into the situation, perhaps well beyond a point that one would have stepped into if it had been presented from the start. If this is the case, it’s too late to turn back.

When you are on the ground, Thieme said, “the rules people think they live by are out the window.” People who score too high on morals tests are rejected by intelligence services, he said, because a conscientious whistle-blower is even more dangerous than an enemy mole.

Working for a contractor was just one way hackers with criminal histories and dicey connections could do business with the feds. Without even going to that much effort, they could perform something close to pure security research for cash. Penetrating many of the most valuable and difficult intelligence targets required the government to have secret knowledge of a software flaw. Those flaws had to be severe enough to allow external hackers to gain control over a targeted machine. And they also needed an exploit program that would take advantage of the flaw and install software for spying. The National Security Agency, and to a lesser extent other parts of the military and the CIA, had been quietly developing storehouses of such flaws for years, along with the exploits to take advantage of them. But both needed to be continually replenished. Once exploits were used, they could be discovered. Even if they weren’t, it was dangerous to use the same technique elsewhere, because the target or a third country could realize the attacks were connected and draw conclusions about who was responsible.

As the American government ramped up its spying efforts after 9/11, it needed to discover new vulnerabilities that would enable digital break-ins. In the trade, these were often called “zero-days,” because the software maker and its customers had zero days of warning that they needed to fix the flaw. A ten-day flaw is less dangerous because companies have more time to develop and distribute a patch, and customers are more likely to apply it. The increased demand for zero-days drove up prices.

After the dollars multiplied, hackers who had the strongest skills in finding bugs that others could not—on their own or with specialized tools—could now make a living doing nothing but this. And then they had to choose. They could sell directly to a government contractor and hope that the flaw would be used in pursuit of a target they personally disliked. They could sell to a contractor and decide not to care what it was used for. Or they could sell to a broker who would then control where it went. Some brokers claimed they sold only to Western governments. Sometimes that was true. Those who said nothing at all about their clients paid the most. For the first time, it was relatively straightforward for the absolute best hackers to pick an ethical stance and then charge accordingly.

It was in no one’s interest to describe this market. The government’s role was classified as secret. The contractors were likewise bound to secrecy. The brokers’ clients did not want attention being paid to their supply chain. And the majority of hackers did not want to announce themselves as mercenaries or paint a target on themselves for other hackers or governments that might be interested in hacking them for an easy zero-day harvest. So the gray trade grew, driven by useful rumors at Def Con and elsewhere, and stayed out of public sight for a decade. The first mainstream articles on the zero-day business appeared not long before 2013, when Edward Snowden disclosed that the trade was a fundamental part of US government practice.

As offensive capabilities boomed, defense floundered. Firms like @stake tried to protect the biggest companies and, more importantly, get the biggest software makers to improve their products. But just like the government, the criminal world had discovered hacking in a big way. Modest improvements in security blacklisted addresses that were sending the most spam. That prompted spammers to hire virus writers to capture thousands of clean computers that they could use to evade the spam blocks. And once they had those robot networks, known as “botnets,” they decided to see what else they could do with them. From 2003 on, organized criminals, a preponderance of them in Russia and Ukraine, were responsible for most of the serious problems with computers in America. In an easy add-on to their business, the botnet operators used their networks’ captive machines to launch denial-of-service attacks that rendered websites unreachable, demanding extortion payments via Western Union to stop. They also harvested online banking credentials from unsuspecting owners so they could drain their balances. And when they ran out of ideas, they rented out their botnets to strangers who could try other tricks. On top of all that, international espionage was kicking into higher gear, sometimes with allies in the criminal world aiding officials in their quests.

Out of @stake came fodder for both offense and defense. On offense, Mudge pulled out of his tailspin and worked at a small security company, then returned to BBN for six years as technical director for intelligence agency projects. His @stake colleague and NSA veteran Dave Aitel started Immunity Inc., selling offensive tool kits used by governments and corporations for testing, and for spying as well. He also sold zero-days and admitted it in the press, which was seldom done in those days due to ethical concerns and fear of follow-up questions about which customers were doing what with the information. Aitel argued that others would find the same vulnerabilities and that there was no reason to give his information to the vendors and let them take advantage of his work for free. From the defender’s perspective, “once you accept that there are bugs you don’t know about that other people do, it’s not about when someone releases a vulnerability, it’s about what secondary protections you have,” Aitel said, recommending intrusion-detection tools, updated operating systems, and restrictive settings that prevent unneeded activity.

A London @stake alum moved in above a brothel in Thailand, assumed the handle the Grugq, and became the most famous broker of zero-days in the world. Rob Beck, who had done a stint with @stake between Microsoft jobs, moved to Phoenix and joined Ninja Strike Force luminary Val Smith at a boutique offensive shop that worked with both government agencies and companies. Careful thought went into what tasks they took on and for whom. “We were pirates, not mercenaries,” Beck said. “Pirates have a code.” They rejected illegal jobs and those that would have backfired on the customer. One of @stake’s main grown-ups, CEO Chris Darby, in 2006 became CEO of In-Q-Tel, the CIA-backed venture capital firm in Silicon Valley, and Dan Geer joined as chief information security officer even without an agency clearance. Darby later chaired Endgame, a defense contractor that sold millions of dollars’ worth of zero-days to the government before exiting the business after its exposure by hackers in 2011.

On defense, Christien Rioux and Wysopal started Veracode, which analyzed programs for flaws using an automated system dreamed up by Christien in order to make his regular work easier. After Microsoft, Window Snyder went to Apple. Apple’s software had fewer holes than Microsoft’s, but its customers were more valuable, since they tended to have more money. Snyder looked at the criminal ecosystem for chokepoints where she could make fraud more difficult. One of her innovations was to require a developer certificate, which cost $100, to install anything on an iPhone. It wasn’t a lot of money, but it was enough of a speed bump that it became economically unviable for criminals to ship malware in the same way.

Going deeper, Snyder argued that criminals would target Apple users less if the company held less data about them. But more data also made for a seamless user experience, a dominant theme at Apple, and executives kept pressing Snyder for evidence that consumers cared. “It was made easier when people started freaking out about Snowden,” Snyder said. “When people really understand it, they care.” In large part due to Snyder, Apple implemented new techniques that rendered iPhones impenetrable to police and to Apple itself, to the great frustration of the FBI. It was the first major technology company to declare that it had to consider itself a potential adversary to its customers, a real breakthrough in threat modeling. Still later, Snyder landed in a senior security job at top chipmaker Intel.

David Litchfield feuded publicly with Oracle over the database giant’s inflated claims of security. He went on to increasingly senior security jobs at Google and Apple. @stake’s Katie Moussouris, a friend to cDc, stayed on at new owner Symantec and then moved to Microsoft, where she got the company to join other software providers in paying bounties to hackers who found and responsibly reported significant flaws. Moussouris later struck out on her own and brought coordinated-disclosure programs to many other organizations, including the Department of Defense. She also worked tirelessly to stop penetration-testing tools from being subject to international arms-control agreements.

Private ethics debates turned heated and even escalated into intramural hacking. Some highly skilled hackers who found zero-days and kept them condemned the movement toward greater disclosure. Under the banner of Antisec, for “antisecurity,” the most enthusiastic of this lot targeted companies, mailing lists, and individuals who released exploit code. In the beginning they argued that giving out exploits empowered no-talent script kiddies, like those who might have been responsible for SQL Slammer. But some of them simply didn’t want extra competition. The mantle was taken up by hacker Stephen Watt and a group calling itself the Phrack High Council, which made the Antisec movement pro-criminal. Watt later did time for providing a sniffer, which recorded all data traversing a network, to Albert Gonzalez, one of the most notorious American criminal hackers. In a 2008 Phrack profile that used his handle only, Watt bragged about starting Project Mayhem, which included hacks against prominent white hats. “We all had a lot of fun,” Watt said. Later on, the Antisec mission would be taken up by a new breed of hacktivists.

Ted Julian, who had started as @stake marketing head before it merged with the L0pht, cofounded a company called Arbor Networks with University of Michigan open-source contributor and old-school w00w00 hacker Dug Song; their company became a major force in stopping denial-of-service attacks and heading off self-replicating worms for commercial and government clients. Song would later found Duo Security and spread vital two-factor authentication to giant firms like Google and to midsize companies as well.
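
The two-factor systems Duo popularized typically layer a time-based one-time password on top of the regular login. This is not Duo’s code, just a minimal sketch of the open standards involved (HOTP, RFC 4226, and TOTP, RFC 6238), which derive a short code from a shared secret and the current thirty-second time window:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 over a counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # low nibble of last byte picks the slice
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, for_time=None, step=30, digits=6):
    """RFC 6238 TOTP: HOTP keyed to the current 30-second window."""
    now = time.time() if for_time is None else for_time
    return hotp(secret, int(now // step), digits)

# RFC 6238 test vector: 20-byte ASCII secret, T = 59 seconds, 8 digits.
print(totp(b"12345678901234567890", for_time=59, digits=8))  # 94287082
```

Because both sides compute the code independently from the shared secret, a code stolen in transit is useless once the window expires; the server needs only to tolerate a little clock skew.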

Song got to know cDc files and then members online before being wowed in person by the Back Orifice release. In 1999, he put out dsniff, a tool for capturing passwords and other network traffic. While Arbor was mulling more work for the government, Song quietly developed a new sniffer that captured deeper data. He planned to show it off for Microsoft executives at Window Snyder’s first BlueHat conference in 2004. Song went and talked about his improved sniffer, which analyzed instant-message contacts and documents and did full transcriptions of voice over IP calls, such as those on Skype. He produced a dossier on Microsoft employees as part of the demonstration. Then he decided the danger of such a surveillance tool outweighed the security benefit of catching insiders stealing data. He convinced the other Arbor executives to drop the contracting plans and bury his project.
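
Sniffers like dsniff work by reading raw frames off the network interface and walking up the protocol layers. A minimal illustrative sketch, not Song’s code, assuming Linux, root privileges, and plain IPv4 over Ethernet:

```python
import socket

def parse_ipv4_header(packet):
    """Decode an IPv4 header (RFC 791); return source, destination, protocol, payload."""
    ihl = (packet[0] & 0x0F) * 4        # header length field, counted in 32-bit words
    proto = packet[9]                   # 6 = TCP, 17 = UDP
    src = socket.inet_ntoa(packet[12:16])
    dst = socket.inet_ntoa(packet[16:20])
    return src, dst, proto, packet[ihl:]

def sniff():
    """Capture loop: requires root and Linux (AF_PACKET); 0x0800 is the IPv4 ethertype."""
    sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(0x0800))
    while True:
        frame, _ = sock.recvfrom(65535)
        src, dst, proto, payload = parse_ipv4_header(frame[14:])  # skip 14-byte Ethernet header
        print(f"{src} -> {dst} proto={proto} len={len(payload)}")
```

The sketch stops at the IP layer; tools of the kind described here go on to reassemble TCP streams and parse application protocols, which is where passwords, chat contacts, and VoIP audio come from.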

One of @stake’s young talents had worked out of the San Francisco office. Alex Stamos had joined not long out of UC Berkeley due to admiration for Mudge and the other founders. As @stake got subsumed by Symantec, he decided to start a new company with four friends. @stake had shown that it was possible to run a business that had a massive positive impact on the security of ordinary people. But it had two key flaws that he hoped to fix in the new company. The first was that it had taken venture money, which put it at the mercy of unrealistic financial goals. Declining outside investment money, Stamos and his partners, including Joel Wallenstrom and Jesse Burns from @stake, put up $2,000 each and bootstrapped the new consulting firm, iSec Partners. Instead of being heavy with management and salespeople, it operated like a law firm, with each partner handling his own client relationships.

The iSec model also attempted to deal with Stamos’s other problem with @stake: that, in his words, “it had no moral center.” Stamos made sure that neither he nor any of his partners would have to do anything that made them uncomfortable—any big decision would require unanimous agreement by the five.

iSec picked up consulting for Microsoft in 2004, after @stake was gone, and it helped with substantial improvements to security in Windows 7. Four years later, it got an invitation to help on a huge project for Google: the Android phone operating system. Android had been developed so secretly that Google’s own excellent security people had been left out of the loop. iSec was called in just seven months before its launch. Among other things, iSec saw an enormous risk in Android’s ecosystem. In a reasonable strategy for an underdog fighting against Apple’s iPhone, Google planned to give away the software for free and let phone companies modify it as they saw fit. But iSec realized that Google had no way to insist that patches for the inevitable flaws would actually get shipped to and installed by consumers with any real speed.

iSec wrote a report on the danger and gave it to Andy Rubin, father of Android. “He ignored it,” Stamos said, though Rubin later said he didn’t recall the warning. More than a decade later, that is still Android’s most dangerous flaw. Stamos was frustrated by being called in as an afterthought, and he began to think that working in-house was the way to go. Eventually, he joined internet mainstay Yahoo as chief information security officer. Wallenstrom became CEO of secure messaging system Wickr; Jesse Burns stayed at iSec through its 2010 acquisition by NCC Group and in 2018 went to run Google’s cloud security. Meanwhile, Dave Goldsmith in 2005 started iSec’s East Coast rival Matasano Security, which attracted still more @stake alums to work from within to improve security at big software vendors and customers. He later became a senior executive at NCC.

The opening decade of the millennium was a strange and divisive time in security. “It was a time of moral reckoning. People realized the power that they had,” Song said. Hundreds of focused tech experts with little socialization, let alone formal ethics training, were suddenly unleashed, with only a few groups and industry rock stars as potential role models and almost no open discussion of the right and wrong ways to behave. Most from @stake stayed in defensive security and hammered out different personal ethical codes in companies large and small. While they played an enormous role in improving security over the coming years, perhaps the most important work inspired by cDc didn’t come from either corporations or government activity.