IN ITS EARLIEST days, the chief moral issues for the teens in the Cult of the Dead Cow were how badly to abuse long-distance calling cards and how offensive their online posts should be. But as they matured, the hackers quickly became critical thinkers in an era when that skill was in short supply. In an evolution that mirrored and then led the development of internet security, cDc went on to forge rough consensus on the complex but vital issue of vulnerability disclosure, to show that enabling strong security could be a viable business, and to merge the hacking spirit with activism on behalf of human rights. It also kept a remarkably big tent, roomy enough to include support for acts of civil disobedience as well as work for the military, as long as both were principled. Together, its members helped push a realistic understanding of security challenges and ethical considerations into mainstream conversations in Silicon Valley and Washington. As the big picture in security grows darker, those conversations are the best hope we have.
One lesson from the Cult of the Dead Cow’s remarkable story is that those who develop a personal ethical code and stick to it in unfamiliar places can accomplish amazing things. Another is that small groups with shared values can do even more, especially when they are otherwise diverse in their occupations, backgrounds, and perspectives. In the early days of a major change, cross sections of pioneers can have an outsize impact on its trajectory. After that, great work can be done within governments and big companies. Other tasks critical for human progress need to be done elsewhere, including small and mission-driven companies, universities, and nonprofits. It gets harder to keep the band together over time, but cDc’s impact lives on in those whom members hired, taught, and inspired. That said, a movement cannot control its children. The Citizen Lab and Tor are one thing, while Lulz Security and Gamma Group are another. Trolling and fake news also owe something to cDc, and neither is anything to be proud of.
As I was nearing the end of the writing process, a moderately well-known security professional asked his Twitter followers to name current ethical issues facing the industry. His feed was inundated with questions. If you live where encryption is outlawed, do you help activists encrypt anyway? If you discover a malware campaign that appears aimed at a reviled terrorist group, do you expose it? If you make a monitoring tool, do you sell it to nonsanctioned but repressive regimes? If authorities want you to sell a zero-day vulnerability to a broker instead of warning the vendor, do you? If your government asks your antivirus company to search computers for a specific signature that is not malware, do you? The questions will go on forever, and there need to be better ways of fostering debate and arriving at answers. One thing that would help is a shift toward public-interest technology like that of the Citizen Lab. Lawyers are expected to do charity work, and there are plenty of public-interest legal jobs, author Bruce Schneier noted. Neither is true for technologists yet.
Beginning around 2000, after most of the people in this book had left college, accredited US engineering and computer science programs were obliged to require some education in ethics, typically a single course. Too often, those courses are taught by philosophers with no grounding in practical work. The best texts in the field use case histories, such as the Challenger space shuttle explosion. Before that disaster, an engineer at a shuttle contractor had recommended against a cold-weather launch, then let his management talk him into changing his mind.
Some of the top professional associations, such as the Institute of Electrical and Electronics Engineers, have slowly evolving ethical codes. But their membership is limited, the codes are enforced only if someone complains, and some guidelines are too abbreviated to be of much use when members seek advice. There is no licensing regulation or continuing-education requirement of the kind that governs practicing lawyers. Even the canon of security literature isn’t that widely read. “Engineers have a profound impact on society,” said former IEEE president and current engineering college dean Moshe Kam. “But quite frankly, there is no glory in dealing with this.”
Even those who spend considerable energy wrestling with such issues rarely discuss that work in public, which means others don’t get to learn from them. Facebook’s Alex Stamos is one exception. Another is Dug Song, the Michigan security expert who came up in the hacking group w00w00 and founded Duo Security, bought by Cisco in 2018 for more than $2 billion. In a 2016 speech to students at the University of Michigan, Song argued that moral reasoning was fundamental to what should be a noble endeavor, since technology is the only thing that increases human productivity. “Security is about how you configure power, who has access to what. That is political,” Song said.
Rather than thinking about the world as binary, good or evil, Song said he found it helpful to think of the alignment matrix in the role-playing game Dungeons & Dragons, with one axis running from good to evil and the other from lawful to chaotic. Darth Vader, he explained, is lawful evil: he wants order, just for a bad cause. In that vein, he described w00w00 as neutral on both axes. On balance, Snowden might have been chaotic good, and the NSA might have been lawful evil, he said. Phrack was chaotic evil, L0pht lawful neutral, and, Song told me, cDc was chaotic good. Whatever the law says, Song believes that professional ethics requires him to contribute to the social good.
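Song’s point is that the grid judges two things independently: whether you follow rules, and whom your actions serve. For readers who think better in code, here is a minimal sketch in Python of that two-axis structure, purely illustrative and not anything Song wrote; the type names are mine, and the placements are simply the ones he offers in the passage above.

    from enum import Enum

    class Law(Enum):
        LAWFUL = "lawful"
        NEUTRAL = "neutral"
        CHAOTIC = "chaotic"

    class Moral(Enum):
        GOOD = "good"
        NEUTRAL = "neutral"
        EVIL = "evil"

    # An alignment is just a point on two independent axes, so an
    # actor can break the law for the good, or follow it for evil.
    # These placements are the ones Song gives in the passage above.
    alignments = {
        "Darth Vader": (Law.LAWFUL, Moral.EVIL),
        "w00w00": (Law.NEUTRAL, Moral.NEUTRAL),
        "Snowden": (Law.CHAOTIC, Moral.GOOD),
        "NSA": (Law.LAWFUL, Moral.EVIL),
        "Phrack": (Law.CHAOTIC, Moral.EVIL),
        "L0pht": (Law.LAWFUL, Moral.NEUTRAL),
        "cDc": (Law.CHAOTIC, Moral.GOOD),
    }

    for name, (law, moral) in alignments.items():
        print(f"{name}: {law.value} {moral.value}")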
Of all those involved in the burgeoning technology industry, which now includes the world’s six most valuable companies, security experts like those in cDc were the first to grapple daily with matters of conscience and immense impact on safety, privacy, and surveillance. But such broad issues are now spreading throughout the tech world. Facebook, Twitter, and YouTube are doing a poor job of stopping propaganda and are letting automation promote content that is engaging because it is extreme. Google is weighing a return of censored search to China, which it left on principle in 2010. Yet it bowed to employee pressure and walked away from a Pentagon contract to help analyze drone footage that could be used in targeted killing. Apple fought the FBI on back doors but agreed to store user data in China. Workers at Amazon are protesting that company’s sale of facial-recognition technology to police, and those at Microsoft are fighting deals with the Trump immigration authorities that are separating families at the border. Technology as a whole is engulfed in what may prove to be a permanent moral crisis, and the best place to turn for wisdom on how to handle it is the people who have been through this before, whether they serve in giant companies or start-ups, nonprofits or Congress.
The more powerful machines become, the sharper human ethics have to be. If the combination of mindless, profit-seeking algorithms, dedicated geopolitical adversaries, and corrupt US opportunists over the past few years has taught us anything, it is that serious applied thinking is a form of critical infrastructure. The best hackers are masters of applied thinking, and we cannot afford to ignore them.
Likewise, they should not ignore us. We need more good in the world. If it can’t be lawful, then let it be chaotic.
San Francisco–Boston–New York–Washington–Austin–Los Angeles