Cory Doctorow
When you’re small, you’re taught that there are two kinds of people:
As you get older, if you’re lucky enough to have the right kinds of teachers and the right opportunities for learning, you realize that there are indeed two kinds of people:
The name for the systems we build to elevate virtue and check vice is “politics,” and, practiced correctly, it can produce a whole that is greater than the sum of its parts: a superhuman machine capable of superhuman feats. Literally: welding more than one person together to accomplish a common end enables outcomes that no one person could ever hope for alone. Superhuman.
For a quarter-century, dreamers, theorists, schemers, builders, optimists, pessimists, crackpots, and geniuses have run a series of experiments in connecting people using networked computers, trying out governance structures (ICANN, GitHub, Unix file permissions, standards development organizations, Reddit upvotes, statutes, treaties, regulations), normative frameworks (manifesti, terms of service, moderator guidelines), tools (encryption, error correction, ranking algorithms), and businesses (eBay, Amazon, iTunes, Google Play, the Silk Road), in search of ways to upregulate desirable behaviors and downregulate bad ones.
The societal view of this project has been a mess: at first, it was widely dismissed as a distraction, a toy for nerds who believed that somehow their Star Trek message boards had some political consequence. Then, in a hot second, the outside world pivoted to mock tech activists for their supposed “techno-optimism” and alleged indifference to the ways in which tech had become a dominant political force in the world.
These two points of view—tech doesn’t matter versus tech matters too much to listen to nerds—have one thing in common: they’re both most commonly evinced by people who don’t understand tech very well. This is a book that aims to bridge that divide in understanding, and at this political moment that’s an important contribution.
I don’t mean that critics lack an appreciation of how tech businesses work (although many do), or of the human frailties of technologists themselves (these are often self-evident): I mean that, at a nuts-and-bolts level, they tend not to understand what’s going on when they sit in front of a computer and make it do things, or have things done to them. When better-informed activists point out the technical incoherence of some critiques (e.g., “Why can’t YouTube just use an algorithm to block hate speech?”), they are accused of a mulish intransigence dressed up as technical objections. All too often, the answer to “that’s just not possible” is “NERD HARDER!”
The thing is, technology activists are, in fact, enthusiastic about technology. They really do believe that, with technology, we can create structures that upregulate the better angels of our nature and downregulate our venal, cowardly, and unworthy impulses. If that were all that tech activists believed, it would be totally fair to call them monsters of hubris, the unwitting handmaidens of technological oppression through ubiquitous surveillance and control.
But that’s not all tech activists believe. They believe that everything could be so great, but only if we don’t screw it up. After all, if you’re an information security specialist, your whole job is to sit around thinking of all the ways terrible people will abuse the systems you’re charged with protecting. You have to both love and fear tech, the way a demolitions expert loves and fears dynamite.
The hacker way has its problems: “move fast and break things” was always self-serving bullshit fronted by overgrown toddlers who wanted to grift their way to millions (billions!) without adult supervision.
But the way hackers do policy—deploying deep, hard-won technical expertise to build and modify systems with some real concern for systems’ societal effects and an approach that reduces how much agreement we need before we can work together—is admirable. It is animated by a profound understanding of the perils of tech gone wrong as much as by an exuberance at the transformative power of tech.
This is a book about hackers and hacker politics: the nuts and bolts and the big picture. The people who appear in it (including, briefly, me) are, like our fellow human beings, deeply flawed: cracked vessels who struggle to contain our bad impulses and to let the pure water of our noble ones pour out freely.
Hacker politics are anti-authoritarian because hackers know that authorities are just as damaged as they are. Hacker politics are pluralistic because hackers know that unchecked power is a catastrophe in the making: without those checks, the bugs in the system will run wild and brick the device before you even know anything is wrong. It is a political ethos that accounts for the fallibility of human beings as much as for their unlimited potential. It is designed to avoid making things worse, even if it doesn’t always know how to make things better.
Hacker politics may not solve our problems, but as this book makes clear, they are going to be part of the solution.