12

Mainstreaming Hackerdom

A New Condition of Freedom

A City upon a Hill

The Boston-Cambridge area in Massachusetts, where the hacker story began, has been a high-tech hub longer than Silicon Valley.1 It is also a hub for higher learning—Boston College, Brandeis, Harvard, MIT, Northeastern, Suffolk, Tufts—the list is long. The area is densely populated. From an airplane at night, the lights of the Eastern seaboard stretch like jeweled tissue an immense distance along the coast. This is the brain center of the American continent.

It is also the birthplace of the American creation myth, Massachusetts being the place where the first Congregationalists landed to settle in the New World—a landscape that prompts one to ruminate on the ardor with which Americans have historically regarded their democracy. Even as their ships carried them to these shores, these early migrants gathered on deck to pledge themselves to a new way of being. On the Mayflower, they pledged themselves to the principle of rule under law by the consent of the governed.2 On the Arbella, they prayed to build a new society that would set an example of communal affection and unity. Their settlement in New England would be like “a city upon a hill,” watched by all humankind. They would show the world that through God’s grace the wicked could be restrained “so that the rich and mighty should not eat up the poor” and “the poor and despised” should not be compelled “to rise up against and shake off their yoke.” They, “the regenerate,” would model the interdependence of men and show that “every man might have need of others.” Indeed, theirs would be a society built on the “bond of love.”3

And if they failed? Their city upon a hill, their leader John Winthrop warned, would “be made a story and a by-word through the world” of God’s judgment.4

Congregationalism was named for its innovation in governance. Congregationalists disintermediated bishops and presbyteries and emphasized the right and responsibility of each congregation to decide its own affairs without having to submit to any higher earthly authority.5 The Congregationalists believed democracy was a form of government required by God. Having started their movement of democratization in the Church, they went on to have a profound influence on early American democracy. The democratic tradition begun in the first Plymouth Colony was soon followed by the Massachusetts Bay Colony, Connecticut, Rhode Island, New Jersey, and Pennsylvania.6

Boston and Cambridge would become the cradle of American democracy, fomenting the American Revolution against the British monarchy. American tourists still flock here to follow the “Freedom Trail” of the Revolution and marvel at the daring men and women who created their nation.

This part of North America is and always has been in earnest. Throughout the 1800s, early abolitionists and reformers of all kinds lectured, met, and organized here. Walking through the streets of Beacon Hill, one can’t help thinking of Henry James’s descriptions of these early do-gooders after the Civil War, picturing them bustling down the modest brick sidewalks by gas lamplight, off to the African Meeting House, the Baptist church, and the Athenaeum. Meeting in each other’s homes. Striving to manifest morality in politics.

The past is never far from mind in this geography, but it is the opportunities and conundrums of digital society that are the bleeding-edge focus of every elite institution in Boston and Cambridge now. Every one of them has a program, an angle, and hacker-themed projects. Just look at their websites.

The Harvard Business School’s website states that its Digital Initiative

studies the digital transformation of the economy, and seeks to shape it by equipping leaders and engaging our community with cutting-edge research.

As a think/do start-up at Harvard Business School, D/I brings together leading scholars and practitioners to explore the re-invention of business in a digital, networked, and media-rich environment. We traverse disciplines, methods, sectors, and communities to develop and share novel insights, approaches, and values in this evolving space.

As a statement of purpose, it is a bit “everywhere and nowhere,” as John Perry Barlow might say:

We think about the changes spurred and facilitated by the digital transformation in three key layers: economics and strategy, organizations and culture, and individual skills and capabilities.

In fact, reading it to the end is a little like chewing dry meal, but it is an excellent specimen of the genre:

Our expansive view of “Digital” encompasses technologically-based phenomena such as networks, media, mobile, data, cloud services, as well as the associated strategies, gatekeepers, leaders, crowds, creators, designers, platforms, and property rights that are every bit as important. The interactions among them are what excites us most.

D/I is informed by the values and practices from the Internet ecosystem, embracing the potential of networks and platforms, embodying the virtues of user-centered design and agile development, and knowing the power of design-thinking, collaboration, openness, and interoperability. We revel in our service as an HBS innovation test-bed. We endeavor to experiment, to create and facilitate new conversation and co-creation, ultimately fostering exploration of the intersections among disciplines, methods, sectors, and geographies.

To be fair, there are so many directions this new digital world could go that the number of research projects, academic careers, and business applications that could be generated over the rest of the century is boggling.

Since the 2016 presidential election, the Kennedy School of Government at Harvard has changed its webpages’ emphasis from stressing the importance of centralized global governance structures to highlighting issues such as citizen participation, transparency, collaborative solutions, elite capture of democratic processes, and the democratic potential of cities. “Millions are disappointed with their elected leaders, frustrated about an economy that does not work for them, and angry at elites they view as self-serving,” the website paraphrases the dean as saying. “There is no greater public problem today than the lack of trust in our political and economic systems, and there is no more important challenge than restoring that confidence.”7 The Kennedy School has “Technology and Democracy” fellowships, and it has run a #Hack4Congress project:

While the founders of the American republic may have conceived Congress as the linchpin of our democracy—the branch of government closest and most responsible to the people—few would argue that our contemporary Congress shares much in common with this early republican ideal. …

… Congress needs “fixes”—but where will these new tools and solutions come from? By bringing together political scientists, technologists, designers, lawyers, and lawmakers under the banner of #Hack4Congress, the Ash Center hopes to foster new digital tools, policy innovations, and other technology innovations to address the growing dysfunction in Congress.8

MIT’s Media Lab has some of the more thoughtful webpages. In one video, Joi Ito, then director of the Lab, explains,

AI’s rapid development brings along a lot of tough challenges. For example, one of the most critical challenges is how do we make sure that the machines we “train” don’t perpetuate and amplify the same human biases that plague society? How can we best initiate a broader, in-depth discussion about how society will co-evolve with this technology, and connect computer science and social sciences to develop intelligent machines that are not only “smart,” but also socially responsible?

Libre Planet, the Heart of Free Software

I have finally arrived in Boston. I want to see what’s going on at these institutions. It seems that a mainstreaming of hacker experiments has begun within academia and is gaining momentum especially within the Boston-Cambridge hub. What does this augur for hacking? What are the prevailing ideas about where we are headed? And how do academics think progressive hackers and citizens might succeed or fail in building democracy out into cyberspace?

It is March 2017, and the number of hacker-themed events taking place here in this month alone is dizzying. It’s a schedule that would challenge the stamina of the most fervent Boston reformer. Among other things, the 2017 Libre Planet conference is happening this week. This is the annual conference put on by Richard Stallman’s Free Software Foundation. I plan to go there first, to see Stallman in the flesh and walk the halls of MIT where the first hackers lived and breathed code around the early mainframe computers of the late 1950s.

The Libre Planet conference kicks off with a party at the Free Software Foundation offices in Boston and a Chinese dinner hosted by FSF for female participants. I arrive at their location on Franklin Street on a crisp, black night, and jostle up the stairs to the second floor. Wired writer Steven Levy once depicted Richard Stallman as a beleaguered survivor of a dying hacker culture,9 but happily, that is no longer the case. The place is packed, with people squeezing down hallways, draped across the boardroom table, and busily attending the beer table. And there is a surprising number of women and older hackers, a real mixture of types, some looking like clean-cut Washington aides and others more alternative. I talk for a while with a young hacker from Seattle sporting a large beard and a kilt, then briefly with someone who worked for the Clinton campaign, or was it the Democratic National Committee, and who says the United States is no match for the sophistication of Russian hackers. Then I trickle out with about twenty other women and we go off to enjoy copious amounts of Chinese food at a favorite hacker restaurant, where I end up arguing amiably with a virulent libertarian all night. After swearing she (herself a single mother) would gladly let me and my kids die in the street if I were unlucky enough to find myself unemployed like so many others these days, she kindly buys my train ticket at the end of the evening when I’m caught short of US cash, saving me a late-night walk across the Boston Common.

The next morning, I take the train from Boston over the Charles River to Cambridge: first stop MIT, Kendall Square. As I climb the stairs of the station to street level, the campus architecture seems to float above me, light and slightly futuristic. There are vast, quiet spaces to walk between buildings, with none of the hubbub of the Harvard quadrangle. Many MIT buildings have names but tend to be known by their assigned numbers, including the one where I am headed—the architecturally adventurous Stata Center, Building 32.

I register for the Libre Planet conference and pick up the schedule. Marvin Minsky’s AI Group, the academic home that harbored the “golden age” of hacking at MIT in the 1960s and ’70s (becoming the AI Lab in 1970), merged in 2003 with the Laboratory for Computer Science to form CSAIL, short for the Computer Science and Artificial Intelligence Laboratory. CSAIL has been located in the Stata Center since 2004. I ought to be visiting its offices, but for now I imagine the group like a recent release of an original program I will download later.10

One of the conference sessions I want to attend is located in an older building next door. As I wander across a side passage to scope it out, I look again at the schedule, realizing with a tingle that this is Building 26. Building 26, where the earliest hackers tinkered and the object of their desire, the TX-0, was housed on the second floor.11 The hallway has the polished linoleum, metal doors, and retro feel of a 1950s high school. The posters on the bulletin board outside the Games Lab reveal the current millennial cohort’s political preoccupations:

Noam Chomsky

Racing to the Precipice: Global Climate, Political Climate

March 23, 2017

5–6 pm

Voices of Resistance

Dario Fo

Accidental Death of an Anarchist

March 21, 2017

Free

Isn’t God a Moral Monster?

March 7, 2017

Free dinner served

Change the World

Major or Minor in Political Science

MIT Poli Sci

The Libre Planet conference agenda shows that the free software community is hard at work with a multitude of hacking projects. One session is optimistically named “The Cloud Is Dead.” But the talk I’m most looking forward to is Eben Moglen’s, near the end of the day. As a young constitutional attorney and Columbia University law professor, Moglen volunteered to defend Phil Zimmermann for exporting PGP (Pretty Good Privacy) cryptography in the 1990s.12 Then, as Moglen relates the story, Richard Stallman called him up and said he had a few legal issues he needed help on. Moglen replied that he had been using Stallman’s GNU Emacs program13 every day for years, so it would be a long while before he got to the limit of the free legal advice he could offer Stallman.

And so began a long friendship and collaboration between the two. Moglen worked out the legal wording for Stallman’s Copyleft license. He started the Software Freedom Law Center at Columbia University, which provides pro bono legal services to developers of free, libre, and open-source software. His Freedom Box project aims to put a personal server into everyone’s home so that users can avoid the privacy-gouging business models of commercial internet service providers (ISPs). Moglen himself was a hacker when very young. More recently, he led the legal side of drafting Stallman’s new GPLv3 license and organized a public comments process for people to critique it.

Moglen has likened the current mass surveillance engaged in by governments and businesses to an environmental disaster, like air or water pollution. He sees it as the environmental destruction of people’s freedom, and in his opinion, a few laws or even hacker tools are not going to solve it. It will take a number of major governments and big companies rejecting the whole model of “surveillance capitalism” to turn things around.

He begins his talk at the Libre Planet conference with a recitation of recent dystopian events, including the election of Donald Trump,14 and addresses the members of the audience as if they were his old friends: “We are at the place, I would say, for which the free software movement exists. This is really what it was that motivated a bunch of young people to take a fairly abrupt view of how technology ought to be designed and operated—because of this. Because of the possibility of this.”

He suggests that the free software movement has done a great deal among governments and industry to mainstream the idea that users of technology have rights. It is the one basic public-policy point, he says, that you can make around the world today to almost everybody to good effect:

And we have positioned ourselves as the experts on what it means for users to have rights. We have a whole bunch of answers to real practical questions on what would it mean for users to have rights.

Copyleft is part of those answers, and GNU, and a series of philosophical and political positions about technology are part of those answers. [But] those answers are still way more than most of the rest of the world wants to receive. A drink from a fire hose, in two senses: too geeky for most of the world’s decision-makers and most of the world’s users and too uncompromising for a whole bunch of people who are within organizations or social structures which make it hard for them to agree with us as completely as they would like to because there are real impediments to doing that—the interests of their businesses, the natures of their livelihoods, the dependencies that they themselves have on the things that we would urge them not to be dependent upon, and so on.

Which has meant that all my life doing this, most of us did fit in this room, where it was always a pleasure to see people and it’s a pleasure to see you now, and it’s wonderful to see so many old friends. But here we are.

And in a conversational tone we can say that for the last couple of years we have been watching the world come to a crisis about which we know an awful lot. Which doesn’t mean we can do an awful lot. We have, after all, got a really serious problem at the moment because even though the thing we’re dealing with feels exactly like what we have grown up to be to deal with it, the scale of it has escaped our ability to believe that we’re going to be effective very quickly.

And the question “What can we really do?” is the one that’s resounding within our crania. We can do, but what can we really do about a mess this big? Which in truth is where we are.

Moglen hardly breathes between sentences, and his talk is such an intimate reveal of a veteran hacker’s thoughts in conversation with other hackers that I feel compelled to write down every word in order to capture the urgency of it. He continues:

We have now before us the greatest teachable moment in our history as a movement with respect to everybody else. There are more people out there in the world who would be receptive to a message about why users of technology have rights and why those users’ rights are so crucially important to the survival of political liberty. There are more people out there who want that message than have ever wanted it in all our time.

It began with Snowden, to be sure. But even that is now comparatively small in scale to the global anxiety felt largely by young, relatively technologically adept people, who feel two things happening—societies slipping out of control around them to their own immediate disservice and a whole bunch of things going on that they themselves are a part of but that they know are not quite right.

They feel it about Facebook. They feel it about Twitter. They understand why the deliberative quality of the world around them seems to be falling apart, but they don’t exactly know how their own social media habits contribute to that falling apart.

They do understand that they’re being watched all the time. … They do know that there is something wrong with it. … And they would love to hear something can be done that isn’t as desperate as “the red pill.” … They would really like to hear that you don’t have to give up everything you’ve ever done in order to get behind the Matrix and save the world.

The audience chuckles.

“So, what can we really do? Actually, I need to rephrase the question,” Moglen says, leaving us hanging for a beat in suspense:

What can we really do before otherwise we do get wiped out? I do need to point that out. We really are in the world we always thought we would be. When Richard called it slavery, it wasn’t a metaphor. It was simply an archaic political term for what it is like living in a world in which machines that you can’t understand and you can’t modify and you can’t do anything about control you and everybody else.

“What if Google does replace the Linux kernel in Android with a nonfree kernel?” he asks the audience. “Users would have no rights at the bottom of the only mobile computing stack in which they have any rights.” Or what if we lived in cashless societies and “citizen rating” determined what you pay and when you get it or if you get it at all?

Let me tell you what the result is. It’s called slavery. But it doesn’t feel like that because it feels like inconvenience. It feels like friction in life, and if only you would be subconsciously adjusting to be a more compliant person, it would get more convenient. I hope you do understand that that’s a much better way to run a despotism than running a gulag. … So what can we do?

He says (and I’m paraphrasing here) that we can save Copyleft—the idea that users have rights that can be vindicated in law and by broad consensus—because governments and other crucial research funders are beginning to discriminate against projects that want to use Copyleft. We can get tools into the hands of ordinary people who want to be free. They have to be built for purpose, and they have to be usable and ready to go.

This is no time for us to be in one of our purist or perfectionist moods, he says. We have to deliver the bits right now to people we want to be our allies. We need to get into the schools. We need to start teaching people how to think about the technology now and let the technology catch up to their educated expectations. We need to pass on our ideas to eight- and twelve- and sixteen-year-old people.

We do have a Snowden generation of people who were eleven, twelve, and thirteen when Edward Snowden came along. They were born at the beginning of the twenty-first century, and psychographically they look quite different from the people around them. According to Moglen, “They were very affected by Mr. Snowden—as I said, I think because he looks so much like Harry Potter, but perhaps for other reasons too—and what they learned there is going to provide an important opportunity for us when they grow up in another five years.” If we have bits to offer them then, “in sort of shiny packages”—code that offers real freedom and real privacy—they will take them up in a heartbeat. “And I know manufacturers around the world who know that, too.”

So we have to educate the consumers of the near future. We are two product cycles out on the guys building stuff. He does think that Android will be an enclosed system by then, and “should that be true, we’ll have had to do a lot of consumer education for things that don’t quite exist yet but that we must have.”

Richard Stallman and the Free Software Foundation Awards

Eben Moglen leaves the stage to great applause, and a little later Richard Stallman—the man himself—shuffles to the table beside the lectern and sits down. I have seen him more than once today, striding off with others and looking too busy to interrupt. And now he is here to give out the annual Free Software Foundation awards.

The first part of his talk is aimed at sorting out a few doctrinal points. He speaks to the audience in an avuncular manner that reminds me of how Wau Holland used to address younger members of the Chaos Computer Club.

Now I want to criticize a couple of the talks today which labeled things with the name of a despicable person. … [This is] one of those details where I occasionally disagree with Eben. Our “age” is not the age of Trump. No, there’s a lot more going on in our age than him. What makes these more than a slight mistake is that they assign to him a bigger victory than he really had, essentially declaring us defeated when we’re still fighting [he says, his inflection rising in an encouraging tone].

So don’t say that he trumps anything. In fact, I suggest usually not using his name. I typically call him the troll or the cheater or the liar or the loser because he actually lost the election. …15

Now, we face a fight that’s gone on for decades, and we’re nowhere near complete victory. It may go on for more decades, so the most important point is to remind people what we’re fighting for. … We’re fighting for freedom, and we want complete success for freedom. We want to free everyone. We want to escape from nonfree software. …

Even when we compromise and tolerate something that’s intolerable, we have to remember and tell each other that it is intolerable and that that’s a battle that we’re going to have to fight and win someday. …

When it comes to big data, often we see the wrong approach by mainstream human rights organizations. Their first thought is to put limits on its permissible use. … But when a hero is designated as a spy or a traitor, the government will always give itself permission to use that data to catch the hero.

I’m talking about people like Edward Snowden. Let’s have three cheers for Edward Snowden!

Without a moment’s hesitation, he leads the crowded amphitheatre in a spontaneous cheer:

Hip hip.

“HOORAY!” [everyone yells enthusiastically].

Hip hip.

“HOORAY!!”

Hip hip.

“HOOORAY!!!”

“For a democracy, we have to forbid that data from ever being collected. … Data, once collected, will be misused.”

Now, one of the vital things that we need to do is reverse engineering, so we’ve decided to do more to encourage people to do it. It’s very hard to do, but if you’re clever, you can find a way to do it. We have some priorities, which you’ll find on the GNU website.

And to recognize people who do this hard job, we plan to give out a reverse engineering medal from time to time. We don’t have one now, but when someone does a heroic job, we’ll give one out.

People chuckle, and Stallman goes on to cover some of the technical problems the Free Software Foundation is looking at now, including tools for buying digital books anonymously and free software security patches for the internet of “stings” (he’s a punster) and for sites that can be run only with nonfree JavaScript.

When the time for the awards arrives, Stallman stands to give them out: “The 2016 award for the advancement of free software goes to a true champion of free software—Alexandre Oliva!” Stallman describes how Oliva has contributed to GNU and even liberated the Brazilian taxpayer by creating a free software program to replace the government’s proprietary one.

A lovely, big, shambling guy with a head of black curly hair tinged with gray comes up to receive the award, chuffing with emotion and possibly crying:

Thank you, this is such a wonderful dream. Please do not wake me up. Are you sure it’s not Moonlight or Miss Colombia or Philippines? This is too good to be true!

I first met GNU in 1991. That was just before Linux was published. So I really met GNU, and I fell in love at first sight.

But five years later, I met Richard, and that defined the rest of my life. It was so inspiring to hear that technology was not just “nology.” There was an ethical aspect, and there were social aspects. They were a lot more important than everything I learned before. So I sort of didn’t have a choice.

He pauses here, looking down at the floor to gather his words. “I learned that I would have to work on it from that point on. And I did. But I was never sure what I was doing was right because”—here he sighs heavily, not once but twice—“I’m so insecure, I’m so …,” he trails off. “This award means a lot to me because it tells me that something I was doing was right.”

He pronounces the last word in such a mild yet emotional tone that the crowd stands up in an extended ovation that is so warm it makes my heart feel three times bigger.

When the awards are over and everyone is chatting and gathering their stuff to leave, Stallman calls out, “Look at GNU.org/help to see a list of various kinds of work we need people to do! … There will be something there for everyone’s capacity.”

Pros, Cons, and Disobedience Awards

Richard Stallman’s Free Software Foundation is one moderately sized nonprofit organization struggling to promote the cause of free software and develop hacker tools. It is closely connected to the hacker scene and has loose affiliations with academia. As larger academic institutions like Harvard and MIT jump into the mix, embracing the hacker ethos and setting up programs for hacker-themed work, three things become apparent.

First, these institutions’ very interest in the phenomenon underscores the rising significance of hacking.

Second, these institutions are mainstreaming hacking by raising elite and popular awareness of it and by leveraging new resources and respectability for hacker experiments. You know cultural mainstreaming is going on when the Chaos Computer Club is lauded in Bloomberg News as a new hero that the US Democratic Party might wish it had on its block:

The Hackers Russia-Proofing Germany’s Elections: The Chaos Computer Club, a multigenerational army of activists, has made the country’s democracy a lot tougher to undermine.

… The loose confederation of about 5,500 hackers isn’t a bunch of bored teens in it for the lulz. Its 29 local chapters are stocked with professionals who run security for banks, head encryption startups, and advise policymakers. The group publishes an occasional magazine, produces a monthly talk radio show, and throws the occasional party, too.

All this has made CCC into something that sounds alien to American ears: a popular, powerful, tech-focused watchdog group.16

And you know there is mainstreaming going on when MIT’s Media Lab creates a Disobedience Award. This was one of Joi Ito’s innovations a number of years after taking over the job of director there. Carrying a cash prize of $250,000 and funded by Reid Hoffman, cofounder of LinkedIn, the Disobedience Award is being offered for the first time in 2017. It will be handed out for “responsible disobedience” across any discipline, including scientific research, civil rights, freedom of speech, human rights, and the freedom to innovate.17 In a promotional video for the award, Reid Hoffman says, “My hope is that the prize helps us understand the way that we make progress as a society and as humanity is by recognizing the right heroes who take personal risks, and sometimes that risk is a form of disobedience to help us evolve as humanity.” Martha Minow, dean of Harvard Law School,18 says in the video, “Social movements, political movements, legislation, art, education, all contribute to changes in human consciousness. That’s what produces the conception of rights, and it’s in the light of demand for rights that rights become real.” Martin Luther King Jr. is quoted. Even Malala Yousafzai says something. Joi Ito summarizes, “Questioning authority and thinking for yourself is an essential component of science, of civil rights, of society. At some level, civil disobedience is at the root of a lot of … creativity.”

Ito himself has the cherished identity of a “creative” and an “innovator” in the tech world, and he is in many ways the ultimate mainstreaming and crossover figure between the various tech subcultures, hacking included. He was an early investor in Flickr, Kickstarter, Twitter, and a host of other start-ups, which made him wealthy. He obtained his education in unorthodox ways, pursuing undergraduate degrees in computing and physics but finishing neither19 and taking online courses instead from the New School for Social Research.20 As a young man, he worked as a disc jockey in the alternative scene in Chicago and started a nightclub in Japan, where he helped to introduce rave culture. Timothy Leary “adopted” him as a godson. He worked for a time in traditional media before pivoting to start the first commercial internet service provider in Japan. He became a tech blogger and was on the early editorial mastheads of both Mondo 2000 and Wired magazine.

Associated with the “free culture movement,” which advocates against strict copyright laws and for the free exchange and remixing of created material, he became chairman of Creative Commons, where he served for more than five years. He has sat on the boards of ICANN, the Mozilla Foundation, the Open Source Initiative, and Sony and, more recently, the Knight Foundation, the MacArthur Foundation, and The New York Times Company.21 He has spoken at the World Economic Forum, lunched privately with President Obama, visited Tunisia weeks after the Arab Spring revolution, and gone to the Vatican to talk with Catholic clergy about ethics. He is like the Forrest Gump of tech in that he seems to show up everywhere. Foreign Policy magazine named him one of the top global thinkers: he told them the “best idea” was “users controlling their own data.”22 Soon after coming to MIT, he introduced mindfulness meditation training to the Media Lab.23 He may be hacking the institution itself.

Not to get carried away, I remind myself that over the last few decades universities have for the most part become, and resiliently remain, corporate-style enterprises. When I call Ito’s office to make an appointment to speak to him, I am told that he is booked up for March and April before he leaves for paternity leave. He might be willing to answer some questions by email, but his (very nice) assistant has to speak with the Communications Department first. In the twenty-first-century university, this is not surprising. Even regional universities have Communications Departments that intercede to protect the corporate reputation, enforce the branding, reassure the sponsors, and vet communications products.24

There may be a Media Lab Disobedience Award, but no university has yet stepped forward to become the public custodian for the complete (published and unpublished) Snowden archive. Some academics have looked into it, but they have not taken on the responsibility: it’s hard enough to get their institutions to confer an honorary doctorate on Snowden. Snowden himself has suggested that a certain unnamed public university with a center that has the technical capability to protect the archive (perhaps the University of Toronto’s Citizen Lab?) had been asked by a certain unnamed news organization (perhaps Laura Poitras and Glenn Greenwald’s The Intercept?) to become the repository for the documents. “And,” said Snowden, “they said, ‘Whoa!! That is too hot for us!’”25 In a similar vein, the Harvard Kennedy School granted Chelsea Manning a fellowship when she was released from prison but rescinded it when CIA director Mike Pompeo canceled an appearance there and sent a scathing letter to the administration stating it was “shameful of Harvard to put its stamp of approval on her actions.”26

And here is the third thing that is apparent in the increasing institutional embrace of hacking: universities’ agendas do not always align with hackers’. Harry Halpin has said a problem arises when academics become the gatekeepers of what gets pursued. Large project funding is usually funneled through universities if it is not purely commercial, and only academics are eligible to receive it. Other hacker concerns are that academia itself has an increasingly extractive business model. Corporate sponsors may exploit the university as a public institution, and well-paid professors may exploit hackers as employees and contributors. Conflicts of interest may arise when ideas are commercialized. The very platform billionaires who have skimmed the wealth out of the digital economy are often the ones giving money to universities to fix the system. Large, philanthropic funding depends on the whims and biases of the donor class and is not a deliberative and systemic response by society. Given the profound impact that digital technology and platforms will have on all of our futures, do we want to continue with such an approach?

Take one complicated example. Reid Hoffman, Pierre Omidyar, the Knight Foundation, and the Hewlett Foundation have recently donated $27 million to set up the Ethics and Governance of Artificial Intelligence Initiative with the MIT Media Lab and the Harvard Berkman Klein Center for Internet & Society. It is “a global initiative to advance AI research for the public good.” Part of its purpose is to put money into the hands of universities so that they can look at these problems in a more disinterested way than the tech industry, which is running a similar but private initiative of its own called OpenAI.27

This is another partnership between Hoffman and Ito, and it is interesting to watch the many videos that the two fellow venture capitalists, friends, and “thought partners” have done separately and together. They talk about steering toward a digital utopia instead of a dystopia. And yet there are contradictions, even a certain tone deafness.

In one 2016 video, they agree that to adapt to the accelerating change and unpredictability of the digital age, individuals, organizations, and businesses need to embrace “risk over safety.” To illustrate, they launch into an enthusiastic discussion of their venture capitalist approach in which they make “a set of bets” with a fixed downside (a fraction of your capital—say, $5 to $10 million you can afford to lose), and “the game” is to find one of those “unicorns” like Uber or Airbnb that with network effects will take off in a “superlinear” curve and make you “a massive winner.”28
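To make the arithmetic of that “game” concrete, here is a small worked sketch, with invented numbers rather than figures taken from Hoffman or Ito, of how a portfolio of capped-downside bets can still yield a “massive winner”:

```python
# Illustrative only: a stylized venture portfolio with a fixed downside per bet.
# All numbers are invented for the example.
bets = 20
stake = 5_000_000                      # the capped loss per bet, the money "you can afford to lose"
outcomes = [0] * (bets - 1) + [100]    # nineteen write-offs and one 100x "unicorn"

invested = bets * stake
returned = sum(multiple * stake for multiple in outcomes)

print(f"invested:  ${invested:,}")                      # $100,000,000
print(f"returned:  ${returned:,}")                      # $500,000,000
print(f"portfolio: {returned / invested:.1f}x overall") # 5.0x, driven entirely by one bet
```

The single superlinear winner dominates the whole portfolio, which is the logic Hoffman and Ito are describing.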

In the video, they don’t talk about the social distribution of risk in this kind of job-killing, monopoly-platform, casino economy. They do not talk about the risk of systemic collapse. They seem not to grasp that, for the listener, it is not in the least edifying to hear that cocky traders in unicorns have taken over the economy and that they are comfortable embracing risk over safety.

Toward the end of this video, Ito does say, “We have growth in nature,” and it is usually controlled, but if it gets out of balance, we call it a tumor. We have certain “capitalist-DNA based tumors,” he says, that need to be addressed but without eliminating capitalism itself.29

His own biases as a venture capitalist aside, how could Ito talk about the risk of systemic collapse or eliminating capitalism when the Media Lab is primarily funded by a consortium of ninety capitalist corporations?30 Corporations that pay membership fees of $250,000 a year that entitle them to share in any commercialization of Media Lab research?31

MIT’s Media Lab

On a dreary day, I walk into the place itself. The Media Lab’s silver-skinned atrium is six stories high, and inside it looks like an early iPod, with the kind of white, minimalist design Steve Jobs perfected for Apple. Covering one wall is a large triptych. It’s a panoramic photograph of Marvin Minsky’s home. A founder of MIT’s Artificial Intelligence Lab (not to be confused with the Media Lab, which opened a decade and a half later, in 1985), Minsky was known for fostering hackerism at MIT in any way he could.32 There has recently been a memorial celebration of his life and career at the institution, and the triptych is a temporary exhibit. In stark contrast to the minimalism of the Media Lab building, his living space, displayed here, is a riot of color and curious objects.

Wired writer Steven Levy once described the MIT Media Lab as “the smarty pants citadel of digital creativity.”33 The names of the Media Lab “groups”—Affective Computing, Civic Media, Human Dynamics, Lifelong Kindergarten, Scalable Cooperation, Social Machines, and Viral Communications—are displayed on a touch-screen directory beside curvy branding symbols, each of which is meant to represent the letters “ML” in a distinctive way. As I ride a glass-walled elevator up to the top floor, I catch glimpses of creative behavior on passing floors. The word media in Media Lab’s name stands for “stuff,” and lab stands for the interdisciplinary approach the place takes toward engineering problems. Its website states: “We are an antidisciplinary research lab working to invent the future of #” with a rotating stream of words that float into the space following the hashtag and decorously disappear—“engineering,” “hacking,” “extended intelligence,” “trust,” “wellbeing,” “crypto currency,” “consumer electronics,” “3d printing,” “privacy,” “open source,” “perception,” “art,” “space,” “agriculture.”34

I’ve come to observe an event hosted by the director of the Media Lab’s Human Dynamics Group and an eminent name in engineering, Sandy Pentland.35 One of his collaborations, with John Clippinger, is called the Institute for Innovation & Data Driven Design (ID3). ID3 is a nonprofit organization and its website states it was “formed to develop and field test legal and software trust frameworks for distributed, self-signing digital assets, currencies, and data-driven services, infrastructures, and enterprises.”36 Clippinger, a research scientist at the MIT Media Lab and former senior fellow at Harvard’s Berkman Center for Internet & Society (now called the Berkman Klein Center for Internet & Society), is also a veteran of the AI Lab (he was a graduate student there in the early 1970s) and of the student movement of the 1960s and 1970s.

Clippinger’s online descriptions of the ID3 project sound very twenty-first-century hackerish and quite similar to Tim Berners-Lee’s “Solid” project over at CSAIL. Like Solid, ID3 aims to build a new “redecentralized” internet at the applications layer, on top of the physical infrastructure of the existing internet. Clippinger has some intriguing things to say about the way law and governance will change in this new ecosystem. Conventional law and policy-making, Clippinger says, are

too ham-fisted, slow moving, impractical, and unenforceable to address the robust needs of commerce and social life on open networks. In a sense, law itself must be re-conceptualized if it is to function well in networked environments. Now is the time to engineer a great leap forward to digital, network-native forms of law, where rule of law derives from the collective sentiments of a given community or network of users and functions in a more algorithmic, and self-executing way.37

Clippinger suggests that the ID3 platform will be able to handle user ambitions from the smallest to the largest—say, from neighborhood carpooling to social media apps to creating job markets, distributing basic necessities, starting commercial enterprises, and even establishing new kinds of financial institutions. It will “enable sustainable, bottom-up forms of governance to take root and grow.” Clippinger posits that

Ever since Hobbes proposed the State as the only viable alternative to the dread state of nature, citizens have entered into a notional “social contract” with “the Leviathan” to protect their safety and basic rights. But what if networked technologies using the Social Stack could enable individuals to negotiate a very different sort of social contract (or contracts)? What if digital systems enabled people to band together into quasi-autonomous governance units for mutual protection and provisioning without resorting to government while reaping superior forms of services and protection?38

Elsewhere, John Clippinger has enthusiastically embraced the writing of the great American jurist Oliver Wendell Holmes Jr. for his description of the evolutionary nature of the common law. “So it didn’t have a fixed, logical form,” Clippinger has reflected, “but it was really a form of how you create social cohesion, expression, and ordering. … This notion of a living law … decentralized, distributed, reinventing itself, adapting to its local terms … how do you design systems like that?”39 Clippinger is aware, no doubt, that the common law, like computer technology, is not a magical medium transforming human inputs. The value of cross-disciplinary work, perhaps, is that the poetry of one discipline inspires another.
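To give a flavor of what “algorithmic” and “self-executing” rules might mean in practice, here is a minimal, purely hypothetical sketch (not ID3’s actual software, whose internals are not described here) of a community rule, a neighborhood carpool credit system, encoded so that the code itself enforces it:

```python
from dataclasses import dataclass, field

@dataclass
class CarpoolRules:
    """Toy example of a 'self-executing' community rule: offering rides
    earns credits, and booking a ride automatically checks and debits them.
    Hypothetical illustration only, not drawn from ID3."""
    credits: dict = field(default_factory=dict)

    def contribute_ride(self, member: str) -> None:
        # The rule rewards contribution without any administrator's discretion.
        self.credits[member] = self.credits.get(member, 0) + 1

    def book_ride(self, member: str) -> bool:
        # The rule enforces itself: no credit, no ride.
        if self.credits.get(member, 0) < 1:
            return False
        self.credits[member] -= 1
        return True

pool = CarpoolRules()
pool.contribute_ride("alice")
print(pool.book_ride("alice"))  # True: a credit was earned, then spent
print(pool.book_ride("bob"))    # False: the encoded rule denies the request
```

In Clippinger’s vision such rules would presumably live on a shared, decentralized platform rather than in a single script, but the underlying principle is the one sketched here: the rule and its enforcement are the same piece of code.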

Today’s event at the Media Lab is not about a vision of digital common law and distributed governance, however. It is touting a vision of neoliberal governance that is still being proselytized in academic circles. The speaker, Parag Khanna, is a senior research fellow in the Centre on Asia and Globalisation at the Lee Kuan Yew School of Public Policy at the National University of Singapore. In his new book, Connectography: Mapping the Future of Global Civilization, he argues that globalized supply-chain networks define the current world order. “Connectivity, not geography, is our destiny,” the book’s blurb states. The current race between countries is to see who can connect to the most markets, the blurb continues, and the United States can “regain ground” by “fusing with its neighbors into a super-continental North American Union of shared resources.” “The world’s ballooning financial assets are being wisely invested into building an inclusive global society. Beneath the chaos of a world that appears to be falling apart is a new foundation of connectivity pulling it together.”

In the publicity material for the event, Khanna comes across as a neoliberal apologist, a young man making his career by doubling down on prescriptions for creative destruction wrought by globalized, deregulated networks, even as populist opposition to this agenda surges around the world.

The Wall Street Journal has called his book “a well-researched account of how companies are weaving ever more complicated supply chains that pull the world together even as they squeeze out inefficiencies.” According to the Journal, Khanna “has succeeded in demonstrating that the forces of globalization are winning.”

“How to organize or self-organize the world?” he asks his audience of young MIT students. We are moving, he says, from the Westphalian world of nation-states begun in 1648 to the supply-chain world begun in 1989. Countries are building so much infrastructure across borders that they have extended their sovereignty.

He advances his PowerPoint presentation to the next slide. First, he says, we had “End of Cold War + expansion of capital markets + infrastructure build out + communication revolution = total globalization.”

He clicks to the next slide: And now, in the supply-chain world, we have “Free markets + competitive advantage + division of labor + lots of Big Data = perfect capitalism.”

Perfect capitalism, he says, means (he clicks to the final slide, and here he abandons the math signs) “perpetual optimization of land, labor, capital; inefficiency is the enemy; relentless competition.”

But in neither his talk nor his book, which I read later, does Khanna probe deeply into whether globalized, digitized capitalism has actually yielded “extended sovereignty” for nations or resulted in competition, prosperity, fair distribution of rewards, or well-being.

He concedes that development will not be even. It will adhere to the strands of networks, and the parts in between are becoming or will remain hinterlands. In the wealthier countries of the supply-chain world, more people are employed in the most lucrative digital tech and finance sectors, for example. But it is hard to quantify the wealth that is being created, Khanna admits. For example, the $6 trillion to $7 trillion of trade in services is difficult to attribute to specific countries because organizations are constantly changing their structure (and their tax strategies, he might have added). Spotify payments by Swedes do not show up in Sweden’s GDP, he says. He has an economist friend who joked that the amount of trade in services is so hard to quantify, you might be better off studying quantum physics.

Brushing that aside, and summing up his talk, he asks, “How do we leapfrog to a better way of governance?” We need to move beyond democratic populism to technocratic governance, he argues—beyond antiglobalization, anticapitalism, and antitech to a global utilitarianism. From “bad inequality to good inequality.”

To me, the maps of global networks Khanna is showing us look like the apogee of neoliberal complexity, with measurements of success abstracted from people’s actual well-being—a global engine of extractive capitalism and an intensifier of concentrations of power. I’m with the populists who think that while technocrats like Obama and the Clintons were telling us to surrender to the creative destruction of global networks (the global investment treaties, deregulated finance, privatization, unaccountable supranational governance structures, and platform monopolies they were imposing on us), they were actually executing a massive redistribution of risk and wealth. Since the 2008 economic meltdown, I want to tell Khanna, the trajectories of inequality have continued. In the United States in 2017, the top 10 percent of Americans own 77 percent of the country’s wealth—a higher percentage than they owned in the Gilded Age. The twenty richest Americans own more than the bottom half of the population (some 152 million people).40 Meanwhile, the US is innovating a new kind of poverty in which the costs of housing, education, and healthcare have risen so astronomically that they are now beyond the means of most Americans, who live constantly on the brink of ruin.41

I feel my anger rising as Khanna omits talking about the interstices in the beautiful, networked, technocratic world he is describing—the blighted towns destroyed by deindustrialization, the real unemployment and precarious employment figures, the lack of a living wage, the evictions, the rising homelessness, the 76 percent of Americans who in 2014 had no savings whatsoever,42 and the 42.5 percent of Americans who in 2016 lived on less than double the poverty line.43 During the 2016 presidential election campaign, there was talk of rising mortality rates due to suicide and alcohol and drug abuse, the “diseases of despair.” Yes, there is despair, but there is also rage.

Is Khanna ignoring the fact that after forty years of globalization, centralization, deregulation, and technocratic governance—the neoliberal agenda he seems to be promoting—people have realized they’re getting screwed, and they are withdrawing their consent?

When I pose this question to him immediately after his talk, he concedes, “There is cynicism about the positive power of tech and connectivity because it is undermining local, rooted community. But if there had been a federal jobs retraining program in the US, we would not have ended up with Trump.”

As I listen to Khanna, I am reminded of what Wau Holland once told his young audience at the international hackers’ meeting at the Paradiso: “Everybody must face the question ‘What am I doing?’”

Months after leaving Boston, I would think of this Wauism again when I came across a new blog post by Joi Ito. Either his thoughts about the digital economy had evolved since his earlier video talks, or he had decided to be more frank about them. In this post, he was beginning to sound downright disobedient to the corporate culture in which he and the Media Lab were embedded. He sounded like a hacker taking on the Khannas and Silicon Valley unicorn hunters of the world:

We live in a civilization in which the primary currencies are money and power where, more often than not, the goal is to accumulate both at the expense of society at large. This is a very simple and fragile system compared to the Earth’s ecosystems, where myriads of “currencies” are exchanged among processes to create hugely complex systems of inputs and outputs with feedback systems that adapt and regulate stocks, flows, and connections. … [Today] values and complexity are focused more and more on prioritizing exponential financial growth, led by for-profit corporate entities that have gained autonomy, rights, power, and nearly unregulated societal influence.

The new species of Silicon Valley mega companies … are developed and run in great part by people who believe in a new religion, Singularity. … The notion of Singularity—that AI will supersede humans with its exponential growth44 … is a religion created by people who have the experience of using computation to solve problems heretofore considered impossibly complex for machines. They have found a perfect partner in digital computation, a … system of thinking and creating that is rapidly increasing in its ability to harness and process complexity, bestowing wealth and power on those who have mastered it. In Silicon Valley, the combination of groupthink and the financial success of this cult of technology has created a positive feedback system that has very little capacity for regulating through negative feedback.45

Those who think that “given enough power, the system will somehow figure out how to regulate itself, [that] the final outcome [will] be so complex that while we humans couldn’t understand it now, ‘it’ would understand and ‘solve’ itself,” are naïve, Ito concludes, since the more likely scenario is not a limitlessly ascending exponential curve but an “S” curve that ascends and then flattens or dives. “Most people outside the singularity bubble believe in S-curves,” Ito writes: “namely, that nature adapts and self-regulates and that even pandemics will run their course. Pandemics may cause an extinction event, but growth will slow and things will adapt.”46
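Ito’s distinction can be put in simple textbook terms. The sketch below, my own gloss rather than anything from Ito, compares an unbounded exponential with a logistic “S” curve that starts out looking identical but levels off at a carrying capacity:

```python
import math

def exponential(t: float, r: float = 0.5) -> float:
    """Unbounded exponential growth, x(t) = e^(r*t)."""
    return math.exp(r * t)

def logistic(t: float, r: float = 0.5, k: float = 100.0) -> float:
    """Logistic (S-curve) growth toward a carrying capacity k, starting at 1."""
    return k / (1.0 + (k - 1.0) * math.exp(-r * t))

# Early on the two curves are nearly indistinguishable; later the S curve
# saturates at k while the exponential keeps climbing without limit.
for t in range(0, 25, 4):
    print(f"t={t:2d}  exponential={exponential(t):12.1f}  logistic={logistic(t):6.1f}")
```

Which of these two shapes better describes the growth of AI is precisely where Ito parts company with the “singularity bubble.”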

Harvard and the Berkman Klein Center for Internet & Society

Harvard, like MIT, lies across the Charles River from Boston but farther afield. It is the site where the Congregationalists set up Newtowne, an early settlement of the Massachusetts Bay Colony. Not long afterward, they set up the University with the goal of educating their future leaders. The Education for Leadership Act of the General Court of Massachusetts appropriated funds for that purpose in 1636.

Harvard. Stolid and prim, its brick buildings replete with white dentilled cornices and multipaned windows, its freshman dormitories bearing names such as Wigglesworth and Pennypacker Hall, its signs telling the public to stay out of its buildings. Although there could be no more “establishment” a place (Harvard still turns out a ruling class), specters of insurrection waft close by in Cambridge Common, where George Washington once gathered his Revolutionary army.

Many young people want to get into Harvard because of the networks it can open up for their careers. (Parag Khanna spoke of global networks of professional and technocratic belonging, too, and of the professional “circuit” as a means of concentrating privilege and power.) Groups of college-age kids and their parents trail by and knot in the campus squares; groups from many different countries, including the Chinese parents, grandparents, and kids who file by in some of the longer trails of hopeful applicants. Over the last few decades, many universities have become globally competing research and credentialing businesses, and Harvard is one of the biggest.

Set against this background, the Berkman Center for Internet & Society at Harvard (“Klein” was added to the name in 2016) began in 1998 as something countercultural to the rest of the institution. In the best academic tradition, its founders wished it to be an importer of new, radical ideas already circulating outside the institution.

In 1999, a strategic planning committee met to consolidate the initiatives that the recently formed center wanted to pursue. The cofounders were Charles Nesson and a bright young law student, Jonathan Zittrain. Others involved included Larry Lessig, who came to take the Berkman chair and was planning his book Code and Other Laws of Cyberspace, and the ubiquitous John Perry Barlow, cypherpunk cum Berkman fellow traveler, and the Center’s first official fellow. Nesson recorded their aspirations to set up an open knowledge domain that would share with, rather than compete with, other universities. He also recorded the administration’s response. As a document that throws light on a seminal moment at the institution, it bears reading in its entirety:

Open Code / Open Content / Open Law: Building a Digital Commons in Cyberspace

Dear Colleagues,

The most important message I took from the May 20 [1999] strategic planning meeting was that the case has to be made for the importance of open code to a wide audience. [Richard Stallman might have preferred Nesson to have said “free software.”]48

I want to share the story of the Berkman Center’s own case to make within Harvard, and I hope others of you will share stories of your own.

Earlier this spring, the Berkman Center proposed the formation of a legally independent nonprofit entity—a consortium of educational centers to foster the development of open software, open research, and open content (<http://www.opencode.org>). The Provost of Harvard responded to this announcement with a note stating that the permission of Harvard’s President and Fellows would be required for the Berkman Center to sponsor the formation of such an entity “outside Harvard” (<http://www.opencode.org/faq/>).

This is, I believe, a request from the hierarchy of Harvard to be persuaded of the wisdom of the path we at the Berkman Center espouse. It is an opportunity for us to present our case for open code in an open way to the leadership of a great educational institution—an institution with a glorious past, a glowing present, and an uncertain future.

Even more important, it is an opportunity to explain, not only to the administrative hierarchy of Harvard, but also to others in similarly situated institutions and to the world at large, why openness in code, content, and law is essential to the future. It is an opportunity for us, in conjunction with other institutions, to attract and engage an international audience to consider the argument for openness, to deliberate in structured and moderated discussion, and to form rough consensus. …

Unlike the frontier Columbus opened when he discovered America, there are no pre-existing purple mountains and fruited plains in cyberspace. Cyberspace exists only as we build it, and how we build it is up to us.

So, the key strategic insights for me from our May 20 meeting relate to who we are and what we can do. We represent the integration of three important communities: coders, teachers, and lawyers. We have the capacity to challenge the boundaries of our separate cultures in service of an open cyber environment. We can combine our talents to design open architecture. We can, as coders, build it. We can, as teachers, fill it with open content. We can, as lawyers, defend it. …

We are making an argument for open information technology. We need to understand, articulate and project our argument. We need to explain the relationship of open code to freedom, justice, security, and education. We intend to initiate and foster a campaign for open IT that makes the issues of openness central to the institutional, local, national, and international politics of the future. We are building the environment in which we intend this argument to develop (<http://opencode.org/courseware>).

The Internet was born of public spirit out of government and education. It grew in the eighties as an open domain. In the nineties it was discovered by capital investors, who realized that investment in Internet produced exponential return. So began a still-growing rush of capital into the Internet that has produced an unprecedented growth of the proprietary domain. But there has been no balancing growth of the open domain. Rather, we must organize and build it. We need to convince our institutions—government, academic, philanthropic—that the creation of a substantial open domain serves their missions. …

Harvard, like other similarly situated institutions, faces three broad options: (1) Do nothing. Just keep going as we are, with pens and yellow pads; (2) Invest in helping teachers reach new audiences and teach in new ways; (3) Set up Harvard.com—commit to the commercial online education business. …

The model of university as producer of knowledge-as-product-for-sale is a closed one. Knowledge is treated as property to be copyrighted, patented, classified, licensed, and litigated. Under this closed model, creative work cannot progress without negotiations about license fees. … As faculty become work-for-hire, money becomes the currency of the campus, and legality the dominant feature of relationship.

Under this model, the nature of Harvard will change fundamentally—for the worse, I think. … The Berkman Center aspires to demonstrate a different model—open IT, we call it. We encourage cooperative work dedicated to the open domain … [and] in the public interest. Intellectual community and creative process is our product, knowledge the by-product. This approach galvanizes spirit and produces educational works of great distinction and wide public utility. …

But there are questions. In particular, can such a model be sustained by tuition and endowment?

Who will support IT?

Who will join our list? (<http://eon.law.harvard.edu/cgi-bin/opencode/join_in.cgi>)

Who will participate in our next lecture and discussion series? (<http://cyber.law.harvard.edu/online>)

Who will contribute talent? (<http://cyber.law.harvard.edu/people>)

Who will contribute funds? (<http://cyber.law.harvard.edu/sponsors.html>)

Who will work with me in a patent group to advance the open genome and defend open code? (<http://www.opencode.org>)

Who will work with Larry Lessig to found the Berkman Press?

Who will work with John Perry Barlow to develop open MP3?

Who will work with Eric Eldred to build a Copyright Commons? (<http://cyber.law.harvard.edu/cc>) …

Who will tell friends we need help?

We think we have a working business model. We service an open knowledge domain to an audience of customers we judge best able to contribute to it. That is and always has been Harvard’s mission and the mission of educational institutions in general. Open IT is a mission we hold in common with other great institutions, so let us join to build a magnificent common resource for us all.

Charles Nesson

aka eon49

Nesson and his colleagues succeeded in persuading their administration to let them set up an open educational software project they called H2O with other universities that might otherwise have been viewed as competitors. And the Berkman Center went on to become involved in almost every cutting-edge policy issue related to digital technology over the succeeding three decades. Ultimately, Nesson and his colleagues would lead the way to mainstreaming a “cutting-edge approach” to all things digital throughout Harvard.

The Berkman Center has always focused on technical as much as on legal solutions. The Center's stated mission is "to explore and understand cyberspace; to study its development, dynamics, norms, and standards," and its stated method is "to build out into cyberspace."50 Today there are many internet and society-themed research centers around the world. These include centers at Berkeley, Cambridge, the Indian Institute of Technology, Oxford, Stanford (set up by Lessig in 2000), St. Gallen (Switzerland), University of Toronto, and Yale. Recently, the Berkman Klein Center set up a global Network of Internet & Society Centers with other institutions.51

One Berkmanite has referred to the difficulty of keeping track of the “Berkmaniacal collection” of projects and activities the center undertakes each year.52 These have included projects on Creative Commons licensing, cyber security, democratic debate, digital finance, digital health, harmful speech online, identity management, internet filtering and tampering, internet governance, internet robustness, media law, and stopping malware, as well as court cases.53 And increasingly, there has been a focus on social science research.

Emergence

There is a dizzying number of interesting people to speak with at the Berkman Klein Center. However, the two I most want to see are Samer Hassan and Yochai Benkler. They have been thinking for a long time about how something like a hacking movement might succeed despite the odds against it. Maybe they can tell me how hackers and citizens might build democracy out into cyberspace despite the formidable obstacles arrayed against them.

Samer Hassan is from Spain. I arrange to meet him at the student cafeteria at around two o’clock, just as the crowds are thinning out. He arrives at the top of the escalator, forty-something, bearded, and with a straightforward manner about him. We sit at a long table in the dining hall and settle into a conversation that outlasts the waning afternoon light. Since coming to Berkman in September 2015, Hassan has been traveling back and forth between Cambridge and Madrid, where he is an associate professor in computer science. He comes to the work from a human rights and sociology background, having done his PhD thesis in social theory and complex systems. He works on decentralized collaboration, social movements, and collaborative communities. These are communities like Wikipedia (a much studied subject at Harvard), but he says that collaborative communities also encompass Hack Labs, which have been around for more than thirty years, and Fab Labs (fabrication labs that are open to nongeeks), a newer phenomenon that grew out of MIT.54 These networks, Hassan says, are huge. They are typical grassroots movements—open communities of people who are collaborating to create common goods, whether an encyclopedia or a community center, with an open-license approach (like Free Software’s GPL licenses or Creative Commons’s licenses). These initiatives can be online or offline.

The Madrid social center called Tabacalera, which opened in 2010,55 is one example. Spain has always had a tradition of social centers squatting in (as opposed to renting) their spaces. The name for this type of center is Centro Social Okupado Autogestionado (CSOA), meaning "occupied, self-managed social center." The tradition has existed for decades in Barcelona, but squatting social centers have been established in every big city in Spain. Madrid's Tabacalera is a huge municipal initiative of 9,000 square meters of space for community groups and social activity. It is autonomous and self-managed by people who live in the neighborhood and by contributors, not by the state. A lot of people from the Indignados, or 15M, movement helped set it up.

The tradition of these social centers connects directly to the anarchist movement in Spain. The Spanish, Hassan says, have much more affinity to social movements than they do to the idea of trying to reform the state. Until very recently, in fact, these social groups would not participate in electoral politics; they would not deal with the state but would organize outside the state.

When I ask Hassan what hacking means in this context, he says that hacking sits between reform and revolution. Hacking is a disruptive reform with revolutionary effects. It makes reformers happy because it reforms without breaking the system. It makes revolutionaries happy because it refuses the values of the system and triggers emergent effects that might ultimately break the system down. In complex systems analysis, a small change can trigger huge changes and emergent properties, so that the whole macro structure shifts. So, Hassan observes, you are actually hacking a "reform" in the hope that its emergent properties change the whole.

I ask him what emergent properties are. He replies with a phrase that sounds like a riddle: it's when the macro has a property that none of the micro parts possess, one that emerges only from their interactions. The beautiful shapes that a flock of starlings makes when the birds are on the wing are emergent patterns. The birds do not want to form this pattern or have this pattern in their minds, Hassan explains. When they fly together, they know only the micro interactions between them. You could take three rules—don't get too close, don't cross paths, and proceed forward—and the flock pattern will emerge but not as a preplanned, predestined, or imposed thing. It is completely bottom up. Emergence is always bottom up, he says.
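The point can be made concrete in a few dozen lines of code. What follows is a minimal, illustrative sketch of a boids-style simulation in Python, not anything Hassan showed me: each simulated bird applies only local versions of the three rules (steer away from birds that are too close, align your heading with nearby birds rather than cutting across their paths, and keep moving forward). All names and parameter values here are my own assumptions.

import math
import random

# One bird = [x position, y position, heading in radians].
N_BIRDS, SPEED, STEPS = 50, 0.5, 200
SEPARATION, ALIGNMENT = 1.0, 5.0  # interaction radii (arbitrary, illustrative)

random.seed(1)
birds = [[random.uniform(0, 20), random.uniform(0, 20),
          random.uniform(0, 2 * math.pi)] for _ in range(N_BIRDS)]

def step(flock):
    updated = []
    for x, y, heading in flock:
        # Rule 3: by default, keep moving forward along the current heading.
        dx, dy = math.cos(heading), math.sin(heading)
        for ox, oy, oheading in flock:
            dist = math.hypot(ox - x, oy - y)
            if dist == 0:                  # skip self (and exact overlaps)
                continue
            if dist < SEPARATION:          # Rule 1: don't get too close
                dx -= (ox - x) / dist
                dy -= (oy - y) / dist
            elif dist < ALIGNMENT:         # Rule 2: don't cross paths; align instead
                dx += 0.3 * math.cos(oheading)
                dy += 0.3 * math.sin(oheading)
        new_heading = math.atan2(dy, dx)
        updated.append([x + SPEED * math.cos(new_heading),
                        y + SPEED * math.sin(new_heading),
                        new_heading])
    return updated

for _ in range(STEPS):
    birds = step(birds)
# No bird "wants" the flock; shared headings and spacing simply emerge.

Run for a few hundred steps, nearby birds end up spaced apart and heading in roughly the same direction: a flock-level pattern that appears in none of the individual rules.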

I am reminded that Chaos Computer Club member Rop Gonggrijp once said that Wau Holland “felt chaos theory offered the best explanation for how the world actually worked.” The club, Gonggrijp said, was about “adapting to a world which is (and always has been) much more chaotic and non-deterministic than is often believed.”56

Hassan asks me if I know the book The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary. I do. Eric Raymond wrote the book to advocate a free software approach to corporations, recommending that they set aside the ethics of free software and adopt its methods, rebadged as open source, out of self-interest. Hassan's point, I suppose, is that social systems can shift rapidly to new means of production with no one planning or predicting it. It is more complicated than just the political question of rallying a critical mass of people to a certain way of doing things. The process by which free and open-source software became the bones of much of the present digital world was an emergent process, not a purely political one.

At the Complutense University of Madrid, where Hassan is an associate professor in the Computer Science Department, his favorite course to teach is cyberethics. In it, he engages student engineers in a participatory way about the social implications of tech—privacy, censorship, free software, and the commons. “It’s actually revelatory for many,” he says. At many universities, including MIT, this kind of course is optional, but his is compulsory. “If you want a computer science degree, then you have to take this course. When you program an app, you have to think about its social implications.”

After nearly three hours, Hassan looks at his cell phone for an important message. “Aargh! Yochai was supposed to get back to me, and now it’s been three days.”

I ask him what Yochai Benkler is like to work with.

“I think he’s adorable,” he says.

Enlivening a Moral Imagination

The entrance to Hauser Hall on the campus of Harvard Law School is a beautiful update on Craftsman functionality and design. A single full-length portrait of US Supreme Court justice Oliver Wendell Holmes Jr., proponent of the common law, hangs at the end of the passage to the classrooms. Yochai Benkler has his office in this building on the third floor, where I have a scheduled meeting with him in half an hour.

Through the power of the free internet, I’ve watched several important lectures Benkler has given. I’ve also ordered his books and waited for them to arrive at the Harvard bookstore (although I later discovered they could be freely accessed at http://www.benkler.org).

Benkler's first book, The Wealth of Networks: How Social Production Transforms Markets and Freedom, published in 2006, made his reputation in academic, business, and governance circles, and now he is a busy Harvard professor. The Wealth of Networks argued that "large-scale cooperation, such as free and open-source software or Wikipedia, was not a bizarre side story of the Net, but a core vector through which the transition to a networked society and economy was happening. … Online cooperation was happening … it was a stable feature of this new environment … and it was central to the future of networked society."57 These days, Benkler might be the first to agree that business followed his insights but not in the way he envisioned. It took up his idea of large-scale, peer-based collaborative production in a big way—as a new engine of value extraction. No doubt business was delighted to learn that people will work collaboratively for free. It realized it could base whole business models on the premise, profiting from both people's labor and their personal information. The Wealth of Networks garnered awards and a lot of recognition for Benkler, but his hope that society would embrace the full promise of commons-based peer production (a term he coined) to achieve democratic and humane ends was not realized.

Benkler’s next book, The Penguin and the Leviathan: How Cooperation Triumphs over Self-Interest, which I read last night in preparation for our meeting today, was published in 2011. It reads like a popular business book and seems to be an attempt to convince the business community that adopting his moral propositions along with the rest of his message would be in their self-interest, too.

What seems most revealing of Benkler's thinking, however, is a talk I found online titled "The Idea of the Commons and the Future of Capitalism," which he gave for the Creative Commons Global Summit in 2015.58 In the video recording of the talk, he speaks of labor and sharply growing income inequality. He tells his audience that contingent work is increasingly embodied in technology, now sometimes called "the sharing economy." But this is not sharing, he says: this is extraction, and we need to insist that sharing is sharing and contingent work under extractive conditions is contingent work under extractive conditions. "And don't use us to legitimate you."

Records, Benkler says, show that income inequality in the United States was flat during the 1950s, 1960s, and 1970s under both the Democrats and Republicans and then shot up after that regardless of who was in power. By the mid-1970s, the word solidarity, which had been ubiquitous from World War I to the late 1960s, lost its hold on people’s imaginations, and the neoliberal ideas of incentives and rationality took hold. In a world that was uncertain and complex, neoliberalism told us there was no way to plan. The most rational way to view the world was as a collection of individuals—strangers acting on each other in their own self-interest. Collective models had failed. We needed to free markets from collective impositions. Property was the core economic engine.

It is rare to hear a law professor talk about economic history in such an expansive way and even rarer to hear one speak about the state of the social imagination at different times in history. As I sit in the entrance of Hauser Hall watching parts of the video again, I find myself looking forward to asking him more about this.

In the video, Benkler continues: “Then the theoretically impossible started to happen,” he says—free and open-source software, the Creative Commons, and Firefox began to appear. They showed that people could govern themselves with collective models and could reach a rough consensus. It was not the state agencies that broke the Microsoft monopoly; it was free software. “Whoever would have predicted that a bunch of free software developers would beat Microsoft in its core web server function?” Benkler asks. And yet free and open-source software “moves”; it grows through boom and bust. It happens not at the periphery but at the very core of innovation and growth.

According to Benkler, the 1980s and 1990s saw the implementation of neoliberal policies pushing everything into market- and price-based approaches, even inside companies and in nonprofit organizations. “What we are seeing now,” he says, “is a re-emergence of a networked information society where, for the first time since the Industrial Revolution, the most important inputs into the core economic activities of the most advanced economies are widely distributed in the population.” Yes, there are new efforts to harness these inputs for more market-oriented and extractive ends, but “a solution space” is emerging for an entire range of problems—the opportunity to build social production into the general system.

One result is that “the commons” has started to emerge as an organizing principle in place of the older concept of the “public good.” Why? Benkler says the commons is appealing because it is a reimagining, not a complete rejection, of the property model. You keep your “self”—your individual integrity, your sense of being able to be both part of and apart from the collective—and at the same time you recognize that creativity, freedom of speech, and thought all depend on a robust public domain. It is not that there is property and the little bit that is left over is the commons. Rather, the commons are integral to all market societies, whether the roads, the navigable waters, or the basis of knowledge (or the internet, I think Benkler might have added—the “connectivity commons”).

Benkler’s message is that we cannot exist in complex society without commons because uncertainty and complexity mean that property, like central planning, also fails. And individual incentives also are imperfect because when you have to standardize every little bit so that you can price it, you lose a lot of information. By contrast, the public domain and commons-based exploration have allowed diverse people to use diverse resources to apply diverse knowledge and to experiment. They show that our innovation, growth, and creativity are an evolutionary process, not something that can be managed from the start, and a process that requires enormous experimentation. Without the commons, modern market society would atrophy. The work can be done individually or collaboratively, commercially or noncommercially. A diversity of vehicles (coops, markets, or purpose built) can be used, but what is critical is that the commons locates the authority to act where we can actually act—in our own bodies with our own social relations. Where the law locates it elsewhere, Benkler says, the commons claims authority back:

So we’re moving to a concept of cooperative human systems … [not only in theory] but also in terms of just building systems. We’re seeing the emergence of a science of cooperation which has both basic science characteristics and design characteristics to build functioning cooperative systems. We are at the very early part of this moment. It’s just the moment when the paradigm shift can even be conceived, but that is the science of the future, and that is the organization design and platform design of the future. That’s where we’re going … the recreation of future market society.

Capping this bold assertion, Benkler summarizes his argument. The binary pair of property and markets versus state planning does not exhaust the means of achieving growth and material well-being. Cooperative social action in the commons can also support growth and be more efficient and sustainable. We can turn away from a governance of pure delegation to citizenship exercised as peer governance and a progressivism aware of the fallibility of the state. “We’re … part of an intellectual moment in the history of early twenty-first-century capitalism,” Benkler tells his audience. “We’re standing at the end of forty years of the dominance of an idea that has underwritten … an extractive model of capitalism. It’s not the only model. There is another model, and we [the Creative Commons movement] represent its very core.”

As I climb the stairs of Hauser Hall to the third floor, I wonder what the man himself will be like and how he views his own work now. Yochai Benkler’s assistant ushers me into his office with a glass of water she has poured for me. Benkler shakes my hand. He has a full salt-and-pepper beard, wears gold wire glasses, and is dressed as I have seen him in his videos in jeans and a white shirt with the cuffs rolled up. He is fit in his early fifties, a person whose most immediately striking trait is that he is comfortable in his skin. He has a gentle kind of gravitas.

I have sent him some questions in advance of this meeting and asked him to talk to me about what interests him most. He is taking a chance on whether meeting me will be time wasted, and therein lies his kindness. We have half an hour scheduled, so once he has greeted me warmly, he starts right in. His desk (it’s not clear to me whether this is where he actually works) is a table with chairs around it for others to sit. There are piles of papers.

He says that of all the issues I’ve flagged, the biggest is whether people can make a living from collaborative production and how you get them to come together to do so. It was one thing to suggest they could in 2006 when he wrote his first book, another thing to write what he did in The Penguin and the Leviathan in 2010, and another thing again in 2017 now that things are much changed. He is pessimistic. A decade ago, there were over a million free software developers. They made their stuff, they sold and serviced it, and they had some success. If there was one place where worker-owned coops could take off, it should be in free software services and development. But there is not one—not even a partnership. Consultancy firms, law firms, and accountancy firms operate as partnerships, and even partnerships are absent in the digital sector.

The starkest disappointment for Benkler over the past decades has been the failure of community networks—the failure of the people who built or intended to build the physical infrastructure of the internet and make it publicly owned and available to everyone. This was in 2001 and 2002, even before WiFi was established as the standard means to connect to the internet. It was not a technical issue that prevented the physical infrastructure of the internet from being free and public. It was people’s habits of mind: they had unfounded fears about privacy and security, and so they let the private sector take it over. When people have the tech and the will to do it and yet don’t, it’s sobering. Guifi in Catalonia is one example of a free public infrastructure, but these are rare now.

I mention that I was speaking with Samer Hassan the day before and that, knowing what he did about complex systems theory, Hassan was optimistic there might be certain innovations that could trigger rapid, major change.

Benkler says he is not optimistic. He laughs. He has a more dour view, he says. Have I looked at the history of real-world coops? Benkler has studied this history, he tells me, and he cannot find any reason why some succeed and others do not. It seems that whether coops are formed and sustained or not depends on locally and culturally rooted factors. If there was a coop of dairy farmers in Wisconsin in the 1940s, then that is still the way farmers produce milk there now. If you take the same coop and try to move it to New Hampshire, then it falls apart. It is a question of socializing people’s habits of mind around the idea.

Yes, social systems are complex, and things can shift very quickly. We saw it with free software and Wikipedia. Benkler has been around this block for a long time, he says. He remembers when economists mocked him for asserting that unlicensed wireless was as important as licensed wireless, and yet now it is the primary carrier of data. When you have commons-based peer production yielding substantial pieces of infrastructure and informational goods, he says, first it is laughable, then it is a threat (they said it was Marxist or would destroy learning), and then it just becomes normal. He firmly believes you can move through this cycle, at least with discrete innovations.

But he says he does not believe in magic, in the sense that systemic change could take place suddenly at any time. Histories have their own dynamics. What he sees now is more centralization, more appropriation, and the state back in full force in surveillance and social control. He sees Uber and Airbnb turning collaborative potential into a system for extracting value from labor and for regulatory avoidance. It will require some external shock or strong act of will to push us back in a liberatory direction.

But if some universal platform arose to replace Google and Uber, wouldn't that be revolutionary? I ask. (I don't mention Tim Berners-Lee's "Solid" project or John Clippinger's ID3 project, but this is the kind of thing I have in mind.)

That is something that Samer Hassan is working on right now, Benkler says, and he agrees it could change things a lot if it were adopted. It is a general-purpose, blockchain-enabled system for sharing labor and workflow: the components of a system people could use to make a living.59 "There is a difference between despair, what you can imagine, and what you can predict absent some major disruption," he says, smiling a little ruefully.

I ask him if the future will depend on the idea of a guaranteed basic annual income.

Yes, he replies, but the devil is in the details when dealing with an idea advocated both by those on the left and by followers of Milton Friedman (the father of neoliberal economics). It would have to be large enough to do away with capitalism. Where would the budget come from? The poverty line in the United States is about $11,000; multiply that by about 330 million people and you are looking at $3.5 to $4 trillion a year. Eliminating all entitlement and defense spending still would not be enough to pay for it. The fiscal side is genuinely challenging. You might ask whether you’re better off with much more targeted policies. But it is the biggest idea right now and bears some careful work.
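His back-of-envelope arithmetic, restated with the figures he cites, is simply:

\[
\$11{,}000 \text{ per person} \times 330{,}000{,}000 \text{ people} \approx \$3.6 \text{ trillion per year}
\]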

(I reflect that even a Marxist like David Harvey is skeptical of guaranteed annual income schemes that are not accompanied by a change in social and political relations on a deeper level. That is what Silicon Valley wants, Harvey has said—death by Netflix!60 Other commentators, such as political scientist Sheldon Wolin, have observed that without some controls over the cost of basic goods like housing, education, and healthcare, capitalists will always find a way to capture the public money put into guaranteed income schemes by hiking up prices.)61

Benkler is an Israeli American who worked on a kibbutz in Israel in his youth. When I ask him whether the experience has influenced any of his views, he says that he tries to base his work not on his anecdotal knowledge but on proper research sources.

It strikes me that one must have a better idea of human nature after working in a commune, but I leave it at that. One of his book acknowledgments ends with a note to his partner: “Finally, to my best friend and tag-team partner in this tussle we call life … with whom I have shared nicely more or less everything since we were barely adults.”62

Many kibbutzim today, Benkler says, are just group ownership in a classical capitalist mode. They employ various kinds of labor and lead a nice capitalist life in what you might call a club, if you wanted to be uncharitable. His father was in Israel’s bus drivers’ coop, he says. For many years, the only way to become a member was to be a son. Membership was inherited. Otherwise, you were a wage employee. There are people who say that the famous Mondragon cooperative in Spain has different classes of membership, he tells me. Then you have consumer coops and credit unions, and these are not so different from a customer loyalty club. So these organizational forms relieve you from some of the worst extractive practices of the oligarchic capitalism that we have lived through for the past forty years, but they are not true cooperatives. In Detroit, he says, with the near total collapse of the local economy, they seem to be doing some genuinely innovative things.63

Have I read Cory Doctorow’s book? Benkler asks me. “He has written a beautiful science fiction novel that has just come out, called Walkaway. He writes about a world where a utopia and a dystopia exist side by side. The dystopia is fully extractive surveillance capitalism. In the utopia, people have just walked away from that to create a postownership, collaborative society with 3D printing and networks. What’s beautiful about the book is that it imagines a world that takes the economics of free software … and transfers it to the economics of shelter, food, and the necessities of human life.”

Your work is really about changing social norms, isn't it? I ask. As Benkler has said in The Wealth of Networks, the value of discussing these possibilities is not really to predict what will happen but rather to enliven people's moral imaginations as to what is possible. (In the book, his exact words are "The object of a discussion of the institutional ecology of the networked environment is, in any event, not prognostication. It is to provide a moral framework within which to understand the many and diverse policy battles we have seen over the past decade and which undoubtedly will continue into the coming decade.")

He sits forward.

“Yes, a critical part of this discussion is about shaping a moral imagination—people’s sense of the feasible, the plausible. I’ve been studying oligarchic capitalism, and in the span of fifteen years during the 1970s and 80s—only fifteen years—I saw the mindset of the regular captain of industry change radically. He used to see himself as a steward of the stakeholder.” He was promoted from within, Benkler says, to be a competent and loyal servant of the company. He was not after money but status, and it did not enhance status to be seen going after money. “Now, everyone sees money as the only way to status. There’s been a fundamental shift in what a well-socialized person can and should do in the world. A decade ago, Wikipedia, MyBarackObama, was what cool people did. Ten years ago, what I predicted in my first book looked real; today, it looks like a utopia.”

I ask him if he has a theory about how and why revolutions succeed.

He has a general approach, he responds, based on the belief that there are integrated systems in human relations. These are the technical, institutional, cultural, social-norm, and knowledge frameworks that more or less structure most human behavior and the degree of room people have at any given time to change behavior. Then, he says, you get a shock. It could be Commodore Perry arriving on your shore, the Great Depression, or the internet. And that is the moment when a lot is up for grabs. With the shock comes change, adjustment, and then stabilization. When things are stable, it is harder to make change.

The point is to be ready, to know what a transformative moment looks like, so that when a moment of perturbation arrives, you can diagnose which of the integrated systems is the most likely place where you can effect change. That is where he thought we were ten years ago, he says. He is not sure anymore if we are still there because the rate of technological change is moving very fast.

Right now, he says, we see the rise of economic nationalism and illiberal majoritarianism in countries like Hungary, Russia, Turkey, and the United States. “I guess it’s essential,” he says, “to still believe that we are in the middle of a transformative moment and there is still opportunity to identify which systems are most susceptible to redesign, and to work out how to interface with the other systems so they can adapt together and not pull us back.”

Much more than half an hour has gone by. Benkler has been generous, given the premium every Harvard professor puts on his or her time. I close my notebook and thank him.

Walking back across campus to the train station in Harvard Square, I pass piles of hard snow and long lines of prospective students and their parents, willing to pay large tuition fees if only they can get onto the admissions list.64

The Epicenter of a Civilization

Boston is a city where the abolitionist and former slave Frederick Douglass spoke many times. The African Baptist Church of Boston, also known as the African Meeting House, was one of his venues. Built in 1806, it has recently been refurbished and opened as part of the Museum of African American History. I go there one snowy afternoon toward the end of my stay in Boston.

When I enter the church through a door that connects it with the rest of the museum, I’m alone in the company of a young docent. She tells me the history of the place, then pauses to let me take in its neatly painted pews and second-floor gallery. It is a perfectly proportioned speaking hall. The silence resonates with all the speeches ever made from its plain wooden pulpit.

The docent invites me to climb up into the pulpit, and I comply, feeling a little foolish. I’m not prepared for the emotion that seizes me when I stand there. It feels as if I’ve entered a magnetic field between two giant opposing forces. It is overwhelming to stand where great orators like Douglass stood and looked out and spoke of freedom—to come upon this spot as an accidental tourist and suddenly recognize it as the epicenter of a city, a people, a civilization that loves freedom.

In his autobiography, Frederick Douglass tells the story of how his white master took a newspaper away from him one day when he realized his wife had been teaching Douglass how to read. As the man seized it, he told his wife that one could not teach slaves to read because it would make them uneasy in their slavery. Douglass used to observe that this was the best argument he had heard in favor of abolition. And learning to read was in fact his pathway to freedom.

Eben Moglen, the constitutional litigator, law professor, and old ally of Richard Stallman, relates this story in one of his own speeches, given at Columbia University not long after the Snowden revelations. Moglen himself is a great orator, with a sense of both history and tragedy that makes him sound premodern. In his Snowden speech, which can be found online, he introduces the idea that we live in slavery when we do not have control over the code in our devices. If we want to recover our condition of freedom,65 we need to work together. It takes a union, he says, to end slavery.

And if we do not recover that condition of freedom in the digital age? In his Snowden speech, Moglen quotes the eighteenth-century historian of the Roman empire Edward Gibbon:

In the third chapter of his History of the Decline and Fall of the Roman Empire, Edward Gibbon gives two reasons why the slavery into which the Romans tumbled under Augustus and his successors left them more wretched than any previous human slavery.

In the first place, Gibbon said, the Romans had carried with them into slavery the culture of a free people—their language and their conception of themselves as human beings presupposed freedom. And thus, Gibbon says, oppressed as they were by the weight of their corruption and military violence, the Romans yet preserved for a long time the sentiments, or at least the ideas, of a freeborn people.

In the second place, the empire of the Romans filled all the world, and when that empire fell into the hands of a single person, the world was a safe and dreary prison for his enemies. … To resist was fatal, and it was impossible to fly.66

These are tragic thoughts. But I recall also what Moglen declaimed at the Libre Planet conference. He said—and I believe it, too—that people feel something is wrong. They feel a whole bunch of things are going on that they themselves are a part of but that they know are not quite right. They feel it about Facebook, they feel it about Twitter, and Google, and Uber, and electronic voting machines. They fear what this century could hold. Millions of people are waiting to hear the message, millions want to hear the message—that users of technology have rights, that technology must serve and not subject humankind, that it is citizens who must ultimately control code, that we can and must code for democracy.

The job before us, really, is to build a new condition of freedom.

Historically, technology tends to develop in a nonlinear, haphazard manner. How innovations come about, catch on, and succeed in different places and circumstances is a process not wholly determined by the nature of the technology. It also hangs on human agency, culture, and to some degree, chance.

We can't be sure how digital technology is going to develop next. But if it is possible that the shift will come through some emergent pattern, some "positive chaos," then it is all the more crucial for people who care about democracy to be alert and actively experimenting. We are many. There will be something for everyone to do, whatever their capacity.