A new, privacy-secure civilian internet might be built in this century, yet even as one part of the European Union begins to look seriously at the project, other parts of the EU are engaged in new efforts to curtail the civilian use of cryptography. Hackers have not even begun to solve the technical problems of “privacy for the weak,” and a new crypto war is brewing.
In truth, the pitched battles over digital issues critical to democracy are intensifying and multiplying as I write this book. A new crypto war is brewing—and so is a new transparency, or information, war—and a war over net neutrality, and a war over free software, and a war over digital monopolies. Privacy and transparency are at stake, but so are control over the internet, ownership of the software in all of the systems and devices we use in our societies, and even the sustainability of world economies. Viewed soberly, the hacker quest to secure democracy for the citizen seems almost quixotic.
Harry Halpin was not understating the situation when he said that despite the exponential growth of the hacker scene coalescing around the Chaos Computer Club, the hacker worry is that “right now we have to address all the policy issues at once.” The challenge is monumental, not just in its technical aspects but also in its political dimension. These policy issues are not well understood or even known by most people, even by citizens who take a keen interest in the health of their democracies. “Privacy for the weak, transparency for the powerful” has been a good slogan around which hackers and digital rights activists have built some political awareness among the general population, but what people really need is a whole new civics education, a new civics discourse—a digital era civics.
Returning to my reasons for setting out on this journey into the world of hackers and hacking, I feel the urgency of my conviction that people need to see the world that is rapidly changing around them as clearly and comprehensively as hackers see it. Only then will they be able to move their societies and resources in directions that might preserve their democracies.
Consider first what hackers are calling the new cryptowars. Since the terrorist attacks in Paris in 2015, many EU security officials and politicians have doubled down on their calls for laws that would ban encryption or would provide government agencies with the means to break it. In the United States, the same thing has been happening. In December 2015, a Muslim American couple attacked a holiday office party in San Bernardino, California, killing fourteen people.1 After police found the iPhone the husband used for work, the Federal Bureau of Investigation (FBI) asked Apple engineers to create a back door to the phone and turn off its security features, including those that wipe the phone’s stored content if someone enters the wrong passcode more than ten times.
In March 2016 I’m in San Francisco to interview cyberlawyer Cindy Cohn. She represented a Berkeley graduate student in the cryptowars of the 1990s, and since 2000, she has been working at the Electronic Frontier Foundation (EFF), the early digital civil liberties group based in the San Francisco Bay Area.2 Apple has refused the FBI’s request, and the case has turned into a public relations battle. On one side is FBI Director James Comey, claiming that law enforcement’s vision is “going dark” because of the widespread availability of encryption and other security technologies in consumer products,3 and on the other side is Apple, championing the privacy of users on pain of legal sanction. Apple has been saying that if it creates a back door for one user’s iPhone, it will be inventing the means to break into all users’ iPhones. The FBI has insisted under oath that only Apple has the ability to get inside the device. Meanwhile, hackers, Edward Snowden among them,4 have highlighted the ironies in the standoff. They claim the FBI has the technical means to “break” into the iPhone—many hackers could do it. And Apple is hardly a hero of civil rights. The FBI likely recognizes the San Bernardino case is its best opportunity to “take on” encryption in the court of public opinion, and Apple recognizes that the commercial value of the iPhone might suffer if the company is publicly seen to provide a back door to the device.
Right now, EFF is representing many of the internet security experts participating as “friends of the court” in the Apple iPhone case. Cohn was to have appeared in court this week, but she is able to have lunch with me because the hearing was suddenly adjourned. The FBI backed down. After weeks of grandstanding in the media about the lack of back doors for law enforcement and the necessity of Apple’s cooperation, the government stated in a last-minute court filing that it might have found other means to break into the phone.5
In Cohn’s opinion, the cryptowars of the 1990s never really ended. Government agencies such as the FBI and the National Security Agency (NSA) have never stopped trying to shut down or circumvent civilian use of encryption. As Edward Snowden revealed, the NSA’s Bullrun program has been battering at civilian encryption for years by getting tech companies to insert vulnerabilities into encryption systems and devices to make them exploitable, by obtaining details about commercial cryptographic tools through industry relationships, and by working through international bodies to push international encryption standards “it knows it can break.”6 The FBI has been campaigning against civilian encryption using the “going dark” metaphor since at least 2010.7
In the early 1990s, at the outbreak of the first cryptowars, Cohn was a young lawyer with an ordinary practice in a law firm in San Mateo, California.
I ask her how she got involved with EFF.
She laughs. “Actually, it was due to a gorgeous barista from France who worked in a local Palo Alto coffee shop and ended up becoming my roommate. In other words, quite by chance!”
They became friends. A lot of nerdy computer guys hung out at the place and had crushes on the barista. She and Cohn threw a housewarming party and invited some of the regulars from the coffee shop, EFF founder John Gilmore among them. Cohn ended up going out with one of Gilmore’s friends. When the federal government went after Phil Zimmermann, EFF started looking for ways to challenge the government’s cryptography regulations constitutionally. Gilmore asked Cohn if she would be willing to take on one of their constitutional challenges involving a Berkeley PhD student named Daniel Bernstein.8
Cohn asked Gilmore about the technology at the heart of the matter. “Does it blow things up?” He replied, “No, it keeps things secret.” Four cases—the Bernstein case, two other constitutional challenges, and Zimmermann’s case—were each litigated in different jurisdictions, but all of the parties kept in touch and loosely coordinated their strategies.
Cohn won the Bernstein case at two court levels. The lower court judge held Bernstein’s code was constitutionally protected speech, establishing an important legal precedent. The court at the next level upheld the decision and elevated Bernstein’s efforts even higher, declaring encryption a democratic boon: “Government attempts to control encryption … may well implicate not only First Amendment rights of cryptographers,” wrote Judge Betty Fletcher, “but also the constitutional rights of each of us as potential recipients of encryption’s bounty.”9
“It’s an awesome decision,” Cohn says, “but so frustrating because it can’t be cited. At the second court level, the government asked for en banc review [that is, a review involving all the judges of the court], and while that was pending, it deregulated cryptography, so the decision became moot.”10 Today, the export of cryptography is no longer illegal. The regulations require people like Daniel Bernstein to email the government with an export request and a link to the code being exported. It is just an administrative notice process.
I ask Cohn what the next legal battle will be in the cryptowars. “There is a danger that cryptography could be outlawed again,” she says. Or if not outlawed, the government could exert pressure that amounts to the same thing. “Right now, the situation is this. CALEA (the US Communications Assistance for Law Enforcement Act) remains the main piece of legislation on government access to our telecommunications. It used to apply only to telephone companies but was extended to internet companies after about a decade. It does not prohibit these companies from using or allowing cryptography. Indeed, it now specifically protects that. However, CALEA also says companies have to make their communications tappable. The reality is that the government pressures companies into setting up their security so that the companies always have access to the content of messages.”
Cohn and I leave the local Vietnamese restaurant where we have had lunch and return to the EFF offices to drink tea on the rooftop patio. Earlier in the day, I walked here taking a route along Van Ness, a busy thoroughfare that cuts across San Francisco, routing traffic through the civic core of the city and past the grand City Hall, the Opera House, the Veterans’ Building, the colossal Symphony Hall, and the Public Library—imposing edifices of state and culture from the previous two centuries. This civic heart of San Francisco reminded me of Berlin’s Museum Island in the middle of the Spree River, which is crammed with buildings from the days of the German empire and the treasures it looted from other people. The San Francisco precinct is only an island in traffic, and the fortunes with which it was built were made in the Wild West out of a different history of primitive accumulation. Yet it is a similar demonstration of amassed wealth and power—a similar bulky, predigital sensibility imposing its will on the world and looking outdated in the twenty-first century.
Just up the road, a few blocks into the Tenderloin district where the EFF offices are located on Eddy Street, things got sketchy. I saw disturbed men roaming up and down the street, talking to themselves. Members of Silicon Valley’s new digital elite are rapidly taking over the city’s real estate, pushing others out. The mentally ill and the destitute, always the most stubborn to displace, soil the edges of the central, Gilded Age city. The buildings around Eddy Street, I saw, were grand but mostly empty. Some housed luxury car dealerships from the twentieth century, such as Rolls Royce and Cadillac, whose splendid Moorish showroom was empty except for a multiplex cinema stuffed inelegantly into one corridor. BMW was moving. Mini, a more recent arrival, was hanging on. Construction lots and empty shops crowded the side streets.
EFF occupies a building, now painted an industrial gray, that looks like it was once a large house. The steps up to the entrance are fenced in. When I finally arrived at its door, I had to buzz up from the sidewalk to be allowed through a locked gate. Cohn tells me the security system is a vestige from when the building was owned by Planned Parenthood. EFF continues to use parts of it to fend off the regular break-in attempts in the neighborhood.
In yet another sign of the times, EFF has been expanding rapidly, from about thirty staff members to over eighty now, with new office space purchased across the street. Since the Snowden revelations, donations have been pouring in.
Hackers and concerned scientists have countered recent government attacks on encryption with an expert report titled “Keys Under Doormats: Mandating Insecurity by Requiring Government Access to All Data and Communications,” which argues there is no technical way to provide law enforcement “back door” access to encrypted content without undermining the security of the internet as a whole.11 The introduction to the report notes many of the signatories worked together on a response to the government’s Clipper Chip proposal in the 1990s12 (the same Clipper Chip the cypherpunks fought vigorously to defeat).
As these experts would be the first to admit, encryption and Tor are not impregnable. A 2012 document leaked by Snowden, titled “Tor Stinks,” indicated the NSA was having trouble cracking Tor at the time. However, more recently, Tor developers have warned that users who regularly use Tor to browse the internet can now be fairly easily identified: “Our analysis shows that 80% of all types of users may be deanonymized by a relatively moderate Tor-relay adversary within six months. … Our results also show that against a single AS [autonomous system] adversary roughly 100% of users in some common locations are deanonymized within three months.”13 Reportedly, experts have also found a method for distinguishing Tor users by their style of typing.14
Nevertheless, if a critical mass of people started using encryption, Tor, and other privacy tools, everyone’s security would be enhanced, because the larger the crowd, the harder it is to find someone in it. Widespread adoption would also bring more resources to developers, allowing them to keep one or two steps ahead of anyone who might try to crack the system.
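The arithmetic behind that intuition is simple. As a toy illustration (and not a model of the sophisticated attacks quoted above), suppose an observer can narrow a message down only to the crowd of people using a privacy tool at the same moment; a random guess at the sender then succeeds with probability one over the size of that crowd:

    # Toy illustration of the "anonymity crowd" intuition: if an observer can
    # only narrow a message down to the set of simultaneous users, a random
    # guess at the sender succeeds with probability 1/crowd_size.
    for crowd_size in (10, 1_000, 1_000_000):
        chance = 1 / crowd_size
        print(f"{crowd_size:>9} users -> {chance:.6%} chance of a correct guess")

With ten users a guess succeeds one time in ten; with a million users, one time in a million.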
To support mass adoption, EFF worked with the Tor Project15 to develop versions of Tor that can run on Windows, Mac, and Linux.16 EFF also ran a “Tor Challenge,” which added over sixteen hundred volunteer nodes to the Tor network.17 The user base has also been growing, with Tor adding 36 million users in 2010 alone.18 As of 2014, it had over six thousand nodes.19 Tor developers hope eventually to have hundreds of thousands of relay nodes20 and are building a Tor home WiFi router that would sell for about a hundred bucks, so that one day all users might become nodes in the Tor network.21
Together, EFF and the Tor Project developed and disseminated a web encryption tool called “HTTPS Everywhere.” It encrypts users’ browsing traffic when they connect to participating websites. In early 2017, EFF reported that half of browsing traffic was encrypted.22
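The idea behind the tool is simple to sketch. The real extension ships thousands of curated, per-site rulesets; the fragment below is only a minimal illustration of the rewriting step, with a hypothetical list of participating sites standing in for those rulesets:

    # Minimal sketch of the URL-upgrading idea behind HTTPS Everywhere.
    # The two domains below are hypothetical placeholders, not the
    # extension's actual rulesets.
    from urllib.parse import urlparse

    PARTICIPATING_SITES = {"example.org", "example.com"}

    def upgrade(url: str) -> str:
        """Rewrite an http:// URL to https:// if the site is known to support it."""
        parsed = urlparse(url)
        if parsed.scheme == "http" and parsed.hostname in PARTICIPATING_SITES:
            return "https://" + url[len("http://"):]
        return url

    print(upgrade("http://example.org/page"))       # https://example.org/page
    print(upgrade("http://unknown-site.net/page"))  # left unchanged

When a site is on the list, the request goes out encrypted; when it is not, the tool leaves the connection alone.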
The German-born political theorist Hannah Arendt believed “totalitarianism was not an all-powerful state, but the erasure of the difference between private and public life.”23 The civics lesson of the new cryptowars is that we are free only when we can control what people know about us and have some guarantee of autonomy and security in our thoughts and lives.24 Draconian infringements on privacy that we institute in times of emergency to apply to some “other” almost inevitably end up metastasizing and applying to us.
It will be hard for governments and corporations to shut down privacy defaults and expectations if enough users adopt alternatives to the privacy-sucking funnels they are currently herded into. Librarians understand this. Facing warnings from federal agencies25 and uncertain legal consequences in some countries,26 these true champions of civil liberty have been actively running Tor nodes and training library users on encryption and Tor for some time now.
The hacker hope is that a critical mass of users asserting their privacy online with hacker-made tools could potentially establish what Jonathan Zittrain has called “code-backed norms”27 strong enough to rival state and corporate-imposed norms; that is to say, social norms, which might not have any legal backing, which might even contravene existing law, but are enforced by code and the will of the people.
While brewing political battles over cryptography threaten “privacy for the weak,” even darker clouds are gathering over the hope for “transparency for the powerful.” Confounding new information wars are intensifying, with conflicting stories about Russian hacking of the US election, accusations of “fake news,” disturbing revelations about the political use of the Facebook platform, increasingly controversial information releases being made by WikiLeaks, and the threat of censorship by governments of Western democracies and “progressive” tech companies alike. How to make sense of it all?
Information wars are as old as politics. They involve the use of information and communication to gain competitive advantage over opponents. One might say they are endemic to democratic systems, in which various interests vie to win over the hearts and minds of individual voters. But in the digital age, information warfare has taken on properties both exhilarating and frightening. Although digital tech provides enormous scope for transparency, vigorous debate, and public understanding, it does the same for manipulation, censorship, and repression.
When WikiLeaks first hit the major media with the Manning leaks, cypherpunk fellow traveler John Perry Barlow tweeted, “The first serious info war is now engaged. The field of battle is WikiLeaks. You are the troops.”28
PayPal moved to shut down WikiLeaks’s accounts and strangle it financially. The loose affiliation of hackers and trolls acting under the name Anonymous picked up Barlow’s quote. Brandishing it as part of their “Operation Avenge Assange” manifesto,29 they swept down like the Furies of the internet, bombarding PayPal’s website with a DDoS (distributed denial of service) cyberattack and posting their calling card:
Knowledge is free.
We are Anonymous.
We are Legion.
We do not forgive.
We do not forget.
Expect us.
When the security contractor HBGary proposed to deanonymize Anonymous, a small group calling itself LulzSec but operating under the broader banner of Anonymous30 digitally eviscerated the company and its chief executive, Aaron Barr. They defaced the company’s website, erased a terabyte of its data and research, stole its emails, and released Barr’s home address and social security number through his Twitter account.31 The emails revealed HBGary had a plan to leak fake documents to WikiLeaks in order to discredit the group.32
At the end of 2010, CBS pronounced, “WikiLeaks is winning the information war so far.”33 In 2012, Time magazine ranked Anonymous as one of the world’s one hundred most influential “people.”34
“Transparency for the powerful” seemed to be in the ascendant in the first part of the twenty-first century following the WikiLeaks disclosures of 2010, the Anonymous attacks on PayPal and HBGary, Edward Snowden’s game-changing revelations of 2013, and the Panama Papers, published in 2016, which revealed widespread tax evasion and corruption among oligarchs and politicians.
Then the 2016 US presidential election happened.
Anyone looking for the civics lesson in the new information wars might find it helpful to review the sequence of events leading up to and after that election. In July 2016, a few months before the vote, WikiLeaks leaked nearly twenty thousand emails hacked from the Democratic National Committee (DNC).35 The emails appeared to show that the DNC was trying to aid Hillary Clinton’s campaign and sideline that of Bernie Sanders.36
There were allegations the source of the leak was Russian hackers working to get Donald Trump elected—assertions Assange denied.37 Russian agencies were alleged to be running troll farms, computer-automated “bots” that flooded social media platforms, and fake news operations to manipulate American voters in payback for an election influence campaign Clinton had run against Putin when she was US Secretary of State.38 The Russians were alleged to have used data analytics to target certain kinds of voters with fake stories, from cruder ones (Clinton had Parkinson’s disease and ran a child-sex ring out of a DC pizza restaurant) to more subtle ones that mixed real facts with plausible distortions.39 Russia was alleged to have targeted journalists and flooded their social media accounts with fake news to which they might be individually susceptible.40 Russian sources paid for ads on Facebook. Many of these used the powerful psychological techniques of internet memes.41 Researchers at the University of Southern California found that during a five-week period in fall 2016, nearly 20 percent of political tweets were generated by bots of unknown origin, a portion of which could well have been Russian.42
In April 2017, Facebook issued a report acknowledging that a lot of preelection disinformation had been spread via its platform and assured the public that it had increased its security. Google and Twitter tweaked their algorithms to counter bots and cyberpropaganda.43
“If there has ever been a clarion call for vigilance and action against a threat to the very foundation of our democratic political system, this episode is it,” James Clapper, the former Director of National Intelligence, testified before Congress on May 8, 2017.44 But wasn’t he the same James Clapper who had baldly lied to Congress in 2013, stating the NSA had not “wittingly” conducted mass surveillance on American citizens?45
Russia was also alleged to have interfered with electronic voting machines. NSA contractor Reality Leigh Winner leaked a classified document to The Intercept that suggested Russian military intelligence had hacked at least one supplier of voting software and sent phishing emails to over one hundred election officials just days before the November election.46 The story was reported in June 2017, and Winner was swiftly arrested. The week before the story ran, Vladimir Putin had denied Russia interfered in foreign elections “on a state level,” floating the idea that freelance Russian hackers with “patriotic leanings” may have done the hacking.47 “Hackers are free-spirited people,” Putin explained. “They are like artists. If they are in a good mood in the morning, they wake up and paint. It is the same for hackers. They wake up today, they read that something is happening in interstate relations, and if they are patriotically minded, they start to make their own contribution to what they believe is the good fight against those who speak badly about Russia.”48
In November 2017, a year after the election, The Atlantic magazine broke the story that WikiLeaks had been secretly communicating with the Trump campaign.49 Julian Assange had been pushing Donald Trump Jr. to disseminate its leaks (which Donald Jr. did in at least one case), had been asking to publish Trump’s tax records in order to undercut the perception that WikiLeaks had a pro-Trump bias, and had been urging the Trump campaign not to concede the election if Trump lost. At an October 10, 2016, political rally, Donald Sr. had effused, “I love WikiLeaks!” In December 2016, Assange had asked Donald Jr. to have his father, by then the president-elect, suggest to Australia that it should make Assange its ambassador to the United States.50 And the WikiLeaks Twitter account pushed the fake news story of a Clinton child-sex ring run from a pizza shop until as late as January 2017.51
Many supporters of WikiLeaks were dismayed to see it degenerate from a beacon of transparency into a vehicle used by Assange to advance his personal agenda. He was acting like a freelance political operative, not a publisher intent only on the accuracy of the material he was publishing.52
In February 2018, Robert Mueller, the special counsel investigating Russian interference in the 2016 election, announced the indictment of thirteen Russian trolls alleged to have been working out of a large Russian trolling operation in St. Petersburg known as the Internet Research Agency.53 It was likely funded by an oligarch friend of Putin.54 Reporter Adrian Chen, who had visited the place and done a thorough investigative story on it in 2015,55 said descriptions of the troll farm and the effects of Russian hacking were exaggerated. In a 2018 tweet, Chen wrote, “Tried to tamp down the troll farm panic on @chrislhayes show last night. It’s 90 people with a shaky grasp of English and a rudimentary understanding of US politics shitposting on Facebook.”56
That may have accurately described the trolling complement assigned to the “American desk” at the Internet Research Agency when Chen investigated it, but hackers have said the Russians interfering in the US election were clearly more sophisticated adversaries than US agencies were prepared to counter and were adept at using the kinds of social media tactics and memes that hacker groups like Anonymous pioneered.57 The United States was especially vulnerable to such an attack with its monopolized social media platforms that could be easily weaponized and its long-standing, polarizing “culture war” that had already debilitated its political system.
If this roller coaster weren’t enough, in March 2018, the news that the Trump campaign had been using the sophisticated data analytics services of a company called Cambridge Analytica to target voters worsened with the awful confirmation that the data-harvesting firm it partnered with had been allowed to suck up the whole social graph of Facebook users. Facebook threatened to sue The Observer to prevent the newspaper from breaking the story.58 As described by The Observer, when a user downloaded an app that was supposed to pay them to take a personality survey for academic research, the app scraped all of the user’s Facebook data and all of the data of the user’s Facebook “friends” as well, without the latter’s knowledge or consent.59 The app was downloaded by 300,000 users, yielding a reported fifty million usable profiles.60 According to an inside whistleblower, Cambridge Analytica then built a system to target users one by one with tailored posts, using psychometric parameters: it was individualized targeting on an industrial scale.61 Alexander Nix, head of Cambridge Analytica, claimed the company possessed a massive database of four thousand to five thousand data points on every adult in America.62 The company had also worked for the “Leave” campaign in the UK Brexit vote. No one had imagined political manipulation on a scale like this before.
Or had they? It soon became apparent that targeting its users was Facebook’s business model. Facebook does not sell your data. It sells you. It serves you and other users up for targeting in the demographic slices required by advertisers and companies like Cambridge Analytica.63 And the way that Facebook allowed apps to operate between 2010 and 2015 permitted third-party software developers like Cambridge Analytica’s data-harvesting partner to covertly scrape and keep as much of Facebook’s user data as they wanted.64
A 2011 Federal Trade Commission consent decree65 obliged Facebook to prohibit third-party apps from scraping personal user data. Facebook was supposed to audit, but its approach was reportedly lax.66 Potentially tens of thousands of apps did the same thing Cambridge Analytica’s partner firm did.67
After damaging news reports began to be published, Facebook asked Cambridge Analytica’s partner firm to erase the data it had collected from Facebook users. This was during the Republican primary campaign in late 2015, while Cambridge Analytica was working for candidate Ted Cruz and before it switched to working for the Trump campaign. Facebook said it believed the firm had complied with its request, but it remained unclear whether the fruits of the data ever ended up being used in the Trump campaign.68
Then journalists began to connect the dots to conclude that Barack Obama’s campaign team, lauded for its social media skills, had probably done a similar thing in the 2012 election—sucked out Facebook users’ whole social graph without the consent of users’ friends.69 Tech bloggers pointed out that although Facebook, with over two billion users and a staggering amount of data, was the most worrying example of a company with an intensive data-harvesting business model, this was in fact the business model of most digital platforms.70
Between Russian hacking, WikiLeaks’s meddling, and the possible use of Facebook for political ends by the Trump campaign, Americans had their cognitive ability to understand their own political environment seriously disrupted in the 2016 election cycle. And it looked a lot like Julian Assange’s proposal for disrupting political elites in his early blog essays on the theory of leaking. Disruption and cognitive dysphoria were certainly the result in 2016 for the Democratic Party, the target of these three initiatives. A “conspiracy that cannot think is powerless to preserve itself against the opponents it induces,” Assange had written in one of his earlier, prescient essays. “It falls, stupefied; unable to sufficiently comprehend and control the forces in its environment.”71
It is uncertain whether these events actually affected the outcome of the 2016 election. Although the Democratic Party has seized on the idea of Russian and WikiLeaks interference as an excuse for its loss, the most immediately damaging effect of the hacking could be that the Democrats’ preoccupation with it forestalls any serious reform of the party before the 2020 election.
Even so, Russian hacking should not be dismissed. Perhaps the most insightful take on the new information wars has come from a historian of the twentieth century, Timothy Snyder, the author of the best-selling primer On Tyranny: Twenty Lessons from the Twentieth Century, published shortly after the 2016 presidential election.72 Snyder, who has studied Russia over a long period and reads Russian newspapers, has said the Russian government’s tactic of inducing information dysphoria was honed on its own population. It is meant to support a certain kind of political equilibrium in which the Russian population is induced to tolerate as much as possible a state of oligarchy and radical inequality. These conditions are stabilized and institutionalized in Russia by way of “a very steady, efficiently and beautifully produced, diet of ‘fake news’ complemented by a series of … manufactured triumphs abroad.”73
“This is a certain model,” Snyder says. It is a model that Western democrats need to know is out there, and it is attractive to certain kinds of people—oligarchs and the far right in their own countries, for example. As a political model, it can stabilize a status quo of gross inequality, “but what it can’t do is generate reform [because reform would mean the kleptocrats would have to go], and it can’t generate wealth.”74
Why would Russia want to use these methods on the populations of other countries? Snyder posits that in order to maintain power at home, Russian elites,
came to understand … that you have to remove the competition. You have to make the rest of the world more like Russia … and to do so, partly by supporting the Far Right, but also partly by promoting this idea that there’s no such thing as truth, that it’s all relative, that there are no facts, because in that environment, political activity and political opposition become incoherent and impossible. They succeed at that at home and now they’ve been trying to bring that abroad and they’ve done so with some success, and one has to recognize their intelligence, and one has to be clear about their aims, because we are now in the middle of that.75
Snyder says propaganda “is not just a kind of muddling reality or meddling in reality.” Rather, “You fill the public sphere with things that aren’t true and you contradict yourself all the time.”76 Thinking of Donald Trump as a pathological liar is misguided in this context. Confusion and demobilization are regime policy.
Then, says Snyder, you blame the journalists—“the people responsible for factuality.” You talk about having to crack down on them. Then nobody knows what truth is anymore, nobody trusts the media, and “you end up having a monopoly, or at least the strongest position, in the manufacture of the symbols of the day. That’s clearly what they’re up to. And it’s probably more central and more important than we generally realize.”77
According to Snyder, Americans need to understand that the Russians have not merely hacked the last US election. More insidiously, the authoritarian philosophy and methods of the Russian regime have migrated to the West and are being employed by Western politicians and their supporters, most prominently by the Trump regime. The civics lesson here is that democratic societies require a shared belief in factuality, a trust that we can at least agree on a methodology for ascertaining facts and a belief that facts matter. If you destroy that belief, then you destroy democracy. “That’s the cheap and easy way to do it,” Snyder says, “and that’s what the twenty-first-century authoritarians have discovered. … That’s the process that is under way before our eyes.”78
So are there technical fixes to this? Hackers would say monopoly corporations like Facebook, Google, and Twitter should not be acting as society’s censors, with their secret algorithms for ranking and filtering information. But even when the problem is parsed carefully to look for better solutions, how can we expect technology to repair a weakened and imperiled civic space? The danger of putting too much emphasis on technical fixes is that we treat the computer as some kind of magic, oracular machine. In the digital era, the media has fractured into a few big outlets and thousands of smaller online ones, which has led to a situation where we are all living, to some extent, in our own bubbles, reading different messaging scripts. We are all vulnerable to believing the things we hear or read when they are repeated over and over again. The problem is not the plurality of information sources or even the untrustworthiness of some of them. It is that we rely too heavily on computers for information, discourse, and connection.
Limit your exposure to the internet, Snyder recommends. Read books and long articles. Support investigative journalism. Take time to speak to other people, especially those you think you don’t agree with. Make eye contact and small talk. Practice the kind of politics where you show up in person. Volunteer, and maybe run for office. Then you can react to propaganda according to the mental and social preparation you have made79 and not just according to your click and network biases. A healthy civic space takes some sustained effort and physical presence on the part of citizens, some serious commitment to the value of social cohesion.
Just as the fights over privacy (the cryptowars) and over transparency and truth (the information wars) are turning dire, net neutrality is also coming under threat, with serious stakes for democracy.
Net neutrality is something ordinary users take almost for granted because it was built in when the internet was created. Conceived as a network of networks, the early internet gave people a basic code, or set of protocols, to connect computer servers all over the world for the purpose of sharing information. It was a network of interoperable networks. The net was decentralized in that the servers that made up the networks were numerous and diverse. Private and public, big and small, they were not controlled by any dominant player. The services people used, like email, were interoperable. You could use your own application and still be able to talk to someone using a different one. The net was open or generative of innovation in that anyone could use it, connect their server to it, and invent and offer new applications to add to the rich ecology of the whole. In sum, the net was “neutral” in that the basic protocol did not do anything but send information between servers. It did not monitor or discriminate against content or users. Anyone could participate, move around it freely, speak and associate freely, and use it for their own purposes. In short, there were no gatekeepers.
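Seen from the vantage point of code, “neutral” simply means that the machines in the middle forward traffic without caring whose it is or what it says. The sketch below is a deliberate caricature rather than real routing code, but it captures the difference between a neutral relay and a gatekeeper:

    # Caricature of a "neutral" relay: it forwards whatever bytes arrive,
    # without inspecting who sent them or what they contain.
    def neutral_relay(packet: bytes, send_onward) -> None:
        send_onward(packet)  # no inspection, no throttling, no blocking

    # A non-neutral gatekeeper peeks inside the traffic and plays favorites.
    def gatekeeper_relay(packet: bytes, send_onward,
                         blocked_keyword: bytes = b"rival-service") -> None:
        if blocked_keyword in packet:
            return  # silently dropped
        send_onward(packet)

    neutral_relay(b"request for rival-service", print)     # forwarded
    gatekeeper_relay(b"request for rival-service", print)  # dropped

Everything that follows in the net neutrality fight is, in essence, a fight over which of these two functions sits between you and the rest of the internet.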
For years, internet service providers (ISPs)—the companies, such as AT&T, Comcast, and Verizon, that provide connections between one’s home or business and the internet—had lobbied to become the gatekeepers of the internet so they could exploit that position for their profit. A typical “retail” ISP network connects anywhere from dozens to millions of homes, businesses, and cell phones to the rest of the internet. “Retail” ISPs, in turn, connect to “backbone” ISPs, which provide high-capacity, long-haul transmissions across the internet.80
ISPs planned to assume this gatekeeper role over individual users and other players in the internet by restricting the range of websites and services that users could access, either by blocking them outright or through speed and data caps linked to content. In this way, they planned to force people to use the services and content the ISPs themselves offered and to increase charges. Hackers and digital rights activists recognized this would destroy the internet as a public good and turn it into the commercial property of monopolies. The ISP plans were largely beaten back. It took the Obama administration about six years to act in support of net neutrality, but in 2015 its Federal Communications Commission (FCC) reclassified broadband internet service from an “information service” to a “telecommunications service” (a utility) subject to much greater regulation over neutrality and privacy.
One of the Trump administration’s first moves was to reverse this. Trump appointed Ajit Pai, a former lawyer for Verizon, as chair of the FCC.
Once installed, Pai scheduled a December 14, 2017, vote at the FCC to reverse the statutory classification and gut net neutrality. Barely a week before the vote, he played a video of a comic skit at the Federal Communications Bar Association dinner in which he and a Verizon executive played themselves:
Verizon VP: As you know, the FCC is captured by industry, but we think it’s not captured enough.
Ajit Pai: What plans do we have in mind?
Verizon VP: We want to brainwash and groom a Verizon puppet to install as FCC chair. Think Manchurian Candidate.
Ajit Pai: That sounds awesome!
Verizon VP: I know, right?81
Internet pioneers Tim Berners-Lee, Vint Cerf, Steve Wozniak, and others wrote an open letter calling on senators to push the FCC to cancel the December 14 vote: “We are the pioneers and technologists who created and now operate the Internet, and some of the innovators and business people who, like many others, depend on it for our livelihood. … This proposed Order would repeal key network neutrality protections.”82
The signatories noted that the FCC had “not held a single open public meeting to hear from citizens and experts about the proposed order” and had seemingly ignored a forty-three-page technical comment submitted earlier by over two hundred prominent internet pioneers and engineers. The technical comment had stated, “the FCC (or at least Chairman Pai and the authors of the [proposed order]) appears to lack a fundamental understanding of what the Internet’s technology promises to provide, how the Internet actually works, which entities in the Internet ecosystem provide which services, and what the similarities and differences are between the Internet and other telecommunications systems the FCC regulates as telecommunications services.”83 The results of the Pai Order, they said, “could be disastrous.”84
More than twenty-three million comments were also filed at the FCC in response to its rule-changing proposal. About a million of these were bot-generated, falsely using the names of real people, and roughly half a million were filed from Russian email addresses. Fifty thousand consumer complaints went missing from the record, and the FCC’s comment system became the subject of a Government Accountability Office investigation and an inquiry by the New York State attorney general.85
Despite the irregularities, the December vote went ahead, and the FCC voted in favor of gutting net neutrality.86 As of January 2018, twenty-one state attorneys general were suing to block the repeal, along with Mozilla, the NGO Free Press, and the Open Technology Institute.87 On May 16, 2018, Senate Democrats got three Republicans to support them on a vote under the Congressional Review Act to block the repeal. The fight then moved to the House of Representatives. President Trump held veto power.88
The civics lesson people need to grasp in the net neutrality wars is that the fight for net neutrality is a fight for the future of free communication. As hackers would underline, in the digital era, if you expect to decide freely what you listen to and watch, receive, send, publish, create, and even think as a citizen, net neutrality is essential. Net neutrality is also about the future of media—which media outlets survive, what stories get told, and which are suppressed. It is a fight about innovation and free markets because an information economy depends on unfettered access to the internet. And it is a fight about political freedoms because political speech and organizing take place increasingly through the internet.
City councils have begun to take action to preserve net neutrality. They do not think it is a good idea to hand the internet and their local information economies over to a handful of monopoly ISPs. They are hacking this model. Seattle Council member Kshama Sawant has called on Seattle to invest in building its own municipal broadband infrastructure “so no internet corporation has the power to prioritize making money over our democratic rights.” Public opinion, she said, was clear: “76% favor net neutrality, even including 73% of Republican voters.”89 It is an expensive infrastructure build, but around 185 other municipalities in the United States have done it.90
Chattanooga, Tennessee, is a model many look to. In a ten-year civic rebuild, the city’s municipally owned electricity company built a physical fiber optic internet infrastructure with speeds as fast as one gigabit per second (about fifty times faster than the US average). They call it the Gig.91 Danna Bailey, VP of the municipal electric board, has said, “We don’t have to worry about stockholders, our customers are our stockholders. We don’t have to worry about big salaries, about dividends. We get to wake up everyday and think about what, within business reason, is good for this community.”92
There are other Gig cities, including Lafayette, Louisiana, and Bristol, Virginia, but none is as advanced as Chattanooga. Google has plans to roll out high-speed fiber optic systems in selected cities, but most big telecoms do not see a profit incentive in what would be an expensive rebuild of their existing systems.93
They do see the profit in blocking cities from establishing their own broadband infrastructure. When Chattanooga lobbied the FCC to allow it to expand its broadband to neighboring communities, many of which get only a dial-up connection from the big telecoms, the industry moved swiftly, telling the FCC to block the city’s plan, as well as a similar plan for Wilson, North Carolina.94 A number of state legislatures (whose members receive big donations from Big Telecom) have passed state laws banning cities from building their own broadband networks. In Colorado, thirty-one counties have pushed back, voting to exempt themselves from the state law.95
As privacy, transparency and truth, and net neutrality are under new attack, so too is free software. Richard Stallman’s bedrock principle for freedom and democracy in the digital age—the idea that code should be free and in the control of citizen-users—is in danger of being overcome by “digital restrictions management” regimes (called “digital rights management”96 regimes by the corporations that impose them) that deprive citizen-users of property rights and, hackers have argued, turn them into serfs of those corporations.
The recent struggles on the free software front are well known to hackers and people in the tech world, but for most ordinary users, the civics issues in these struggles need explaining.
I have heard hackers call the body of free software that forms the backbone of the internet and World Wide Web and that runs much of the digital world now (GNU/Linux) their Mahabharata. The Mahabharata is an epic Sanskrit poem, composed between the fourth century BCE and the fourth century CE; its cumulative creation was part of the flowering of a civilization on the Indian subcontinent.
Like the authors of the Mahabharata, hackers have created their epic work over years of collective effort. They intend that their software, like the poem, will be widely studied, added onto, improved, and adapted. It is their contribution to Western civilization and to democratic society.
But copyright is ostensibly at odds with this aim. Since the 1970s, society has treated software as property subject to the law of copyright (and sometimes patents). Copyright laws give all control over a created work97 to the copyright holder (initially, the creator of the work): a copyrighted work cannot be copied, distributed, or modified without the consent of the holder. An exemption, known as “fair use,” allows these uses, without permission, for limited purposes. In the United States, for example, the Copyright Act of 1976 (and its amendments) allows the “fair use” of copyrighted material without permission of the copyright holder for purposes “such as criticism, comment, news reporting, teaching … scholarship, and research.”98 Other uses are evaluated as potentially “fair” on a four-factor test that balances the interests of the copyright holder with societal interests. Generally, courts have found that uses that do not undermine the commercial value of a work or are “transformative” of the original work—a parody of an original song or the use of a photo in a collage artwork, for example—are fair uses.
Copyright does not cover creators’ ideas (in the way patents do) but, generally speaking, covers only creators’ particular expression of ideas. The public policy behind copyright law is to enable creators to be paid for their works while still encouraging the generation and free dissemination of ideas, learning, knowledge creation, and innovation.
Recall that Richard Stallman was not against having users pay for a copy of a software program. He wanted to ensure that after users pay for or otherwise obtain an authorized copy of software, they will have “four freedoms”—(0) the freedom to run the software for any purpose, (1) the freedom to study how it works and to change it to do their computing as they wish, (2) the freedom to redistribute copies to help others, and (3) the freedom to distribute copies of their modified version to benefit the whole community.99 “Hacking” the traditional law of copyright, Stallman invented the “copyleft license,” called the GNU General Public License (GNU GPL, or GPL). The GNU GPL specified that the purchaser of software would have these four freedoms as a matter of contract. In short, “free software” is about giving users control over their own computing.
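In practice, applying the GPL is mostly a matter of attaching the license notice to each source file and shipping the full license text with the code. A typical header, following the wording the Free Software Foundation recommends (the file name, year, and author below are placeholders), looks roughly like this:

    # hello.py: placeholder file name and author, shown for illustration.
    #
    # Copyright (C) 2018 Jane Developer
    #
    # This program is free software: you can redistribute it and/or modify
    # it under the terms of the GNU General Public License as published by
    # the Free Software Foundation, either version 3 of the License, or
    # (at your option) any later version.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    # GNU General Public License for more details.

    print("Hello, free world!")

Anyone who receives this file receives the four freedoms along with it, and anyone who redistributes a modified version must pass those same freedoms on.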
Stallman’s struggle to “free” software from the repressive limits of traditional copyright “defaults”100 was paralleled, in the same time period, by activists’ struggles to “free” knowledge and artistic expression from them. The Creative Commons initiative, with which Harvard University law professor Larry Lessig (“Code is law”) was closely engaged, invented the Creative Commons license, under which authors and artists can specify what the public can do with their works. Creators can choose among several versions of the Creative Commons license and set out different rights and responsibilities for users. Another related initiative, Open Access, sought to “free” public records and publicly funded research from enclosure by government agencies, for-profit academic journals, and search engines. Open access activists built new interfaces and sought reforms to the law.
In contrast to these careful legal approaches to the problem of traditional copyright defaults, other activists and users simply asserted a moral right (not to be confused with “moral rights” under copyright law)101 to crack and modify, file share, “pirate,”102 and remix the copyrighted works of others in potential violation of the copyright defaults attached to these works.
A host of treaties and legislation were written by states and corporate lobbyists to prevent and punish these latter activities, including the Anti-Counterfeiting Trade Agreement (ACTA), the Digital Millennium Copyright Act (DMCA), the Preventing Real Online Threats to Economic Creativity and Theft of Intellectual Property Act (the PROTECT IP Act or PIPA), the Stop Online Piracy Act (SOPA), and the Trans-Pacific Partnership (TPP). The 1998 US DMCA, which implements two 1996 World Intellectual Property Organization treaties, sanctioned the use of “digital rights management” (DRM), or coded controls on access to copyrighted material, and made it criminal to circumvent these and to create or share tools for circumvention.103 In other words, the DMCA made it criminal to try to access DRM-shielded code without permission, which was a heavy-handed approach to punishing individuals for copyright infringement. Any attempt to do more than run an authorized copy of software—any attempt to share it or adapt it to make it work the way the user wanted it to—became a criminal act.
DMCA effectively made it criminal even to try to see, study, and criticize shielded code, essentially making an end-run around the “fair use” rights that copyright legislation provides. In short, DMCA locked code in a black box no one could open on pain of criminal punishment. More insidiously, it allowed copyright holders to impose whatever “terms of use” they wanted on users, such as restrictions on what platforms or devices they could use, a contractual right to unilaterally erase purchased material, the ability to turn off devices, and the installation of malicious functions on the users’ computer (such as tracking and scraping functions to collect the user’s personal information). Code that cannot be seen and studied might contain malicious functions that a company has not informed users about and might introduce security vulnerabilities that hurt users’ computers and expose them to third-party attacks. DRM effectively allowed the copyright holder to take control of a user’s computer to do the copyright holder’s bidding instead of the user’s and prohibited the user from even looking at what was being done.
On the other hand, the DMCA did provide that internet service providers and other intermediaries, such as owners of websites, were not responsible for the copyright infringements of their users,104 a good thing from a digital rights perspective. The entertainment industry in the United States sought to reverse this. The Stop Online Piracy Act (SOPA) and its sister, the PROTECT IP Act (PIPA), which were pushed by the movie industry and other corporate interests, would have made ISPs and website owners responsible for the copyright infringements of their users. But in 2012, these legislative bills were put on hold indefinitely when the online community lit up in a week of protest so fierce it set legislators back on their heels.105 EU states were just about to ratify an international treaty, the Anti-Counterfeiting Trade Agreement (ACTA), that committed them to pass domestic legislation along the lines of SOPA and PIPA when similar protests in Europe caused the EU Parliament to reject the agreement.106 The United States sought to export digital restrictions management to other countries in the subsequent Trans-Pacific Partnership (TPP) treaty, whose provisions were intended to benefit the already dominant American information sector (a sector that includes software, gaming, film, and music and that grossed over $1.5 trillion in 2014).107 In early 2017, President Trump would pull the US out of the TPP, but the “digital rights management” language remained in the text that other Pacific Rim countries were considering until the Canadian delegation insisted that the most problematic provisions—including copyright term extension, DRM rules, and intermediary liability—be suspended from the agreement that was finally signed.108 As of April 2018, Trump was considering US reentry to the treaty.109
For the past two decades, corporations in the United States have been pushing the envelope of digital restrictions management under the DMCA legislation, arguing that consumers do not own the software inside the manufactured products they buy (products like phones, computers, coffeemakers, fridges, clothes washers, and vehicles). This goes far beyond the original purpose of copyright law, which was to ensure that creators (traditionally, artists, writers, performers, composers, and architects) receive reasonable remuneration for their work.
Digital rights organizations like EFF have been trying to push back. The implication of the corporate position is that as consumer goods increasingly become digitized and connected, consumers will not own, control, or have the ability to repair most of the things they buy and depend on. Like serfs, consumers will be merely tenants, and their overlords will be able to set and change the conditions of consumers’ tenancies as they decide. The corporate position arguably amounts to a destruction of property rights for the ordinary consumer and an immense augmentation of property rights for big corporations—digital feudalism—or at least a new rentier type of economy (an economy in which one class holds title to finite property assets and passively profits by charging others for access to that property). While the business model for social media platforms sells its users’ attention and its users’ data or profiles, this second, pernicious business model devised by early twenty-first-century digital capitalism milks its users with rents, fees, and updates. Both models crush competition and monopolize markets through network effects and anticompetitive practices.
As described in chapter 1, even when a product is built around the free software kernel Linux, the manufacturer can thwart users’ right to modify the kernel by designing the product’s hardware to block or restrict any code that does not have the manufacturer’s signature. Users are allowed to make their own versions of Linux, but they cannot sign those versions with the manufacturer’s secret key, so they cannot make their versions run on the product. This practice (“tivoization”) is named after the product TiVo, a digital video recorder in which free software developers first encountered it. With tivoization, the manufacturer can put malicious functionalities into the code and stop users from removing them. Starting in 2005, Richard Stallman drafted the GPLv3 license, released in 2007, with the help of lawyers working with him, notably Columbia University law professor Eben Moglen.110 The GPLv3 license gave free software developers a way to contractually prohibit users (including companies) of their free software from “tivoizing” it. Unfortunately, the original body of free software adopted by commercial interests was released under the earlier GPL or GPLv2 licenses. Added to this, Linus Torvalds has rejected the use of the GPLv3 for the Linux kernel going forward.
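To see concretely what GPLv3 tries to forbid, it helps to caricature the check at the heart of tivoization: at power-on, the device verifies a cryptographic signature over the operating system image against a key controlled by the manufacturer and refuses to run anything else. The sketch below is a conceptual stand-in; real devices use public-key signatures verified in firmware rather than this simplified keyed hash, and all the names in it are illustrative:

    # Conceptual caricature of a "tivoized" boot check. Real devices use
    # public-key signature verification built into firmware; this keyed
    # hash is a simplified stand-in, and the key below is hypothetical.
    import hashlib
    import hmac

    MANUFACTURER_KEY = b"secret-key-held-only-by-the-manufacturer"

    def sign(image: bytes) -> bytes:
        # Only the holder of the key can produce a valid tag for an image.
        return hmac.new(MANUFACTURER_KEY, image, hashlib.sha256).digest()

    def boot(image: bytes, tag: bytes) -> None:
        if not hmac.compare_digest(sign(image), tag):
            raise RuntimeError("refusing to boot: image not signed by manufacturer")
        print("booting manufacturer-approved image")

    official = b"vendor kernel, vendor-chosen functionality"
    modified = b"user-modified kernel, tracking removed"

    boot(official, sign(official))      # runs
    try:
        boot(modified, sign(official))  # the user cannot produce a valid tag
    except RuntimeError as refusal:
        print(refusal)

The user is free, on paper, to modify the kernel; without the manufacturer's key, the modified kernel simply never runs.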
Potentially, someone could start building computers and smartphones that were made entirely of free software and free hardware and subject to GPLv3 licenses. But the capital investment required to go up against the existing monopoly manufacturers is daunting.112
For a glimpse of what the future with DRM could look like, consider what the digital restrictions management regime imposes on the early twenty-first-century farmer. In the Copyright Office’s regular rule-making process under the DMCA, John Deere, the largest manufacturer of farm equipment, recently submitted that when farmers buy its tractors, they obtain not what most people would call ownership but rather “an implied license for the life of the vehicle to operate the vehicle.”113 A license agreement John Deere started to require farmers to sign in October 2016 “forbids nearly all repair and modification to farming equipment, and prevents farmers from suing for ‘crop loss, lost profits, loss of goodwill, loss of use of equipment … arising from the performance or non-performance of any aspect of the software.’”114
“If a farmer bought the tractor, he should be able to do whatever he wants with it,” one farmer told a Motherboard journalist. “You want to replace a transmission and you take it to an independent mechanic—he can put in the new transmission but the tractor can’t drive out of the shop. Deere charges $230, plus $130 an hour for a technician to drive out and plug a connector into their USB port to authorize the part.”115
Tractors and other pieces of farm equipment cost hundreds of thousands of dollars. Farmers’ livelihoods depend on them. These machines regularly require repair, and farmers are used to being able to do it themselves to keep things running during planting and harvesting seasons. If they have to wait for a company technician and rely on company tools and updates to keep their tractor running, they are at the mercy of the company, its service department, its year-by-year planned profit margin, its decisions to make products obsolescent, and its continued existence. “What happens in 20 years when there’s a new tractor out and John Deere doesn’t want to fix these anymore?” one farmer asked. “Are we supposed to throw the tractor in the garbage, or what?”116
To avoid this oppressive business model, many farmers have begun hacking their tractors. A black market for John Deere firmware has grown up in Ukraine and Poland, where its DRM controls are cracked and the “freed” software is then made available to desperate farmers through invitation-only online forums.117 Farmers risk being sued for breach of contract and prosecuted criminally under the DMCA for hacking their tractors, but it is a risk some feel they must take.
Consumers have rebelled against the DMCA-DRM regime and submitted over forty thousand comments to the US Copyright Office urging that consumers’ property rights be restored.118 Several pieces of legislation have been proposed to mitigate the effects of digital restrictions management, such as the Breaking Down Barriers to Innovation Act of 2015, a congressional bill that would improve the DMCA process, and “fair repair” legislation in the states of Minnesota and New York that would restore the right of owners to repair the electronic equipment they have purchased.119 One piece of legislation that has been passed makes it legal for users to unlock their cell phones in order to change carriers.120 These may be small concessions, but corporations are relying on this rentier economy based on leasing software the way earlier ruling classes relied on leasing land.
Think of this in light of the coming Internet of Things (IoT). Soon software will be ubiquitous in almost everything you use. There will be smart cars, smart homes, smart energy grids, smart health, smart cities, and smart government. If you and your democratically elected government cannot own and control the software you use, you will be at the mercy of the corporations that do. Governments use software, lots of it, and if they can’t look into it, then who is really in charge in a democracy?121 Jeremy Rifkin, a proponent of the Internet of Things and consultant to governments and businesses, has described the project in what could be its ultimate form: “The Internet of Things will connect everything with everyone in an integrated global network. People, machines, natural resources, production lines, logistics networks, consumption habits, recycling flows, and virtually every other aspect of economic and social life will be linked via sensors and software to the IoT platform, continually feeding Big Data to every node—businesses, homes, vehicles—moment to moment in real time.”122
The stated goals of the IoT are to improve energy efficiencies, boost productivity, and dramatically reduce the costs of producing and delivering goods and services. But it could also lead to corporate rent extraction, surveillance, and other abuses on an epic scale.123
Cisco Systems, a multinational tech conglomerate deeply invested in the IoT, predicts total profits over the coming decade will be more than $14 trillion.124 In line with Rifkin, Cisco’s CEO calls the project the “Internet of Everything.”125 And opting out will become more difficult as the IoT progresses.
The civics lesson in the new free software wars is that locked, proprietary code has the potential not only to destroy your privacy but also to reduce you to a condition of serfdom, without property, autonomy, or livelihood beyond your overmasters’ control. As hackers have been warning, those who will not or cannot pay the rent in this neofeudal structure will become marginal, outlaw, or superfluous. It is a possible dystopia analogous to the one portrayed in the science fiction film The Matrix.126
Consider, finally, the new trust-busting wars. In the first part of the twenty-first century, the future of privacy, transparency and truth, net neutrality, and free software hangs in the balance, and so does the health of Western economies. Why is this so?
As hackers and people in the tech world would tell you, digital platforms, by their nature, tend to throw up large monopolies. Platform capitalism is a winner-take-all competition. First, users gravitate to the dominant platforms because they want to use the social media platform that everyone else is using, the software that is interoperable with other things they use, the platform with the widest coverage or selection, and the user interface they have become accustomed to. Then as more people use a platform, it becomes more valuable to users, and the platform controls more attention and data, allowing it to grow even larger. Finally, early advantages tend to become consolidated into market dominance. These are called “network effects” and are a powerful barrier that hackers seeking to offer alternative services have to contend with.
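Network effects can be put in rough numbers. The short sketch below uses Metcalfe’s law, a common rule of thumb (my assumption here, not a claim made in this chapter) that a network’s value grows roughly with the square of its user count; the user counts are hypothetical.

```python
# Back-of-the-envelope illustration of network effects (assumption: Metcalfe's
# law, i.e. a platform's value grows roughly with the square of its user count).
def platform_value(users: int, k: float = 1.0) -> float:
    """Estimated platform value under the n-squared heuristic."""
    return k * users * users


leader, challenger = 100_000_000, 10_000_000  # hypothetical user counts
ratio = platform_value(leader) / platform_value(challenger)
print(f"10x the users -> roughly {ratio:.0f}x the estimated value")
```

On this crude model, a platform with ten times the users is worth roughly a hundred times as much, which is why challengers struggle even when their technology is better.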
Monopolization does not mean the end of the struggle for dominance. To remain dominant, companies are driven to expand their exploitation of data and labor, to position themselves as gatekeepers, to merge with other platforms and markets, and to enclose their ecosystems.127
Data extraction is part of the business model of every capitalist platform, whether it is a social media platform, a shopping or service platform, or even a software platform that supports a physical product. Data confers competitive advantage, so the imperative is always to collect more. And finding and holding an as yet uncolonized human activity is like finding and conquering a previously undiscovered land or people. As Nick Srnicek, author of Platform Capitalism, has observed, “Whoever gets there first and holds them gets their resources—in this case, their data riches.”128
Calls for platforms like Facebook to respect users’ privacy miss the point that getting around privacy is at the core of the business model. These platforms will constantly push the envelope of what is socially and legally acceptable when it comes to data collection. It should be no surprise that their common pattern is to go ahead and collect data without consulting users, to apologize and equivocate where there are complaints, and to roll back collection only if there is very strong pushback.129 As any Facebook user may have observed, the company has unilaterally and repeatedly changed its privacy settings since its inception, exploiting more intimate tranches of personal data each time.130
Extraction of users’ free labor—creating “likes,” reviews, and comments—is also part of many platforms’ business model. Value is extracted from workers as well. “Crowd-sourcing” platforms like Uber and Airbnb commodify the time and possessions of workers while treating them as independent contractors instead of employees. Platforms like TaskRabbit allow workers to make a pittance fetching and delivering things, not unlike lackeys in earlier periods of historic inequality. Amazon’s Mechanical Turk (MTurk) platform pays workers to complete microtasks that computers currently can’t do perfectly, like identifying numbers or subject matter from images or transcribing sentences from audio. Most of Amazon’s “click workers” make less than $2 per hour, yet the companies argue “click work” offers choice, autonomy, and equal opportunity.131 By intention, there are no guaranteed hours, benefits, minimum wage levels, or employment standards for any of these workers unless they can successfully litigate under labor legislation to have these apply. The final indignity will come when these workers are ultimately replaced by machines, as when Uber drivers are replaced by the driverless cars being developed by Uber investor Google.132
Platform capitalism leads to a convergence of markets. Tech insiders say that digital companies have little to leverage in terms of user data and share price unless their platform takes over the whole sector of, say, book selling, taxi service, online shopping, search, or mapping. Companies therefore aim to aggressively buy up, outpace, or crowd out competitors. The lack of diversity perpetuates itself. Alphabet (Google), Amazon, Apple, and Facebook dominate their respective markets. Figures from different sources vary, but in 2017, approximately 80 percent of the world’s searches were done on Google, 77 percent of mobile social media occurred on Facebook, and Amazon had cornered the ebook market.133 These companies are now competing to take over each other’s sectors. Apple recently attempted to compete with Google’s grip on mapping; Google is currently trying to use its search and mapping monopoly to expand into a shopping and smart car platform; Amazon is leveraging its monopoly in book selling to become the monopoly platform for retail and delivery of all consumer goods, as well as a dominant cloud service.134
Increasingly, as Harry Halpin described, digital platforms are enclosing their territory and centralizing the web experience. Business models known as “walled gardens” or “vertical integration” restrict users to a company’s services, applications, and devices, preventing interoperability. The present goal of companies like Amazon, Apple, and Google is to become all-encompassing proprietary environments. Facebook, which serves the dependent user with social media, email, news, and shopping functions all from one platform, has already succeeded in convincing many people that it is the internet.135
Although digital platforms can attract huge amounts of equity investment if they look like they are going to dominate the attention of people in a new area of activity, a lot of these platforms are not very profitable in terms of producing income. As Douglas Rushkoff shows in his book Throwing Rocks at the Google Bus: How Growth Became the Enemy of Prosperity, “At the time of its billion-dollar purchase by Facebook, Instagram had raised $57.3 million, was valued at $500 million, and had generated $0 in revenue. … Likewise, Tumblr netted negative $13 million the year it was purchased by Yahoo for $1.1 billion. … Snapchat, a social media app, turned down a $3 billion offer from Facebook—all for its users’ 400 million daily, dissolving pings.”136
The endgame of this business model is an economy based largely on marketing and advertising. Yet the entire sector of advertising, marketing, public relations, and associated research accounts for less than 5 percent of the US gross domestic product (GDP)137 and 0.7 percent of the gross world product (GWP).138 Growing impediments to the sector are runaway “bot” activity and the rising use of ad blockers, which grew 41 percent in 2014 and 96 percent in 2015.139
Given that the income stream for all of these companies must ultimately come out of the same 5 percent of GDP that marketing and advertising make up, there seems to be no way they can justify their share prices.140 In that light, every start-up sale is a pumped-up speculation on whether a new app will corner the most user attention and data. The developers walk away with most of the money.
This is a large problem for the economy. Much of the equity in digital platform companies’ shares is essentially “dead,” or noncirculating, capital. These companies and their founders have more money than they can usefully spend. Together, Alphabet, Amazon, Apple, and Facebook have a market value of approximately $3.5 trillion.141 In 2015, Bloomberg News reported that “Apple Inc.’s cash topped $200 billion for the first time as the portion of money held abroad rose to almost 90%, putting more pressure on Chief Executive Officer Tim Cook to find a way to use the funds without incurring US taxes.”142
Hackers know well that as monopoly platforms take over a sector, they make it hard for others to exchange value and very difficult to introduce alternative business models and platforms.
Another large problem for the economy is that although digital technology creates new kinds of jobs, it is also killing jobs at a startling rate. New apps do not just tend to become monopolies when they dominate; they often collapse entire sectors of economic activity. The gutting of the music and media industries by digital platforms attests to this fact, as do the suicides of taxi drivers.143 In many industries, digitized systems allow complex global supply chains to exploit the cheapest labor. Artificial intelligence is automating manual work and making inroads into higher-skilled work. Some experts estimate that job losses to technology over the next three decades could be as high as 70 percent and that unemployment could rise to about 50 percent.144 Eric Schmidt, then Google’s executive chairman, warned the World Economic Forum at Davos in 2014 that many professional, middle-class jobs that so far have been considered immune from automation will disappear.
In short, a dominant platform can collapse an industry with, as Douglas Rushkoff puts it, “nothing to show for it but shares of stock and no earnings. … A couple of winners take it all while everyone else gets nothing. … Total economic activity decreases as money is sucked up into share value.”145
Some commentators believe that as profitability starts to become more of an issue, most of these digital platforms will end up charging some kind of rent or fee for service.146 This might be a storage fee for every business that uses Amazon’s cloud service, a cut of every financial transaction, a license fee for car makers using Google’s driverless platforms, and a massive system of micropayments for social media, news, and other services. As the Internet of Things allows every physical thing (vehicles, roads, fridges, doors, trash bins, toilets) to be turned into a service, companies providing the software might charge micropayments per use, in addition to license and software update fees.147
As suggested earlier, not everyone will be able to afford these charges. Together with unemployment, underemployment, stagnant wages, and soaring costs of living (for housing, healthcare, education, and tech), these developments could create a gaping digital divide.148
What about charity? Aren’t tech titans giving a lot back to the economy through charity? Even that is doubtful. “Good people of San Francisco, let’s talk about the Google buses. Why do we hate them so?” began a 2013 opinion piece in the San Francisco Chronicle titled “Why We’re Invisible to Google Bus Riders.”149 In the San Francisco Bay Area, the tech elite are a separate class, living in a separate reality from the rest of the population. In this region—where Facebook’s founder Mark Zuckerberg’s overall wealth was $73.1 billion in 2017 and that of Amazon’s founder, Jeff Bezos, was $85 billion;150 where $100,000 Teslas are common; and where there is consumer demand for “raw water” at $37 a bottle151—the Silicon Valley Community Foundation is one example of the tech titans’ brand of beneficence. With reportedly $13.5 billion of assets under management as of February 2018—surpassing the Ford Foundation as the third-largest philanthropy in the United States152—the foundation is a donor-advised fund. Local nonprofits call the foundation the “Death Star” and the “Black Hole” because, they say, “It is so hard to get money out of it.”153 Loopholes in the law allow donor-advised funds to avoid rules that make charities pay out a minimum percentage of their funds each year to support actual charitable work. “They got so drunk on the idea of growth that they lost track of anything smacking of mission,” one nonprofit consultant told an Atlantic reporter, speaking of the foundation.154
The civics lesson in the new trust-busting wars is that digital capitalism, as it is currently practiced, does not serve the commonwealth. It is likely not even sustainable. Left unchecked, it will continue to generate monopolies, gross inequality, and economies that do not work for the majority of people. As any student of history knows, this is a dangerous prospect for democracy.
There is growing sentiment in Western democracies that these early twenty-first-century digital monopolies should be split up or regulated in the public interest—and some of them turned into publicly owned utilities. Because the advantages that network effects, access to data, and path dependency give to established platforms make it almost impossible for new entrants to take on a monopoly like Google,155 breaking up these large corporations will have to be aided by the law. Hackers cannot take on the forces of monopolization alone.
During the late nineteenth century, hundreds of small railroads in the United States were being bought and consolidated by large companies. In answer to concerns about the concentration of power and anticompetitive practices in the railroad, banking, insurance, agriculture, and oil sectors, Congress passed the first antimonopoly (antitrust) legislation nearly unanimously in 1890. The Sherman Antitrust Act, named after Senator John Sherman, is still the core antitrust statute in the United States today. Sherman argued, “If we will not endure a king as a political power, we should not endure a king over the production, transportation, and sale of any of the necessaries of life.” The Sherman Act makes restraint of trade and the formation of a monopoly illegal and gives the US Department of Justice authority to obtain remedies in federal court. Later legislation set up the Federal Trade Commission, which allowed for the administrative enforcement of the act.156
During the Progressive Era, the administrations of Teddy Roosevelt and William Howard Taft used the Sherman Antitrust Act to sue forty-five and seventy-five companies, respectively. One of the best-known trusts busted up by Roosevelt was Standard Oil, which the US Supreme Court ordered broken into thirty-three separate companies.157 Some monopolies, such as telephone systems, were allowed to survive because, by their nature, their size served the public interest, but these were then regulated as public utilities. Other infrastructure, like roads and bridges, was made fully public or state owned. There was a general consensus that competition in industry and public control or ownership of basic infrastructure were necessary for a healthy economy, even a capitalist one.
There have been some early moves in Europe to “trust bust” digital Goliaths. The European Commission has brought and won cases against Apple, IBM, and Microsoft, for example. In 2017, the EU fined Google €2.4 billion for its anticompetitive practice of ranking its own services higher than those of others in its search results.158
European laws generally make it easier to prove an antitrust violation than laws in the United States do, but Germany’s antitrust legislation is cutting-edge. A coalition government agreement made in Germany in 2018 included an update of antitrust laws, called “Competition 4.0.” The update is designed for the digital economy: it recognizes that the measure of market dominance among most digital platforms is data rather than prices. Data dominance leads to lack of competition and abuse of consumers. Germany’s Federal Cartel Office was the first to scrutinize Facebook’s monopoly position from a data-gathering perspective. Its preliminary administrative finding was that Facebook was “abusing [its] dominant position by using its social network … to limitlessly amass every kind of data.” It noted that Facebook gathered its users’ data from all of its products, including the messaging service WhatsApp and the photo-sharing service Instagram. It found that Facebook also gathered information through software embedded on third-party websites, known as Facebook APIs (application programming interfaces). Any website that had a Facebook button reported user presence back to Facebook.159
European regulators have never gone after an American company to break it up, and that remains unlikely. But other trust-busting orders could be made. European regulators could block future acquisitions. Had Facebook been prohibited from buying WhatsApp and Instagram, those platforms would now be competitors of the company. At one time, Google was buying a startup every week. Regulators could categorize some of these companies as “essential services” and subject them to the same constraints as power companies and railways. That might mean requiring more public responsibility, more transparency, and caps on profits—and also telling social media companies like Facebook they must allow users to move their data over to competitors. Data portability could be key to cutting many platform monopolies down to a democratically acceptable size. Although it would likely change the social media business model to user-pay, it would certainly increase user self-determination and market competition. Finally, the EU and European national regulators could prosecute Facebook under antitrust legislation. Remedies, short of breakup, could include an order requiring the company to change its terms of service and imposing fines of up to 10 percent of the company’s global turnover.160
The EU General Data Protection Regulation (GDPR), which came into force on May 25, 2018, will also have a meaningful impact on platform monopoly power because it will restrict the circumstances under which, and the purposes for which, companies can collect personal data.161
In the United States, the Federal Trade Commission (FTC) could impose sanctions on monopoly platforms, challenge acquisition deals, and potentially impose privacy controls. It is the government agency that could break up an American transnational monopoly for anticompetitive conduct. But to date, it has not done so.162 Silicon Valley has had close ties with the Democratic Party since Bill Clinton’s presidency. Throughout the Obama administration and into Hillary Clinton’s campaign for the presidency, the party kept insisting that tech was the economic engine that would move the country forward. Speaking of the scandal around Facebook’s practices in the Cambridge Analytica affair, Chuck Schumer, the Senate leader for the Democratic Party, suggested self-regulation was the answer: “Facebook has an obligation to try and deal with it.”163 Not long after that, posters with the hashtag #ZuckSchumer, some of them mashing Schumer’s face with that of Facebook CEO Mark Zuckerberg, began showing up in New York streets.164
Winston Churchill called his book on the forces that led up to World War II The Gathering Storm. The metaphor and the history of that time are resonant now. In the decades preceding the war, the economic system seemed to be failing people. Despite the trust busting that took place around the turn of the century, privilege remained entrenched. Large concentrations of power, disparities of wealth, and fragile financial systems contributed to the disastrous stock market crash of 1929 that decimated world economies. The scourge of unemployment that followed fueled popular anger and nationalist sentiment. Propaganda stoked extremism. Governments were sclerotic, often class-bound, and incapable of responding effectively to crises they did not understand well.
Incapable of enacting the reforms needed to bring in a new order, democracies floundered. Some Western democracies faced authoritarianism as demagogues harnessed the rising popular unrest.165 Historian Timothy Snyder has argued that fascism and communism were both responses to the globalization that took place in the second half of the nineteenth century with colonization and expansion of trade. Fascism and communism responded “to the real and perceived inequalities [globalization] created, and the apparent helplessness of the democracies in addressing them. They put a face on globalization, arguing that its complex challenges were the result of a conspiracy against the nation.”166 Fascists, in particular, embraced nostalgic myths of a glorious past “articulated by leaders who claimed to give voice to the people.”167 “Make Germany great again” was a stated aspiration of Hitler’s and of the Nazi propaganda machine.168
The descent into authoritarianism in Germany was incremental. In Milton Mayer’s recently republished classic They Thought They Were Free: The Germans, 1933–45, one of the first accounts of ordinary Germans’ experiences living through Germany’s descent into fascism,169 a colleague of Mayer’s, a philologist, observed, “We had no time to think about these dreadful things that were growing, little by little, all around us.” The Nazi regime, he said, perfected its method of diverting people through “endless dramas” involving real or imagined enemies, and the people were gradually habituated “to being governed by surprise.” “Each step was so small, so inconsequential, so well explained or, on occasion, ‘regretted,’” he noted, that people were no more able to see it “developing from day to day than a farmer in his field sees the corn growing. One day it is over his head.”170
Democracy failed in Europe in the 1930s, and it could fail in Europe and North America today.
Like the governments of the 1930s, Western governments today seem incapable of responding effectively to crises. Globalization in the late twentieth and early twenty-first centuries has created new winners and losers and new inequalities. The economic system built over the past several decades is extremely fragile. On top of this, the changes brought by digital technology have rapidly exacerbated inequality. In a few short years, the internet has become the infrastructure for our social, economic, cultural, and political interactions, yet few people inside government understand its emerging policy problems well.171
At the Chaos Computer Club’s annual congress in Berlin, around Christmas 2010, not long after the 2008 financial crash and still in the dawn of the digital era, Dutch hacker Rop Gonggrijp summed up the evident paralysis: “Most of today’s politicians realize that nobody in their ministries, or any of their expensive consultants, can tell them what is going on any more. They have a steering wheel in their hands without a clue what—if anything—it is connected to. Our leaders are reassuring us that the ship will certainly survive the growing storm. But on closer inspection they are either quietly pocketing the silverware or discreetly making their way to the lifeboats.”172