In fact, HackingTeam later told an investigative reporter it could “beam” its Remote Control System into a computer “over a Wi-Fi network.”1 This was in addition to more conventional methods of access used by the group, such as implanting malicious code with a USB stick or spear phishing (tricking a user into clicking on an infected link or attachment). For spear phishing, HackingTeam used authentic documents provided by its clients, including invitations to parties and election papers.2
Another method that critics of HackingTeam believe the group was using (and which the group referred to in its emails) is called “network injection.”3 This involves pinpointing where a target is browsing on the internet and sending a doctored version of a webpage the person has requested, so that while, say, a favorite Italian TV show is streaming as intended in the foreground, malicious code is burrowing its way into the target’s computer in the background.
As Christian Heck, the artist, was trying to convey to me in our dawn Crypto Party the day before I met the Hermes group, as wonderful as the internet is, it is also the twenty-first century’s ultimate weapon delivery system. It can deliver an anonymous attack on you or the Pentagon from anyone anywhere in the world. And you cannot protect yourself just by avoiding funny-looking emails and refusing to give your password to strangers. This is why people like the Berlin exiles go to such trouble to do their work on air-gapped computers (computers that have never been used to access the internet) and why Julian Assange, as a teenage hacker, kept his code hidden in a beehive.4
Ron Deibert, founder of the Citizen Lab at the University of Toronto, has said the easiest way for an attacker to install malware on your computer is to get you to download the apps and regular software updates you need because users will do this with little guarantee they are coming direct from a trusted source and are not infected with malicious code.5 Indeed, automatic software updates may be precisely what HackingTeam used to “beam” into targeted computers.
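To see why unauthenticated updates are such an attractive vector, consider what a careful user would in principle do before installing anything: compare the downloaded file against a checksum the vendor publishes somewhere the attacker does not control. The following is a minimal Python sketch of that check; the file name and the checksum value are hypothetical placeholders, not real release data.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a downloaded file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical values: the installer you downloaded and the checksum the vendor
# publishes out of band (on an HTTPS page or in signed release notes).
downloaded_installer = "update-4.2.1.exe"
published_checksum = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

if sha256_of(downloaded_installer) != published_checksum:
    raise SystemExit("Checksum mismatch: do not install this update.")
print("Checksum matches the published value.")
```

Almost nobody performs this step, and many update channels neither sign nor pin what they deliver, which is exactly the gap Deibert is pointing to.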
The bottom line, as Bruce Schneier, another well-known security expert, has said, is that if the NSA wants access into your computer, it is in.6 Even with free software or open-source software that you can study, watch over, or have checked out by a forensic expert, it may be difficult to detect the intrusion. Researchers at the French nonprofit Exodus Privacy, for example, have found that smartphones are infested with commercial tracking software from weather, flashlight, rideshare, and dating apps, collecting huge amounts of information from users. But to find the trackers, Exodus researchers had to build a custom auditing platform that searched through the apps for “digital signatures” distilled from known trackers: “A signature might be a telltale set of keywords or string of bytes found in an app file, or a mathematically derived ‘hash’ summary of the file itself.”7 The Yale Privacy Lab has tried to replicate Exodus Privacy’s results with only limited success.8 To find trackers, one has to know where to look and how to distinguish them.
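The Exodus approach can be pictured in a few lines of code. Here is a simplified Python sketch of signature scanning: the “tracker” strings and hash below are invented placeholders rather than real signatures, and a production auditing platform does far more, but the principle of matching telltale byte strings or file hashes inside an app package is the same.

```python
import hashlib
import zipfile

# Hypothetical "signatures": telltale class-name strings associated with known
# trackers, plus the hash of a known tracker library file (placeholder value).
TRACKER_STRINGS = [b"com/example/adtracker", b"com/example/locationbeacon"]
KNOWN_TRACKER_HASHES = {"3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"}

def scan_apk(path: str) -> list[str]:
    """Flag files inside an Android APK (a zip archive) matching a string or hash signature."""
    hits = []
    with zipfile.ZipFile(path) as apk:
        for name in apk.namelist():
            data = apk.read(name)
            digest = hashlib.sha256(data).hexdigest()
            if digest in KNOWN_TRACKER_HASHES or any(s in data for s in TRACKER_STRINGS):
                hits.append(name)
    return hits

# print(scan_apk("some_app.apk"))  # hypothetical APK file on disk
```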
Determining whether your computer has been infected with malware you have received through the internet is hard, but it is even harder to determine whether data you send over the internet is arriving securely at its destination. As Harry Halpin has pointed out, the internet was not designed to move information securely. It was built without any strong appreciation for provable security or advanced notion of how to achieve it. As a legacy technology, it has more pitfalls than a game of snakes and ladders.
My conversation with Hermes has prompted me to do more research into the problems hackers and users face in trying to achieve security or self-determination. These problems are fairly technical but are enlightening when you understand them.
To appreciate in broad terms the problem of designing “privacy for the weak”—that is, for the ordinary user—recall the hourglass metaphor used to depict the internet. This is the shape of the snakes and ladders board game you are playing. At the bottom is the physical layer—the wires and the routers that connect your computer to the server of your internet service provider (ISP) and the server to the continental routing exchanges and intercontinental cables that run along the ocean floors. Remember what Edward Snowden and others have said about the state surveillance operations at that physical layer of the internet and the cooperation among the “Five Eyes” states (Australia, Canada, New Zealand, the United Kingdom, and the United States), the spying alliance that runs the ECHELON program. This physical layer involves such a major infrastructure investment9 that it probably can be fixed only at the state level, with national governments committing not to spy on their populations and preventing, where they can, the routing of communications through states that will spy on them.
On top of the physical layer of the internet is the applications layer, the layer of software applications that anyone can build to run on top of the physical layer. These include applications such as email services (Google’s Gmail and Microsoft’s Outlook, for example), web browsers (Apple’s Safari, Google’s Chrome, and Microsoft’s Internet Explorer), search engines (Bing, DuckDuckGo, and Google), and the multitude of other apps that you sign up for or download (Bitcoin, Facebook, instant messaging, Lyft, Twitter, voice over internet protocol or VoIP, and Uber).
The Internet Engineering Task Force (IETF)—the body that develops and promotes voluntary internet standards—provided an initial set of protocols (coded rules) that covered the core functionalities of the early internet. These included protocols for functions like file transfer (file transfer protocol or FTP), electronic mail transport (simple mail transfer protocol or SMTP), remote login to hosts (telnet), host initialization (Bootstrap protocol or BOOTP), and networking support (domain name system or DNS). Other well-known protocols include hypertext transfer protocol (HTTP) and early protocols for applying encryption, including secure sockets layer (SSL) and transport layer security (TLS). These latter two protocols encrypt communications between a client’s web browser and a server (say, the server used by Ashley Madison, the clandestine dating service). SSL is now considered so flawed that the IETF has prohibited its use.
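In practice, that client-to-server encryption is what happens every time a program opens an HTTPS connection. A minimal Python sketch (using example.com as a stand-in server) shows the handshake from the client’s side; note that current libraries will not even negotiate the prohibited SSL versions.

```python
import socket
import ssl

# The default context verifies the server's certificate chain and refuses the
# broken SSL 2/3 protocols; we additionally insist on TLS 1.2 or newer.
context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection(("example.com", 443)) as raw:
    with context.wrap_socket(raw, server_hostname="example.com") as tls:
        print("Negotiated protocol:", tls.version())   # e.g. "TLSv1.3"
        print("Cipher suite:", tls.cipher()[0])
```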
It is all snakes at the applications layer, for several reasons. First, all commercial applications today want to track you and gather as much information as they can about you. Second, there is a proliferation of protocols at this level, dating from all eras of the internet. They all remain out there “in the wild,” and they have many flaws. The complexity of the terrain—the numerous predatory apps and the patchwork of inadequately secure protocols one might encounter, coupled with the number of applications people now use—makes security “forward planning” and “provability” a developer’s nightmare.
The narrow waist, or transport layer, of the internet is the simple TCP/IP (transmission control protocol/internet protocol) protocol suite invented by Vint Cerf and Bob Kahn for connecting networks to networks, the layer that essentially establishes the internet. Unlike the telephone, which works by sending information over a dedicated open line between two parties, the internet sends information in spurts from computer to computer as capacity becomes available. The information is broken up into “packets,” or chunks of digital data, that are reassembled when they reach their destination. This is called “packet switching.” TCP/IP performs the task of delivering packets from the source host (the server of your email ISP, which might be Bell, Google, or Microsoft) across network boundaries to the destination host (the server of your best friend’s email ISP, for example, or the server of one of your favorite websites), based on the IP addresses in the “packet headers.” For this purpose, the IP protocol defines the packet structures that encapsulate the data to be delivered. It also defines addressing methods.
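A toy model helps make packet switching concrete. The Python sketch below, with made-up addresses, breaks a message into packets that each carry source and destination addresses in a header, shuffles them to mimic packets arriving out of order over different routes, and reassembles them by sequence number; reordering and reassembly is roughly the job TCP performs on top of IP.

```python
from dataclasses import dataclass
import random

@dataclass
class Packet:
    src: str        # source IP address, carried in the packet header
    dst: str        # destination IP address
    seq: int        # sequence number used to reorder packets on arrival
    payload: bytes  # a chunk of the original message

def packetize(message: bytes, src: str, dst: str, size: int = 8) -> list[Packet]:
    """Break a message into fixed-size packets, each with its own header."""
    count = (len(message) + size - 1) // size
    return [Packet(src, dst, i, message[i * size:(i + 1) * size]) for i in range(count)]

packets = packetize(b"the internet moves data in little bursts", "203.0.113.5", "198.51.100.7")
random.shuffle(packets)  # packets may travel different routes and arrive out of order

# The destination host puts the chunks back in order by sequence number.
reassembled = b"".join(p.payload for p in sorted(packets, key=lambda p: p.seq))
print(reassembled.decode())
```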
As Harry explained, the internet is designed so that the server (Bell’s, Google’s, Microsoft’s, Ashley Madison’s,10 and for that matter your workplace’s, your government’s, or the server of another government) has complete control over your individual device, and you have to trust whatever server you are dealing with. If you are sending emails and you and your correspondent do not own and control email servers in each of your homes, this is a problem. The user is always transparent to the server. You might encrypt your email messages, but the server sees the metadata. Bell sees your IP address, the IP address of the person you are sending an email to, the time, the size of the message, the attachments, the subject line, the browser you are using, and the browser’s configuration. And it logs all this information. This means that your commercial service provider can turn over your full records whenever a government agency requests them.
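A small sketch makes the metadata problem concrete. Even if the body of an email is a PGP-encrypted blob, everything in the headers travels in the clear so that the servers can route and log it; the addresses and subject below are, of course, hypothetical.

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.net"
msg["Subject"] = "Saturday's meeting point"          # the subject line is metadata
msg["Date"] = "Sat, 15 Aug 2015 06:30:00 +0200"
msg.set_content("-----BEGIN PGP MESSAGE-----\n...encrypted body...\n-----END PGP MESSAGE-----")

# Even with an encrypted body, everything the transporting servers need to
# route the message is visible to them and can be logged.
for header, value in msg.items():
    print(f"{header}: {value}")
print("body length:", len(msg.get_content()), "characters")
```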
You have no control over what the server sends you, including the automatic updates for your apps, malware from your national security agency, ads, and proprietary code from commercial applications you can’t see into (remember proprietary code is closed and nontransparent) and perhaps did not even ask to have in your device.
In the board game of snakes and ladders, there is one long, giant snake you want to avoid landing on because wherever you encounter it, it slides you all the way back down to the beginning of the game. This need-to-trust-the-server problem is the giant snake in the game you are playing. Developers describe it as an authentication and web-of-trust problem.
Above the applications layer of the internet is a content layer of human language and images, and above that is a notional “social layer” of the behaviors of users—why they adopt a privacy-sucking app like Facebook, use the internet to stream movies, talk by text instead of email, or put up with a large corporate business model instead of adopting smaller, cooperative models.
The design problems are, broadly, how to build the ladders in this game of snakes and ladders. How do we help the user move up toward the goal of privacy and self-determination? It is a question with many facets:
How much progress have hackers made on these design problems? I’m watching an online privacy workshop for expert hackers from the second day of the Chaos Communication Camp, and Harry Halpin is on the panel.
“Okay,” says Harry. “The moment you’ve all been waiting for. The ugliest slide in the entire camp, and the ugly slide is about an ugly reality. Twenty-five years after the deployment of OpenPGP on the internet, we still do not have virtually any messages encrypted end-to-end, and this is a massive failure of our community. And basically, [we need] to understand how such a large failure happened and what are the different angles for how we can fix it.”
Harry clicks to change the PowerPoint slide on the large screen behind him. It is a photo of the reality TV psychologist Dr. Phil, with the caption, “Oh, so you’re using encrypted email? How’s that working for you?”
Hackers won the cryptowars of the 1990s and created a cryptotool, PGP (later standardized as OpenPGP), that was stronger and more user friendly than anything that went before. But twenty-five years after Phil Zimmermann released PGP, apparently almost no one uses it. Hackers and hard-core dissidents do, and the financial industry and the military have their own highly secure encrypted systems, but the vast majority of electronic communication is unencrypted.
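Using OpenPGP today typically means driving GnuPG from whatever mail client or script you have at hand. As a rough illustration, here is a Python sketch using the python-gnupg wrapper; the recipient address is hypothetical, and the step the sketch glosses over (obtaining and verifying your correspondent’s public key) is precisely the part most users never manage.

```python
import gnupg  # the python-gnupg package, which drives a local GnuPG installation

gpg = gnupg.GPG()  # uses whatever keys are already in your GnuPG keyring

# Hypothetical recipient: encryption only works if you already hold a copy of
# your correspondent's public key and trust that it really belongs to them.
recipient = "friend@example.org"
encrypted = gpg.encrypt("Meet at the Kongresshalle at noon.", recipient)

if encrypted.ok:
    print(str(encrypted))   # an ASCII-armored PGP block, safe to paste into any email body
else:
    print("Encryption failed:", encrypted.status)  # e.g. no public key found for the recipient
```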
Harry has told me that, in the world of security, two concepts have proven to be flawless—encryption and Tor. After more than twenty years in deployment, onion routing11 and encryption have been proven to work. The unfortunate drawbacks are that they are not easy to use and can be defeated when used on the “legacy” internet.
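Onion routing itself is conceptually simple, which is part of why it has held up. Here is a toy Python sketch of the layering idea, using three invented relay keys; the real Tor network builds circuits and negotiates ephemeral keys with each relay, but the peel-one-layer-per-hop principle is the same.

```python
from cryptography.fernet import Fernet

# Three hypothetical relays, each holding its own key. In the real Tor network
# the client negotiates these keys with the relays; here we simply invent them.
relay_keys = [Fernet.generate_key() for _ in range(3)]

# The sender wraps the message in three layers of encryption, innermost first.
onion = b"request: http://example.com/"
for key in relay_keys:
    onion = Fernet(key).encrypt(onion)

# Each relay peels exactly one layer and learns only where to send the result,
# never the whole path or (except at the exit) the final content.
for key in reversed(relay_keys):
    onion = Fernet(key).decrypt(onion)
print(onion.decode())
```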
Harry flips to his next PowerPoint slide, and the question “Why wasn’t the net encrypted by default?” appears on the screen. Communications would be a lot more secure if the internet (content and metadata) were encrypted by default. Then add-on tools like OpenPGP and Tor would not be needed.
Vint Cerf has said that he worked with the NSA on designing a secured version of the internet, but the security technology they were using at the time was classified, and he could not share it with his colleagues. On the 1983 “Flag Day” for the launch of TCP/IP and what would become the public internet, there was no encryption. If Cerf could start over again, he would introduce strong cryptography and authentication into the system.12
“What we’ve had to do,” Harry is saying, “we’ve basically had to bolt on the crypto after we’ve had massive deployment. A lot of people are trying to do this now, post-Snowden, but the fact of the matter is we’ve done it before. As you can see, every major protocol [from the early days of the internet] had some kind of crypto slapped on it in a roughshod manner after it was launched.”
Harry goes to the next slide: “Designing protocols is hard.”
A lot of the “bolt-on” cryptography in algorithms and protocols like RSA (Rivest-Shamir-Adleman), SSL (secure sockets layer), and SMTP (simple mail transfer protocol) has become weak and even obsolete as the computational power of computers has skyrocketed. As computers become powerful enough to break older encryption algorithms, developers have realized that new encryption algorithms need to anticipate the future growth of computing power and not underestimate it. Ultimately, there may be quantum computers that will increase computational power beyond anything we have ever known.
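A back-of-the-envelope calculation shows why key lengths have to anticipate future computing power. The attacker’s speed below is an assumed figure, not a measurement, but the exponential arithmetic is the point.

```python
# A toy calculation of why key lengths must anticipate future computing power.
# The attacker's rate (10**12 guesses per second) is an assumed figure.

SECONDS_PER_YEAR = 31_557_600

def years_to_search(key_bits: int, guesses_per_second: float) -> float:
    """Years needed to try half the keyspace at a fixed guessing rate."""
    return (2 ** (key_bits - 1)) / guesses_per_second / SECONDS_PER_YEAR

for bits in (56, 80, 128, 256):
    print(f"{bits:3d}-bit key: ~{years_to_search(bits, 1e12):.2e} years")

# A 56-bit key (old DES) falls in a matter of hours at this rate; 128- and
# 256-bit keys remain far out of reach of brute force, which is why practical
# attacks aim at flaws in algorithms, implementations, and protocols instead.
```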
“Algorithm agility has been, of course, a mixed bag at best,” Harry tells his audience. “It essentially allows a lot of downgrade attacks, and we have a lot of legacy algorithms—RSA 1.1.5 and whatnot—still in the wild. And while now the standards community is trying to upgrade all these algorithms—trying to get off of RSA into elliptic curve crypto …—the fact of the matter is that we still need [more] algorithm agility because in ten, fifteen, maybe twenty years (twenty years, of course, being we don’t know what the fuck we’re talking about, but ten years being a clear and present danger), we do, of course, have quantum computation coming up, so we have to start thinking about getting postquantum algorithms into our core protocol.
“But it doesn’t matter what [crypto] algorithms you put into your core protocol if your ‘actual state’ machine [for example, your laptop with its vulnerable Linux kernel] and your actual ability to prove the security of your protocol are flawed from the beginning.” As an example, he points to a diagram of the “man-in-the-middle” attack made on the TLS encryption protocol: “This is, of course, the TLS triple handshake attack, and it’s a kind of miracle that TLS worked as well as it did, but when you really get down to it, when you bolt this crypto on after the protocols are released into the wild, you will, of course, open yourself, by sheer virtue of complexity, to all sorts of attacks.”
If you look on Wikipedia, you can see the array of attacks made on the TLS and SSL protocols to date—renegotiation attacks, downgrade attacks (FREAK and Logjam attacks), cross-protocol attacks (DROWN attacks), BEAST attacks, CRIME and BREACH attacks, timing attacks on padding, POODLE attacks, RC4 attacks, truncation attacks, Unholy PAC attacks, Sweet32 attacks, and implementation errors (Heartbleed bug, BERserk attack, Cloudflare bug).
Harry flips the slide again: “Designing Privacy-Preserving Protocols Is Even Harder.”
“And while we at this point in the twenty-first century understand how to develop cryptographic protocols, what we don’t understand, what to do at all, is designing privacy-preserving protocols.”
“So typically, if you look at the older protocol stacks on the internet, we were just sort of throwing identifiers around willy-nilly, and we’re seeing more and more breaks [from this flaw]. Even in new pieces of software—for example, in TextSecure, probably one of the best postemail protocols out there—we’re revealing people’s phone numbers in multiuser chats.”
“The thing that all the [old] insecure protocols got right was that they were decentralized: that you could actually run your own [email program], and [the connecting protocols] were run through a standards body and we had some core agreement on them. … And the fact of the matter is that in the move to postemail … where we actually have some chance of getting end-to-end security right, decentralization is not being taken account of, so that you have end-to-end secure silos where you can’t communicate with each other [if both correspondents are not in the silo].”
As I am beginning to understand it, designing decentralized privacy-preserving protocols may be the ultimate design problem hackers now face. No one yet seems to have mastered it. Most postemail solutions to date are centralized because designing centralized systems is far easier. If you want to use the secure text app Signal, the people you are communicating with also have to be using it: it is a centralized system.
But true generativity and user self-determination cannot exist without decentralization. A decentralized system cannot be controlled by business or government.
Hackers agree they can’t fix the decentralized legacy protocols for email, but the question is whether it is worth trying to improve them. “Everybody’s saying email is screwed, that we cannot fix it,” a hacker named Meskio is saying on the video. “I do agree email is screwed. But I’m concerned how much this actually is a problem with OpenPGP or just with the implementations for PGP that we have right now. There are lots of projects which are coming out that are trying to reinvent messaging. It’s amazing that people are experimenting with that. We need it. But the reality is that the majority of people right now use email.”
Hackers are starting so many projects, in fact, that it bears remembering John Perry Barlow’s caveat that “events are boiling up at such a frothy pace that anything I say about current occurrences surely will not obtain by the time you read this. The road from here is certain to fork many times.”
Some are doing their best to ameliorate email. Others are chucking email and working on new kinds of communications. There is LEAP (the LEAP Encryption Access Project), trying to simplify key management; Mailpile, which is in a good state of beta; Pixelated; the experimental Pond; Memory Hole; CONIKS; Whiteout; and Signal/TextSecure, generally regarded as the most usable postemail solution to date.
But this plethora of projects raises another problem—the proliferation of standards. “So what I’d like to ask for people to do,” Harry says, “before we go on to the hard-core problems: Everyone’s producing their own protocols, people aren’t cooperating properly. We’ll go into this in more detail, but effectively there are some places where you can really make a difference in standardizing.”
Gus, a female hacker on the panel, takes over. She is from the group Simply Secure. There are too many tiny teams, she says. Please find existing projects and hook up with them. Then she outlines some common dilemmas hackers face in designing security. Support experts or newcomers? Educate users, or just make it work? Gather metrics, or respect your users’ privacy? Create new apps, or work to fix existing apps in widespread use? (The existing app WhatsApp, for example, was widely used by youth in developing countries, including by many Arab Spring dissidents. Unfortunately, it was bought up by Facebook in 2014.) Ideal security or ease of use?
“There are really just a couple of threat models: either you’re facing Mossad, or you’re not facing Mossad,” she tells the audience. “So a scrambled keyboard might be over the top.”
Here, Gus really puts her finger on the most pressing design problem—usability. No tool, no matter how good, is going to be widely adopted by ordinary users if they don’t find it easy to use.
Harry flashes another slide: “The net needs you!” It has a drawing of Edward Snowden with an Uncle Sam goatee (or is it day-old stubble?) blowing a whistle and pointing at the viewer. A list of current initiatives follows:
From his place of exile somewhere in Russia, Edward Snowden has been helpfully producing his own privacy-preserving tools for users and promoting some of the best ones made by other hackers. He has coinvented a cell phone case that monitors your cellular, GPS, WiFi, and Bluetooth connections and shows when your device leaks data.13 And he is developing an open-source app for Android called Haven that turns a phone into a sentry for a laptop. Using the camera and other sensors in a mobile phone to log changes in a room (sound, light, and motion), Haven can detect if your computer has been physically breached by an intruder in your absence.14
Harry picks up the mike again. “Modern Crypto is where most of the good postemail discussions are happening. If you’re interested in getting PGP working, the IETF OpenPGP working group chaired by the wonderful DKG has finally reopened. W3C is looking at how we could make JavaScript not such a nightmare in the web security IG. And of course, there’s a huge policy debate. You can say, ‘This is just solutionism’—that we’re trying to solve mass surveillance by just throwing out protocols that are secure and encrypted and privacy-preserving. But you know, if you want to try solving the laws on this, good fucking luck.” He holds up his palms to the audience with a big shrug and a good-natured smile.
The alternative to all this retrofitting is to start fresh and build a new internet. But who will succeed in doing that?
DARPA (the Defense Advanced Research Projects Agency of the US Department of Defense) has spent over $100 million on a “Clean Slate” initiative to solve the technical issues “not fully appreciated” during the early development of the internet.15
But someone else is working on a new internet for the twenty-first century—the Chaos Computer Club. This is their take on the matter:
YOU BROKE THE INTERNET
We’ll make ourselves a Gnu one
The summer of 2013 [the summer Edward Snowden made his first revelations] will remain the moment we finally realized how broken the Internet was, and how much this had been abused. At first #youbroketheinternet was a cry of anger, but also a call to code the missing pieces for a new Internet architecture which doesn’t fall to pieces like a house of cards.
If deployed on top of technologies that were not designed for it, end-to-end encryption has proven to be “damn near unusable,” as Edward Snowden put it, let alone forward secure. But there are actually many new tools that have that feature at their foundation. Antiquated protocols like DNS, SMTP, XMPP and X.509 leak so-called metadata, that is the information of who is talking to whom. Also they put user data on servers out of the reach of their owners.
X.509, the certification system behind HTTPS and S/MIME, is broken and allows most governments and even many companies to run man in the middle attacks on you. The trust chain between the cryptography and the domain names is corrupt. Even if DNSSEC and DANE try to improve the security of DNS, they still expose your interest for certain resources. SMTP is so hopeless, you shouldn’t even use it with PGP and XMPP fundamentally has the same problems: as long as all involved servers know all about who is talking to whom, it is already by far too much exposed knowledge—even if the mere encryption of the connection, which again depends on X.509, hasn’t been undermined by a man in the middle, which is hard to find out if there is no human intervention and no reporting to the actual users when servers pass messages between each other.
This is not the way it has to be. We believe a completely new stack of Internet protocols is not only feasible, it already exists to a large extent. It merely needs better attention. Currently the majority of technology people are focused on improving the above mentioned protocols, even though they are broken by design … and can only be improved in some partial aspects. Vastly insufficient compared to what humanity deserves.
Others focus on anarchic technologies designed to undermine democracy, as if it was democracy’s fault that digital offences produce no evidence. They thereby foster platforms for bypassing social obligations like contributing taxes, but taxes are fundamental in order to produce infrastructure and social security for the weak. It is impressive how many people have been fooled into thinking negatively about taxes when they in fact depend on them for their own well-being. Only a tiny minority pays more taxes than it enjoys advantages from them.
This project is for those who want to look into a future of an Internet, which actually respects constitutional principles and returns democracy to a mostly functional condition.
Yet, nothing of this comes about if we don’t provide incentives. Without incentives, Internet companies find no business model in protecting fundamental principles of democracy. Whereas universities have already delivered several decades of excellent research and working prototypes in this field, they aren’t incentivized to produce an actually deployable product. Also standards organizations are powerless if the company that infringes civil rights the most is the one that will dominate the market. In practice, competition is at odds with philanthropy. Currently it takes enthusiasts to fill in the gaps between what researchers and companies have released and turn it into something that actually works for the population. We think we need incentives to polish the protocol stack of a GNU Internet, and by GNU we mean that the involved software needs to be free as in free speech, and that we need regulation to actually deploy an upgrade of the Internet to a version that protects its participants from eavesdropping and social correlation.16
When the #youbroketheinternet project first kicked off, some Chaos Computer Club members believed the EU might grasp its own geopolitical and commercial interests in supporting the creation of a new and secure civilian internet. In addition to countering US subversion of the original internet, the EU might want to support its own digital sector by challenging the dominance of US tech companies. Just as the future of energy might lie in clean tech, so might the digital environment’s future lie in secure, civically enhancing tech. The visionary jurisdiction that seized the advantage of being first mover in this space might trigger the next tech revolution and reap its benefits.17
The CCC made a map of what a new internet might look like. You can see it online: http://youbroketheinternet.org/map. It is a riot of colored, overlapping rectangles running up and across a table that lists different problems to be solved: “Politics & Publicity, Interface & Usability, HTML-based Social App, Native Social Application, Many-to-Many Scalability, One-to-One Application, Hashable Routing, Transports and Mesh Networking, Operating System, Libre Hardware.” There is an angry German bald guy in the middle and the logo “You Broke the Internet,” and the color coding is described as follows:
- Green: Projects that are available today.
- Dark green: Projects that are available but aren’t fully protective of metadata.
- Blue: Projects in development.
- Dark blue: Projects in development which will have little or no protection of metadata (but that doesn’t mean they can’t be an excellent piece in the general puzzle).
- Yellow: Projects that may be okay but depend too much on the security of servers.
- Orange: Products whose end-to-end encrypting client side has been open-sourced but whose server side remains proprietary (still the UIs may be very well worthwhile to re-use).
- Red: Brands that currently occupy the respective layers with unsafe technology.
- Dark red: Possibly cool but unsafe technologies that we need to replace.
Some projects appear on certain layers while leaving out others (in that case the beam passes under the grey box of the layer). The new Internet needs a complete GNU protocol stack equivalent to a connected light green beam across all layers, and then some more aspects that the map does not show.18
Studying the map, last updated in October 2015, induces in me a dull, aching feeling. Despite repeated efforts, I still don’t understand it very well. It seems a monumental project. In fact, this CCC project seems more ambitious than the one to put a hacker on the moon.
But CCC’s members’ belief that the EU might eventually grasp its own geopolitical and commercial interests in shaping a new internet would turn out to be astute. In August 2016, the EU kicked off a major initiative called the “Next Generation Internet” (NGI). And the Europeans were saying a lot of encouraging things, like the NGI should be “user-” and “human-centric”;19 that it needed “to reflect European social and ethical values”;20 and that “trust at a global scale does not come for free: at the heart of sustainable trust lies actual trustworthiness that requires significant investment of time and resources. … Transition at internet scale requires a systemic approach in addressing deep underlying technical issues, creating transition mechanism[s]—as well as (in some cases) changing legal and governance parameters.”21
NGI appears to be a serious societal venture.
The Chaos Computer Club was invited to join the Expert Group at the first Next Generation Internet consultation, convened by the European Commission in late 2016, to discuss the group’s ideas on NGI regulation.22 We may not want Europe taking over the internet any more than Google or the United States, as Harry has observed. But some state investment in the infrastructure, social ideas, research, and projects that hackers would support might be essential to getting a “new internet” off the ground.