NETWORK COMPETITION AND THE CHALLENGES AHEAD
The internet of things will help bring structure to global politics, but we must work for a structure we want. This is a challenging project, but if we don’t take it on, our political lives will become fully structured by algorithms we don’t understand, data flows we don’t manage, and political elites who manipulate us through technology. Since the internet of things is a massive network of people and devices, structural threats will come from competing networks. There are two rival networks that seriously threaten the pax technica: the Chinese internet and the closed, content-driven networks that undermine political equality from within.
The Chinese internet is already the most expensive and elaborate system ever built for suppressing political expression. The Chinese are trying to extend it by exporting their technologies to authoritarian regimes in Asia and Africa. Russia, Iran, and a few other governments are also developing competing network infrastructures. The Chinese government controls the entire network, the network is bounded in surprising ways, and the network can, and does, mobilize to attack other networks.
Within democracies, the real threat comes from campaigns against net neutrality and for new limited-access networks. When technology and content companies make it easier to get certain kinds of content, they create subnetworks of social and cultural capital. Putting research ideas behind paywalls—especially research that has been publicly funded—also creates competing networks of information. Political parties love to keep the party faithful in bounded information networks. So while our main rivals are external, there are also internal threats to the strength of our information networks. Information technology is only a means to a political end. However, the internet of things will be the most important means.
My Girlfriend Went Shopping . . . in China
The Chinese have a deliberate and vigorous strategy for combating the empire of the pax technica. When China’s political elites met in 2013 for a national congress to choose their leaders, propaganda from Beijing dominated official news feeds. Yet over social media, it was a boyfriend’s complaints about his girlfriend that went viral.1
Whenever my girlfriend goes shopping, she tends to get overly serious and way more than just fidgety about the whole thing. It always interferes with my usual pace of life. Anyway, she calls the shots at home, so [I] can’t complain. As my girlfriend stipulates, when it approaches her shopping date, I can only make working plans for up to three days, and if I go on a business trip, I need to get her approval first. These past few days I’ve been sitting on pins and needles, praying to God that I don’t do anything wrong to ruin her good shopping mood.
. . . I guess as long as she buys things for me, I shouldn’t complain too much. . . .
She usually doesn’t pay attention to me when she shops. Well, you do your shopping, and I’ll tend to my own business, I think to myself. So I take out my phone to surf the net a bit. But before I can open even one page, she pops up immediately: “You can’t just get online like this when I shop! What emails are you checking? If you dare check one more, I’ll deactivate your Gmail account!”
Many of China’s digerati are so used to censorship and surveillance that they quickly learn to talk in complex metaphors and trade tricks for getting access to the tools they want to use. In this coded rant, a young student on Renren—the Chinese Facebook—compared the 18th Chinese Communist Party Congress to his girlfriend’s latest shopping trip. The party is a vain girlfriend who can be jealous and abusive but is sometimes generous and occasionally responsive. The girlfriend watches everything he does online.
For a long time, China watchers worried about the daunting size of China’s active army. It wasn’t always well trained or well equipped, but it was disciplined, and it was the world’s largest. These days there are more than 2.5 million men and women in the country’s active military, another 800,000 reservists, and 1.5 million paramilitary members. It’s a military built to project power outside the country and maintain control inside it. Yet battalions of censors now provide more social stability than the military muscle of the Chinese armed forces.
The Communist Party has developed a dedicated army to resist the spread of the technologies and values of the West. By one estimate there are more than 2 million “public-opinion experts,” a new category for jobs that involve watching other people’s emails, search requests, and other digital output. In other words, the army of censors is as large as the military, and often military units are given censorship and surveillance tasks.2
Government agencies need censors, but the government also makes tech startups and large media conglomerates hire their own censors to help with the task of watching the traffic. Pundits have referred to these people as China’s fifty-cent army because some get paid small amounts of money to generate pro-regime messages online. But that moniker makes the army of censors seem like freelancers who are inexpensive to hire. In actuality, they are a well-financed force deeply embedded in the country’s technology industry.
With demand for these jobs high and stable employment all but assured, young people have to pay for the education to become public-opinion analysts. According to China expert Guobin Yang, organizations like the People’s Daily can charge up to four thousand renminbi ($650) for four days of training to become an analyst.3
Lots of other authoritarian regimes employ censors, but let’s put the numbers in perspective. If there are 2 million people occupied with Chinese censorship tasks, and 500 million users, that’s one surveillance expert for every 250 people. Aside from the human resources put into censorship and surveillance, China’s device networks have three unique features: the government controls the entire network, the network is bounded in surprising ways, and the network attacks other networks.
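The back-of-envelope ratio above is easy to check. A quick sketch, using the chapter’s rough estimates rather than any official counts:

```python
# The chapter's estimates: ~2 million censorship workers, ~500 million users.
censors = 2_000_000
users = 500_000_000

users_per_censor = users // censors
print(users_per_censor)  # 250 — one surveillance expert for every 250 users
```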
First, the Chinese government owns and controls all the physical access routes to the internet. People and businesses can rent bandwidth only from state-owned enterprises. Four major governmental entities operate the “backbone” of the Chinese internet, and several large mobile-phone joint ventures between the government and Chinese-owned media giants offer additional connectivity.
An important part of Chinese network control is the way the party controls the intermediaries who build hardware and provide connection services.4 When representatives of the telecommunications firm Huawei were called before the U.S. Congress to answer questions about the security of the hardware equipment it sells, the company argued that its internal documents were “state secrets.”5 This admission might frighten us, but disclosures by Edward Snowden and from within our sociotechnical system reveal that firms and governments are also tightly bound up in background deals, mutually convenient understandings, and shared norms.
The research on China’s censorship efforts finds that the government works hard to support Chinese content and communication networks that it can surveil, and discourages its citizens from using the information infrastructure of the West. In one study, researchers went through the process of launching a social media startup in China.6 They took notes each time they encountered a new regulatory hoop to jump through, and they kept track of the amount of information they were making available to the state-security apparatus.
They had to hire consultants to help with compliance. They had to open log files and use specific hosting services within China. Most important, they learned that Chinese censors were not interested in censoring critical blog posts and essays. They were after posts about organizing face-to-face meetings, or messages about the logistics of protesting. The academics collected data on traffic spikes on Sina Weibo, China’s equivalent to Twitter. Traffic spikes on controversial topics around which people didn’t seem to be organizing themselves were often left to fizzle out on their own. But traffic spikes about organizing protests were quickly shut down. What the censors block is connective action: China smothers independent political organization before it can start, because collective organizing is what the Party fears most. The government knows that the internet can catalyze activist organizations, so it has designed its internet to be watchable and works to make sure every unsanctioned civic group is stillborn.
Second, the Chinese resist Western device networks by making sure that connections within China are extensive and reliable, and connections to the rest of the world less so. Chinese device networks are bounded by the Great Firewall of China, as we call it in the West, though the more poetic translation of its official name is the Golden Shield Project. Some consider it the largest national security project in the world, and its singular task is to shield Chinese internet users from outside content.
By one estimate it cost the Chinese government $800 million to build the Golden Shield. Protecting itself from the Western internet is an ongoing venture.7 So the Chinese government has a simple two-part strategy for insulating itself from the pax technica. It blocks some technologies and copies other ones. Social-media applications are almost always redesigned for domestic use, and foreign firms operating in China must locate their servers in the country, where state security services can get at them easily. Platform standards and expectations are different from those of the pax technica. Requirements for real-name registration, for example, make the immense task of monitoring message traffic on the internet easier.
Practically, this means that messages from someone in China to someone outside China travel slowly. But the reason they travel slowly is important: the messages must go through one of several key digital servers. In network parlance these servers are mandatory points of passage: nodes that handle all the traffic flowing into and out of China. Such nodes can slow down traffic either because their capacity is limited or because the nodes are inspecting the packets as they go by.
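The effect of mandatory points of passage can be sketched with a toy latency model. All of the numbers below are invented for illustration; the point is only that a gateway which inspects packets adds a fixed delay that every cross-border message must absorb:

```python
# Toy model of "mandatory points of passage": domestic traffic takes
# ordinary router hops, while cross-border traffic must also transit a
# gateway node that inspects packets. Hop times (in milliseconds) are
# illustrative, not measurements.

def transit_time(hops_ms, gateway_inspection_ms=0):
    """Total one-way latency: the sum of hop delays plus any gateway inspection."""
    return sum(hops_ms) + gateway_inspection_ms

# A message that stays inside the national network.
domestic = transit_time([5, 8, 6])

# The same message leaving the country: one extra hop to the gateway,
# plus the gateway's inspection delay.
international = transit_time([5, 8, 6, 40], gateway_inspection_ms=120)

print(domestic, international)  # 19 179
```

Even in this crude model, the inspection delay dominates: the chokepoint, not distance, is what makes outbound traffic slow.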
This is deliberate, as the Chinese want to create their own rival internet. As one Chinese technology official said, “The big question is not whether or not China can build a world-class society while fighting the internet, the question is whether or not it can do so while building a giant intranet that is China-specific.”8
For individual users the impact may be felt in terms of extra lag time and the insecurity of knowing that this delay is because censors are actively searching content. Staring at the screen waiting for an email to arrive may seem like a minor inconvenience we all must suffer at some point. In the United States, we know that marketing firms, the NSA, and internet-service providers also scan and copy content we individually produce and consume. The metadata about our production and consumption is also captured for analysis.
But for China as a whole, the national surveillance strategy means a slow network that gets slower at politically sensitive moments.9 Imagine if the NSA’s spying were somehow even more extensive, as well as completely authorized, unrestricted, and immune to organized protest. Within China, the average internet speed is about three megabits per second. By comparison, internet connections average three times faster in the United States and the United Kingdom. The bandwidth for traffic flowing from China to the rest of the world is far less than its already low national average.
This means that the government has created a structured way of allowing the Chinese internet to grow, while slowing the passage of content from outside the national network. Extensive monitoring and close collaboration between government and industry allow the Party to preserve the Chinese internet. These factors also mean that the internet of things will struggle to evolve in China. Either the government has to work out ever more sophisticated techniques for monitoring the traffic among devices, or it will have to give up on the idea of surveilling the entire network.
Third, the Chinese government is aggressively assaulting international information infrastructure. Corporate cyberespionage, design emulation, patent acquisition, and technology export are the key weapons of this attack. Some of these are subtle defenses that smack of cultural protectionism, while others are aggressive strategies for attacking outside networks. Cyberattacks on Western news media regularly originate from within China.10
The Chinese are trying to win over other nations, co-opting the device networks of poor countries into nodes of their rival network. The Chinese are not just seeking to protect their citizens from the West, they are aggressively expanding their networks to rival the pax technica through cultural content, news production, hardware, software, telecommunications standards, and information policy. In terms of content and news, their rival strategy involves:
• Direct Chinese government aid to friendly governments in the form of radio transmitters and financing for national satellites built by Chinese firms.
• Provision of content and technologies to allies and potential allies that are often cash strapped.
• Memoranda of understanding on the sharing of news, particularly across Southeast Asia.
• Training programs and expenses-paid trips to China for journalists.
• A significant, possibly multibillion-dollar, expansion of the People’s Republic of China’s (PRC’s) own media on the world stage, primarily through the Xinhua news agency, satellite and internet TV channels controlled by Xinhua, and state-run television services.11
Whether this propaganda and surveillance system is sustainable with the great volume of device networks to come is a big question. Simply put, China is aggressively building the main rival network. Many kinds of attacks—whether corporate espionage, military provocation, or political manipulation—get launched from China. The Chinese internet is the primary rival to the pax technica.
Despite China’s best efforts, there are signs that this block-and-copy strategy doesn’t always work. Tealeafnation.com regularly translates and publishes “bad speech” from the country’s dissenters.12 The 2012 policy of requiring new Sina Weibo users to register with their real names has proven tough to enforce, and verifying the identity of the millions of existing accounts has proven almost impossible. Can China continue to keep its citizens on and within its own bounded internet? Other countries want to build their own national internets using Chinese technology, even if it means effectively joining China’s network by becoming dependent on China for innovation and by providing backdoor access to Chinese security services. The question remains: will China’s rival network actually expand in the years ahead?
While the government of China has worked hard to protect its citizens from the global internet, social media have made it easier and easier for technology users to pry into the private lives of the country’s corrupt officials. When the New York Times published an investigation into the extensive family assets of China’s premier Wen Jiabao, the website was blocked to Chinese users. The government tried to block every effort to discuss the investigation on Sina Weibo, the Chinese Twitter, with only modest success.13 Only a few months earlier, Bloomberg News had conducted an investigation of a prominent Chinese politician, Xi Jinping.14 Xi was rising to the position of Party secretary general, but his family wealth had been accumulated in suspicious ways. Bloomberg’s websites were blocked in China, but inside China the conversation continued on Sina Weibo.
There are ever more examples of how Chinese citizens use social media to push the limits of free speech. Li Chengpeng was a sports reporter for several decades. After the 2008 Sichuan earthquake killed more than 80,000 people and exposed the limits of the government’s ability to help people in crisis, he started writing about his nation’s social ills. He found his voice on Sina Weibo, and eventually more than 6.7 million users found him. His investigations on corruption now reach an immense audience, and he can bypass the state-controlled media. More important, he teaches his audience about which issues to track and cautions them to be savvy.
It is not possible to connect to the global internet “a little bit.” China has done much to shield its citizens, but people in the West can now have more exposure to Chinese culture and politics than ever before. This, too, comes with security implications. Essentially, it means that official agencies like the NSA can spy on the communications between Chinese officials. It also means that internet users around the world can investigate trends in China.
After that same 2008 earthquake in Sichuan, Georgetown professor Phillip Karber noticed that the hills in the affected region had collapsed in strange ways, and that China was sending radiation experts to the disaster zone. So he started investigating with a team of undergraduate students. After three years of work, the investigation exposed a network of underground tunnels used by China’s Second Artillery Corps. The students translated thousands of pages of documents, studied Google Earth, scanned Chinese blogs, read military journals, and groomed their own contacts in China for information, producing a revised estimate of the number of nuclear weapons operated by that country’s military. Their work was the largest body of public knowledge yet published on China’s nuclear arsenal.
Experts in the United States had long estimated that the Chinese nuclear arsenal was relatively small, consisting of between eighty and four hundred warheads. But the students’ investigation found that the tunnel network was designed to support up to three thousand warheads. Professional analysts were skeptical.15 Partly as a result of the public exposure, Chinese officials began revealing more information about the network of tunnels. In the United States, the report sparked a congressional hearing and renewed conversations among top officials in the Pentagon.
While the internet can help the West learn about life in China, it is also a conduit for public diplomacy for both sides. The Voice of America (VOA), an official U.S. government media organization, has a viral online video show called OMG! Meiyu, in which Jessica Beinecke, in fluent Mandarin, presents Western pop culture and discusses Western slang.16 Her two-minute videos have a broad audience. Perhaps most important, however, her followers interact with her and with one another. They ask her questions, and she responds in subsequent web videos. It’s not the traditional broadcast model long used by the Voice of America; rather, it’s an online exchange through and about current cultural phenomena. Explaining what the “Final Four” is or what it means to “get stuffed” is not high-level diplomacy.17 But it is a crucial part of the VOA’s mission. Since China’s elites are finding it harder and harder to reach their young people over broadcast media, cultural outreach like this from the West over social media is all the more important.
Technology designers and users in the United States and Europe may have done much of the initial creative design work to bring the internet into being. Silicon Valley still designs the tools that people in many countries use—especially those in the pax technica. But China is redesigning such tools for its citizens. Increasingly, it manufactures the hardware that many interests depend on.
Still, building a controlled, bounded, and invasive network limits the ability of the system to do what the internet has been good at, namely helping people solve collective action problems and generating big data for human security. Crowd-sourcing and altruism flourish in open societies with open device networks. In closed societies, such projects generally appear only in times of crisis, when people see that building a temporary, issue-specific governance mechanism is worth risking the wrath of a tough regime. This means that tough regimes can rarely call on their publics to contribute altruistically to a government initiative.
Research on China’s feeble attempts at open government demonstrates that crowd-sourcing doesn’t work well there, and in China’s context is better thought of as “cadre sourcing.”18 This is because the kinds of information sought by the government have already been distorted by the government, enthusiastic cadre participants are more likely to report favorable information than accurate information, and news about independent crowd-sourcing initiatives doesn’t circulate far. Only during complex humanitarian disasters do people decide to take on the risks of contributing quality information. As device networks spread, civic initiatives will always have more positive impacts in open societies. Authoritarian societies are structurally prevented from making use of people’s goodwill and altruism. In the bounded device networks of an authoritarian regime, crowd-sourcing initiatives are likely to create negative feedback loops, and big data efforts are likely to generate misinformation about the actual conditions of public life.
With this pernicious structural flaw, how much faith should we have that China’s rival information infrastructure will stay rivalrous? What will the government have to do to retain control of the internet of things that evolves within its borders? Or to return to the metaphor, what do you do about that “jealous girlfriend” who gets ever more domineering during her shopping trips?
I’ve known her for such a long time, from the first time we went shopping together to this eighteenth time. There have been sweet moments, but there were also moments of despair. She once tortured me [horribly] and made my life worse than death. She also took it upon herself to take care of me when I met with natural disasters. Despite all these headaches she’s been giving me, she has made some progress over the years nonetheless. She still has many shortcomings, but she’s more and more open to my criticism now.
China’s aggressive efforts to build a rival network are not the only form of resistance to the device networks of the pax technica. The Russians have been successfully pioneering another strategy, one emulated by Venezuela, Iran, and some of the Gulf States. The Russian gambit is not to build its own network from the ground up; it is to join the internet by sponsoring pro-regime internet users to generate supportive commentary online. One of the ways they do this is through summer camp.
Every summer, thousands of young Russians gather at camps around the country. They do what everyone does at summer camp: make friends, flirt, get into a little trouble, breathe some fresh air, and exercise. These Russian summer camps, however, are different from those found in the West. For many decades these have been state-organized affairs, and have involved indoctrination into national myths. Teenagers learn a few survival techniques, meet new people from other parts of the country, and listen to patriotic lectures. But in the past few years, Vladimir Putin’s updated summer camps have also involved social-media training.
Putin’s political base was not online, so he enlisted the program directors of summer camps around the country to begin teaching social-media skills. The Kremlin could not dedicate quite the same resources to content production, surveillance, and censorship as the Chinese government. But by 2006 tens of thousands of teenagers at youth camps around the country were getting short courses on Putin’s vision for the country and training sessions on blogging.
The most notorious of the camps is called the Seliger Youth Forum, and it has been held outside of Moscow each summer since 2005.19 The camps are organized by a pro-Kremlin youth movement called Nashi (“Who if not us?” in Russian). This group taps into the long tradition of summer camps for Russian youth.
Putin has his favorite summer camps, but he has also promoted particular universities and colleges over others. I visited Sholokhov University in Moscow in the summer of 2012, shortly before that year’s Seliger Forum. The university was organizing a large international conference on social-media analytics. To the Western participants it quickly became clear that our hosts didn’t want to study the development of social movements online. Their questions were different from ours. We had sociological research questions, but they had very practical questions about identifying the members of social networks.
During a tour of the campus, we learned that the university had begun as a home for humanities departments. But as the more prominent Moscow State University fell out of favor, Sholokhov began receiving more state funding. These days, it has the country’s top-ranked undergraduate degree in lie-detector equipment operation and interpretation.
Despite devoting increased attention to recruiting youth, Putin’s regime found that while it could dominate broadcast media, civil-society groups were flourishing through social media. When Russia’s civil-society groups and political opposition were squeezed out of the national broadcast media, marginalized in parliament, and pushed off the streets, they went online. Discussion forums, websites, and blogs established the sociotechnical system needed to sustain their ties during Putin’s political freeze.
Civil-society leaders found a large, sympathetic audience of other disaffected Russians online. Russian internet users—even now—tend to be young, live in cities, and have had some experience with life overseas. Compared with the country’s new users, Russia’s established internet users are slightly more educated, slightly more liberal, and slightly less interested in Putin’s nationalist visions for the country. The opposition had been marginalized, but its creative use of the internet actually allowed it to flourish.
Whereas broadcast media are useful for authoritarian governments, citizens use social media to monitor their governments.20 For example, in early 2012, rumors circulated that a young ultranationalist, Alexander Bosykh, was going to be appointed to run a Multinational Youth Policy Commission. A famous picture of Bosykh disciplining a free-speech advocate was dug up and widely circulated among Russian-language blogs and news sites, killing his prospects for the job (though not ending his career).21
These are not simply information wars between political elites and persecuted democracy activists. The organization and values of broadcast media are very different from those of social media. Putin is media savvy, but his skills are in broadcast media. The Kremlin knows how to manage broadcast media. Broadcasters know where their funding comes from, and they know what happens if they become too critical. Indeed, Putin’s changes to the country’s media laws are specifically designed to protect broadcast media and burden social media.22 In Russia, critics have been driven into social media, where they have cultivated new forms of antigovernment, civic-minded opposition. Russian political life is now replete with examples of online civic projects achieving goals the state had to give up on.
One of the recent battles over network infrastructure within Russia involved the government’s webcam system for monitoring elections. To prepare for the 2012 presidential election, the government spent half a billion dollars on webcams for every polling station in the country. With widespread skepticism about the transparency of Putin’s regime, this move was designed to improve the credibility of the electoral process.
The election was held on March 4, 2012, and within a few days a highlight reel of election antics went up on YouTube.23 For Putin’s political opponents, the webcams demonstrated citizen indifference toward the election. For Russia’s ultranationalists, the webcams revealed no systematic fraud. Alas, the elections commission decided that video evidence of fraud was not admissible. The video feeds were not systematically reviewed, and the electoral outcomes were never in doubt. When Russian-affiliated troops rolled into the Crimea in 2014, they did so with the full backing of a coordinated social-media campaign. The cohort of young Russians raised to be active on Twitter and blogs provided strong and consistent messaging on Putin’s behalf.24
When authoritarian governments try to exploit social media, they rarely have clear and consistent success at the game. There are four strategies for a dictator who seeks to “go social.” Governments can pay people to generate pro-government content. Governments can physically attack information infrastructure, or the cybercafés and homes of people who use their internet access for politics. They can use digital media for surveillance, and simply monitor the flow of communication for content that should be blocked or people who should be arrested. Or they can have their security services hack the devices used by civil-society groups.
Putin’s Kremlin has developed a comprehensive counterpropaganda response to social media, and this strategic package has bounded Russian information networks. Moreover, the strategies have proven transportable. Now many governments groom, hire, or train pro-government commentators for social-media work. Venezuela’s Ministry of Science and Technology has staff dedicated to hacking activist accounts so as to use those accounts to seed dissent and confusion among political challengers.25 Some governments track the physical location of computers, servers, and other hardware so that such infrastructure can be destroyed if needed. Tech firms in Europe and the United States produce some of the best surveillance and censorship software and hardware available today. Tools like FinFisher, which can remotely activate webcams, are widely available.26
Russia Today helps generate content for Russia’s social-media networks, and CCTV does the same for Chinese social networks. And just as the Chinese are exporting their network by expanding their infrastructure and training network engineers in other countries, Russia’s strategy is being transported. Azerbaijan, Iran, Venezuela, and Cuba now all have cohorts of paid, pro-regime social-media contributors. The success of Russia’s digital-media strategy is best measured by a simple outcome—Putin handily retains control and seems as strong as ever at home through a veneer of authorized power.
Other political parties and governments have begun attacking civil-society groups through devices. The Chinese regularly go after Tibetan groups online in the hope of finding activists based in China.27 Outside of repressive regimes, who would attack a civic group?
In the summer of 2012, WikiLeaks had to beat off a string of hacker assaults.28 The year before, Amnesty International faced a coordinated attack.29 Indian Kanoon, the online project described in the previous chapter that is dedicated to making Indian law searchable and accessible, was hit by a denial-of-service attack in October 2013.30 Many such attacks have occurred, most of them unreported, and those that do get reported tend to involve large nongovernmental organizations that expose the bad behavior of unethical businesses and authoritarian governments. Coupled with the growth of false grassroots, or “astroturf,” campaigns launched by political consultants and lobbyists worldwide, civil-society groups face both security challenges and competition for public trust.
Unfortunately, this is the precursor of worse to come. Civil-society groups are largely unprepared for today’s cyberattacks, much less the volume of attacks and types of vulnerability that will come over the internet of things. They depend on open-source software, whose performance and security can be uneven, and on free services that include product-placement advertising. They tend to be run by volunteers and strapped for cash; rarely do they have the resources to invest in good information infrastructure. The world’s authoritarian governments are better positioned than civil-society groups for the internet of things.
Any device network we build will create some kind of what Eli Pariser calls a filter bubble around us.31 We will be choosing which devices to connect, and those devices will both collect information about us and provide information to us. But the danger is not so much that our information supplies may be constrained by the devices we purposefully select. It is the danger that our information supplies may be manipulated by people and scripts we don’t know about.
The word “botnet” comes from combining “robot” with “network,” and it describes a collection of programs that communicate across multiple devices to perform some task. The tasks can be simple and annoying, like generating spam. The tasks can be aggressive and malicious, like choking off exchange points or launching denial-of-service attacks. Not all are developed to advance political causes. Some seem to have been developed for fun or to support criminal enterprises, but all share the property of deploying messages and replicating themselves.32 There are two types of bots: legitimate and malicious. Legitimate bots, like the Carna Bot, which gave us our first real census of device networks, generate large volumes of benign tweets that deliver news or update feeds. Malicious bots, on the other hand, spread spam by wrapping links to malicious content in appealing text.
Botnets are created for many reasons: spam, DDoS attacks, theft of confidential information, click fraud, cybersabotage, and cyberwarfare.33 Many governments have been strengthening their cyberwarfare capabilities for both defensive and offensive purposes. In addition, political actors and governments worldwide have begun using bots to manipulate public opinion, choke off debate, and muddy political issues.
Social bots are particularly prevalent on Twitter. They are computer-generated programs that post, tweet, or message of their own accord. Often bot profiles lack basic account information such as screen names or profile pictures. Such accounts have become known as “Twitter eggs” because the default profile picture on the social-media site is of an egg.34 While social-media users get access from front-end websites, bots get access to such websites directly through a mainline, code-to-code connection, mainly through the site’s wide-open application programming interface (API), posting and parsing information in real time. Bots are versatile, cheap to produce, and ever evolving. “These bots,” argues Rob Dubbin, “whose DNA can be written in almost any modern programming language, live on cloud servers, which never go dark and grow cheaper by day.”35 Unscrupulous internet users now deploy bots beyond mundane commercial tasks like spamming or scraping sites like eBay for bargains. Bots are the primary applications used in carrying out distributed denial-of-service and virus attacks, email harvesting, and content theft.
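To make that code-to-code access concrete, here is a minimal sketch of how a bot pushes messages through a platform’s API rather than its website. The endpoint URL and the send() transport are hypothetical stand-ins, not any platform’s real API; a working bot would add authentication and actual HTTP calls.

```python
import json

# Hypothetical endpoint standing in for a real platform API.
API_ENDPOINT = "https://api.example-platform.com/statuses/update"

def make_bot(send, messages):
    """Return a callable that pushes one canned message per call.

    `send` is any transport function taking (url, payload); passing a
    stub makes the sketch testable without network access.
    """
    queue = list(messages)

    def post_next():
        if not queue:
            return None  # nothing left to post
        payload = json.dumps({"status": queue.pop(0)})
        send(API_ENDPOINT, payload)
        return payload

    return post_next

# Usage with a stub transport that just records what would be sent.
sent = []
bot = make_bot(lambda url, body: sent.append(body), ["msg one", "msg two"])
bot()
bot()
```

The point of the sketch is the asymmetry it illustrates: a few lines of script, replicated across cheap cloud servers, can post around the clock at a pace no human account could sustain.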
The use of political bots varies across regime types. In 2014, some colleagues and I collected information on a handful of high-profile cases of bot usage and found that political bots tend to be used for distinct purposes during three events: elections, political scandals, and national security crises. The function of bots during these situations extends from the nefarious case of demobilizing political opposition followers to the relatively innocuous task of padding political candidates’ social-media “follower” lists. Bots are also used to drown out oppositional or marginal voices, halt protest, and relay “astroturf” messages of false governmental support. Political actors use them in general attempts to manipulate and sway public opinion.
The Syrian Electronic Army (SEA) is a hacker network that supports the Syrian government. The group developed a botnet that generates pro-regime content with the aim of flooding the Syrian revolution hashtags (e.g., #Syria, #Hama, #Daraa) and overwhelming the pro-revolution discussion on Twitter and other social-media portals.36 As the Syrian blogger Anas Qtiesh writes, “These accounts were believed to be manned by Syrian Mokhabarat (intelligence) agents with poor command of both written Arabic and English, and an endless arsenal of bite and insults.”37 Differing forms of bot-generated computational propaganda have been deployed in dozens of countries.38 Contemporary political crises in Thailand and Turkey, as well as the ongoing situation in Ukraine, are giving rise to computational propaganda. Politicians in those lands have been using bots to torment their opponents, muddle political conversations, and misdirect debate. We need political leaders to pledge not to use bots, but the internet of things will make them easier to use.
Table 3 reveals that bot usage is often associated with either elections or national security crises. These may be the two most sensitive moments for political actors, when the potential stigma of being caught manipulating public opinion is not as serious as the threat of having public opinion turn the wrong way. While botnets have been actively tracked for several years, their use in political campaigning, crisis management, and counterinsurgency is relatively new.39 Moreover, from the users’ perspective it is increasingly difficult to distinguish between content that is generated by a fully automated script, by a human, or by a combination of the two.40
In a recent Canadian election, one-fifth of the Twitter followers of the country’s leading political figures were bots. Even presidential candidate Mitt Romney had a bot problem, though it’s not clear whether exaggerating the number of Twitter followers he had was a deliberate strategy or an attempt by outsiders to make him look bad. We know that authoritarian governments in Azerbaijan, Russia, and Venezuela use bots. The governments of Bahrain, Syria, and Iran have used bots as part of their counterinsurgency strategies. The Chinese government uses bots to shape public opinion around the world and at home, especially on sensitive topics like the status of Tibet.
Table 3 Political Bot Usage, by Country
Bots are becoming increasingly prevalent. And social media are becoming increasingly important sources of political news and information, especially for young people and for people living in countries where the main journalistic outlets are marginalized, politically roped to a ruling regime, or just deficient. Sophisticated technology users can sometimes spot a bot, but the best bots can be quite successful at poisoning a political conversation. Would political campaign managers in a democracy like the United States actually use bots?
These days, campaign managers consider interfering with a rival’s contact system an aggressive campaign strategy. That’s because one of the most statistically significant predictors of voter turnout is a successful phone contact from a party supporter the night before the election. That reality is what has driven up invasive robocalls. In 2006, automated calling banks reached two-thirds of voters, and by 2008 robocalls were the favored outreach tool for both Democrats and Republicans. Incapacitating your opponents’ information infrastructure in the hours before an election has become part of the game, though there have been a few criminal convictions of party officials caught working with hackers to attack call centers, political websites, and campaign headquarters. Republican National Committee official James Tobin was sentenced to ten months in prison for hiring hackers to attack Democratic Party phone banks on Election Day in 2002. Partisans continue to regularly launch denial-of-service attacks; attackers consistently target Affordable Care Act (“Obamacare”) websites, for example. If an aggressive move with technology provides some competitive advantage, some campaign manager will try it.
So the question is not whether political parties in democracies will start using bots on one another—and us—but when. They could be unleashed at a strategic moment in the campaign cycle, or let loose by a lobbyist targeting key districts at a sensitive juncture for a piece of legislation. Bots could have immense implications for a political outcome. Bots are a kind of nuclear option for political conversation. They might influence popular opinion, and they are certainly bad for the public sphere.
For countries that hold elections, bots have become a new, serious, and decidedly internal threat to democracy. Most of the other democracies where bots have been used have tight advertising restrictions on political campaigns and well-enunciated spending laws. In the United States, political campaigning is an aggressive, big-money game, where even candidates in local and precinct races may think that manipulating public opinion with social media is a cost-effective campaign strategy.
Political campaigning is not a sport for the weak. Neither Twitter nor Facebook is the best place for complex political conversations—few people change their minds after reading comment threads on news websites. Citizens do use social media to share political news, humor, and information with their networks of family and friends. Meaningful political exchanges over social media are prevalent, especially when elections are on the horizon.
There may come a time when the average citizen can distinguish between human and autogenerated content. Of course, with an industry of political consultants eager to have the most effective bots at election time, it is more likely that the public may just come to expect interaction with bots.41 But for now, with most internet users barely able to manage their cookies, the possibility that bots will have significant power to interfere with public opinion during politically sensitive moments is very real.
Bots will be used in regimes of all types in the years ahead. Bots threaten our networks in two ways. First, they slow down the information infrastructure itself. With the power to replicate themselves, they can quickly clutter a hashtagged conversation, slow down a server, and eat up bandwidth. The second and more pernicious problem is that they can pass as real people in our own social networks. It is already hard to protect your own online identity and verify someone else’s. Political conversations don’t need further subterfuge.
Badly designed bots produce the typos and spelling mistakes that reveal authorship by someone who does not quite speak the native language. Well-designed bots blend into a conversation well. They may even use socially accepted spelling mistakes, or slang words that aren’t in a dictionary but are in the community idiom. By blending in, they become a form of negative campaigning.42 Indeed, bot strategies are similar to “push polling,” an insidious form of negative campaigning that disguises an attempt to persuade as an opinion poll in an effort to affect elections. The American Association of Public Opinion Researchers has a well-crafted statement about why push polls are bad for political discourse, and many of the complaints about negative campaigning apply to bots as well.
Bots work by abusing the trust people have in the information sources in their networks. It can be very difficult to differentiate between feedback from a legitimate friend and autogenerated content. Just as with push polls, there are some ways to identify a bot:
• One or only a few posts are made, all about a single issue.
• The posts are all strongly negative, effusively positive, or obviously irrelevant.
• It’s difficult to find links and photos of real people or organizations behind the posting.
• No answers, or evasive answers, are given in response to questions about the post or source.
• The exact wording of the post comes from several accounts, all of which appear to have thousands of followers.
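One way to think about these cues is as an additive checklist. The toy scorer below counts the warning signs listed above; the field names, flags, and thresholds are all illustrative assumptions, not a real detection tool.

```python
# Toy heuristic scorer for the bot warning signs listed above.
# All field names and thresholds here are illustrative assumptions.
def bot_score(account):
    """Count how many of the listed warning signs an account shows.

    `account` is a dict with keys: posts (list of str), single_issue,
    uniform_tone, has_real_profile, answers_questions,
    duplicated_elsewhere (booleans), and follower_count (int).
    """
    score = 0
    if len(account["posts"]) <= 3 and account["single_issue"]:
        score += 1  # only a few posts, all about a single issue
    if account["uniform_tone"]:
        score += 1  # all strongly negative, effusively positive, or irrelevant
    if not account["has_real_profile"]:
        score += 1  # no real people or organizations behind the posting
    if not account["answers_questions"]:
        score += 1  # no answers, or evasive answers, to questions
    if account["duplicated_elsewhere"] and account["follower_count"] > 1000:
        score += 1  # identical wording from several well-followed accounts
    return score

suspect = {
    "posts": ["Candidate X is a disaster"],
    "single_issue": True,
    "uniform_tone": True,
    "has_real_profile": False,
    "answers_questions": False,
    "duplicated_elsewhere": True,
    "follower_count": 5000,
}
print(bot_score(suspect))  # prints 5: every warning sign present
```

No single cue is decisive, which is why the sketch counts them rather than treating any one as proof; the paragraphs that follow explain why even strongly negative posts can come from real people.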
The fact that a post has negative information or is an ad hominem attack does not mean it was generated by a bot. Politicians, lobbyists, and civic groups regularly engage in rabble-rousing over social media. They don’t always stay “on message,” even when it means losing credibility in a discussion.
Nonetheless, bots have influence precisely because they generate a voice, and one that appears to be interactive. Many users clearly identify themselves in some way online, though they may not always identify their political sympathies in obvious ways. Most people interact with several other people on several issues. The interaction involves questions, jokes, and retorts, not simply parroting the same message over and over again. Pacing is revealing: a legitimate user can’t generate a message every few seconds for ten minutes.
Botnets generating content over a popular social network abuse the public’s trust. They gain user attention under false pretenses by taking advantage of the goodwill people have toward the vibrant social life on the network.
When disguised as people, bots propagate negative messages that may seem to come from friends, family, or people in your crypto-clan. Bots distort issues or push negative images of political candidates in order to influence public opinion. They go beyond the ethical boundaries of political polling by bombarding voters with distorted or even false statements in an effort to manufacture negative attitudes. By definition, political actors do advocacy and canvassing of some kind or other. But this should not be misrepresented to the public as engagement and conversation. Bots are this century’s version of push polling, and may be even worse for society.
Social media bots are not the only automated scripts working on our digital networks. Most devices, if they are designed to be hooked up to a network, are designed to report data back to designers, manufacturers, and third-party analysts.
For example, when Malaysia Airlines Flight 370 was lost in March 2014, it emerged that some of the best available data on the status of the plane did not come from the black box recorder designed specifically to preserve it. Nor did it come from satellites. It came from the embedded chips in parts of the engine, which could be read as evidence that the plane was flying after pilots stopped communicating. In fact, the company that built the plane and the company that built the plane’s engines each had its own active device network.43 Boeing had equipped the plane with a continuous data monitoring system, which transmits data automatically, and Rolls-Royce collected data on engine status. It turned out that Malaysia Airlines subscribed only to the engine analytics service from Rolls-Royce, but both equipment manufacturers collect immense amounts of data about the things they make long after those things are sold. So the engine parts came with embedded data systems that remained accessible to manufacturers even if the owner of the material devices didn’t want to pay for access to the data.
Such data supplies are useful—to many people. They can help manufacturers improve the quality of their products. They can help suppliers understand their consumers. It is not just specialized, high-performance engine parts that now come embedded with chips and networking capabilities. Most consumer electronics, if they download some kind of firmware, software, or content, upload some kind of location, status, and usage data.
One television owner in the United Kingdom discovered this when he started playing around with the settings on his LG Smart TV. Many of us aren’t interested in exploring the inner workings of our devices, but those who do sometimes find that data is being sent in surprising ways. When Jason Huntley, who blogs as DoctorBeet, tried to understand what information his new LG TV was collecting, he was surprised at what was being aggregated.44 The TV could report the contents of any files read from a memory stick. It was reporting what was being watched to LG’s servers. The company had to admit that even when users turned this feature off, the device continued to transmit information.45 The company claimed to need this information to help tailor ads. Huntley discovered that his TV was essentially a computer, and that LG was interested in a long-term relationship with him, through the data about his media interests.
Other manufacturers were quickly caught up in the public debate. Samsung revealed that its televisions were collecting immense amounts of information, and that it has the ability to activate microphones and cameras on its latest models. An investigative team looked at software vulnerabilities and advised owners against giving networked TVs a view of their beds.46 While device manufacturers might understand the data trail that a device leaves about its use and its users, consumers may not be told of the trail.
As the internet of things expands, increasing numbers of manufacturers will be presented with the opportunity to make money from long-term services associated with what they are making. At the very least they will be presented with the opportunity to collect vast amounts of data that might be valuable to a third party. Experience suggests that most will succumb to the temptation, unless explicit public policy guidelines require manufacturers to make good decisions and help consumers understand what their device networks are up to.
Even more dangerous is the prospect that device manufacturers will begin to use digital rights management (DRM) to protect the families of devices and streams of data they can collect. If the future value of a device is in the data it returns—behavioral data would be valuable to analysts, product data would be valuable to designers—then manufacturers have an incentive to protect this value stream. A precedent for this kind of behavior already exists, as several different industries have pushed for digital rights management to aggressively protect intellectual property.
DRM so far has been mostly used to protect against copyright infringement, particularly of music, video, and other cultural products.47 When industry lawyers found it difficult to go after individual infringers, they went after internet-service providers who supply the infrastructural support for infringement. Would DRM become an option for manufacturers eager to protect uniquely designed material objects and the data they render? It is hard to know where to draw the line, but the line needs to be drawn by civil-society groups and consumers, not by corporations.
Other Challenges (That Are Lesser Challenges)
A handful of other challenges threaten the stability of the pax technica. Most of them are not structural threats because the information networks that set up problems also become the sources of resolution and security. Criminals and extremists will always find ways to use and abuse the internet of things. One such problem is that new technologies leak across markets and jurisdictions, resulting in political advantages and disadvantages for different actors. It is almost impossible to fully bound a national internet without allowing some connections to the outside world. New technological innovations spread along social and digital networks. So the latest software for encrypting messages passes from activist to activist over Tor servers. At the same time, the latest snooping hardware ends up in the hands of repressive regimes.
For example, when Bolivia wanted a new national biometric card system, it turned to the Cuban company Datys.48 Data on Venezuelan citizens is also stored and analyzed in Cuba. In Colombia, as noted previously, equipment provided by the United States for the war on drugs was also used by government intelligence officers to spy on journalists and opposition politicians.49 Some governments don’t confine themselves to spying on their opponents in-country, however. The Ethiopian government watches even those political opponents who are part of the cultural diaspora living in other countries.50
Blue Coat Systems manufactures equipment to secure digital data, but the same equipment can restrict internet access for political reasons, and monitor and record private communications. When the Citizen Lab’s researchers went looking to see where such devices were popping up on public networks, they spotted them as far afield as Syria, Sudan, and Iran—countries supposedly already subject to U.S. sanctions.51
But it is not just that people learn about new software from friends, family, and colleagues, or that countries sharing political perspectives also share technologies. In China’s case, its information architects go to countries like Zimbabwe to train that country’s government in ways to build a network that might serve economic interests without taking political risks.
Drug lords also make use of digital media. Indeed, one of the features of the pax technica is that information infrastructure has helped make different kinds of political actors structurally equivalent. In Latin America, the large drug gangs Los Zetas and MS-13 occasionally cooperate, and frequently battle. In their own territories each rules more effectively than the government, and each competes with the government for the control of people and resources. They conduct information operations by hacking state computers, attacking journalists, and going after citizens who tweet about street shootouts.
Along with competing infrastructure networks, dirty networks will remain the great threat to peace and stability where they’ve been able to adapt device networks for their goals. Corrupt political families make everyone angry because they usually set about enriching themselves. In the case of Tunisia, as Lisa Anderson argues, the now-deposed dictator had taken family corruption to new levels of effectiveness.52
Ben Ali’s family was also unusually personalist and predatory in its corruption. As the whistleblower website WikiLeaks recently revealed, the U.S. ambassador to Tunisia reported in 2006 that more than half of Tunisia’s commercial elites were personally related to Ben Ali through his three adult children, seven siblings and second wife’s ten brothers and sisters. This network became known in Tunisia as “the Family.”
If corrupt ruling families, radicals, and extremists can use digital media to build their ranks, they will try.
The English Defense League, a group that inspired Norway gunman Anders Behring Breivik on his murderous trek in 2011, had only begun two years earlier. It grew from fifty members to more than ten thousand supporters in two years, and the far-right group’s leaders expressly credited Facebook as its key organizational tool.53 There are thousands of forums for extremist groups of all kinds—including ISIS—and many have multimedia sites offering streaming lectures, mobilization guides, and social-networking services.54 Feiz Muhammad, who also inspired the Boston bombers Dzhokhar and Tamerlan Tsarnaev, has generated a great volume of YouTube sermons from the safety of his home in Sydney, Australia. Large numbers of white-supremacist videos can also be found online.
At the same time, the internet allows these kinds of groups to seem deceptively powerful. In fact, it is clear that they have less overall impact on public life than the more moderate groups that meaningfully engage with political processes. Certainly some individuals, such as the Tsarnaev brothers in Boston, find inspiration from texts and groups online. They remain, for the most part, pariahs. Most of the purposive organizational activity online is still aimed at furthering the greater good, not at singling out people to hate.
For someone running, or planning to run, a terrorist organization, the internet is a dream technology. Access is cheap and the reach is international. There are tricks for making your online activities anonymous, such as the steganography of embedding messages in photos, though there’s plenty of evidence that such tricks don’t fool the very security agencies that have been able to build surveillance tools right into their networks.55 There can be no doubt that the same technology that enables democracy advocates to convert followers can also let terrorists do the same. Anticorruption organizations use information technologies to track and analyze patterns in government spending, and governments use the same technologies to track and analyze citizen behavior. But we can’t forget that all technology users are embedded in communities, meaning that they face social pressures and, more important, they face socialization.
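The steganography trick mentioned above can be sketched in a few lines. This toy version hides message bits in the least significant bits of pixel values; real tools operate on actual image files, while here the “image” is just a bytearray of pixel intensities.

```python
# Toy least-significant-bit (LSB) steganography: hide message bits in
# the lowest bit of each pixel value, where the change is invisible.
def embed(pixels, message):
    bits = "".join(f"{byte:08b}" for byte in message.encode())
    stego = bytearray(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | int(bit)  # overwrite lowest bit
    return stego

def extract(pixels, length):
    """Recover `length` hidden bytes from the low bits of the pixels."""
    bits = "".join(str(p & 1) for p in pixels[: length * 8])
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode()

cover = bytearray(range(256))              # stand-in for pixel data
hidden = embed(cover, "meet at 9")
print(extract(hidden, len("meet at 9")))   # prints: meet at 9
```

The altered image looks unchanged to a casual viewer, which is the appeal for covert communication; it is also why agencies with access to the network itself, rather than just the images, are not fooled by it.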
For several years in China, anti-cnn.com collected material from thousands of people who wanted to report on the supposedly pro-Tibetan bias of Western media. The site was launched in 2008 and quickly earned such clout that it was able to become a key node in Chinese cybernationalism, with the power to organize street protests and consumer boycotts.56 Organizing hatred—or love—on the internet is easy because consumer-grade electronics make it possible for those with just a little tech savvy and a small budget to aggregate and editorialize content. Often this work doesn’t involve generating new ideas; it involves only linking up to other people and getting them to provide text, photos, and videos. Putting vitriol online doesn’t mean it will have wide appeal.
Connective security and connective action have definite downsides. The same information infrastructure that allows friends and family to trade emails allows Russian mafia members to buy the credit card records of U.S. citizens, or terrorists to plan and launch attacks. Governments use the web to advance their territorial claims, interpret (and sometimes forget) history in flattering ways, justify human-rights abuses, or assert regional power. They spy on their citizens. The risk of keeping our online infrastructure open is that some people will use this system for evil. Someone will always try to set up rival infrastructures for social control rather than creativity.
But one of the challenges, when it comes to adding it all up, is that there is no way to know whether the bonds of friendship between people divided by distance and culture are more numerous—or stronger—than the ties that have been destroyed or weakened by digitally mediated communication. Put more simply, are there more good relationships and projects coming out of digitally mediated social networks than bad?
The myth that the internet is radicalizing our society, fragmenting our communities, or polarizing our political conversations makes for a good editorial or news-feature story. It remains more of a news peg than a demonstrable, widespread phenomenon. It may have become easier to find the text of Mein Kampf or other tomes of hatred online, but there is much more content and social interaction that has none of that hatred. There are many more mainstream political parties, regular newspapers, and middle-of-the-road political conversations. A balanced look finds examples of how digital media have been useful for both constructive and destructive political engagement. If the internet of things greatly expands the digital network of our political lives, will the network as a whole be more conducive to hate speech?
Attempts by extremists and criminals to use device networks for their hateful and nefarious projects represent a small risk when weighed against the benefits and affordances of the internet of things: if personal data is managed responsibly and civil-society groups can actively participate in the standards-setting conversations, that is. Over the past twenty-five years of internet interregnum, the violent extremists who organized online became easy to identify and catch as a result. The cure—widespread surveillance—may or may not be worse than the disease. Extensive surveillance might cast a pall over the mood of the majority of internet users, but the ongoing NSA surveillance scandal seems not to have affected user attitudes. Knowing that the NSA can surveil the Western web and that the Chinese can surveil their telecommunications infrastructure has not had consequences for the enthusiasm of most new users.
With the right conditions, a radical website can galvanize a community of hatred, give individuals a target for their vitriol, and help them organize their attacks, both on- and offline. Fortunately, this heady mix of nasty conditions rarely comes together.
Rival Devices on Competing Networks
The critical rivalry in the years ahead will not be between countries but between technical systems that countries choose to defend. Rival information infrastructure is the single most important long-term threat to international stability. The empire of the Western-inspired, but now truly global, internet isn’t the only major system in which political values and information infrastructures are deeply entwined. Indeed, there are many internets. Witness the way Russians, Iranians, and Chinese use their social media in different ways. The Iranian blogosphere is full of poets.57 The Russian blogosphere has lots of nationalists, such as the Nashi from the Seliger Forum.58
There will be more asymmetric conflicts, in which upstart civic leaders organize protest networks with surprising impact and the media, not just states and elites, are the targets. All political organizations, from parties and government offices to militaries and armies, will hemorrhage information. There will be more whistleblowers and defectors, and the nastier a government is, the more it will hemorrhage information about its corruption and abuses. Every dictator in the world will face embarrassing videos he cannot block and public outrage he cannot respond to. There will also be more clicktivism, half-hearted consumer activism, stillborn protests, and social movements that bring chaos to city centers but can’t bring voters out on election day. Such civic engagement and nonviolent conflict will still be valuable, especially in parts of the world where ruling elites need to see that new forms of collective action are now possible. The Chinese and Russians are leading the race to build rival information infrastructures, policy environments, and cultures of technology use. What’s happening in Russia and China is happening elsewhere.
But digital activism is on the rise globally, and the impact of these activist projects grows more impressive year by year. The Arab Spring involved countries where citizens used social media to create news stories that the dictators’ broadcasters would have never covered. Bouazizi’s self-immolation and Said’s murder became the inciting incidents of uprisings because of social media. Tunisia’s Ben Ali and Egypt’s Mubarak were caught thoroughly off guard by social-media organization, with its heady mix of old and new activism. In Iran, the opposition Green Movement continues to deploy Facebook, and possesses eloquent bloggers to advance its cause, while the mullahs’ propaganda response comes in the form of movies and broadcasts.
Democracies also wrestle with the leakage of surveillance technologies across jurisdictions: the technologies we design leak to lower and lower levels of law enforcement. Such technologies can be an important part of the toolkit for fighting crime. They often start off highly regulated, with high-level access. Eventually, they trickle down to average police departments. The LAPD gets Stingray cell-phone trackers that allow officers to listen to calls over mobile networks.59 Other departments are also investing in drones, even as privacy activists push back on the use of both technologies by local police.60
The developments can be positive in that many authoritarian regimes can’t control the leakage of new technologies to politically active citizens. The devices that a regime considers consumer electronics for economic productivity get used for political conversations, repurposed and reconfigured. In authoritarian regimes, they leak out of the regime’s control.
Beijing may not be able to produce the online content that the world wants to see. The Chinese government will continue to put resources into the content it wants to see. It will try to build an internet of things with access protocols that make surveillance through many devices easy. Globally, it may control more and more of the digital switches over which the world’s content flows.
In an important way, China’s national network is not just a subnetwork of the global internet. As a sociotechnical system, it has become quite distinct. It was built, from the very beginning, as a tool for social control and cultural preservation. Every new device and new user extends the political reach and capacity of the network. As the internet of things evolves over most of North America and Europe, it too will be given surveillance and censorship tasks. But the greater danger to the stability of the pax technica is the privileged content and infrastructure firms that actually want to create subnetworks of closed content and devices.