In the summer of 2017, a group of young political activists in the United Kingdom figured out how to use a popular dating app to attract new supporters. They understood how the dating platform Tinder worked. They knew how its algorithms distributed content, and they were familiar with the platform’s culture. So the campaigners built a bot. The bot engaged with real people on Tinder, and though the conversations started as flirty, they soon turned political—and to the strengths of the Labour Party.1
The bot accounts sent between thirty thousand and forty thousand messages in all, targeting eighteen- to twenty-five-year-olds in constituencies where the Labour candidates needed help.2 It is always hard to tell how many votes are won through social media campaigns. In this case, however, the Labour Party either won or successfully defended some of these targeted districts by just a few votes. In celebrating their victory on Twitter, campaign managers thanked their team—their team of bots.
We know that social media are among the internet’s most used applications. In the United States, 85 percent of adults regularly use the internet, and 80 percent of those people are on Facebook.3 Most of the time such apps are not used for politics. But as social platforms have become increasingly embedded in people’s lives—they are so perfectly designed for segmenting and targeting messaging, so quick and cheap and unregulated, and so trusted by users—it’s obvious that they wouldn’t be ignored by political operators for long. There is mounting evidence that social media is being used to manipulate and deceive voters—and to degrade public life.
We once celebrated the fact that social media allowed us to express ourselves, share content, and personalize our media consumption. It is certainly difficult to tell the story of the Arab Spring without explaining how social media platforms allowed democracy advocates to coordinate themselves in surprising new ways and to send their inspiring calls for political change cascading across North Africa and the Middle East.4 But the absence of human editors in our news feeds also makes it easy for political actors to manipulate social networks. In Russia, some 50 percent of Twitter conversations involve highly automated accounts that communicate with humans and one another. In October 2019, as protests against China’s authoritarian control over Hong Kong disrupted the ruling party’s celebrations of seventy years in power, complex algorithms scanned for images of protest and pleas for help, removing them from social media.5
Such automated accounts and social media algorithms can distribute vast quantities of political content very quickly or can be scripted to interact with people in political debates. Professional trolls and bots have been aggressively used in Brazil during three presidential campaigns, one presidential impeachment campaign, and the race for mayor of Rio.6 We’ve found political leaders in many young democracies actively using automation to spread misinformation and junk news. In the United States, we have found that misinformation on national security issues was targeted at military personnel on active duty and that junk news was concentrated in swing states during the presidential election of 2016.7 It seems that social media has gone from being the natural infrastructure on which to share collective grievances and coordinate civic engagement to being the medium of choice for crafty political consultants and unapologetic dictators.8
In fact, social media platforms might be the most important tools in the trolling toolkit. It is the algorithms for automation that allow a single person with a corrupted message to disseminate nonsense and spin to large numbers of people. To understand lie machines, we must understand automation and algorithms that disseminate misleading political information. What are social media bots and how do they work? When are they used and who uses them?
A bot—short for robot—is an automated program that performs simple tasks; a botnet—from robot and network—is a collection of such programs that communicate across multiple devices to perform tasks together. A single bot can be a fake user with preprogrammed statements and responses that it serves up to its immediate network. A collection of bots, working in concert as a botnet, can generate massive cascades of content for an extended network of human users. Their tasks can be simple and annoying, like generating spam. Or they can be aggressive and malicious, like choking off internet exchange points, promoting political messages, and launching denial-of-service attacks. Some programs simply amuse their creators; others support criminal enterprises.9 On social media platforms, botnets can be an important part of a lie machine because they can take a politically valuable falsehood and distribute it to large numbers of carefully selected users. Bots on Twitter or Tinder, or highly automated accounts on Facebook and Instagram, can rapidly deceive large numbers of people using networks that both humans and bots have co-created. Sometimes the deceit is in pushing misinformation around. In the case of the bot on the Tinder dating platform, the deception was in using automation to flirt and divert human users into political conversations.
In my book Pax Technica, I wrote about how bots and fake social media accounts threaten public life. People naturally tend to trust friends, and many still trust technology. With the power to replicate themselves and rapidly send messages, bots can quickly clutter a conversation in a network of users or slow down a platform’s servers and eat up a user’s bandwidth. Another, more pernicious problem is that bots can pass as real people in our social networks. It is already hard to protect your own online identity and verify someone else’s, and bots introduce an additional level of uncertainty about the interactions we have on social media.
Badly designed bots can produce the typos and spelling mistakes that reveal authorship by someone who does not quite speak the native language. Well-designed bots blend into a conversation. They may even use socially accepted spelling mistakes, or slang words that aren’t in a dictionary but are in the community idiom. By blending into our social networks, they can be a powerful tool for negative campaigning—emphasizing and exaggerating a political opponent’s flaws.10
Indeed, political communication strategies involving bots share many features with what is often called push polling. Both are insidious forms of automated, negative campaigning that plant misinformation and doubt with citizens under the pretense of running an opinion poll. For example, an ad might invite you to participate in an informal survey about political leaders, but the survey questions may be full of misleading information carefully crafted to make you dislike one particular leader or policy option. Survey researchers have long known that you can shape how people answer a question by priming them through word choice and limited answer options. Push polls exploit those effects by purposefully biasing the polling questions. In fact, a user’s responses are rarely analyzed by the campaign putting out the survey—collecting meaningful data is secondary to planting seeds of doubt, stoking fears, or enraging respondents through politically motivated lies.
The American Association for Public Opinion Research has produced a well-crafted statement about why push polls are bad for political discourse, and many of the complaints about negative campaigning apply to bots as well. Bots work by abusing the trust people have in the information sources in their networks.
It can be very difficult to distinguish between feedback from a distant member of your social network and automatically generated content. Just because a post includes negative information or is an ad hominem attack does not mean that it was generated by a bot. But just as with push polls, there are some ways to identify a highly automated account, sketched in code after this list:
• one or only a few posts are made, all about a single issue;
• the posts are all strongly negative, over-the-top positive, or obviously irrelevant;
• it is difficult to find links to and photos of real people or organizations behind the post;
• no answers, or evasive answers, are given in response to questions about the post or source; and
• the exact wording of the post comes from several accounts, all of which appear to have many of the same followers.
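To make these signals concrete, here is a minimal sketch, in Python, of how a researcher might fold them into a rough score. Every field name, threshold, and keyword in the sketch is an illustrative assumption rather than a validated detection method.

```python
# A rough bot-likelihood score built from the checklist above.
# All fields, thresholds, and keywords are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Account:
    posts: List[str]                               # text of recent posts
    topics: Set[str] = field(default_factory=set)  # issues the posts touch on
    has_profile_photo: bool = False
    has_verifiable_links: bool = False             # links to real people or organizations
    answers_questions: bool = False                # responds substantively when challenged
    shared_wording_accounts: int = 0               # other accounts posting identical text

def bot_likelihood(account: Account) -> int:
    """Return a 0-5 score: one point per checklist signal that looks automated."""
    score = 0
    if len(account.topics) <= 1:                   # single-issue posting
        score += 1
    charged = ("!!!", "disgrace", "traitor", "greatest ever")   # crude tone proxy
    if account.posts and all(any(w in p.lower() for w in charged) for p in account.posts):
        score += 1                                 # every post strongly charged
    if not account.has_profile_photo and not account.has_verifiable_links:
        score += 1                                 # no trace of a real person or organization
    if not account.answers_questions:
        score += 1                                 # evasive or silent when asked about sources
    if account.shared_wording_accounts >= 3:
        score += 1                                 # identical wording across many accounts
    return score
```

An account that trips all five checks scores 5; a human sharing varied content on several topics will usually score 0 or 1. In practice, researchers combine many more signals than these.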
Bots have influence precisely because they appear to be a genuinely interactive person with a political voice. Such highly automated accounts seem to have personality, though they may not always identify their political sympathies in obvious ways. Interaction involves questions, jokes, and retorts, not simply parroting the same message over and over again. Message pacing is revealing: a legitimate user can’t generate a message every few seconds for extended periods.
Highly automated accounts and fake users generating content over a popular social network abuse the public’s trust. They gain user attention under false pretenses by taking advantage of the goodwill people have toward the vibrant social life on the network. When disguised as people, bots can propagate negative messages that may seem to come from friends, family, or others in your extended network. Bots distort issues or push negative images of political candidates to influence public opinion. They go beyond the ethical boundaries of political polling by bombarding voters with distorted or even false statements to manufacture negative attitudes. By definition, political actors engage in advocacy and canvassing of some kind or other. But distributing misinformation with highly automated accounts does not improve civic engagement.
Bots are this century’s version of push polling, and they may have even more dire consequences for public understanding of social issues. In democracies, they can interfere with political communication by helping political campaigns coordinate in ways that they are not supposed to, they can illegally solicit contributions and votes, or they can violate rules on political disclosure.11 Later on, we will explore a push poll campaign run during the Brexit debate in the United Kingdom, algorithmically distributed to the users who were most likely to respond to its prompts.
Social bots are particularly prevalent on Twitter. These automated programs post messages of their own accord. Often social bot profiles lack rich and meaningful account information, and the content they produce can seem disconnected and simple. Whereas human users reach these platforms through front-end websites and apps, bots access them directly through code-to-code connections to a site's open application programming interface (API), posting and parsing information in real time.
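As an illustration of that code-to-code connection, the sketch below posts and reads messages through a purely hypothetical REST endpoint; the URL, token, and field names are assumptions for illustration and do not correspond to any real platform's API.

```python
# Illustrative sketch of code-to-code access: instead of loading a web page,
# an automated account talks to a platform's API directly, posting and parsing
# JSON in a loop. The endpoint, token, and payload fields are hypothetical.

import time
import requests

API_BASE = "https://api.example-platform.test/v1"   # hypothetical endpoint, not a real platform
TOKEN = "REPLACE_WITH_ACCESS_TOKEN"
HEADERS = {"Authorization": f"Bearer {TOKEN}"}

def post_message(text: str) -> None:
    # One HTTP call replaces the human workflow of logging in, typing, and clicking.
    requests.post(f"{API_BASE}/statuses", json={"text": text}, headers=HEADERS, timeout=10)

def search_hashtag(tag: str) -> list:
    # Structured JSON comes back instead of a rendered web page.
    resp = requests.get(f"{API_BASE}/search", params={"q": tag}, headers=HEADERS, timeout=10)
    return resp.json().get("results", [])

if __name__ == "__main__":
    while True:
        for item in search_hashtag("#example"):
            print(item.get("id"), str(item.get("text", ""))[:80])
        post_message("Automated status update")      # scripted, not typed
        time.sleep(60)                               # a pace no human poster needs to sustain
```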
Bots are versatile, cheap to produce, and ever evolving. They can be designed for any social media platform, live and thrive in the cloud, produce content, interact at any hour of the day, and replicate themselves. Indeed, for several years now, unscrupulous internet users have deployed bots for more than such mundane commercial tasks as spamming or scraping sites like eBay for bargains. Unfortunately, the unscrupulous internet users are often dictators and desperate political campaign managers.
But do bots have influence? Researchers have been largely unable to provide statistical models to show how a particular social media post may change an individual voter’s mind—although the social media firms themselves have those models. Firms like Facebook, Twitter, and Google don’t share data with that level of granularity with researchers, but within the firms such models help inform their pricing strategies for advertisements. Still, three forms of evidence reveal that bots have influence. First, we have the dictators and political professionals themselves, many of whom sink significant resources into bot-driven campaigns and argue that their campaigns have influence. Second, we understand the process by which a message that disseminates across networks of bot accounts crosses a threshold into human networks. Third, we can measure the long-term damage to public understanding caused by campaigns that have relied on bots to distribute misinformation on a massive scale.
If you can’t comprehensively surveil or censor social media, the next best strategy is to write automated scripts that clog traffic, promulgate your message, and overwhelm the infrastructure of your political enemies.12 One unique feature of the emerging political order is that it is being built, from the ground up, for surveillance and information warfare. Another is that it has new kinds of soldiers, defenders, and combatants.
For example, the ongoing civil war in Syria has cost hundreds of thousands of lives. The great majority of these are victims of President Bashar al-Assad’s troops and security services. After the Arab Spring arrived in Syria, it looked as if the forces loyal to the Ba’ath government would stay in control. Speedy military responses to activist organizations, torturing opposition leaders and their families, and the use of chemical weapons seemed to give Assad the strategic advantage. Yet despite these brutal ground tactics, he could not quell the uprising in his country. And by 2013 he was losing the battle for public opinion, domestically and abroad. Even China and Russia, backers that supplied arms and blocked agreement in the United Nations Security Council about what to do, appeared to succumb to political pressure to accept that Assad had to go.
What’s unusual about the crisis is that it might be the first civil war in which bots actively participated from the beginning, with serious political consequences. Public protest against the rule of Assad, whose family had been in charge since 1971, began in the southwestern Syrian city of Daraa in March 2011. Barely a month later, Twitter bots were detected trying to spin the news coming out of a country in crisis. In digital networks, bots behave like human writers and editors.
People program bots to push messages around or to respond in a certain way when they detect another message. Bots can move quickly, they can generate immense amounts of content, and they can choke off a conversation in seconds. From early on, people in Syria and around the world relied on Twitter to follow the fast-moving events. Journalists, politicians, and the interested public used the hashtag #Syria to follow the protest developments and the deadly government crackdown.
The countering bot strategy, crafted by security services loyal to the government, had several components. First, security services created a host of new Twitter accounts. Syria watchers called these users “eggs” because at the time, users that didn’t offer a profile photo were represented by a default symbol of an egg. No real person had bothered to upload an image and personalize the account. Regular users suspected that these profiles were bots because most people do put an image up when they create their profiles. Leaving the default image in can be the marker of an automated process, since bots don’t care what they look like. These eggs followed the human users who were exchanging information on Syrian events. The eggs generated lots of nasty messages for anyone who used keywords that signaled sympathy with activists.
Eggs swore at anyone who voiced affinity for the plight of protesters and antigovernment forces, and eggs pushed proregime ideas and content that had nothing to do with the crisis. Eggs provided links to Syrian television soap operas, lines of Syrian poetry, and sports scores from Syrian soccer clubs to drown out any conversation about the crisis. One account, @LovelySyria, simply provided tourist information. Because of how quickly they work, the proregime bots started to choke off the #Syria hashtag, making it less and less useful for getting news and information from the ground.
Investigation revealed that the bots originated in Bahrain, from a company called Eghna Development and Support.13 This is one of a growing cluster of businesses offering so-called political campaign solutions in countries around the world. In the West, such companies consult with political leaders seeking office and lobbying groups who want some piece of legislation passed or blocked. In authoritarian countries, political consulting can mean working for dictators who need to launder their images or control the news spin on brutal repression. Eghna’s website touted the @LovelySyria bot as one of its most successful creations because it built a community of people who supposedly just admire the beauty of the Syrian countryside; the company has denied directly working for the Syrian government.14
At the beginning of the Syrian civil war, when @LovelySyria went to work, it had few followers and little online community presence. With two tweets a minute, @LovelySyria was prevented by Twitter itself from devaluing a socially important hashtag. Of course, automated scripts are not the only source of computational propaganda. The Citizen Lab and Telecommix found that Syrian opposition networks had been infected by a malware version of a censorship circumvention tool called Freegate.15 So instead of being protected from surveillance, opposition groups were exposed to it. And a social media campaign by a duplicitous cleric was cataloging his supporters for the government.
In today’s physical battlefield, information technologies are already key weapons and primary targets. Smashing your opponent’s computers is not just an antipropaganda strategy, and tracking people through their mobile phones is not just a passive surveillance technique. Increasingly, the modern battlefield is not even a physical territory. And it’s not just the front page of the newspaper either. The modern battlefield involves millions of individual instructions designed to hobble an enemy’s computers through cyberwar, long-distance strikes through drones, and coordinated battles to which only bots can respond.
More recently, bots have gone after the aid workers hoping to assist in the complex humanitarian disaster that has evolved in Syria.16 Officially known as the Syria Civil Defence, the organization popularly called the White Helmets had, at its peak, some 3,400 volunteers—former teachers, engineers, tailors, and firefighters—who would rush into conflict zones to provide aid. The organization has been credited with saving the lives of thousands of people, but a successful campaign driven by Russian bot networks spread rumors that the group was linked to Al-Qaeda. This turned the organization’s volunteers into military targets. The Syrian government accused Syria Civil Defence of exaggerating claims that they were being attacked and charged the group with causing the damage themselves. Eventually, many group members had to be rescued and resettled well away from conflict zones.
Over time this political campaign industry has grown, with firms competing to offer increasingly aggressive services. Until it filed for bankruptcy, Cambridge Analytica offered clients services ranging from data mining and potent negative campaign strategies to political entrapment and bribery.17 In 2016 the Trump campaign solicited proposals from the now-defunct Israeli firm Psy-Group for campaign services, and the company pitched the idea of creating five thousand fake social media users to swing party supporters his way and of using social media to expose or amplify division among rival campaigns and factions. The proposal used code names for Donald Trump (“Lion”), who was the potential client, and the target, Ted Cruz (“Bear”), who was a rival for the Republican Party’s nomination as a presidential candidate.18
Firms promising to use big-data analytics to make political inferences emerged in the early 2000s.19 So when social media applications took off, the ability to use the algorithms themselves to study and then push opinion quickly became a specialized craft within the political consulting industry. It isn’t always easy to evaluate whether firms like Cambridge Analytica and Psy-Group can make good on their promises. But as Mark Andrejevic observes in his book Infoglut, the core of the promise is that specialized analytics software and big data can let such neuromarketers understand what people are really thinking, at a nuanced level beyond what any pollster or seasoned political analyst can reveal.
Understanding precisely how social media platforms affect public life is difficult. Even the social media platforms themselves seem to have trouble reporting on the depth and character of the changes they have brought about in society. When evaluating how the Russian government contributed to the Trump campaign in the lead-up to the 2016 US presidential election, Facebook first reported that this foreign interference was negligible. Then it was reported as minimal, with just three thousand ads costing $100,000 linked to some 470 accounts that were eventually shut down.20 But as public and political pressure to understand the election grew, so did the estimates of electoral impact, and other social media platforms were caught up in the fray. Under questioning from US congressional investigators, the lawyers from these companies revealed that Russia’s propaganda machine had in fact reached 126 million Facebook users. Foreign agents had published more than 131,000 messages from 36,000 Twitter accounts and uploaded over a thousand videos to Google’s YouTube service.21
What makes social media a powerful tool for propaganda is obviously its networked structure. Such platforms are distributed systems of users without editors to control the production and circulation of content—and therefore without editors to provide quality control or take responsibility for what circulates.
In its prepared remarks sent to Congress, Facebook stated that the Internet Research Agency, the shadowy Russian company linked to the Kremlin, had posted roughly 80,000 pieces of divisive content that were shown to about twenty-nine million US users between January 2015 and August 2017.22 Those posts were then liked, shared, and followed by others, spreading the messages to tens of millions more. In its statement, Facebook also said that it had found and deleted more than 170 suspicious accounts on its photo-sharing app Instagram and that those accounts had posted around 120,000 pieces of Russia-linked content. Each account was designed to present itself as a real social media user, a real neighbor, a real voter.
Since the US election in November 2016, and with increasing scrutiny of foreign influence by Congress, the Federal Bureau of Investigation, and the world’s media, platforms like Facebook and Twitter have been more aggressive in shutting down highly automated accounts and the profiles of fake or abusive users. They maintain that they have no role in facilitating foreign interference in elections or degrading public life in democracies. They insist that they are not publishing platforms with responsibility for what is disseminated to their users. But in almost every election, political crisis, and complex humanitarian disaster, their algorithms disseminate misinformation. Can democracy survive such computationally sophisticated political propaganda?
Simply defined, computational propaganda is the misleading news and information, algorithmically generated and distributed, that ends up in your social media feed. It can come from networks of highly automated Twitter accounts or fake users on Facebook. It can respond to your flirtation with a political entreaty. Often the people running these campaigns find ways to game the news feeds so that social media algorithms exaggerate the perceived importance of some junk news. But sometimes they simply pay for advertising and take advantage of the services that social media companies offer to all kinds of advertisers. The political campaigns or foreign governments that generate computational propaganda experiment with all manner of platforms, not just Instagram and YouTube. Doing so usually means breaking terms of service agreements, violating community norms, and using platform affordances in a way that engineers didn’t intend. Sometimes it also means breaking election guidelines, privacy standards, consumer protection rules, or democratic norms. Unfortunately, these activities are not always illegal, and even when they are, they are not always fully investigated or prosecuted.23
Generating computational propaganda usually requires that someone compose the negative campaign messages and attack ads for automated and strategic distribution. Social media firms provide the algorithms that allow for automating the distribution of messages to targeted users. For example, geolocation data from mobile phones helped Facebook compute where ad placement would be most effective, sending more of the Brexit campaign ads to users in England rather than Wales or Scotland. During the 2016 elections in the United States, junk news was concentrated in swing states. Essentially, social media firms provide the computational services that allow for the most effective distribution of misinformation. The precise computational work of ad auction systems varies from platform to platform, but all are designed to automate the process of converting insights about users into compelling ads, delivered to the users most likely to react as desired.
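Real ad auction systems are proprietary and far more elaborate, but a toy sketch conveys the logic described above: each candidate ad receives a score that combines the advertiser's bid, a predicted response rate, and how well the ad's targeting matches what is known about the user, and the highest-scoring ad fills the slot. All names, numbers, and the scoring rule here are illustrative assumptions, not any platform's actual auction.

```python
# Toy illustration of targeted delivery: combine what is known about a user
# with a bid and a predicted response rate, then deliver the highest-scoring ad.
# Every name, number, and rule here is an illustrative assumption.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Ad:
    name: str
    bid: float                      # what the advertiser will pay per click
    target: Dict[str, str]          # audience attributes the ad is aimed at
    predicted_ctr: float            # assumed click-through estimate

def score(ad: Ad, user: Dict[str, str]) -> float:
    # Reward ads whose targeting matches this user's attributes (e.g., region, age band).
    match = sum(1 for k, v in ad.target.items() if user.get(k) == v)
    return ad.bid * ad.predicted_ctr * (1 + match)

def select_ad(ads: List[Ad], user: Dict[str, str]) -> Ad:
    return max(ads, key=lambda ad: score(ad, user))

if __name__ == "__main__":
    ads = [
        Ad("campaign_message_a", bid=0.40, target={"region": "England"}, predicted_ctr=0.03),
        Ad("campaign_message_b", bid=0.40, target={"region": "Scotland"}, predicted_ctr=0.01),
    ]
    user = {"region": "England", "age_band": "18-25"}
    print(select_ad(ads, user).name)   # the England-targeted message wins this slot
```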
When we independent researchers trace these bot networks, we are constrained because we can anticipate only some of the hashtags that will be used in an election campaign. Candidate names, party names, and a handful of other standard words are often useful keywords that help analysts identify posts, tweets, or ads about politics. Of course, this means that we’ll miss political conversations that are not tagged this way as well as conversations hashed with unusual terms and terms that emerge organically during a campaign. Nonetheless, once we identify a network of highly automated accounts, we try to answer a few basic questions so that we can compare trends across countries. Which candidates are getting the most social media traffic? How much of that traffic comes from highly automated accounts? What sources of political news and information are used?
Investigating how automation is used in political processes usually means starting off with a handful of accounts that are behaving badly. They might, for example, produce such high volumes of content that they effectively choke off a public conversation. Or they might push small amounts of content, but from extremist, conspiratorial, or sensationalist sources. There are many ways to track bots at scale, but all have weaknesses. Bots tend to follow other bots. They often produce content that matches other accounts’ posts word for word and character for character. But we have found that the best way to manage false positives on Twitter—no human likes being called a bot—is simply to watch the frequency of contribution. We have found that accounts tweeting more than fifty times a day using a political hashtag are invariably bots or high-frequency accounts mixing automated techniques with occasional human curation. This varies somewhat from country to country. But few humans—even among journalists and politicians—can consistently generate fresh tweets on political hashtags for days on end.
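A minimal sketch of that frequency heuristic might look like the following, assuming a stream of post records that each carry an author and a timestamp and have already been filtered to a single political hashtag. The fifty-per-day threshold comes from our rule of thumb; the record format is an assumption for illustration.

```python
# Flag accounts averaging more than fifty posts a day on a political hashtag
# over the observed window. Record format is an illustrative assumption.

from collections import Counter
from datetime import datetime
from typing import Dict, Iterable, Set

def high_frequency_accounts(tweets: Iterable[Dict], threshold_per_day: int = 50) -> Set[str]:
    """tweets: dicts with 'user' and 'created_at' (ISO 8601), one hashtag's traffic."""
    counts: Counter = Counter()
    first_seen: Dict[str, datetime] = {}
    last_seen: Dict[str, datetime] = {}
    for t in tweets:
        user = t["user"]
        ts = datetime.fromisoformat(t["created_at"])
        counts[user] += 1
        first_seen[user] = min(first_seen.get(user, ts), ts)
        last_seen[user] = max(last_seen.get(user, ts), ts)
    flagged = set()
    for user, n in counts.items():
        days = max((last_seen[user] - first_seen[user]).total_seconds() / 86400, 1.0)
        if n / days > threshold_per_day:
            flagged.add(user)        # likely automated, or a human-bot hybrid account
    return flagged
```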
Not surprisingly, given how important and complex this problem is, the study of computational propaganda has grown by leaps and bounds in recent years. With support from the National Science Foundation and the European Research Council, our team of social and computer scientists has been tracking automated activity on social media platforms, how misinformation spreads across these networks, and ultimately how this might affect democracy itself.
The 2016 US presidential election certainly stands as a watershed moment for understanding the evolution of computational techniques for spreading political propaganda across online social networks. In the most public way possible, it demonstrated how the algorithms used by social media companies to spread influence can be turned into tools for political interference—plowshares turned into swords. Both candidates attracted networks of automated Twitter accounts that pushed around their content during the election period.
For example, mapping political conversations by tracking the most prominent Clinton-related, Trump-related, and politically neutral hashtags revealed that the Trump-supporting bot networks were more active and expansive than the Clinton-supporting ones. There were more of them, they generated more content, and they were more interconnected.
Even though bots are simply bits of software that can be used to automate and scale up repetitive processes (such as following, linking, replying, and tagging on social media), we can still see evidence of the human deliberation behind the strategic use of bots for political communication. A bot may be able to pump out thousands of pro-candidate tweets a day, giving the impression of a groundswell of support, confusing online conversation, and overwhelming the opposition—but that bot was coded and released by a human.
The use of automated accounts was deliberate and strategic throughout the 2016 election season, seen most clearly with the pro-Trump campaigners and programmers who carefully adjusted the timing of automated content production during the debates, strategically colonized Clinton-related hashtags by using them for anti-Clinton messages, and then quickly disabled these activities after election day. We collected several samples of Twitter traffic and mapped the botnets by extracting the largest undirected, connected component of retweets by bots within the set of either Trump-related or Clinton-related hashtags. We tracked political conversations on Twitter during the three presidential debates and then again in the ten days before voting day. Following these trends over time allowed us to examine how social media algorithms could be leveraged in political strategy.
First, we demonstrated that highly automated bot campaigns were strategically timed to affect how social media users perceived political events. After the first and second debates, highly automated accounts supporting both sides tweeted about their favored candidate’s victory in the televised debates. We also noticed campaigners releasing bot activity earlier and earlier with each debate night. Some highly automated pro-Trump accounts were declaring Trump the winner of the third debate before the debate was even broadcast.24
Second, we saw that, over time, the pro-Trump bot networks colonized Clinton’s community hashtags. For the most part, each candidate’s human and bot fans used the hashtags associated with their favorite candidate. By election day, around a fifth of the exclusively pro-Trump Twitter conversation was generated by highly automated accounts, as was around a tenth of the exclusively pro-Clinton Twitter conversation. The proportion of highly automated Twitter activity rose to a full quarter when we looked at mixtures of Clinton and Trump hashtags—often seeing the negative messages generated by Trump’s campaigners (#benghazi, #CrookedHillary, #lockherup) injected into the stream of positive messages being traded by Clinton supporters (#Clinton, #hillarysupporter).
Last, we noticed that most of the bots went into hibernation immediately following the election: their job was done. Bots often have a clear rhythm of content production. If they are designed to repeat and replicate content in interaction with humans, then their rhythm of production follows our human patterns of being awake in the day and asleep at night. If they are frontloaded with content and just push out announcements and statements constantly, then they can run twenty-four hours a day, building up an enormous production volume over time. These highly automated accounts certainly worked hard during the campaign season—but interestingly, they were programmed to wind down afterward. Whoever was behind them turned them off after voting day.
The data consists of approximately 17 million unique tweets and 1,798,127 unique users, collected November 1–11, 2016. The election was on November 8, 2016. To model whether humans retweeted bots, we constructed a version of the retweeting network that included connections only where a human user retweeted a bot. The result was a directed retweet network, where a connection represented a human retweeting a bot. Overall, this network consisted of 15,904 humans and 695 bots. On average, a human shared information supplied by a bot five times. The largest coherent Trump botnet captured in a sample of Twitter traffic consisted of 944 pro-Trump bots, compared with the 264 that supported Clinton.
The largest botnet we found in our sample of the Trump supporter network was more than three times the size of the largest network of highly automated accounts promoting Clinton. Moreover, in comparing these networks, we found that the highly automated accounts supporting Trump were much more likely to be following each other. This suggests that the organization of Clinton’s network of highly automated accounts may have been a little more organic, with individual users creating bots and linking up to networks of human accounts in a haphazard way. In contrast, Trump’s retweeting network was quite purposefully constructed, and highly coordinated and automated accounts were a much more prominent feature of the Twitter traffic about Trump.25
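The network construction behind these comparisons can be sketched with a standard graph library such as networkx: an undirected retweet graph restricted to flagged bot accounts yields the largest coherent botnet component, and a directed graph of humans retweeting bots measures amplification. The input record format is an illustrative assumption.

```python
# Sketch of the two retweet networks described above, using networkx.
# Input records are assumed to carry 'retweeter' and 'original_author' names.

import networkx as nx
from typing import Dict, Iterable, Set

def largest_botnet(retweets: Iterable[Dict], bots: Set[str]) -> Set[str]:
    """Largest connected component of the bot-to-bot retweet graph."""
    g = nx.Graph()
    for rt in retweets:
        a, b = rt["retweeter"], rt["original_author"]
        if a in bots and b in bots:
            g.add_edge(a, b)                              # bot retweeting bot
    if g.number_of_nodes() == 0:
        return set()
    return max(nx.connected_components(g), key=len)       # largest coherent botnet

def human_bot_amplification(retweets: Iterable[Dict], bots: Set[str]) -> nx.DiGraph:
    """Directed graph where an edge means a human shared content supplied by a bot."""
    d = nx.DiGraph()
    for rt in retweets:
        human, bot = rt["retweeter"], rt["original_author"]
        if human not in bots and bot in bots:
            w = d[human][bot]["weight"] + 1 if d.has_edge(human, bot) else 1
            d.add_edge(human, bot, weight=w)
    return d
```

Edge weights in the directed graph recover statistics like the average number of times a human shared bot-supplied content.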
Misinformation over other platforms spread in similar ways. Facebook’s algorithms distributed false news stories favoring Trump thirty million times and false news stories favoring Clinton eight million times.26 Only some 14 percent of US voters thought of social media as their most important source of election news. The average voter saw only one or two fake news stories, and only half of those voters who recalled seeing fake news stories believed them. However, evaluating averages like this isn’t the best way to understand the impact of algorithmic content distribution, and a few percentage points in exposure can result in major political outcomes. The national averages may seem low, but these fake news stories were concentrated in swing states, where voters would have been exposed to much higher volumes of misinformation and where a few percentage points of change in voter opinion gave Trump the edge in claiming victory for the entire state.27
Social media platforms have responded to mounting criticism from government and civil society with modest initiatives. They occasionally catch a large volume of fake users and have introduced small interface changes to combat the spread of malicious automation. But the problem has gone global. The flow of political conversations has always been directed by information infrastructure, whether newspapers, radio, or television. But the degree to which algorithms and automation speed up the flow of political content—and widen its distribution—is entirely new. And these computational techniques have spread globally with exceptional speed. Usually it takes only a few campaign cycles for political consultants to take the latest dirty tricks around the world.
Programmers and political campaign managers have devised a wide range of applications for bots in political communication. It is often hard to attribute responsibility for bot activity on social media platforms. Some platforms thrive because of permissive automation policies, and political campaign managers take advantage of such supportive algorithms. Other platforms require that bot accounts identify as such or require that only real human users can create profiles. But creative political consultants find the work-arounds, even if they have to violate a platform’s terms of service, pay for custom programming, or hire trolls to generate messages.
At this point we’ve seen bots used in multiple ways. Obviously, they can be used by one candidate to attack another candidate in a seemingly public conversation. They can be used to plant false stories, debate with campaign staff, and argue with opponents. They can be used to make a candidate look more popular, either by inflating the numbers of followers or boosting the counts of attention received (likes, retweets). They can be used to apply all the negative campaigning techniques that campaign managers use on other media, such as push polling, rumor mongering, or direct trolling.
Candidates have used bots to encourage the public to register complaints against other candidates with elections officials. For example, during the US Republican primary elections, highly automated accounts managed by a Trump advocate encouraged voters to complain about another candidate’s campaign practices to the Federal Communications Commission. The complaint concerned Senator Ted Cruz’s use of robocalls—automated phone calls that remind voters to vote or conduct push polling. So bots were used to encourage humans to complain about bots bothering humans. Similarly, the political consulting firm New Knowledge experimented with a bot network that would advocate for a conservative candidate, but then get caught and outed as a Russian-backed support campaign. This would have made it look as if the Russians were campaigning for the conservative and embarrassed the conservative candidate—except that the whole effort was exposed.28
When our research team tried to inventory global spending on computational propaganda, we were surprised to find organized and coordinated ventures in every country we looked at, including many Western democracies. The earliest reports of organized social media manipulation emerged in 2010, and by 2017 we found details on such organizations in twenty-eight countries. By 2018 formal misinformation operations were in play in forty-eight countries, with some half a billion US dollars budgeted for research and development in public opinion manipulation. When we took an inventory in 2019, there were seventy countries with some domestic experience in cybertroop activity, and eight countries with dedicated teams meddling in the affairs of their neighbors through social media misinformation.
Looking across these countries, we found evidence of every authoritarian regime in our sample targeting its own population with social media campaigns and several of them targeting foreign publics. But authoritarian regimes are not the only—or even the best—organizations doing social media manipulation. In contrast, for democracies it was their own political parties that excelled at using social media to target domestic voters. Indeed, the earliest reports of government involvement in nudging public opinion involve democracies, and new innovations in political communication technologies often come from political parties and are trialed during high-profile elections. Cyber troops can work within a government ministry, such as Vietnam’s Hanoi Propaganda and Education Department, or Venezuela’s Communication Ministry.29 In the United Kingdom, cyber troops can be found across a variety of government ministries and functions, including the military (Seventy-Seventh Brigade) and electronic communications (Government Communications Headquarters, or GCHQ). Most democracies can’t hide the fact that they have such groups, though they will be secretive about group activities.30 And in China, the public administration behind cyber troop activities is incredibly vast. There are many local offices that coordinate with their regional and national counterparts to create and disseminate a common narrative of events across the country.31 In other nations, cyber troops are employed under the executive branch of government. For example, in Argentina and Ecuador, cyber troop activities have been linked to the office of the president.32
In our comparative analysis, we found several patterns in how different kinds of countries organize cyber troops. In democracies, strategic communication firms generally enter into contracts with political parties to manipulate voters or agreements with government agencies to shape public opinion overseas. In authoritarian regimes, cyber troops are often military units that experiment with manipulating public opinion over social media networks. In other words, social media companies have provided the platform and algorithms that allow social control by ruling elites, political parties, and lobbyists.
Many governments have been strengthening their cyberwarfare capabilities for both defensive and offensive purposes. In addition, political parties worldwide have begun using bots to manipulate public opinion, choke off debate, and muddy political issues.
The use of political bots varies across regime types and has changed over time. In 2014, my colleagues and I collected information on a handful of high-profile cases of bot usage and found that political bots tended to be used for distinct purposes during three events: elections, political scandals, and national security crises. The function of bots during these situations extends from the nefarious practice of demobilizing political opposition followers to the relatively innocuous task of padding political candidates’ social media follower lists. Bots are also used to drown out oppositional or marginal voices, halt protest, and relay false messages of governmental support. Political actors use them in broad attempts to sway public opinion.
Elections and national security crises are sensitive moments for countries. But for ruling elites, the risk of being caught manipulating public opinion is not as serious as the threat of having public opinion turn against them. While botnets have been actively tracked for several years, their use in political campaigning, crisis management, and counterinsurgency has greatly expanded recently. A few years ago, it was tough for users to distinguish text content made by a fully automated script from that produced by a human. Now video can be computationally produced, so that familiar national leaders can appear to say things they never said and political events that never occurred appear to be captured in a recording.
In a recent Canadian election, one-fifth of the Twitter followers of the country’s leading political figures were bots. All the people running for US president in 2020 have bots following their social media accounts, though we can never know whether this is a deliberate strategy by candidates to look more popular or an attempt by opponents to make them look bad. We know that authoritarian governments in Azerbaijan, Russia, and Venezuela use bots. The governments of Bahrain, China, Syria, Iran, and Venezuela have used bots as part of their attacks on domestic democracy advocates.
Pro-Chinese bots have clogged Twitter conversations about the conflict in Tibet and have meddled in Taiwanese politics.33 In Mexico’s recent presidential election, all the major political parties ran campaign bots on Twitter. Furthermore, the Chinese, Iranian, Russian, and Venezuelan governments employ their own social media experts and pay small amounts of money to large numbers of people to generate progovernment messages.
Even democracies have groups like the United Kingdom’s Joint Threat Intelligence Group. These secretive organizations are similarly charged with manipulating public opinion over social media using automated scripts. Sometimes Western governments are unabashed about using social media for political manipulation. For example, the US Agency for International Development tried to seed a “Cuban Twitter” that would gain lots of followers through sports and entertainment coverage and then release political messages by using bots.34
The Syrian Electronic Army is a hacker network that supports the Syrian government. The group developed a botnet that generates proregime content with the aim of flooding the Syrian revolution hashtags—such as #Syria, #Hama, and #Daraa—and overwhelming the prorevolution discussion on Twitter and other social media portals. As the Syrian blogger Anas Qtiesh writes, “These accounts were believed to be manned by Syrian Mokhabarat (intelligence) agents with poor command of both written Arabic and English, and an endless arsenal of bite and insults.”35 Differing forms of bot-generated computational propaganda have been deployed in dozens of countries.36 Contemporary political crises in Thailand and Turkey, as well as the ongoing situation in Ukraine, are giving rise to computational propaganda. Politicians in those countries have been using bots to torment their opponents, muddle political conversations, and misdirect debate.
Although I have mostly discussed Twitter, every platform has processes that can be automated by someone who wants to distribute massive amounts of political content. At first, it was mostly authoritarian governments that used bots, but these days they are also a part of most major political campaigns in democracies.
Between 5 and 15 percent of all Twitter accounts are either bots or highly automated accounts; two-thirds of the links to information about news and current events shared on Twitter come from bot accounts; and in 2018, Twitter swept out seventy million fake accounts.37 By 2020, Twitter was regularly culling bot accounts in advance of every nation’s elections. Most of these crafty bots generate inane commentary and try to sell stuff, but some are given political tasks.
The reason that digital technologies are now crucial for the management of political conflict and competition is that they respond quickly. Brain scientists find that it takes 650 milliseconds for a chess grandmaster to realize that her king has been put in check after a move. Any attack faster than that and bots have the strategic advantage—they spot the move and calibrate the response in even less time. The most advanced bots battle it out in financial markets, where fractions of a second can mean millions in profit or loss. In competitive trading they have the advantage of being responsive. On social media, slight changes in information or a little social validation from a few humans can result in a massive cascade of misinformation. Such changes or validation can drive social media algorithms to treat the content as socially valuable, distributing it even more widely.
Bots do have a role in managing traffic on digital networks, whether electronic stock markets or social media feeds. Bots help make transactions speedy and help filter the political news and information we want from that we don’t want. And they don’t always capture our attention. They can certainly compete for it, clog our networks, stifle economic productivity, and block useful exchanges of information, though. Elsewhere, I’ve argued that we are living in a pax technica because the majority of the cultural, political, and economic life of most people is managed over digital media.38 Having control of that information infrastructure—whether it’s by technology firms, dictators, or bots—means having enormous power.
A lie machine is constituted by the relationships among devices, bots, and social media accounts as much as by the relationships among real people. Obviously, technology networks grow along with social networks, but that means that our digital devices and social media accounts increasingly contain our political, economic, and cultural lives. These user identities provide some significant capacities but also some troubling constraints on our political learning.
Occasionally, bots and other automated scripts are used for social good. Citizens and civic groups are beginning to use bots, drones, and embedded sensors for their own proactive projects. Such projects, for example, use device networks to bear witness, publicize events, produce policy-relevant research, and attract new members.
It certainly takes a dedicated organization of people to construct and maintain a socially significant lie. But social media firms provide the platforms that can be used to distribute such content in a massive way. And this technology for distributing it is incredibly fast—in a sense, it automates lying. We will soon discuss how to assemble the different parts: the human trolls, combined with the bots described here, are components of a larger mechanism for high-volume, high-frequency production of highly partisan content aimed at particular communities.
Even campaign managers admit that they are importing tactics from authoritarian regimes into democracies. As Patrick Ruffini, a Republican research and analytics firm founder, told Politico, “A lot of these unsavory tactics that you would see in international elections are being imported to the US.”39
The market for deceitful robots is a competitive one. The people who build and maintain such networks must create visually pleasing and sufficiently entertaining stories of political intrigue, hidden agendas, or malfeasance. The stories must contain enough realism that users accept the claims, click through, read more, and retain a few details and doubts. And the stories must be delivered by a relatively enclosed network of other accounts and other content that affirms and reinforces what people are seeing.
Now that we understand how social media algorithms are used by modern lobbyists, politicians, and foreign governments to disseminate information, we next need to examine the process of marketing a political lie.