2
Production

Troll Armies and the Organization of Misinformation on Social Media

When Russian president Vladimir Putin sponsored a military incursion into eastern Ukraine in November 2014, he faced immediate criticism from journalists and political opponents at home. The incursion itself was a complex, multimedia operation, with Russian-backed military personnel staging a pro-Russian uprising and actively producing social media content. Within Russia, the opposition was not a significant threat to Putin’s control of the government, but over time the opposition to his military aggression grew, and Putin tolerates little dissent. At the center of the protests was Boris Nemtsov.

Nemtsov was a relatively centrist politician who had long been calling public attention to government corruption. He was critical of all the profiteering around government contracts. He argued that destabilizing Ukraine was pointless and costly, and he organized tens of thousands of protesters in peace marches. On February 27, 2015, Nemtsov and his girlfriend were out for dinner in Moscow. Before midnight, the couple left the restaurant and were ambushed while crossing a bridge near the Kremlin. Nemtsov was shot dead on the spot. Russian courts have since convicted several Chechen nationalists for the murder, but the evidence supporting those convictions was weak.

Like an engine, lie machines are built from several components. When all the components are working well, the chances are higher that the machine will successfully manipulate public opinion on an issue. The first component is the process of producing a lie. Organized and motivated people produce lies in the service of some ideology, political candidate, or ruling elite, and the most effective and efficient of such groups can be the agencies of authoritarian governments. Later on, we’ll look at other stories and places to understand the other two components—the distribution systems and marketing processes—but one of the earliest assemblages of these parts accompanied Vladimir Putin’s invasion of Ukraine, and the crisis around Boris Nemtsov’s murder reveals much about how the lie producers operate. Essentially, troll armies are formally organized; they are often paid staff working under contract or in an employment relationship with a government, public agency, or recognized political party. The staff of such social media militias usually have genuine ideological affinities with the ruling elites of a government or party.

Politically motivated murders have become all too common in Russia. Such attacks occasionally involve targets overseas, and when they happen, the Russian government often seems to be ready with a social media strategy for managing the fallout. Common targets include journalists, opposition leaders, and business leaders who don’t want to play by the norms and rules of corruption in modern Russia. With Russia’s invasion of Ukraine in 2014, the world was watching the country’s belligerence closely, and Nemtsov’s murder received international attention. The Russian government’s army of social media propagandists at the Internet Research Agency (IRA) was immediately up and running, and a recently leaked document sheds light on how those propagandists were instructed to spin out stories of Nemtsov’s homicide.

The fast pace of international affairs and the negative coverage of Russia in the world’s free press means that the Kremlin’s media strategy must stay agile and fresh. The government’s news agencies, RT (formerly Russia Today) and Sputnik, promote the regime’s political perspectives overseas by broadcasting biased news stories and promoting stories of political corruption, racial strife, and social inequality in foreign democracies. A human staff pushes this content across the major social networking platforms; the best available estimate on staff capacity is that the IRA has between four hundred and one thousand paid employees and an annual budget of approximately ten million dollars. These resource commitments allow the Russians to generate media spin quickly in times of crisis.1

Guidance from the political center on what messages to distribute comes in the form of a weekly memo on the key issues that staff should be working on and the perspectives they should be taking. The memo doesn’t contain specific instructions on what to say and when to say it. It outlines positions and perspectives and makes suggestions about tone and context. On some issues the weekly memo identifies particular lies to propagate, and on others it provides tips on the subtle messaging that makes an issue seem complex, nuanced, and helpfully multifaceted. The guidelines are sent out to the paid army of social-media-savvy staff who then go to work pushing messages out over the thousands of fake user accounts they have set up.

Nemtsov was murdered at 11:30 that night in February 2015, and the Kremlin’s social media strategy was immediately ready for action.

The Beginning of Lie Machines

All governments try to shape public opinion, though different regimes do this in varying ways under diverse circumstances. Democratically elected governments tend to do it during military or economic crises, with propaganda offices dedicated to maintaining public support for difficult policies. Such governments, almost by definition, tend to practice censorship and public opinion manipulation on only a select number of critical issues, with oversight by courts or elected officials, and generally not in order to keep leaders in power. In contrast, authoritarian governments regularly use censorship, surveillance, and public opinion manipulation on a wide range of issues and to keep leaders in power.

Wealthy authoritarian governments use their media organizations to shape opinion constantly, but in moments of crisis they intervene aggressively to head off a real confrontation with political challengers or the public at large. States such as Russia and China have bureaucracies with a century of experience managing public opinion through newspapers, radio, films, and television.

The founding story of modern, high-tech lie machines begins with Russia’s long-term investments in nationalist youth blogging camps. These teams directed misinformation at Russian citizens over social media using the country’s popular LiveJournal blogging system, an effort that involved several government departments. The well-resourced Internet Research Agency came to life in 2012 as a formal organization, with desks, telephones, job ads, and performance bonuses—at least, 2012 is the year that IRA-operated accounts first appeared on Twitter.

Political elites in Russia saw how social movements in other parts of the world were using platforms like Facebook and Twitter to organize and decided to apply those same tactics with the aim of political control rather than political change. Effectively, Russia’s authoritarian government developed a social media strategy, funded it well, and built the first significant component of the modern lie machine: a professional organization for systematically producing political misinformation specifically for distribution over social media platforms.

These teams, whether operating in the public or private sector, are tasked primarily with manipulating public opinion online: pushing ideology through the algorithms of social media.2 Over the past decade, several authoritarian regimes have launched such organizations by retasking entire military units to carry out social media manipulation. Even dictators in small countries have begun developing small, professional teams to defend regime policies, attack opponents, and stay alert for moments of crisis. Private firms have sprung up, not just in eastern Europe, but in the world’s global cities—boutiques that advertise “social media services” and “political consulting” for any kind of client, be it a government agency, political party, dictator, or candidate campaigning for election. As of 2020, such troll armies operated in seventy countries around the world, with copycat IRA agencies in China, India, Iran, Pakistan, Saudi Arabia, and Venezuela that also generate campaigns targeting users outside their own countries.

Troll armies disseminate computational propaganda over social media platforms using automation, algorithms, and big-data analytics to manipulate public life.3 Doing this often involves producing junk news stories, spreading them over social media platforms, illegally harvesting data and microprofiling particular citizens, exploiting social media algorithms and putting them into service for influence operations, amplifying hate speech or harmful content through fake accounts or political bots, and producing clickbait content for optimized social media consumption. They are the dictator’s response to a social-media-enabled Arab Spring, and they are a standing army to assist with social media spin and manage public perception whenever a dictator stumbles.

What Dictators Learned from the Arab Spring

Many authoritarian regimes began treating social media as a tool for managing public opinion after the events of the Arab Spring. Beginning in late 2010, these popular democracy movements started in Tunisia, inspired democracy advocates in Egypt, and animated other movements across the region.4 Governments watched as some of the world’s toughest strongmen—Tunisia’s Ben Ali, Libya’s Muammar Gaddafi, Egypt’s Hosni Mubarak, and Yemen’s Ali Abdullah Saleh—fell to demands for change despite their many decades in control of public life. Each lost power after unparalleled levels of social protest and civil unrest. Several other autocrats had to dismiss their cabinets, make political concessions, and redistribute wealth. Discontent cascaded over transnational networks of family and friends to Algeria, Jordan, Lebanon, and Morocco. Several countries remain in crisis today, and in most of them it is not clear whether the popular demand for change will result in new, sustainable political institutions.

Social protests in the Arab world spread across North Africa and the Middle East largely because digital media allowed communities to realize that they shared grievances and to nurture transportable strategies for mobilizing against dictators. But the early months of the Arab Spring were not about traditional political actors such as unions, political parties, or radical fundamentalists. These protests drew out networks of people, many of whom had not been as successful at political organization before: young entrepreneurs, government workers, women’s groups, and the urban middle class.

Ben Ali ruled Tunisia for twenty years, Mubarak reigned in Egypt for thirty years, and Gaddafi held Libya in a tight grip for forty years. Yet their bravest challengers were twenty- and thirty-year-olds without ideological baggage, violent intentions, or clear leadership. The groups that initiated and sustained protests had had few meaningful experiences with public deliberation or voting and little experience with successful protests. However, these young activists were politically disciplined, pragmatic, and collaborative. Even though they had grown up under entrenched authoritarian regimes, they had developed their political identities and aspirations over social media. For several years before the Arab Spring, young people had used social media to learn about political life in countries where faith and freedom coexisted. Then during the Arab Spring, they used social media to express their grievances and coordinate their protests.5

Democracy advocates learn tricks from one another, but so do dictators. Authoritarian regimes copy specific policy ideas for public administration and general ways of maintaining legitimacy and control. Research has demonstrated that contact between governments often produces bureaucracies that look alike—they emulate each other’s structure and processes. This is particularly true for democracies; such countries are more open about their successes and failures, and public agencies operate with higher expectations for success.6 However, it is also true of authoritarian regimes, whose military units, police units, and national security teams aggressively learn and apply the tricks that help autocratic regimes stay in power. The most important techniques often involve the machinery of information policy, telecommunications infrastructure, online censorship, and digital surveillance.7 For some of us, the Arab Spring was an inspiring moment of public demand for regime change. For the affected dictators, it was a lesson in how digital media can be used against them during a power struggle.

We can safely say that since the mid-1990s, digital media has in different ways and contexts been either a necessary or a sufficient cause of social uprising, regime change, and democratization.8 The Arab Spring exposed, both to researchers and to dictators, several phases of modern social unrest. The first is a preparation phase that involves activists using digital media in creative ways to find one another, build solidarity around shared grievances, and identify collective political goals. Next comes an ignition phase, involving some inciting incident that is ignored by the mainstream, state-controlled media but that circulates digitally and enrages the public. Then comes a phase of street protests that are coordinated digitally. After that there must be a phase of international buy-in, during which digital media is used to draw in foreign governments, global diasporas, and especially overseas news agencies.

This all culminates in a climax phase in which the state cracks down and protesters are forced to go home (Bahrain, Iran), rulers concede and meet public demands (Egypt, Tunisia), or political groups move into a protracted civil war (Libya, Syria). These days, one mark of the peak of a crisis of social unrest is that the government disconnects the national information infrastructure from the global internet in an attempt to hold back the cascade of unrest. The denouement that follows is a phase of post-protest information warfare, an ideological battle between the winners and losers of the strife.

Having seen this arc of events, dictators began developing their own social media strategies.9 In the past, authoritarian regimes had easily controlled broadcast media in times of political crisis by destroying newsprint supplies, seizing radio and television stations, and blocking phone calls. It is certainly more difficult to control digital media on a regular basis, but there have been occasions on which states have disabled anything from marginal to significant portions of their national information infrastructure.

Authoritarian governments build lie machines for two broad reasons.10 The first is to protect political leaders and state institutions from reputational harm. When the results of a rigged election are openly questioned by the public and international actors, social media is used to reassure a domestic audience and restore public trust. Protecting authority can also involve using social media to respond to an internal challenger’s propaganda or a neighboring country’s meddling in domestic affairs. It can also involve responding to or censoring expressions of dissidence, under the pretense of ensuring national security.

The second common reason regimes give for building out such machinery is the need to preserve public order, cultural values, and the public good. In some cases, authoritarian regimes develop and release computational propaganda to protect social values or religious morals. In others, regimes explain that they need to preserve racial harmony, protect children, or safeguard individual privacy through surveillance, censorship, and computational propaganda. Often the regimes claim that these systems are needed to discourage criminal or destabilizing activity.

Some countries, such as Russia and China, offer many reasons, explanations, and excuses simultaneously or in different combinations depending on the nature of the crisis. Social media platforms have been particularly useful for Russia’s ruling elites, who can mix direct attacks on political opponents with public opinion manipulation that helps mislead the entire country.

Some pundits and researchers argue that social media is not part of the complex causal mechanism that explains contemporary social change. This is myopic, however. Perhaps the best evidence that social media is now part of the toolkit for social control is how many regimes have invested in social media communications since the Arab Spring. The same regimes that abused human rights had no problem using Twitter and Facebook to manipulate their citizens. Moreover, these platforms give regimes with significant budgets for psychological operations the ability to work on their own people, on their citizens who have moved overseas, and on the citizens of other countries. As a leader in the use of media control and manipulation, Russia made the first big, creative investments when it put the algorithms of Facebook, Twitter, and Instagram to work in a lie machine.

Training the Trolls

In early 2016, I received a copy of the IRA’s social media misinformation strategy for the week of Boris Nemtsov’s murder. The document came from an anonymous sender using a temporary email address. It came encrypted, and after checking it for viruses, I opened it and had it translated. The document has since appeared in a few public repositories and been leaked to several people and organizations.

The document does not prove that Nemtsov’s murder was premeditated by the Kremlin. We can’t know if the Kremlin’s communications teams had their social media strategy ready to go before the homicide or just reacted instantly to it. But we can conservatively say that the teams had an exceptional capacity to respond swiftly and comprehensively. Boris Nemtsov was murdered just before midnight, and the Kremlin’s social media strategy launched just after.

In translation, the document itself is called simply Assignments for Savushkin 55, a reference to the physical address of the Kremlin’s social media operations building at 55 Savushkin Street in St. Petersburg. The document’s metadata—including information about when it was created—contained nothing suspicious. The file carried no viruses, and the structure of the content is internally consistent. After a trusted colleague translated the document from Russian into English, another person “back translated” the document with a third individual, turning the English text back into Russian and comparing it against the original. This allowed us to check that some of the nuanced rhetoric—always important in political messaging—was accurately translated.

Assignments for Savushkin 55 covers sixty-eight issues that were in the news at the time, ranging from US negotiations with Iran to the status of the South China Sea. It includes economic topics like the exchange value of the ruble and hot-button security issues such as the military maneuvers in Ukraine. Each issue is well researched, with a batch of links to friendly news sources both foreign and domestic.

For each issue there is a basic concept that provides a succinct one-sentence summary of the Kremlin’s position. Below that is a list of specific current events—what journalists call news pegs. These news pegs are connected to the key themes that the Kremlin wants its social media specialists to reinforce. Next the government researchers provide talking points, which include statements from prominent politicians and public figures about the issue. The researchers provide extensive links to friendly online news sources—and Western ones when those foreign sources seem to support the Kremlin spin. A concluding paragraph summarizes the Kremlin’s position, and a helpful list of keywords gives the social media team a set of good hashtags to use.

All of this provides perfect background material for the staff of the Internet Research Agency and a ready supply of photos, links, quotations, and observations to pour into the world’s social media feeds. The Kremlin’s trolls are trained in how to use these dispatches to push out waves of misinformation.

On February 28, 2015, the primary concern for Russia’s trolls was spinning Nemtsov’s murder. The spin was simple: the killing didn’t obviously serve Putin’s interests, so it must have been a foreign provocation.

“There is no doubt that the murder of Boris Nemtsov shocked the public, and especially the opposition,” the instructions for that day conclude. “But common sense says the murder would not benefit the government because it would only galvanize the country’s anti-government activists.” The messaging plan then offers several bizarre, contradictory explanations to be disseminated simultaneously, across both automated and organic accounts, in Russian, English, and Ukrainian.

First, the trolls are instructed to plant the notion that Nemtsov’s assassins might have been anti-Russian agents from Ukraine who killed him to disrupt Russian domestic politics. A quote from Putin’s press secretary Dmitry Peskov offers speculation about possible causes: “It is obvious that Boris Nemtsov was a member of the opposition, it is obvious that he was in close contact with different people in Kiev, he went there often, it’s not a secret, and everyone knows that.”11 Perhaps the Ukrainians killed him?

Second, the trolls are directed to introduce the possibility that the people who organized Nemtsov’s murder were also those who shot down Malaysia Airlines Flight 17 the previous summer. “From the political point of view,” speculates a senior government official, Ivan Melnikov, “it looks like a brutal, bloody provocation, organized with the same objectives as the Boeing plane crash. Neither first, nor second event were in the interests of the political opponents of Nemtsov.” Rhetorically, saying this reinforces the idea that Putin couldn’t have ordered either the murder of Nemtsov or the downing of Malaysia Airlines Flight 17 because he wouldn’t have benefited from either event. Perhaps there is a multifaceted, international conspiracy against Russia?

Third, the staff trolls speculate that Nemtsov’s own opposition supporters might have killed him in an attempt to inspire more antigovernment activism. Government spokesperson Dmitry Olshansky suggests, “Poor Boris Nemtsov was sacrificed in order to revive the opposition. His dead body was supposed to shock people to come out to the streets to direct their anger at the government.” He draws the bizarre conclusion that “the murder fits the opposition’s desire to escalate domestic tensions.” At the time, Russia’s own democracy advocates were trying to organize a big demonstration against the government’s invasion of Ukraine. Perhaps Putin’s opponents murdered their own leader to help energize more protests? The troll army is asked to promote this most unlikely political strategy because even raising the prospect of treachery among Russia’s democracy advocates can undermine their moral authority.

With multiple, contradictory messages ready to go within a few hours of Nemtsov’s murder, the churn of rumors quickly dissipated public consensus about what, if anything, should be done about his death. The social media campaign of disinformation produced by the IRA’s human staff was amplified by almost three thousand highly automated Twitter accounts that all pushed out the same slur at the same time. “Ukrainians killed him …. He was stealing one of their girlfriends,” they announced in unison.

Of course, an important part of any social media strategy is not just to frame the same incident in multiple ways but to make other incidents seem just as important, insulting, outrageous, or provocative. The messaging instructions for the rest of the week after Nemtsov’s murder reveal a lot about how trolls work and what makes them effective.

Throughout that week, Russia’s troll armies spoke up on dozens of topics. They discussed the real arrest of a man in Latvia who sympathized with the Russian government and used the arrest to criticize the European record on human rights and freedom of speech. They discussed the Ukrainian government’s very real trouble paying Russia for the natural gas needed to heat Ukrainian homes during the cold winter months. This was described as a form of state failure, and they chronicled the desperate attempts of the Russian government to help the Ukrainian economy. Special new military equipment was announced for the troops in the Arctic region. There was a concerted effort to make the ruble seem undervalued. Social media platforms allow the authoritarian regime to track political conversations and the Internet Research Agency to shape and direct such conversations.

Expanding Troll Armies

Over time, the Russian government has developed its internet trolls into a professional army of propagandists.12 The first well-trained teams of social media trolls were developed in Russia for use against Russians. Many of them were simply “patriotic bloggers” loosely coordinated by ruling elites. But national security services in the country quickly professionalized these teams and tasked them with working on misinformation campaigns across eastern Europe. In short order they needed more staff who could write and speak eloquently in the languages of the region. Significant budgets now provide for hundreds of staff, dedicated workspaces, and performance bonuses. The agency is able to activate campaigns in several countries at once or maintain extended misinformation campaigns outside of elections—such as when Russia was pilloried over the poisoning of Sergei Skripal in a small English town.

The Internet Research Agency now occupies several buildings in St. Petersburg and advertises openly for new recruits. For example, job ads for the Russian troll factory can often be found online. They detail the pay and range of activities for potential new employees: preparation of thematic posts, developing mechanisms to attract new audiences, and monitoring target groups.

The people hired for these positions are chosen for their creativity. Generating politically polarizing content and distributing it in a savvy way over social media takes ingenuity. One example of the content produced by these teams is the campaign to encourage US voters to fear immigration and to abhor the uncertain costs of allowing outsiders into the United States.

One of the most shared cross-platform images known to have come from Russia’s Internet Research Agency depicts a downtrodden veteran and suggests that there is a trade-off between assisting veterans or immigrants. “At least 50,000 homeless veterans are starving dying in the streets, but liberals want to invite 620,000 refugees and settle them among us,” claims the accompanying text. “We have to take care of our own citizens, and it must be the primary goal for our politicians!”

On its own, this misinformation sets up a false contrast and uses made-up numbers. It uses a reasonable sentiment—that we should take care of our friends and family who do military service—as an ideological accusation against liberals. It is not clear that liberals would want to make this trade-off or even that it is a trade-off. But most importantly, the post lures readers onto websites and additional content that is more sensational, extremist, and conspiratorial.

Perhaps we shouldn’t be surprised that an authoritarian government has used a new information technology as a tool of social control and run information operations against its opponents. Perhaps the best evidence of how a comprehensive lie machine can take over a country’s public life is Russia itself, where the manipulative strategies have been practiced and perfected on each new social media platform to arrive. In Russia, almost half of all conversation on Twitter is conducted by highly automated accounts. Some of these accounts actively police conversation; others simply generate spam in an effort to make the platform unusable for conversation.13 The bigger surprise is how rapidly this way of organizing misinformation campaigns has spread globally—even to democracies.

Russia’s Troll Army and Political Polarization in the United States

As Russian troll armies became more practiced at using social media algorithms to manipulate public opinion, they got bolder about which country’s citizens they would interfere with. With a smooth-running operation, able to handle emergency communications during internal political crises, and some successful experiments working in other languages, Russia’s troll armies were next tasked with producing computational propaganda for social media users in the United States.

Russia’s Internet Research Agency launched an extended attack on the United States by using computational propaganda to misinform and polarize US voters. What we know about Russia’s social media campaigns against voters in democracies comes from the small amounts of data released by the major social media firms.14 There is certainly a constant flow of examples of suspected Russian-backed, highly automated, or fake social media accounts working to polarize public understanding of important social issues. But understanding the structure and reach of the IRA’s efforts requires large pools of data. In the summer of 2017, the major social media firms provided such data to the US Senate Select Committee on Intelligence. The committee then turned to my team and me at Oxford University—the Computational Propaganda Project within the Oxford Internet Institute—to analyze the data.

Major social media firms gave the US Senate and our team data on the accounts that these firms identified as IRA-managed troll and bot accounts. Facebook provided data on ads bought by IRA users on Facebook and Instagram and on organic posts on both platforms generated by accounts that Facebook knew were managed by IRA staff. Twitter provided a vast corpus of detailed account information on the Twitter accounts that the company knew were managed by IRA staff. Google provided images of ads and videos that were uploaded to YouTube.

To analyze the data, we had to commit to a nondisclosure agreement with the US Senate for a short period. This was certainly an impingement on academic freedom, but we decided it was worth accepting to get an inside peek into how troll armies work. In the end it paid off: we were able to generate the first comprehensive analysis of this rare data and, more importantly, to expose IRA activity across multiple platforms over several years.

The volume of IRA ads and suspicious accounts is small compared to the total number of ads placed and the total user base. But the organic content generated by Russian trolls had enormous reach, with tens of millions of US voters seeing the ads, interacting with fake voters, and sharing the misinformation generated by the IRA. Facebook provided data on 3,393 individual ads. Public data released by the House Permanent Select Committee on Intelligence provided details on 3,517 ads. These ads encouraged users to engage with specific pages. And these pages were the center of issue-based ad campaigns run by the IRA.

Facebook provided data on 76 different advertising accounts that purchased ads on behalf of these campaigns on Instagram and Facebook, though only a handful were responsible for most of the spending on ads. On Facebook, these campaigns generated a total of 67,502 organic posts (produced by the IRA page administrator and not advertised) across 81 distinct pages. On Instagram, these campaigns generated a total of 116,205 organic posts across 133 separate Instagram accounts. The campaigns’ organic Facebook posts had very high levels of engagement. In total, IRA posts were shared by users just under 31 million times, were liked almost 39 million times, were reacted to with emojis almost 5.4 million times, and engaged enough users to generate almost 3.5 million comments.

Engagement was not evenly distributed across the 81 pages for which Facebook provided data on organic posting: the top 20 most liked pages received 99 percent of all audience engagement, shares, and likes. Twenty ad campaigns received the most attention from audiences and absorbed most of the IRA’s spending. And all the campaigns and pages contained conspiratorial content, polarizing messages, misinformation, simple lies, and other forms of junk news.

On Instagram, a similar pattern is evident. In total, all Instagram posts garnered almost 185 million likes, and users commented about 4 million times. Forty pages received 99 percent of all likes. The themes of these Instagram posts do not seem to differ significantly from those of Facebook, though the presentation style is different. The data Twitter provided contained handles and associated metadata for 3,841 accounts believed to have been managed by the IRA. Our analysis of Twitter content covered 8,489,989 posts (tweets) across 3,822 of these accounts.
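To make the concentration finding concrete, here is a minimal sketch of how such a figure can be computed, assuming a hypothetical table of organic posts with per-page share, like, and comment counts. The page names, numbers, and column labels are invented for illustration and are not the committee’s actual data.

```python
import pandas as pd

# Hypothetical table of organic posts: one row per post, with the page
# that produced it and that post's engagement counts (illustrative only).
posts = pd.DataFrame({
    "page": ["Page_A", "Page_A", "Page_B", "Page_C", "Page_C", "Page_C"],
    "shares": [1200, 800, 15, 30000, 25000, 18000],
    "likes": [5000, 3000, 40, 90000, 70000, 50000],
    "comments": [400, 250, 5, 7000, 6000, 3500],
})

# Sum shares, likes, and comments into a single engagement count per page.
per_page = (
    posts.assign(engagement=posts[["shares", "likes", "comments"]].sum(axis=1))
    .groupby("page")["engagement"]
    .sum()
    .sort_values(ascending=False)
)

# Share of all engagement captured by the most engaged-with pages.
top_n = 2
share_of_top = per_page.head(top_n).sum() / per_page.sum()
print(f"Top {top_n} pages capture {share_of_top:.0%} of all engagement")
```

Applied to the full dataset of 81 pages, a calculation of this kind would yield the sort of concentration figure reported above.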

The IRA’s international activities across the major social media platforms grew significantly after its early successes inside Russia—such as the subterfuge around Boris Nemtsov’s murder. In 2016, the average monthly volume of live ads was more than double the 2015 level, and the volume remained similar in 2017. Unlike the ads, the monthly volume of organic Facebook posts rose steadily between 2015 and 2017. Between 2015 and 2016, monthly organic post volume increased almost sevenfold, and it continued to rise rapidly into 2017. On Instagram, after a small increase in average monthly post volume between 2015 and 2016, there was a large increase between 2016 and 2017. Unlike the average volume of Facebook ads, the average volume of Facebook and Instagram organic posts was much higher in 2017 than in 2016: by a factor of 1.7 for Facebook organic posts, and by a factor of 2.3 for Instagram organic posts. The volume of Twitter posts (tweets) did not change significantly in the years 2015–17.

The Russian secret services bought at least $100,000 worth of ads on Facebook targeting US voters.15 They used social media to encourage Catalan independence from Spain during a tense national referendum.16 And as of this writing, several democracies have issued criminal indictments, leveled political accusations, and lodged diplomatic complaints against Russia’s troll armies.

The Internet Research Agency adapted existing techniques used in digital advertising to spread disinformation and propaganda. They set about creating and managing advertising campaigns on multiple platforms, often making use of false personas or imitating activist groups. This strategy is not a unique invention for politics and foreign intrigue—it is consistent with techniques used in digital marketing.

The overall approach appears to have offered three advantages. First, it enabled the IRA to reach its target audiences across multiple platforms and formats. Indeed, the IRA’s core messages and target audiences were remarkably consistent across platforms. Second, it helped create a semblance of legitimacy for the false organizations and personas managed by the IRA. We can hypothesize that users were more likely to assume the credibility of the false organizations set up by the IRA because they had a presence across multiple platforms, operating websites, YouTube channels, Facebook pages, Twitter accounts, and even PayPal accounts set up to receive donations. Finally, the IRA was able to leverage its presence on multiple platforms after detection efforts caught up with it by redirecting traffic to platforms where its activities had not been disrupted and by using its accounts on one social media platform to complain about suspensions of its accounts on another platform.

When called to testify before the US Senate in 2018, I was able to report on several stunning findings from our team’s analysis of the data.17 Between 2013 and 2018, the IRA’s Facebook, Instagram, and Twitter campaigns reached tens of millions of users in the United States. More than thirty million users, between 2015 and 2017, shared the IRA’s Facebook and Instagram posts with their friends and family, liking, reacting to, and commenting on them along the way. Peaks in advertising and organic activity often corresponded to important dates in the US political calendar, crises, and international events. Agency activities focused on the United States began on Twitter in 2013 but quickly evolved into a multiplatform strategy involving Facebook, Instagram, and YouTube, among other platforms.

The most far-reaching IRA activity was in organic posting, not advertisements. In other words, the most pernicious content was not in the political ads simply purchased and placed by Russian agents. It was the seemingly homespun, organic content that had the greatest reach. And that content came from the fake users managed by Russia’s troll army.

Russia’s IRA activities were designed to polarize the US public and interfere in elections: in 2016, for example, the agency campaigned for African American voters to boycott elections or to follow the wrong voting procedures. More recently, the IRA campaigned for Mexican American and Hispanic voters to distrust US institutions. Russian trolls have been encouraging extreme right-wing voters to be more confrontational, and they have been spreading sensationalist, conspiratorial, and other forms of junk political news and misinformation to voters across the political spectrum. Surprisingly, these campaigns did not stop once the IRA was caught interfering in the 2016 elections. Engagement rates increased, and the campaigns covered a widening range of public policy issues, national security concerns, and topics pertinent to younger voters. The highest peak of IRA ad volume on Facebook was in April 2017—the month of the Syrian missile strike, the use of the Mother of All Bombs on ISIS tunnels in eastern Afghanistan, and the release of the Trump administration’s tax reform plan.

Internet Research Agency posting on Instagram and Facebook increased substantially after the US elections of 2016, with Instagram seeing the greatest increase in IRA activity. The IRA accounts actively engaged with disinformation and practices common to Russian trolling. Some posts referred to Russian troll factories that flooded online conversations with posts; others denied being Russian trolls. When they faced account suspension, some trolls complained about the political bias of platforms.

Perhaps least surprising, but most important, is that IRA misinformation activities bloomed at key moments in US public life. Troll armies plan for and campaign around moments of political crisis and election year milestones. Broadly, over the years 2015–17, the volume of activity in Facebook ads, Facebook posts, and Instagram posts increased from the Democratic and Republican National Conventions in July 2016 to voting day in November 2016. But several spikes in ad and post volume happened around the dates of important political events, in particular:

• the third Democratic primary debate and the sixth Republican primary debate (both in January 2016);

• the presidential candidate debates between Hillary Clinton and Donald Trump (autumn 2016);

• election day (November 8, 2016); and

• the dates of the postelection Russian hacking investigation (December 29 and 30, 2016).

The data on Russia’s Internet Research Agency provided to the Senate Select Committee on Intelligence by social media and internet platforms demonstrates a sustained effort to manipulate the US public and undermine its democracy. With years of experience of manipulating public opinion in Russia, the IRA used major social media platforms, including Facebook, Instagram, and Twitter, to target US voters and polarize US social media users.

The Russian effort targeted many kinds of communities within the United States, but especially the most extreme conservatives and those with particular sensitivities to race and immigration. The IRA used a variety of fake accounts to infiltrate political discussion in liberal and conservative communities, including African American activist communities, in order to exacerbate social divisions and influence the agenda. Accounts posing as liberal and as conservative US users were frequently created and operated from the same computers.

The Social Organization of Misinformation

Social media is a ubiquitous and prominent part of everyday life, and users place a great deal of trust in these platforms. Indeed, even after several years of scandal, polling still finds that people trust technology companies more than public services, food and drink companies, pharmaceutical firms, banks, oil and gas companies, the news media, and government (in order of diminishing trust).18 With the ability to segment audiences and target messages in a quick, cheap, and largely unregulated way, these platforms have not surprisingly attracted the interest of political operators. Unfortunately, there is mounting evidence that social media is being used to manipulate and deceive the voting public—and to undermine democracies and degrade public life.

It is difficult to tell the history of social media or anticipate its future without understanding trolls. If the history of the internet is the history of spam, the history of social media is similarly intertwined with the history of trolls.19 In politics and public life, social media conversations are inflected by political rants with idiosyncratic spelling, invitations to join small groups of resisters, truthers, and woke individuals. Political campaigns come with pleas to back your moral outrage by sharing content with your social network and donating to the cause. As Finn Brunton explains in Spam, all this activity is shaped by many different programmers, con artists, bots and their botmasters, pharmaceutical merchants, marketers, identity thieves, crooked bankers and their victims, cops, lawyers, network security professionals, vigilantes, and hackers. But it is much harder to filter out organic political trolling, which tends to be dynamic and conversational and is produced by organizations with significant budgets and personnel.

It may not be surprising that an authoritarian government would develop techniques for manipulating public life over new media. The Russian origins of modern lie machines—algorithms carrying ideology—won’t be surprising to experts in the long history of propaganda. Political elites in every country work hard to manage public perception and grievances. And there is a long history of governments meddling in each other’s domestic politics, promoting local politicians who favor foreign interests, and even interfering in how a country runs its elections. But the use of direct messaging and personal engagement with voters in strategically important districts in the West caught many other analysts off guard.

Unfortunately, the trolling techniques pioneered by the Russian government have spread far and wide. Prominent politicians and lobbyists in every country—authoritarian or democratic, and at all levels of government—use them.20

The successes of these enormous lie machines cannot simply be attributed to resourcing and organizational behavior. Social media firms themselves provided the algorithms—the toolkit—and explaining the current challenges to public life would be incomplete without including the technical part of this sociotechnical system.

The machine driving politics has itself changed. Public life going forward will be fundamentally different from what came before because of the purposeful coordination of vast amounts of misinformation passed across networks of family and friends. And if we don’t act quickly, political life will be perpetually dominated by these lie machines.

Lie machines, however, are not just produced by authoritarian governments to oppress their own populations through misinformation and disinformation. A close look at Russia’s troll armies reveals a lot about the production side of a modern lie machine. But we can look to other examples, in other countries, that best illustrate how the distribution and marketing of big political lies work.