When all think alike, no one thinks very much.
—WALTER LIPPMANN, The Stakes of Diplomacy
NEVER BEFORE COULD THESE TEENAGE BOYS have afforded the $100 bottles of Moët champagne that they sprayed across the nightclub floor. But that was before the gold rush, before their lives were flooded with slick wardrobes and fancy cars and newly available women. In the rusted old industrial town of Veles, Macedonia, they were the freshly crowned kings.
They worked in “media.” More specifically, they worked in American social media. The average U.S. internet user was basically a walking bag of cash, worth four times the advertising dollars of anyone else in the world—and they were very gullible. In a town with 25 percent unemployment and an annual income of under $5,000, these young men had discovered a way to monetize their boredom and decent English-language skills. They set up catchy websites, peddling fad diets and weird health tips, and relying on Facebook “shares” to drive traffic. With each click, they got a small slice of the pie from the ads running along the side. Soon the best of them were pulling in tens of thousands of dollars a month.
But there was a problem. As word got out, competition swelled. More and more Veles teens launched websites of their own.
Fortunately, these young tycoons had timed their business well. The American political scene soon brought them a virtually inexhaustible source of clicks and resulting fast cash: the 2016 U.S. presidential election.
The Macedonians were awed by Americans’ insatiable thirst for political stories. Even a sloppy, clearly plagiarized jumble of text and ads could rack up hundreds of thousands of “shares.” The number of U.S. politics–related websites operated out of Veles ballooned into the hundreds. As U.S. dollars poured into the local economy, one nightclub even announced that it would hold special events the same day Google released its advertising payouts.
“Dmitri” (a pseudonym) was one of the successful entrepreneurs. He estimated that in six months, his network of fifty websites attracted some 40 million page views driven there by social media. It made him about $60,000. The 18-year-old then expanded his media empire. He outsourced the writing to three 15-year-olds, paying each $10 a day. Dmitri was far from the most successful of the Veles entrepreneurs. Several became millionaires. One even rebranded himself as a “clickbait coach,” running a school where he taught dozens of others how to copy his success.
Some 5,000 miles from actual American voters, this small Macedonian town had become a cracked mirror of what Mark Zuckerberg had pulled off just a decade earlier. Its entrepreneurs had pioneered a new industry that created an unholy amount of cash and turned a legion of young computer nerds into rock stars. As one 17-year-old girl explained at the nightclub, watching the teen tycoons celebrate from her perch at the bar, “Since fake news started, girls are more interested in geeks than macho guys.”
The viral news stories pumped out by these young, hustling Macedonians weren’t just exaggerations or products of political spin; they were flat-out lies. Sometimes, the topic was the long-sought “proof” that Obama had been born in Kenya or revelations that he was planning a military coup. Another report warned that Oprah Winfrey had told her audience that “some white people have to die.” In retrospect, such articles seem unbelievable, but they were read on a scale that soared past reports of the truth. A study of the top election news–related stories found that false reports received more engagement on Facebook than the top stories from all the major traditional news outlets combined.
As with their peddling of fad diets, the boys turned to political lies for the sole reason that this was what their targets seemed to want. “You see they like water, you give water,” said Dmitri. “[If] they like wine, you give wine.” There was one cardinal rule in the business, though: target the Trumpkins. It wasn’t that the teens especially cared about Trump’s political message, but, as Dmitri explained, “nothing [could] beat” his supporters when it came to clicking on their made-up stories.
Of the top twenty best-performing fake stories spread during the election, seventeen were unrepentantly pro-Trump. Indeed, the single most popular news story of the entire election—“Pope Francis Shocks World, Endorses Donald Trump for President”—was a lie fabricated in Macedonia before blasting across American social networks. Three times as many Americans read and shared it on their social media accounts as they did the top-performing article from the New York Times. Pope Francis didn’t mince words in his reaction to such articles: “No one has a right to do this. It is a sin and it is hurtful.”
Dmitri and his colleagues, though, were unrepentant. “I didn’t force anyone to give me money,” he said. “People sell cigarettes, they sell alcohol. That’s not illegal, why is my business illegal? If you sell cigarettes, cigarettes kill people. I didn’t kill anyone.” If anything, the fault lay with the traditional news media, which had left so much easy money on the table. “They’re not allowed to lie,” Dmitri noted scornfully.
At the same time that governments in Turkey, China, and Russia sought to obscure the truth as a matter of policy, the monetization of clicks and “shares”—known as the “attention economy”—was accomplishing much the same thing. Social media provided an environment in which lies created by anyone, from anywhere, could spread everywhere, making the liars plenty of cash along the way.
When the work of these Macedonian media moguls came to light, President Obama himself huddled with advisors on Air Force One. The most powerful man in the world dwelled on the absurdity of the situation and his own powerlessness to fight back. He could dispatch Navy SEALs to kill Osama bin Laden, but he couldn’t alter this new information environment in which “everything is true and nothing is true.” Even in the absence of digitally empowered censorship, the free world had still fallen victim to the forces of disinformation and unreality.
When the social media revolution began in earnest, Silicon Valley evangelists enthused about the possibilities that would result from giving everyone “access to their own printing press.” It would break down barriers and let all opinions be heard. These starry-eyed engineers should have read up on their political philosophy. Nearly two centuries earlier, the French scholar of democracy Alexis de Tocqueville—one of the first foreigners to travel extensively in the new United States of America—pondered the same question. “It is an axiom of political science in the United States,” he concluded, “that the only way to neutralize the influence of newspapers is to multiply their number.” The greater the number of newspapers, he reasoned, the harder it would be to reach public consensus about a set of facts.
Tocqueville was worried about the number of newspapers expanding past the few hundred of his time. Today, the marvels of the internet have created the equivalent of several billion newspapers, tailored to the tastes of each social media user on the planet. Consequently, there is no longer one set of facts, nor two, nor even a dozen. Instead, there exists a set of “facts” for every conceivable point of view. All you see is what you want to see. And, as we’ll see, the farther you’re led into this reality of your own creation, the harder it is to find your way out again.
“Imagine a future in which your interface agent can read every newswire and newspaper and catch every TV and radio broadcast on the planet, and then construct a personalized summary.”
This is what MIT Media Lab professor Nicholas Negroponte prophesied in 1995. He called it the “Daily Me.” A curated stream of information would not only keep people up to date on their own personal interests, but it would cover the whole political spectrum, exposing people to other viewpoints. His vision aligned with that of most internet pioneers. The internet didn’t just mean the end of censorship and authoritarians. Access to more information would also liberate democracies, leading to a smarter, wiser society.
As the web exploded in popularity and the first elements of the “Daily Me” began to take shape, some pondered whether the opposite might actually be true. Rather than expanding their horizons, people were just using the endless web to seek out information with which they already agreed. Harvard law professor Cass Sunstein rebranded it as the “Daily We.”
Imagine . . . a system of communications in which each person has unlimited power of individual design. If some people want to watch news all the time, they would be entirely free to do exactly that. If they dislike news, and want to watch football in the morning and situation comedies at night, that would be fine too . . . If people want to restrict themselves to certain points of view, by limiting themselves to conservatives, moderates, liberals, vegetarians, or Nazis, that would be entirely feasible with a simple point-and-click. If people want to isolate themselves, and speak only with like-minded others, that is feasible too . . . The implication is that groups of people, especially if they are like-minded, will end up thinking the same thing that they thought before—but in more extreme form.
With the creation of Facebook just a few years later, the “Daily We”—the algorithmically curated newsfeed—became a fully functioning reality. However, the self-segregation was even worse than Sunstein had predicted. The code that governed the user experience on these platforms was so subtle that most people had no clue the information they saw might differ drastically from what others were seeing. Online activist Eli Pariser described the effect, and its dangerous consequences, in his 2011 book, The Filter Bubble. “You’re the only person in your bubble,” he wrote. “In an age when shared information is the bedrock of shared experience, the filter bubble is the centrifugal force, pulling us apart.”
Yet, even as social media users are torn from a shared reality into a reality-distorting bubble, they rarely want for company. With a few keystrokes, the internet can connect like-minded people over vast distances and even bridge language barriers. Whether the cause is dangerous (support for a terrorist group), mundane (support for a political party), or inane (belief that the earth is flat), social media guarantees that you can find others who share your views. Even more, you will be steered to them by the platforms’ own algorithms. As groups grow, it becomes possible for even the most far-flung of causes to coordinate and organize, to gain visibility and find new recruits.
Flat-earthers, for instance, had little hope of gaining traction in a post–Christopher Columbus, pre-internet world. It wasn’t just the silliness of their views; it was that they couldn’t easily find others who shared them.
Today, the World Wide Web has given the flat-earth belief a dramatic comeback. Proponents now have an active online community and an aggressive marketing scheme. They spread stories that claim government conspiracy, and produce slick videos that discredit bedrock scientific principles. Pushing back at the belief only aids it, giving proponents more attention and more followers. “YouTube cannot contain this thing,” declared one flat-earther. “The internet cannot contain it. The dam is broken; we are everywhere.”
Flat-earthism may sound amusing, but substitute it for any political extreme and you can see the very same dynamics at play. As groups of like-minded people clump together, they grow to resemble fanatical tribes, trapped in echo chambers of their own design. The reason is basic human nature. In numerous studies, across numerous countries, involving millions of people, researchers have discovered a cardinal rule that explains how information disseminates across the internet, as well as how it shapes our politics, media, and wars. The best predictor of whether people will believe and spread a piece of information is not its accuracy or even its content; it is the number of their friends who have shared it first. Seeing it shared by friends makes people more likely to believe what it says—and then to share it with others who, in turn, will believe what they say. It is all about us, or rather our love of ourselves and people like us.
This phenomenon is called “homophily,” meaning “love of the same.” Homophily is what makes humans social creatures, able to congregate in such large and like-minded groups. It explains the growth of civilization and cultures. It is also the reason an internet falsehood, once it begins to spread, can rarely be stopped.
Homophily is an inescapable fact of online life. If you’ve ever shared a piece of content after seeing it on a friend’s newsfeed, you’ve become part of the process. Most people don’t ponder deeply when they click “share.” They’re just passing on things that they find notable or that might sway others. Yet it shapes them all the same. As users respond positively to certain types of content, the algorithms that drive social media’s newsfeeds ensure that they see more of it. As they see more, they share more, affecting all others in their extended network. Like ripples in a pond, each of these small decisions expands outward, altering the flow of information across the entire system.
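To make that feedback loop concrete, here is a minimal sketch in Python. It is purely illustrative: the topics, probabilities, and the 1 percent weighting bump are invented assumptions, not any platform’s actual code. It simply shows how a small difference in what a user tends to share compounds once a feed starts ranking by that behavior.

```python
import random

# Illustrative sketch of the share -> see more -> share more loop described above.
# All topics, probabilities, and the 1% weight bump are invented for the demo.

random.seed(42)

topics = ["politics", "sports", "pets", "health"]
affinity = {t: 1.0 for t in topics}   # the weight the toy "feed" assigns each topic

def pick_post(affinity):
    """Sample the next post shown, proportional to current topic weights."""
    total = sum(affinity.values())
    return random.choices(topics, weights=[affinity[t] / total for t in topics])[0]

# Suppose the user is only slightly more likely to share politics posts.
share_prob = {"politics": 0.30, "sports": 0.20, "pets": 0.20, "health": 0.20}

for step in range(2000):
    topic = pick_post(affinity)
    if random.random() < share_prob[topic]:
        affinity[topic] *= 1.01   # each share nudges that topic's weight upward

total = sum(affinity.values())
for t in topics:
    print(f"{t:8s} share of feed: {affinity[t] / total:.0%}")
# The modest preference compounds: politics ends up with a visibly larger slice
# of the feed than its even start, and the gap keeps widening the longer it runs.
```

The point of the toy model is not the specific numbers, which are arbitrary, but the shape of the dynamic: the user’s behavior trains the feed, and the feed in turn trains the user.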
But there’s a catch: these ripples also reverberate back toward you. When you decide to share a particular piece of content, you are not only influencing the future information environment, you are also being influenced by any information that has passed your way already. In an exhaustive series of experiments, Yale University researchers found that people were significantly more likely to believe a headline (“Pope Francis Shocks World, Endorses Donald Trump for President”) if they had seen a similar headline before. It didn’t matter if the story was untrue; it didn’t even matter if the story was preceded by a warning that it might be fake. What counted most was familiarity. The more often you hear a claim, the less likely you are to assess it critically. And the longer you linger in a particular community, the more its claims will be repeated until they become truisms—even if they remain the opposite of the truth.
Homophily doesn’t just sustain crazy online echo chambers; its effects can sow deadly consequences for society. A prime example is the anti-vaccine movement, which claims that one of the most important discoveries in human history is actually a vast conspiracy. The movement got its start in the 1960s but exploded in popularity along with social media. People with radical but seemingly disparate views—those on the far left suspicious of pharmaceutical companies, the far right suspicious of the government, and religious fundamentalists suspicious of relying on anything but prayer—found common cause online. Across Facebook groups and alternative-health websites, these “anti-vaxxers” shared made-up stories about the links between childhood vaccination and autism, reveling in conspiracy theories and claims they were the ones who faced a second “Holocaust.”
In an endless feedback loop, each piece of content shared within the anti-vaxxer community leaves them only more convinced that they are the sane ones, defending their children against blasphemous, corporate-enriching, government-induced genetic engineering. In the process, the personalization afforded by social media also becomes a weapon. Whenever they are challenged, anti-vaxxers target not just the counterargument but also the person making it. Any critic becomes part of the conspiracy, transforming a debate over “facts” into one over motivations.
Their passion has also made them a potent online force. In turn, this has made them an attractive movement for others to leverage to their own ends. Beginning in the late 2000s, this cadre of true believers was joined by a series of lower-tier celebrities whose popularity had diminished, like Jenny McCarthy and Donald Trump (who tweeted, “Healthy young child . . . gets pumped with massive shot of many vaccines, doesn’t feel good and changes—AUTISM. Many such cases!”). These failing stars used the attention-getting power of the anti-vaxxers to boost their personal brands—magnifying the reach of the conspiracy in the process.
In the United States, the net result of this internet-enabled movement is that—after more than two centuries of proven, effective use and hundreds of millions of lives saved—vaccines have never faced so much public doubt. That might be just as funny as the flat-earthers who insist the earth is flat while coordinating via satellites that circle the globe, except for the real costs borne by the most vulnerable members of society: children. In California, the percentage of parents applying a “personal belief exemption” to avoid vaccinating their kindergartners quadrupled between 2000 and 2013, and disease transmission rates among kids soared as a result. Cases of childhood illnesses like whooping cough reached a sixty-year high, while the Disneyland resort was rocked by an outbreak of measles that sickened 147 people. Fighting an infectious army of digital conspiracy theorists, the State of California eventually gave up arguing and passed a law requiring kindergarten vaccinations, which only provided more conspiracy theory fodder.
Tempting as it may be to blame the internet for this, the real source of these digital echo chambers is again deeply rooted in the human brain. Put simply, people like to be right; they hate to be proven wrong. In the 1960s, an English psychologist isolated this phenomenon and put a name to it: “confirmation bias.” Other psychologists then discovered that trying to fight confirmation bias by demonstrating people’s errors often made the problem worse. The more you explain with facts that someone is mistaken, the more they dig in their heels.
What the internet does do is throw this process into overdrive, fueling the brain’s worst impulses and then spreading them to countless others. Social media transports users to a world in which their every view seems widely shared. It helps them find others just like them. After a group is formed, the power of homophily then knits it ever closer together. U.S. Army colonel turned historian Robert Bateman summarizes it pointedly: “Once, every village had an idiot. It took the internet to bring them all together.”
Thanks to this combination of internet-accelerated homophily and confirmation bias, civil society can be torn into fragments. Each group comes to believe that only its members know the truth and that all others are ignorant or, even worse, evil. In fragile states, the situation can become untenable. A 2016 study from George Washington University’s Institute for Public Diplomacy and Global Communication explored this phenomenon in the context of the Arab Spring (which, as we saw earlier, marked the height of optimism about the power of social media), helping to explain how these democratic uprisings were so quickly exploited by authoritarianism.
When the researchers pored over nearly 63 million Twitter and Facebook posts that followed the initial uprisings, a pattern became clear. The availability of information and ease of organizing online had catalyzed masses of disparate people into action. But then came the division. “As time went on, social media encouraged political society to self-segregate into communities of the like-minded, intensifying connections among members of the same group while increasing the distance among different groups.”
Once the shared enemy was gone, wild allegations demonized former allies and drove people farther apart. As the researchers explained elsewhere, “The speed, emotional intensity and echo-chamber qualities of social media content make those exposed to it experience more extreme reactions. Social media is particularly suited to worsening political and social polarization because of its ability to spread violent images and frightening rumors extremely quickly and intensely.”
Although the main case study was Egypt, they could well have been describing the plight of any nation on earth.
The outcome is a cruel twenty-first-century twist on one of the classic quotes of the twentieth century. “Everyone is entitled to his own opinion, but not his own facts,” declared the legendary sociologist and New York senator Daniel Patrick Moynihan, in an axiom widely attributed to him. He was born in the age of radio and rose to power in the age of television. He died in 2003—the same year Mark Zuckerberg was mucking around in his Harvard dorm room. In Moynihan’s time, such noble words rang true. Today, they’re a relic.
Fact, after all, is a matter of consensus. Eliminate that consensus, and fact becomes a matter of opinion. Learn how to command and manipulate that opinion, and you are entitled to reshape the fabric of the world. As a Trump campaign spokesperson famously put it in 2016, “There’s no such thing, unfortunately, anymore as facts.” It was a preposterous claim, but in a certain way, it is true.
And yet there’s another disturbing phenomenon at work. On social media, everyone may be entitled to their own facts, but rarely do they form their own opinions. There’s someone else manufacturing the beliefs that go viral online.
The families were just sitting down for lunch on December 4, 2016, when the man with the scraggly beard burst through the restaurant door. Seeing him carrying a Colt AR-15 assault rifle, with a Colt .38 revolver strapped to his belt, parents shielded their terrified children. But Edgar Welch hardly noticed. After all, he was a man on a mission. The 28-year-old part-time firefighter knew for a fact that the Comet Ping Pong pizza restaurant was just a cover for Hillary Clinton’s secret pedophilia ring, and, as a father of two young girls, he was going to do something about it.
As the customers made a run for it (and, of course, started posting on social media about it), Welch headed to the back of the pizza place. He expected to find the entrance to the vast, cavernous basement that he knew to hold the enslaved children. Instead, he found an employee holding pizza dough. For the next forty-five minutes, Welch hunted for the secret sex chambers, overturning furniture and testing the walls. Eventually, his attention turned to a locked door. This was surely it, he thought. He fired his weapon, destroying the lock, and flung open the door. It was a tiny computer room little larger than a storage closet. There were no stairs to a secret underground sex chamber. Indeed, there was no basement at all. Dejected and confused, Welch dropped his weapons and surrendered to police.
In the subsequent trial, neither side would suggest Welch was insane. Indeed, the prosecution wrote that Welch “was lucid, deadly serious, and very aware.” He’d sincerely believed he was freeing children from captivity, that he was embarking on a one-way mission for which he was prepared to give his life. On his 350-mile journey, he’d recorded a tearful farewell to his family on his smartphone; a martyrdom message to broadcast on social media if he died in a hail of bullets. Welch would be sentenced to four years in prison.
For James Alefantis, the founder and owner of the pizza café, the conviction was small comfort. “I do hope one day, in a more thoughtful world, every one of us will remember this day as an aberration,” he said. “When the world went mad and fake news was real.”
But it was no aberration. Welch’s ill-fated odyssey could be traced to a flurry of viral conspiracy theories known collectively as #Pizzagate. Arising in the final days of the 2016 U.S. election, the hoax claimed Hillary Clinton and her aides were involved in satanic worship and underage sex trafficking at a DC-area pizza parlor. Their “evidence” was a picture of the owner, Alefantis, hosting a fundraiser for Clinton, and a heart-shaped logo on the restaurant’s website. Working through a crowdsourced “investigation” that was a perverse reflection of the sort conducted by Bellingcat, these far-right sleuths had determined the heart was a secret sign for child predators. It was actually the symbol of a fundraiser for St. Jude Children’s Research Hospital.
#Pizzagate blazed across social media, garnering 1.4 million mentions on Twitter alone. On the Infowars YouTube channel, conspiracy theorist Alex Jones told his 2 million subscribers, “Something’s being covered up. All I know, God help us, we’re in the hands of pure evil.” Spying opportunity, the Russian sockpuppets working in St. Petersburg also latched onto the #Pizzagate phenomenon, their posts further boosting its popularity. #Pizzagate not only dominated far-right online conversation for weeks, but actually increased in power following Clinton’s electoral defeat. When polled after the election, nearly half of Trump voters affirmed their belief that the Clinton campaign had participated in pedophilia, human trafficking, and satanic ritual abuse.
Yet, as Welch ruefully admitted after his arrest, “the intel on this wasn’t 100 percent.” Welch’s use of the word “intel” was eerily appropriate. Among the key voices in the #Pizzagate network was Jack Posobiec, a young intelligence officer in the U.S. Navy Reserve. Although Posobiec’s security clearance had been revoked and he’d been reassigned by his commanding officers to such duties as “urinalysis program coordinator,” on Twitter he was a potent force. Posobiec was relentless in pushing #Pizzagate to his more than 100,000 followers. He’d even livestreamed his own “investigation” of the restaurant, barging in on a child’s birthday party and filming until he was escorted from the premises.
For Posobiec, social media offered a path to popularity that eluded him in real life and a way to circumvent the old media gatekeepers. “They want to control what you think, control what you do,” he bragged. “But now we’re able to use our own platforms, our own channels, to speak the truth.”
The accuracy of Posobiec’s “truth” was inconsequential. Indeed, Welch’s violent and fruitless search didn’t debunk Posobiec’s claims; it only encouraged him to make new ones. “False flag,” Posobiec tweeted as he heard of Welch’s arrest. “Planted Comet Pizza Gunman will be used to push for censorship of independent news sources that are not corporate owned.” Then he switched stories, informing his followers that the DC police chief had concluded, “Nothing to suggest man w/gun at Comet Ping Pong had anything to do with #pizzagate.” It was, like the rest of the conspiracy, a fabrication. The only thing real was the mortal peril and psychological harm that opportunists like Posobiec had inflicted on the workers of the pizza place and the families dining there.
Yet Posobiec suffered little for his falsehoods. Indeed, they only increased his online fame and influence. They also brought other rewards. Just a few months after he’d trolled a pizza parlor into near tragedy, he was livestreaming from the White House press briefing room, as a specially invited guest. And then came the ultimate validation. Posobiec and his messages were retweeted multiple times by the most powerful social media platform in all the world, that of President Donald Trump.
#Pizzagate shows how online virality—far from a measure of sincere popularity—is a force that can be manipulated and sustained by just a few influential social media accounts. In internet studies, this is known as the “power law.” It tells us that, rather than a free-for-all among millions of people, the battle for attention is actually dominated by a handful of key nodes in the network. Whenever they click “share,” these “super-spreaders” (a term drawn from studies of biological contagion) are essentially firing a Death Star laser that can redirect the attention of huge swaths of the internet. This even happens in the relatively controlled parts of the web. A study of 330 million Chinese Weibo users, for instance, found a wild skew in influence: fewer than 200,000 users had more than 100,000 followers; only about 3,000 accounts had more than 1 million. When researchers looked more closely at how conversations started, they found that the opinions of these hundreds of millions of voices were guided by a mere 300 accounts.
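The skew the Weibo researchers found is what a heavy-tailed, power-law distribution looks like in practice. The short sketch below does not use the study’s actual data or method; it draws follower counts from an invented Pareto distribution simply to show how such a distribution concentrates most of the potential reach in a sliver of accounts.

```python
import random

# Illustrative only: sample one million follower counts from a heavy-tailed
# Pareto distribution and measure how much "reach" the top 0.1% of accounts hold.
# The exponent, scale, and population size are arbitrary assumptions for the demo.

random.seed(0)
N = 1_000_000
followers = sorted((random.paretovariate(1.2) * 10 for _ in range(N)), reverse=True)

total_reach = sum(followers)
top = int(N * 0.001)                    # the top 0.1 percent of accounts
top_reach = sum(followers[:top])

print(f"Top 0.1% of accounts hold {top_reach / total_reach:.0%} of all follower reach")
print(f"Median account has ~{int(followers[N // 2])} followers; "
      f"the largest has ~{int(followers[0]):,}")
```

Run it and the tiny elite ends up holding roughly a third of all the followers in the simulated network, while the median account has a couple dozen: the statistical signature of a medium ruled by super-spreaders.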
The internet may be a vast, wild, and borderless frontier, but it has its monarchs all the same. Vested with such power, these super-spreaders often have little regard for the truth. Indeed, why should they? The truth is simply less likely to draw eyes.
In the past several years, episodes like #Pizzagate have become all too common, as have fabulists like Posobiec. These conspiracy-mongers’ influence has been further reinforced by the age-old effects of homophily and confirmation bias. Essentially, belief in one conspiracy theory (“Global warming is a hoax”) increases someone’s susceptibility to further falsehoods (“Ted Cruz’s dad murdered JFK”). They’re like the HIV of online misinformation: a virus that makes its victims more vulnerable to subsequent infections.
The combination of conspiracy theories and social media is even more toxic than that, however. As psychologist Sander van der Linden has written, belief in online conspiracy theories makes one more supportive of “extremism, racist attitudes against minority groups (e.g., anti-Semitism) and even political violence.”
Modest lies and grand conspiracy theories have been weapons in the political arsenal for millennia. But social media has made them more powerful and more pervasive than ever before. In the most comprehensive study of its kind, MIT data scientists charted the life cycles of 126,000 Twitter “rumor cascades”—the first hints of stories before they could be verified as true or false. The researchers found that the fake stories spread about six times faster than the real ones. “Falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information,” they wrote.
Ground zero for the deluge, however, was in politics. The 2016 U.S. presidential election released a flood of falsehoods that dwarfed all previous hoaxes and lies in history. It was an online ecosystem so vast that the nightclubbing, moneymaking, lie-spinning Macedonians occupied only one tiny corner. There were thousands of fake websites, populated by millions of baldly false stories, each then shared across people’s personal networks. In the final three months of the 2016 election, more of these fake political headlines were shared on Facebook than real ones. Meanwhile, in a study of 22 million tweets, the Oxford Internet Institute concluded that Twitter users, too, had shared more “misinformation, polarizing and conspiratorial content” than actual news stories.
The Oxford team called this problem “junk news.” Like junk food, which lacks nutritional value, these stories lacked news value. And also like junk food, they were made of artificial ingredients and infused with sweeteners that made them hard to resist. This was the realization of a danger that internet sociologist danah boyd had warned about as far back as 2009:
Our bodies are programmed to consume fats and sugars because they’re rare in nature . . . In the same way, we’re biologically programmed to be attentive to things that stimulate: content that is gross, violent, or sexual and that [sic] gossip which is humiliating, embarrassing, or offensive. If we’re not careful, we’re going to develop the psychological equivalent of obesity. We’ll find ourselves consuming content that is least beneficial for ourselves or society as a whole.
What the Oxford researchers called “junk news” soon became more commonly known as “fake news.” That term was originally created to describe news that was verifiably untrue. However, President Trump quickly co-opted it (using it more than 400 times during his first year in office), turning “fake news” into an epithet to describe information that someone doesn’t like. That is, even the term used to describe untruths went from an objective measure of accuracy to a subjective statement of opinion.
Whatever the term, in the United States, as in Macedonia, many people saw dollar signs in this phenomenon. Posobiec, for instance, would market his expertise as an online conspiracy theorist in a book that promised to explain “how social media was weaponized.” As with Operation INFEKTION and the anti-vaxxers, however, the right wing had no monopoly on driving lies viral, or making money along the way. One example could be seen in Jestin Coler, a self-described family man in his early 40s. With a degree in political science and an avid interest in propaganda, Coler claimed to have gotten into the fake news business as an experiment, testing the gullibility of right-wing conspiracy theorists. “The whole idea from the start,” he explained, “was to build a site that could kind of infiltrate the echo chambers of the alt-right, publish blatantly [false] or fictional stories and then be able to publicly denounce those stories and point out the fact that they were fiction.” But then the money began to pour in—sometimes tens of thousands of dollars in a single month. Any high-minded purpose was forgotten.
Coler expanded his operation into a full-fledged empire: twenty-five websites, manned by a stable of two dozen freelance writers, each of whom took a cut of the profits. The wilder the headline, the more clicks it got and the more money everyone made. One of Coler’s most popular pieces told the tragic and wholly false story of an FBI agent and his wife who, amid an investigation of Hillary Clinton, had died in a suspicious murder-suicide. In a ten-day period, 1.6 million readers were drawn to the real-sounding but fake newspaper (the Denver Guardian) that had posted the fake story. On Facebook, the damning headline would be glimpsed at least 15 million times.
Coler was unmasked when an intrepid reporter for National Public Radio pierced through his shell of web registrations and tracked him to his home. Asked why he’d stayed hidden, Coler was blunt about the people who were making him rich. “They’re not the safest crowd,” he said. “Some of them I would consider domestic terrorists. So they’re just not people I want to be knocking on my door.” Their beliefs were crazy; their cash was good.
Yet these individual for-profit purveyors of lies were just the small fry. More significant was the new media business environment surrounding them, which meshed profit and partisan politics. When researchers at the Columbia Journalism Review broke down the readership of some 1.25 million news stories published during the 2016 election cycle, they found that liberal and conservative news consumers were both relying more on social media than on traditional media outlets, but both groups essentially existed in their own parallel universes. This finding confirmed what we’ve seen already. Homophily and virality combined to increase users’ exposure to information they agreed with, while insulating them from information they found distasteful.
But the research revealed something else. The drivers of conversation in the left-leaning social media universe were divided across multiple hubs that included old media mainstays like the New York Times and avowedly liberal outlets like the Huffington Post. In contrast, the right-leaning universe was separate but different. It had just one central cluster around the hyperpartisan platform Breitbart, which had been launched in 2005 (the year after Facebook) with the new media environment deliberately in mind. As founder Andrew Breitbart explained, “I’m committed to the destruction of the old media guard . . . and it’s a very good business model.”
After Breitbart’s death in 2012, the organization was run by Steve Bannon, a former investment banker turned Hollywood producer, who intimately understood both markets and the power of a good viral headline. Bannon embraced social media as a tool to dominate the changing media marketplace, as well as to remake the right-wing. The modern internet wasn’t just a communications medium, he lectured his staff, it was a “powerful weapon of war,” or what he called “#War.”
Through Breitbart, Bannon showered favorable coverage on the “alt-right,” an emerging online coalition that could scarcely have existed or even been imagined in a pre–social media world. Short for “alternative right” (the term popularized by white supremacist leader Richard Spencer), the alt-right fused seemingly disparate groups ranging from a new generation of web-savvy neo-Nazis to video gamer collectives using online harassment campaigns to battle perceived “political correctness.” All these groups found unity in two things. The first was a set of beliefs that, as the Associated Press put it, rejected “the American democratic ideal that all should have equality under the law regardless of creed, gender, ethnic origin or race.” The second was a recognition that social media was the best means to transform that conviction into reality.
Stoking outrage and seeking attention, Bannon declared Breitbart “the platform for the alt-right.” Its editors even invited the movement’s leaders to edit their own glowing profile articles. These sorts of arrangements helped restructure the media marketplace. In contrast to how the network of liberal media was distributed across multiple hubs, thousands of smaller, far-right platforms clung to Breitbart in a tight orbit. They happily sent hyperlinks and advertising profit to each other, but almost never anywhere outside of their closed network, which tilted the balance. The change wasn’t just that conservatives were abandoning mainstream media en masse, but that the marketplace of information within their community had changed as well. When judged by key measures like Facebook and Twitter “shares,” Breitbart had eclipsed even the likes of Fox News, more than doubling “share” rates among Trump supporters.
In this new media universe, it wasn’t just money, journalism, and political activism that mixed, but also truth and hateful disinformation. News reports of actual events were presented alongside false ones, making it hard for readers to differentiate between them. A series of articles on illegal immigration, for instance, might mix stories about real illegal immigrants with false reports of Al Qaeda–linked terrorists sneaking in via Mexico. In some cases, this situation entered the realm of the bizarre, such as when Breitbart quoted a Twitter account parodying Trump, instead of his actual feed, in order to make him sound more presidential than he did in reality.
What the stratagem revealed was that on social networks driven by homophily, the goal was to validate, not inform. Internet reporter John Herrman had observed as much in a prescient 2014 essay. “Content-marketed identity media speaks louder and more clearly than content-marketed journalism, which is handicapped by everything that ostensibly makes it journalistic—tone, notions of fairness, purported allegiance to facts, and context over conclusions,” he wrote. “These posts are not so much stories as sets of political premises stripped of context and asserted via Facebook share—they scan like analysis but contain only conclusions; after the headline, they never argue, only reveal.” This was just as well. In 2016, researchers were stunned to discover that 59 percent of all links posted on social media had never been clicked on by the person who shared them.
Simply sharing crazy, salacious stories became a form of political activism. As with the dopamine-fueled cycle of “shares” and “likes,” it also had a druglike effect on internet partisans. Each new “hit” of real (or fake) news broadcast on social media might be just enough to help their chosen candidate win.
There was also a sort of raw entertainment to it—a no-holds-barred battle in which actual positions on policy no longer mattered. This, too, was infectious. Now taking their lead from what was trending online, traditional media outlets followed suit. Across the board, just one-tenth of professional media coverage focused on the 2016 presidential candidates’ actual policy positions. From the start of the year to the last week before the vote, the nightly news broadcasts of the “big three” networks (ABC, CBS, and NBC) devoted a total of just thirty-two minutes to examining the actual policy issues to be decided in the 2016 election!
Yet for all its noise and spectacle, the specter of online misinformation didn’t begin with the 2016 U.S. presidential race, nor did it fade once the votes were cast. Disappointed Clinton donors vowed to create a “Breitbart of the left,” while a new generation of liberal rumor mills and fabulists purported to show why every Republican politician teetered on the brink of resignation and how every conservative commentator was in the secret employ of the Kremlin. Meanwhile, the misinformation economy powered onward. It would pop up in the 2017 French presidential election and roil the politics of Germany, Spain, and Italy soon thereafter.
Nor was the problem limited to elections. Perhaps the most worrisome example occurred on Christmas Eve 2016, when Pakistani defense minister Khawaja Asif read a false online report that Israel was threatening to attack his country if it intervened in Syria. “We will destroy them with a nuclear attack,” the report had quoted a retired Israeli defense minister as saying. Asif responded with a real threat of his own, tweeting about Pakistan’s willingness to retaliate with nuclear weapons against Israel. Fortunately, Christmas was saved when the original report was debunked before the crisis could escalate further.
Sadly, not all false online reports have been stopped before they’ve sparked real wars. In mid-2016, the rival armies of South Sudan’s president and vice president had settled into an uneasy truce after years of civil war. But when the vice president paid a visit to the presidential palace, his spokesperson published a false Facebook update that he had been arrested. Reading the post, the vice president’s men paid an angry (and heavily armed) visit to the palace to rescue him. The president’s bodyguards in turn opened fire—igniting a series of battles that would leave over 300 dead and plunge the nation back into conflict. Even after a cease-fire was declared by both sides, social media then fueled a new cycle of sectarian and ethnic violence, helped by a heavy dose of online hate speech and false accusations. The same echo chambers that have swung elections saw rival Sudanese Facebook groups allege nonexistent attacks that inspired extremists on both sides to commit real and deadly acts of revenge. A combination of viral falsehood and the “cyberbanging” problem of Chicago escalated to nationwide conflict.
What played out in South Sudan has been echoed around the world. In India, riots erupted in 2017 over fake stories pushed by the Indian equivalent of Breitbart. These prompted a new round of fake stories about the riots and their instigators, which reignited the real cycle of violence. That same year in Myanmar, a surge in Facebook rumormongering helped fuel genocide against the nation’s Rohingya Muslim minority. The following year in Sri Lanka, wild (and viral) allegations of a “sterilization” plot led a frenzied Buddhist mob to burn a Muslim man alive. “The germs are ours,” a Sri Lankan official explained of his country’s religious tensions, “but Facebook is the wind.”
The online plague of misinformation has even become a problem for some of the least sympathy-inducing groups in the world. In El Salvador, the MS-13 gang faced an unexpected crisis when false stories spread that it was murdering any woman who had dyed blond hair and wore leggings (the hair and leggings were a trademark look of the rival Los Chirizos gang). “We categorically deny the rumor that has been circulated,” read the gang’s official statement, itself posted online. The criminals solemnly denounced the stories that “only create alarm and increase fear and anxiety in the poor population that live in the city center.”
Even the unrepentantly barbaric Islamic State had to deal with false headlines. When ISIS instituted its repressive, fundamentalist government after the seizure of Mosul, reports circulated that it would force genital mutilation on 4 million Iraqi women and girls. Subsequent news stories were shared tens of thousands of times. ISIS propagandists and supporters were aggrieved. Although they’d happily held public beheadings and reinstituted crucifixion as a form of punishment, female genital mutilation wasn’t their policy. An ISIS Twitter account, whose Arabic username translated to “Monster,” offered a terse rebuttal denouncing the fake news and demanding that the media retract its claims.
In only a few years, online misinformation has evolved from a tabloid-style curiosity to a global epidemic. Ninety percent of Americans believe that these made-up news stories have made it harder to know what’s true and what’s not. Nearly one-quarter of Americans admit to having shared a fake story themselves. At the end of 2015, the Washington Post quietly ended a weekly feature devoted to debunking internet hoaxes, admitting there were simply too many of them. “[This] represents a very weird moment in internet discourse,” mused columnist Caitlin Dewey. “At which point does society become utterly irrational? Is it the point at which we start segmenting off into alternate realities?”
The answer is that the whirlwind of confirmation bias and online gratification can swiftly mobilize millions. It can also produce what the World Economic Forum has called “digital wildfires,” fast-moving bursts of information that devastate markets, upend elections, or push nations to the brink of war. While these fires may be set by super-spreaders with a specific agenda, as they advance they can cleave huge rifts across society. And if someone’s online network has helped fuel a particular fire, he or she is more likely to believe it—and even more likely to help spread the next one.
The human brain was never equipped to operate in an information environment that moves at the speed of light. Even those who’ve grown up in this world have found it difficult to adjust. Studies show that more than half of U.S. middle schoolers—who spend an average of 7.5 hours online each day outside of school hours—cannot discern advertisements from legitimate news, nor distinguish basic fact from fiction online. “If it’s going viral, it must be true,” one middle schooler patiently explained to a team of Stanford researchers. “My friends wouldn’t post something that’s not true.”
On the internet, virality is inseparable from reality. A fake story shared by millions becomes “real” in its own way. An actual event that fails to catch the eye of attention-tracking algorithms might as well never have happened. Yet nothing says the people sharing the story have to be real either.
Angee Dixson was mad as hell, and she wasn’t going to take it anymore.
A “Christian first,” as her profile declared, the photogenic brunette had one item on her agenda. “I want my country back. MAGA.” Joining Twitter in August 2017, Dixson took to the platform immediately, tweeting some ninety times a day. She made good on her profile’s pledge, leaping to President Trump’s defense against Democrats, the FBI, late-night comedians, and everyone in between.
Three days after Dixson hopped online, a coalition of alt-right groups descended on Charlottesville, Virginia, for what they dubbed the #UniteTheRight rally. As counterprotesters poured into the streets to oppose what became a vivid expression of hate and white nationalism, a far-right terrorist drove his car into the crowd, killing one young woman and wounding many others. When public sentiment turned against President Trump (who claimed “both sides” were to blame for the violence), Dixson furiously leapt to his defense. “Dems and Media Continue to IGNORE BLM [Black Lives Matter] and Antifa [anti-fascist] Violence in Charlottesville,” she tweeted, including an image of demonstrators with the caption “DEMOCRAT TERROR.” In the days that followed, her tweets grew even more strident, publicizing supposed cases of left-wing terrorism around the country.
But none of the cases were real—and Dixson wasn’t either. As Ben Nimmo, a fellow with the Digital Forensic Research Lab at the Atlantic Council, discovered, “Angee Dixson” was actually a bot—a sophisticated computer program masquerading as a person. One clue to her identity was her frequent use of URL “shorteners,” shortcuts that bots use to push out links. (A machine’s efficiency can often spill the beans, as lazy humans tend to use the old-fashioned copy-and-paste method.) Another telltale sign was her machinelike language pattern, sometimes lifted from RT and Sputnik. Despite her avowed American focus, Dixson also couldn’t help but slip in the occasional attack on Ukraine. And finally, the true giveaway: Dixson’s profile picture was actually a photograph of Lorena Rae, a German model who was dating Leonardo DiCaprio at the time.
Dixson was one of at least 60,000 Russian accounts in a single “botnet” (a networked army of bots) that infested Twitter like a cancer, warping and twisting the U.S. political dialogue. This botnet, in turn, belonged to a vast galaxy of fake and automated accounts that lurk in the shadows of Twitter, Facebook, Instagram, and numerous other services. These machine voices exist because they have power—because the nature of social media platforms gives them power.
On Twitter, popularity is a function of followers, “likes,” and retweets. Attract lots of attention in a short period of time and you’ll soon find yourself, and any views you push, going viral. On Google, popularity is a function of hyperlinks and keywords; the better trafficked and more relevant a particular website, the higher it ranks in Google search results. On Facebook, popularity is determined by “likes” from friends and the particular updates that you choose to share. The intent is to keep users emotionally grafted to the network. Bombard your friends with silly, salacious news stories and you’ll find yourself receiving less and less attention; describe a big personal moment (a wedding engagement or professional milestone) and you may dominate your local social network for days.
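As a rough illustration of what “popularity as a function of engagement” means in practice, here is a toy feed-ranking function. The fields, weights, and decay curve are assumptions invented for the example; real ranking systems are proprietary and vastly more complex, but the basic logic of rewarding fast, recent engagement is the same.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# A toy ranking sketch: score posts by engagement and recency, then sort into a
# "feed." Field names, weights, and the decay exponent are invented assumptions.

@dataclass
class Post:
    author: str
    text: str
    likes: int
    shares: int
    posted_at: datetime

def score(post: Post, now: datetime) -> float:
    engagement = post.likes + 3 * post.shares           # shares weighted above likes
    hours_old = (now - post.posted_at).total_seconds() / 3600
    return engagement / (1 + hours_old) ** 1.5          # fresher posts decay less

now = datetime(2016, 11, 1, 12, 0)
posts = [
    Post("local_paper", "City council passes budget", likes=40, shares=5,
         posted_at=now - timedelta(hours=10)),
    Post("clickbait_site", "You won't BELIEVE what the Pope just did", likes=900,
         shares=400, posted_at=now - timedelta(hours=2)),
]

for p in sorted(posts, key=lambda p: score(p, now), reverse=True):
    print(f"{score(p, now):8.1f}  {p.author}: {p.text}")
# Whatever attracts fast engagement floats to the top, regardless of accuracy.
```

Under any scoring rule of this general shape, the currency is attention, not truth, which is exactly the opening that the attention economy’s cheaters exploit.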
Every social media platform is regulated by such an algorithm. It represents the beating heart of the platform’s business, its most closely guarded treasure. But as the world has come to be ruled by the whims of virality and the attention economy, plenty of people seek to cheat their way to fame and influence. Plenty more happily sell them the tools to do so.
The most common form of this cheating is also the simplest. Fake followers and “likes” are easy to produce—all they require is a dummy email address and social media account—and they enjoy essentially unlimited demand. Politicians, celebrities, media outlets, and wannabe “influencers” of all stripes have come to rely on these services. The result is a decade-old black market worth at least $1 billion.
Often, these fake followers are easy to track. In 2016, internet users had a collective chuckle when People’s Daily, the main Chinese propaganda outlet, launched a Facebook page that swiftly attracted 18 million “likes,” despite Facebook being banned in China. This included more than a million “fans” in Myanmar (out of the then 7 million Facebook users in that country), who instantly decided to “like” China. Likewise, when Trump announced his nationalistically themed presidential campaign in 2015, 58 percent of his Facebook followers, oddly, hailed from outside the United States. Despite his anti-immigrant rhetoric and repeated calls for a border wall, 4 percent supposedly lived in Mexico.
In the nations of Southeast Asia, the demand for fake followers has given rise to a “click farm” industry that resembles the assembly lines of generations past. Amid the slums of places like Dhaka in Bangladesh or Lapu-Lapu in the Philippines, workers crowd into dark rooms crammed with banks of monitors. Some employees follow rigid scripts intended to replicate the activity of real accounts. Others focus on creating the accounts themselves, the factories equipping their workers with hundreds of interchangeable SIM cards to beat the internet companies’ spam protection measures.
Yet as with every other industry, automation has begun to steal people’s jobs. The most useful form of fakery doesn’t come through click farms, but through the aforementioned bots. Describing software that runs a series of automated scripts, the term “bot” is taken from “robot”—in turn, derived from a Czech word meaning “slave” or “servitude.” Today, social media bots spread a message; as often as not, it’s human beings who become the slaves to it.
Like actual robots, bots vary significantly in their complexity. They can be remarkably convincing “chatbots,” conducting conversations using natural language and selecting from millions of preprogrammed responses. Or the bots can be devilishly simple, pushing out the same hashtag again and again, which may get them caught but still accomplishes their mission, be it to make a hashtag go viral or to bury an opponent under countermessages.
For example, the day after Angee Dixson was outed in an analysis by the nonprofit organization ProPublica, a new account spun to life named “Lizynia Zikur.” She immediately decried ProPublica as an “alt-left #HateGroup and #FakeNews Site.” Zikur was clearly another fake—but one with plenty of friends. The bot’s message was almost instantly retweeted 24,000 times, exceeding the reach of ProPublica’s original analysis. In terms of virality, the fake voices far surpassed the reports of their fakeness.
This episode shows the power of botnets to steer the course of online conversation. Their scale can range from hundreds to hundreds of thousands. The “Star Wars” botnet, for example, is made up of over 350,000 accounts that pose as real people, detectable by their predilection for spouting lines from the franchise.
If a few thousand, or even a few hundred, of these digital voices shift to discussing the same topic or using the same hashtag at the same time, that action can fool even the most advanced social media algorithm, which will mark it as a trend. This “trend” will then draw in real users, who have nothing to do with the botnet, but who may be interested in the news, which itself is now defined by what is trending online. These users then share the conversation with their own networks. The manufactured idea takes hold and spreads, courting ever more attention and unleashing a cascade of related conversations, and usually arguments. Most who become part of this cycle will have no clue that they’re actually the playthings of machines.
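A simplified sketch shows why a synchronized burst reads as a “trend”: compare a hashtag’s mentions in the current window against its recent baseline. The thresholds and traffic numbers below are invented for illustration; production trend detection is far more elaborate, but it still keys on sudden, outsized spikes, which is exactly what a botnet manufactures.

```python
# Toy trend detector: flag a hashtag when the current window's mention count
# dwarfs its recent average. Thresholds and traffic figures are invented.

def is_trending(history, current, spike_factor=5.0, min_mentions=200):
    """Flag a hashtag if this window's mentions dwarf its recent baseline."""
    baseline = sum(history) / max(len(history), 1)
    return current >= min_mentions and current > spike_factor * max(baseline, 1)

# Organic chatter: a hashtag mentioned a handful of times per 10-minute window.
organic_history = [12, 9, 15, 11, 8, 14]

# A botnet of ~800 accounts fires the same hashtag within a single window.
bot_burst = 11 + 800

print(is_trending(organic_history, current=13))         # False: normal traffic
print(is_trending(organic_history, current=bot_burst))  # True: the spike reads as a "trend"
```

Once the flag trips, the platform itself does the rest of the work, surfacing the manufactured “trend” to real users who were never part of the scheme.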
As businesses whose fortunes rise or fall depending on the size of their user base, social media firms are reluctant to delete accounts—even fake ones. On Twitter, for instance, roughly 15 percent of its user base is thought to be fake. For a company under pressure to demonstrate user growth with each quarterly report, this is a valuable boost.
Moreover, it’s not always easy to determine whether an account is a bot or not. As the case of Angee Dixson shows, multiple factors, such as time of activity, links, network connections, and even speech patterns, must be evaluated. Researchers must weigh all of these clues together, much as in the Sherlock Holmes–style investigations the Bellingcat team pursues to chronicle war crimes.
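A back-of-the-envelope sketch of that kind of multi-signal weighing might look like the following. Every feature, weight, and threshold here is an assumption chosen for illustration; real research tools combine many more signals, usually with trained classifiers rather than hand-set rules.

```python
# Illustrative bot-scoring heuristic. All features, cutoffs, and weights are
# invented assumptions, not any researcher's actual methodology.

def bot_likelihood(tweets_per_day: float,
                   shortened_url_ratio: float,
                   active_hours_per_day: float,
                   account_age_days: int,
                   followers_to_following: float) -> float:
    """Return a rough 0-1 score from a few hand-picked heuristics."""
    signals = [
        min(tweets_per_day / 90.0, 1.0),               # ~90+ tweets a day is suspicious
        shortened_url_ratio,                           # heavy reliance on URL shorteners
        min(active_hours_per_day / 20.0, 1.0),         # humans sleep; many bots do not
        1.0 if account_age_days < 30 else 0.0,         # brand-new account
        1.0 if followers_to_following < 0.1 else 0.0,  # follows many, followed by few
    ]
    return sum(signals) / len(signals)

# An "Angee Dixson"-style profile: ~90 tweets a day, mostly shortened links, weeks old.
print(f"{bot_likelihood(90, 0.8, 18, 10, 0.05):.2f}")   # high score
# A typical human account.
print(f"{bot_likelihood(4, 0.1, 6, 900, 1.2):.2f}")     # low score
```

No single clue is decisive on its own, which is why the detective work resembles an investigation rather than a simple lookup.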
Although botnets have been used to market everything from dish soap to albums, they’re most common in the political arena. For authoritarian regimes around the world, botnets are powerful tools in their censorship and disinformation strategies. When Syria began to disintegrate into civil war in 2011, the Assad regime used Twitter bots to flood its opponents’ hashtags with random soccer statistics. Those searching for vital information to fight the regime were instead greeted with a wall of nonsense. At the same time, the #Syria news hashtag was flooded with beautiful landscape images. A year later, when international attention turned to the plight of Chinese-occupied Tibet, the Chinese government did the same. Thousands of bots hijacked hashtags like #FreeTibet, overpowering activists with random photographs and snippets of text.
Botnets have proven just as appealing to the politicians and governments of democratic nations. One of the first documented uses came in 2010, when Massachusetts held a special election to fill the seat vacated by the late Senator Ted Kennedy. At the beginning of the race, there was little notable social media activity in this traditionally Democratic stronghold. But then came a shock: an outlying poll from Suffolk University showed Republican Scott Brown might have a chance. After that came a social media blitz, masterminded by two out-of-state conservative advocacy groups. One was funded by the Koch brothers and the other by the group that had organized the “Swift Boat” negative advertising campaign that had sunk the 2004 presidential bid of Democratic candidate John Kerry.
Suddenly, bots popped up everywhere, all fighting for Brown. Fake accounts across Facebook and Twitter trumpeted Brown’s name as often as possible, seeking to manipulate search results. Most novel was what was then called a “Twitterbomb”: Twitter users interested in the election began to receive automated replies supporting Brown. Importantly, these solicitations reached users well beyond Massachusetts, swelling Brown’s campaign coffers. When Brown became the first Republican to win a U.S. Senate race in Massachusetts since 1972, political analysts were both floored and fascinated. Bots had enabled an election to be influenced from afar. They had also shown how one could create the appearance of grassroots support and turn it into reality, a tactic known as “astroturfing.”
Botnets would become a part of every major election thereafter. When Newt Gingrich’s promise to build a moon base didn’t excite voters in the 2012 U.S. presidential primaries, his campaign reportedly bought more than a million fake followers to try to create a sense of national support. In Italy, a comedian turned populist skyrocketed to prominence with the help of bot followers. The next year, a scandal hit South Korea when it was revealed that a massive botnet—operated by military cyberwarfare specialists—had transmitted nearly 25 million messages intended to keep the ruling party in power.
Often, botnets can play the role of political mercenaries, readily throwing their support from one cause to the next. During Brexit, Britain’s contentious 2016 referendum on leaving the European Union, researchers watched as automated Twitter accounts that had long championed Palestinian independence abruptly shifted their attention to British politics. Nor was it an even fight: the pro-Brexit bots outnumbered the robotic champions of “Remain” by a ratio of five to one. The botnets (many since linked to Russia) were also prodigiously active. In the final days before the referendum, less than 1 percent of Twitter users accounted for one-third of all the conversation surrounding the issue. Political scientists were left to wonder what might have happened in a world without the machines.
The 2016 U.S. presidential race, however, stands unrivaled in the extent of algorithmic manipulation. On Twitter alone, researchers discovered roughly 400,000 bot accounts that fought to sway the outcome of the race—two-thirds of them in favor of Donald Trump. Sometimes, these bots were content simply to chirp positive messages about their chosen candidate. Other times, they went on the offensive. Like the suppressive tactics of the Syrian regime, anti-Clinton botnets actively sought out and “colonized” pro-Clinton hashtags, flooding them with virulent political attacks. As Election Day approached, pro-Trump bots swelled in intensity and volume, overpowering pro-Clinton voices by (in another echo of Brexit) a five-to-one ratio.
To an untrained eye, Trump’s bots could blend in seamlessly with real supporters. This included the eye of Trump himself. In just the first three months of 2016, the future president used his Twitter account to quote 150 bots extolling his cause—a practice he would continue in the White House.
Behind this massive bot army lay a bizarre mix of campaign operatives, true believers, and some who just wanted to watch the world burn. The most infamous went by the online handle “MicroChip.” A freelance software developer, MicroChip claimed to have become a believer in the alt-right after the 2015 Paris terrorist attacks. With his tech background, he realized he could exploit Twitter’s application programming interface (API), initially testing such “anti-PC” hashtags as #Raperefugees to see what he could drive viral. By the time of the 2016 election, he labored twelve hours at a time, popping Adderall to stay focused as he pumped out pro-Trump propaganda.
Described by a Republican strategist as the “Trumpbot overlord,” MicroChip specialized in using bots to launch hashtags (#TrumpTrain, #cruzsexscandal, #hillarygropedme) that could redirect and dominate political conversation across Twitter. When his machine was firing on all cylinders, MicroChip could produce more than 30,000 retweets in a single day, each of which could reach orders of magnitude more users. He took particular joy in using his army of fake accounts to disseminate lies, including #Pizzagate. “I can make whatever claims I want to make,” he bragged. “That’s how this game works.”
MicroChip lived in Utah. Where bots became truly weaponized, though, was in how they expanded the work of Russian sockpuppets prosecuting their “information war” from afar. In 2017, growing public and congressional pressure forced the social media firms to begin to reveal the Russian campaign that had unfolded on their platforms during the 2016 election. The numbers, once begrudgingly disclosed, were astounding.
The bot accounts were putting the disinformation campaign on steroids, allowing it to reach a scale impossible with just humans at work. Twitter’s analysis found that bots under the control of the Internet Research Agency (that lovely building in St. Petersburg where our philosophy major worked) generated 2.2 million “election-related tweets” in just the final three months of the election. In the final month and a half before the election, Twitter concluded that Russian-generated propaganda had been delivered to users 454.7 million times. (Though enormous, these company-provided numbers are likely low, as Twitter identified only accounts definitively proven to belong to the Internet Research Agency’s portion of the larger Russian network. The analysis also covered only a limited period of time, not the whole election, and especially not the crucial nomination process.) The same army of human sockpuppets, using automation tools to extend their reach, would also ripple out onto other sites, like Facebook and its subsidiary Instagram. Overall, Facebook’s internal analysis estimated that 126 million users saw Russian disinformation on its platform during the 2016 campaign.
The automated messaging was overwhelmingly pro-Trump. For example, known Russian bots directly retweeted @realDonaldTrump 469,537 times. But the botnet was most effective in amplifying false reports planted by the fake voices of the Russian sockpuppets—and ensuring that stories detrimental to Trump’s foes received greater virality. They were particularly consumed with making sure attention was paid to the release of hacked emails stolen from Democratic organizations. (The collective U.S. intelligence community and five different cybersecurity companies would attribute the hack to the Russian government.) Indeed, as Twitter’s data showed, when these emails were first made public, botnets contributed between 48 percent and 73 percent of the retweets that spread them.
After the news of Russia’s role in the hacks was revealed, these same accounts pivoted to the defensive. An army of Russian bots became an army of supposed Americans arguing against the idea that Russia was involved. A typical, and ironic, message blasted out by one botnet read, “The news media blames Russia for trying to influence this election. Only a fool would not believe that it’s the media behind this.”
As Samuel Woolley, a researcher at Oxford University who studied the phenomenon, has written, “The goal here is not to hack computational systems but to hack free speech and to hack public opinion.” The effects of this industrial-scale manipulation continue to ripple across the American political system. Its success has also spawned a legion of copycat efforts in elections from France to Mexico, where one study found that over a quarter of the posts on Facebook and Twitter about the 2018 Mexican election were created by bots and trolls.
And yet this may not be the most unsettling part. These artificial voices managed to steer not just the topics of online conversation, but also the human language used within it, even shifting the bounds of which ideas were considered acceptable.
After the 2016 election, data scientists Jonathon Morgan and Kris Schaffer analyzed hundreds of thousands of messages spread across conservative Twitter accounts, Facebook pages, and the Breitbart comments section, charting the frequency of the 500,000 most-used words. They cut out common words like “the” and “as,” in order to identify the top terms that were “novel” to each online community. The idea was to discover the particular language and culture of the three spaces. How did conservatives speak on Facebook, for instance, as compared with Twitter? They were shocked to find something sinister at work.
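A minimal sketch of that kind of comparison, assuming toy corpora and a tiny stopword list rather than the researchers’ actual data or code, might look like this in Python:

```python
# Strip common stopwords, then surface the terms that are disproportionately
# frequent in one community's posts relative to another's.
from collections import Counter
import re

STOPWORDS = {"the", "as", "a", "and", "of", "to", "in", "is", "it", "that"}

def word_counts(posts):
    words = re.findall(r"[a-z']+", " ".join(posts).lower())
    return Counter(w for w in words if w not in STOPWORDS)

def novel_terms(community, others, top_n=5):
    """Rank words by how much more often they appear in `community` than in `others`."""
    ours, theirs = word_counts(community), word_counts(others)
    ours_total, theirs_total = sum(ours.values()) or 1, sum(theirs.values()) or 1
    scores = {
        w: (c / ours_total) / ((theirs.get(w, 0) + 1) / theirs_total)  # +1 smoothing
        for w, c in ours.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# Hypothetical snippets standing in for posts from two different communities.
twitter_posts = ["great rally tonight", "the debate was rigged", "rigged polls again"]
facebook_posts = ["sharing this article with family", "long post about the debate"]
print(novel_terms(twitter_posts, facebook_posts))  # 'rigged' ranks first here
```

Words that score far above 1.0 are disproportionately common in one community, which is roughly what it means for a term to be “novel” to that space.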
At first, the three spaces were linguistically distinct. In January, February, and March 2016, for example, there was little pattern to be found in the noise of online debate on Twitter as compared with Breitbart or Facebook. In the kind of homophily by then familiar, people in the three spaces were often talking about the same things, since they were all conservatives. But they did so with divergent language. Different words and sentence constructions were used at different frequencies in different communities. This was expected, reflecting both how the particular platforms shaped what could be posted (Twitter’s allowance of only 140 characters at the time versus Facebook’s lengthier space for full paragraphs) and the different kinds of people who gravitated to each network.
But, as the researchers wrote, in April 2016 “the discussion in conservative Twitter communities, the Trump campaign’s Facebook page, and Breitbart’s comment section suddenly and simultaneously changed.”
Within these communities, new patterns abruptly appeared, with repeated sentences and word choices. Swaths of repetitive language began to proliferate, as if penned by a single author or a group of authors working from a shared playbook. It wasn’t that all or even many of the users on Twitter, Facebook, and in the comments section of Breitbart during the run-up to the 2016 election were fake. Rather, the data showed that a coordinated group of voices had entered these communities, and that these voices could be sifted out from the noise by their repeated word use. As Morgan and Schaffer wrote, “Tens of thousands of bots and hundreds of human-operated, fake accounts acted in concert to push a pro-Trump, nativist agenda across all three platforms in the spring of 2016.” When the researchers explored what else these accounts were pushing beyond pro-Trump or anti-Clinton messaging, their origin became clearer. The accounts that exhibited these repetitive language patterns were four times as likely to mention Russia, always in a defensive or complimentary tone.
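That “repeated word use” signal can be illustrated with one more small Python sketch: flag phrases that recur verbatim across many distinct accounts. The trigram window, threshold, and sample messages are illustrative assumptions, not the researchers’ actual technique.

```python
# A minimal sketch: map each three-word phrase to the accounts that used it,
# then keep phrases shared by an unusually large number of distinct accounts.
from collections import defaultdict
import re

def shared_trigrams(messages, min_accounts=3):
    """messages: list of (account_id, text) pairs."""
    users_by_phrase = defaultdict(set)
    for account, text in messages:
        words = re.findall(r"[a-z']+", text.lower())
        for i in range(len(words) - 2):
            users_by_phrase[" ".join(words[i:i + 3])].add(account)
    return {p: u for p, u in users_by_phrase.items() if len(u) >= min_accounts}

feed = [
    ("acct1", "The media is lying about Russia again"),
    ("acct2", "wake up, the media is lying to you"),
    ("acct3", "THE MEDIA IS LYING, share this"),
    ("acct4", "went hiking this weekend, great views"),
]
print(shared_trigrams(feed))
# -> phrases like 'the media is' and 'media is lying' are flagged; the hiker is not
```

Accounts that keep surfacing behind the same flagged phrases become candidates for coordination, though, as with any such heuristic, the output is a starting point for human review rather than proof.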
The analysis uncovered an even more disturbing pattern. April 2016 also saw a discernible spike in anti-Semitic language across all three platforms. For example, the word “Jewish” began to be used not only more frequently but also in ways easily identifiable as epithets or conspiracy theories, such as being associated with words like “media.”
While the initial blast of repeated words and phrases had shown the use of a common script driven by machines from afar, the language soon spread like a virus. In a sense, it was a warped reflection of the “spiral of silence” effect seen in the previous chapter. The sockpuppets and bots had created the appearance of a popular consensus to which others began to adjust, altering what ideas were now viewed as acceptable to express. The repeated words and phrases soon spread beyond the fake accounts that had initially seeded them, becoming more frequent across the human users on each platform. The hateful fakes were mimicking real people, but then real people began to mimic the hateful fakes.
This discovery carries implications that transcend any particular case or country. The way the internet affects its human users makes it hard enough for them to distinguish truth from falsehood. Yet these 4 billion flesh-and-blood netizens have now been joined by a vast number of digital beings, designed to distort and amplify, to confuse and distract. The attention economy may have been built by humans, but it is now ruled by algorithms—some with agendas all their own.
Today, the ideas that shape battles, votes, and even our views of reality itself are propelled to prominence by this whirring combination of filter bubbles and homophily, an endless tide of misinformation, and the mysterious designs of bots. To master this system, one must understand how it works. But one must also understand why certain ideas take hold. The answers to these questions reveal the foundations of what may seem to be a bizarre new online world, but is actually an inescapable kind of war.