“Truth” is a lost cause and . . . reality is essentially malleable.
—PETER POMERANTSEV AND MICHAEL WEISS, “The Menace of Unreality”
“INFORMATION WANTS TO BE FREE,” declared web pioneer and counterculture icon Stewart Brand at the world’s first Hackers Conference in 1984. This freedom wouldn’t just sound the death knell of censorship; it would also mark the end of authoritarian regimes that relied on it. After all, what government could triumph against a self-multiplying network of information creators and consumers, where any idea might mobilize millions in a heartbeat? John Gilmore, an early cyber-activist and cofounder of the Electronic Frontier Foundation, put it simply in a 1993 interview: “The Net interprets censorship as damage and routes around it.”
For many years, this seemed to be the case. In a dispatch for the newly launched Wired magazine, reporter Bruce Sterling described the key role of an early freedom fighter. In 1989, a mysterious digital Johnny Appleseed appeared in Czechoslovakia. Activists would credit him with helping to spark the uprisings that spread across Soviet-ruled Eastern Europe. But at the time, he was known simply as “the Japanese guy.”
Without any warning or fanfare, some quiet Japanese guy arrived at the university with a valise full of brand-new and unmarked 2400-baud Taiwanese modems. The astounded Czech physics and engineering students never did quite get this gentleman’s name. He just deposited the modems with them free of charge, smiled cryptically, and walked off diagonally into the winter smog of Prague, presumably in the direction of the covert-operations wing of the Japanese embassy. They never saw him again.
The Czech students distributed the new networking technology, using it to circulate manifestos and disseminate daily news updates. They were able to expand their revolutionary circles in a way never before possible, while evading the old methods of monitoring and censorship.
As the internet continued its blistering growth, so did the power of democratic dissidents. The first so-called internet revolution shook Serbia in 1996. Cut off from state media, young people used mass emails to plan protests against the regime of President Slobodan Milošević. Although the initial protests failed, they returned stronger than ever in 2000, organized even more extensively online. Serbia’s youth won out and kicked off a series of “color revolutions,” which soon spread throughout the former Soviet bloc, toppling rulers in Georgia, Ukraine, and Kyrgyzstan.
Then, in 2009, anger against a rigged election swept across theocratic Iran. While the front pages of Iranian newspapers were full of blank spaces (where government censors had blotted out reports), young people took to social media to organize and share the news. An astounding 98 percent of the links posted on Twitter that week were about Iran. Photos showed tens of thousands of Iranian youth pouring into the streets, a smartphone in nearly every hand. “The Revolution Will Be Twittered,” declared one excited headline. Wired magazine’s Italian edition nominated the internet for a Nobel Peace Prize.
In 2010, Mohamed Bouazizi, a 26-year-old Tunisian, touched off the next outbreak of web-powered freedom. Each morning for ten years, he had pushed a cart to the city marketplace, selling fruit to support his widowed mother and five siblings. Every so often, he had to navigate a shakedown from the police—the kind of petty corruption that had festered under the two-decade-long rule of dictator Zine el-Abidine Ben Ali. But on December 17, 2010, something inside Bouazizi snapped. After police confiscated his wares and he was denied a hearing to plead his case, Bouazizi sat down outside the local government building, doused his clothes with paint thinner, and lit a match.
Word of the young man’s self-immolation spread quickly through the social media accounts of Tunisians. His frustration with corruption was something almost every Tunisian had experienced. Dissidents began to organize online, planning protests and massive strikes. Ben Ali responded with slaughter, deploying snipers who shot citizens from rooftops. Rather than retreat, however, some protesters whipped out their smartphones. They captured grisly videos of death and martyrdom. These were shared tens of thousands of times on Facebook and YouTube. The protests transformed into a mass uprising. On January 14, 2011, Ben Ali fled the country.
The conflagration soon leapt across national borders. While the Egyptian dictator Hosni Mubarak ordered censorship of the events in Tunisia, Wael Ghonim, a 30-year-old Google executive, used Facebook to organize similar protests in Cairo. When the first 85,000 people pledged online to march with him, Time magazine asked, “Is Egypt about to have a Facebook Revolution?” It was and it did. The trickle of pro-democracy protests turned to a raging torrent. Hundreds of thousands of demonstrators braved tear gas and bullets to demand Mubarak’s resignation. His thirty-year reign ended in a matter of days. In the geopolitical equivalent of the blink of an eye, Egypt became a free nation.
A euphoric Ghonim gave credit where credit seemed due. “The revolution started on Facebook,” he said. “We would post a video on Facebook that would be shared by 60,000 people . . . within a few hours. I’ve always said that if you want to liberate a society, just give them the Internet.” Elsewhere, he said, “I want to meet Mark Zuckerberg one day and thank him.” Another Egyptian revolutionary gave thanks in a more unorthodox way, naming his firstborn baby girl “Facebook.”
Political unrest soon rocked Syria, Jordan, Bahrain, and a dozen more nations. In Libya and Yemen, dictators who had ruled for decades through the careful control of their population and its sources of information saw their regimes crumble in a matter of days. Tech evangelists hailed what was soon called the Arab Spring as the start of a global movement that would end the power of authoritarian regimes around the world, perhaps forever.
The Arab Spring seemed the perfect story of the internet’s promise fulfilled. Social media had illuminated the shadowy crimes through which dictators had long clung to power, and offered up a powerful new means of grassroots mobilization. In the words of technology writer Clay Shirky, online social networks gave activists a way to “organize without organizations.” Through Facebook events and Twitter hashtags, protests grew faster than the police could stamp them out. Each time the autocrats reacted violently, they created new online martyrs, whose deaths sparked further outrage. Everywhere, it seemed, freedom was on the march, driven by what Roger Cohen of the New York Times extolled as “the liberating power of social media.”
Yet not everyone felt so sure. The loudest dissenter was Evgeny Morozov. Born in 1984 in the former Soviet bloc nation of Belarus, Morozov had been raised in an environment where a strongman had clung to power for nearly three decades. Like others his age, Morozov had enthusiastically embraced the internet as a new means to strike back against authoritarianism. “Blogs, social networks, wikis,” he remembered. “We had an arsenal of weapons that seemed far more potent than police batons, surveillance cameras, and handcuffs.”
But it never seemed to be enough. Not only did the activists fail to sustain their movement, but they noticed, to their horror, that the government began to catch up. Tech-illiterate bureaucrats were replaced by a new generation of enforcers who understood the internet almost as well as the protesters. They no longer ignored online sanctuaries. Instead, they invaded them, not just tracking down online dissidents, but using the very same channels of liberation to spread propaganda. More alarming, their tactics worked. Years after the first internet revolutions had sent shivers down dictators’ spines, the Belarusian regime actually seemed to be strengthening its hand.
Morozov moved to the United States and set his sights squarely on the Silicon Valley dreamers, whom he believed were leading people astray. In a scathing book titled The Net Delusion, he coined a new term, “cyber-utopianism.” He decried “an enthusiastic belief in the liberating power of technology,” made worse by a “stubborn refusal to acknowledge its downside.” When his book was released at the height of the Arab Spring, those he attacked as “cyber-utopians” were happy to laugh him off. If newly freed populations were literally naming their kids after social media, who could doubt its power for good?
As it turned out, the Arab Spring didn’t signal the first steps of a global, internet-enabled democratic movement. Rather, it represented a high-water mark. The much-celebrated revolutions began to fizzle and collapse. In Libya and Syria, digital activists would soon turn their talents to waging internecine civil wars. In Egypt, the baby named Facebook would grow up in a country that quickly turned back to authoritarian government, the new regime even more repressive than Mubarak’s.
Around the world, information had been freed. But so had a countering wave of authoritarianism using social media itself, woven into a pushback of repression, censorship, and even violence. The web’s unique strengths had been warped and twisted toward evil ends. In truth, democratic activists had no special claim to the internet. They’d simply gotten there first.
Liu was a new arrival to the city of Weifang, China. He was new, too, to the city’s traditions. One balmy August evening, he stumbled upon a neighborhood square dance. It looked like fun, and Liu—tired from another day of hunting for work—decided to join in the festivities.
Too late, Liu noticed the laughter and pointed fingers from the audience, the smartphones snapping his photo. He realized that nearly all the other dancers were middle-aged women. Liu fled the scene, flushed with embarrassment. He became petrified that his picture would be shared online for others to mock. So he did the only thing that made sense to him: he decided to destroy the internet.
Liu prowled the city looking for optical cable receivers, the big boxes of coiled wire that relay internet data to individual households. Each time he found one, he forced it open and tore the receiver apart by hand. By the time Liu was caught, he’d caused $15,000 worth of damage. Liu was sent to prison, but the internet—although temporarily disrupted across parts of Weifang—kept right on chugging. We know this because we read about him in an online report that made its way around the world.
While Liu failed in his mission, he actually had the right idea. After all, the internet isn’t really a formless, digital “cloud.” It is made up of physical things. His problem was that these “things” include billions of computers and smartphones linked to vast server farms that play host to all the world’s online services. These are then bound together through an ever-growing network of everything from enough fiber-optic cable to circle the earth twenty-five times, to some 2,000 satellites orbiting the planet.
No one human could hope to control so monumental a creation. But governments are a different story.
For all the immensity of today’s electronic communications network, the system remains under the control of only a few thousand internet service providers (ISPs), the firms that run the backbone, or “pipes,” of the internet. Just a few ISPs supply almost all of the world’s mobile data. Indeed, because two-thirds of all ISPs reside in the United States, the average number per country across the rest of the globe is relatively small. Many of these ISPs hardly qualify as “businesses” at all. They are state-sanctioned monopolies or crony sanctuaries directed by the whim of local officials. Liu would never have been able to “destroy” the internet. Neither can any one government. But regimes can control when the internet goes on (or off) and what goes on it.
Designed as an open system and built on trust, the web remains vulnerable to governments that play by different rules. In less-than-free nations around the world, internet blackouts are standard practice. All told, sixty-one countries so far have created mechanisms that allow for national-level internet cutoffs. When the Syrian uprising began, for instance, the government of Bashar al-Assad compelled Syria’s main ISP to cut off the internet on Fridays, as that was the day people went to mosques and organized for protests. It doesn’t just happen in wartime. In 2016, the exam questions for a national high school test in Algeria were leaked online, spreading across kids’ social media. In response, government officials cut off the entire nation’s access to the internet for three days while students took the test. Many Algerians suspected their government was actually using the scandal over the exam as a way to test its new tools of mass censorship.
These blackouts come at a cost. A 2016 study of the consequences of eighty-one instances of internet cutoffs in nineteen countries assessed the economic damage. Algeria’s economy lost at least $20 million during that three-day shutdown, while a larger economy like Saudi Arabia lost $465 million from an internet shutdown in May 2016.
With this in mind, governments are investing in more efficient ways to control internet access, targeting particular areas of a country. For example, India is the world’s largest democracy, but when violent protests started in the district of Rohtak in 2016, everyone in the district had their mobile connections cut for a week. (Even this limited focus cost the Indian economy $190 million.) Yet even more finely tuned censorship is possible. That same year, Bahrain instituted an “internet curfew” that affected only a handful of villages where antigovernment protests were brewing. When Bahrainis began to speak out against the shutdown, authorities narrowed their focus further, cutting access all the way down to specific internet users and IP addresses.
A variant of this cutoff strategy is “throttling.” Whereas internet blocks cut off access completely, throttling slows down connections. It allows vital online functions to continue while making mass coordination more difficult. It’s also harder to detect and prove. (Your Facebook posts on the evils of the government might not be loading because of a web slowdown or simply because your neighbor is downloading a video game.) Web monitoring services, for instance, have noticed that every time a protest is planned in Iran, the country’s internet coincidentally and conveniently slows to a crawl.
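The mechanics behind throttling can be sketched with a standard networking technique: a token-bucket rate limiter, which caps throughput without refusing traffic outright. This is an illustrative toy of ours, not any government’s actual implementation; the rate and capacity values are arbitrary placeholders.

```python
import time

class TokenBucket:
    """Cap throughput: traffic still flows, just slowly."""

    def __init__(self, rate_bytes_per_sec, capacity):
        self.rate = rate_bytes_per_sec  # how fast the budget refills
        self.capacity = capacity        # maximum burst allowed
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, nbytes):
        # Refill the budget in proportion to elapsed time, up to capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        # Over budget: the caller must wait and retry.
        # Nothing is "blocked" outright, so the slowdown is hard to prove.
        return False

bucket = TokenBucket(rate_bytes_per_sec=1024, capacity=4096)
print(bucket.allow(2048))  # True: within the budget
print(bucket.allow(4096))  # False: over budget, delayed rather than refused
```

The key property, as the passage notes, is deniability: from the user’s side a delayed packet is indistinguishable from ordinary congestion.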
A corollary to this strategy is the effort by governments to bring more of the internet’s infrastructure under their direct control. Apologists call this “data localization,” but it is better known as “balkanization,” breaking up the internet’s global network into a series of tightly policed national ones. The Islamic Republic of Iran, for instance, has poured billions of dollars into its National Internet Project. It is intended as a web replacement, leaving only a few closely monitored connections between Iran and the outside world. Iranian officials describe it as creating a “clean” internet for its citizens, insulated from the “unclean” web that the rest of us use. Of course, with each new stride in censorship, human ingenuity finds ways to get around it. Identity-masking technologies can circumvent even the strongest government controls, while communications satellites can beam data into neighboring nations as easily as into their own. Despite the regime’s best efforts, for instance, Syrian rebel fighters were able to maintain active social media profiles by using solar-powered phone chargers and tapping into the mobile data network of neighboring Turkey.
But outside of the absolute-authoritarian state of North Korea (whose entire “internet” is a closed network of about thirty websites), the goal isn’t so much to stop the signal as it is to weaken it. If one has to undertake extensive research and buy special equipment to circumvent government controls, the empowering parts of the internet are no longer for the masses. The potential network shrinks. The flow of information slows. And authoritarians’ greatest fear—the prospect of spontaneous, widespread political mobilization—becomes harder to realize.
Yet governments’ reach extends beyond the internet’s infrastructure. They also have the police and the courts, all the mechanisms of state-sanctioned violence. As the internet has magnified the power of speech, these authoritarians haven’t hesitated to use their own unique powers to control it.
Does a retweet actually mean endorsement? For Dion Nissenbaum, the answer to this question landed him in a Turkish prison.
A soft-spoken man with a neat, gray-flecked goatee, Nissenbaum is a journalist who has spent years reporting from the most dangerous places in the world. He’s been abducted by masked gunmen in the Gaza Strip, shot at by Israeli soldiers, rocketed by Hezbollah militants, and forced to ditch a broken-down car in the midst of Taliban-controlled Afghanistan. When the Wall Street Journal sent him on assignment to Turkey, Nissenbaum assumed the situation would be comparatively tame. He was wrong.
In July 2016, Turkey was roiled by an attempted military coup. The plotters followed the classic playbook, rounding up politicians in the middle of the night, setting up armed checkpoints at key locations in the major cities, and seizing control of newspaper printing presses and TV stations. The idea was that the Turkish public would wake up the next morning to a fait accompli.
Instead, the coup became a story of the internet at its very best: a tale of mass mobilization that wouldn’t have been possible without social media. The first rallying cry came from the mayor of Ankara, taking to Twitter as he evaded antigovernment forces. “RT HERKES SOKAGA,” he wrote. “RETWEET: EVERYONE ON THE STREETS.”
Hundreds of thousands of Turkish citizens streamed from their homes. They engulfed city squares and surrounded military positions, chanting slogans. In almost every hand was a smartphone, inviting friends and family to join them, and the world to cheer them on. When armed soldiers took control of the printing press of the nation’s largest newspaper, with a daily circulation of just over 300,000 print copies, it didn’t matter. Its 34-year-old digital content coordinator reported the news on the newspaper’s Facebook page, allowing him to reach ten times as many subscribers instantly. When soldiers tried to track him through the office building, he kept up a running commentary via Facebook, livestreaming his dangerous game of hide-and-seek.
As the online furor grew, even more protesters hit the streets. Meanwhile, the soldiers were beset by doubt. Many had been told by their officers that this was a routine training exercise. Staring at the faces of their furious countrymen and reading the online reports, they began to realize the truth. By dawn, the coup’s architects had been captured or killed. The confused soldiers surrendered.
Instead of celebrating the triumph of online people power, Turkish president Recep Tayyip Erdoğan, the target of the coup, saw a different opportunity. “This insurgency is a blessing from Allah, because it will allow us to purge the military,” he declared. Within three days, over 45,000 people suspected of links to his political opponents were pushed out of public service or marched before kangaroo courts. Among those arrested were 103 admirals and generals, 15,200 teachers, even 245 staffers at the Ministry of Youth and Sports. With the rebellious soldiers already in jail, few of these subsequent arrests had any connection to the coup. They were just people Erdoğan wanted to get rid of.
Within months, over 135,000 civil servants were purged, and 1,058 schools and universities, 16 television stations, 23 radio stations, 45 newspapers, 15 magazines, and 29 book publishers were shut down. As part of this crackdown, Facebook, Twitter, and YouTube—services whose unfettered access had been crucial in stopping the coup—were increasingly restricted. Journalists saw their accounts suspended at the behest of the government. Freedom of speech was curtailed, the consequences demonstrated in a series of arrests of prominent figures. A satirical Instagram caption, penned by a former Miss Turkey, was enough to net her a fourteen-month jail sentence.
As conditions worsened, Dion Nissenbaum kept doing his job reporting the news. A few months after the coup, Nissenbaum was reading his Twitter feed, where he came across a report from an OSINT social media tracker—one of the same sources used by Eliot Higgins and the Bellingcat team. The report revealed that two Turkish soldiers being held captive by ISIS had been burned alive in a gruesome propaganda video. Nissenbaum thought it was newsworthy, as the Turkish government had been claiming that its operations in Syria were going well. He clicked the “retweet” button, sharing someone else’s OSINT news with his few thousand followers. He thought little of it, as he regularly retweeted tidbits of news that came across his feed, mixed in with stories he found amusing, like one about a new robotic waitress at a pizza shop.
As Nissenbaum explained, he quickly learned that “Twitter is a bare-knuckle battleground.” A network of Turkish nationalists circulated screenshots of Nissenbaum’s online profile, overlaid with threats. Another person turned his picture into a mug shot and urged people in Istanbul to be on the lookout for “this son of a whore.” A popular Turkish newspaper editor, meanwhile, called for him to be deported. A friend quickly sent him a message warning him to check out the furor building online. After seeing the reaction, Nissenbaum took down his retweet, which had been up for just a few minutes. It was too late. As anger continued to swell, the Turkish government called his office, warning of unspecified “consequences.”
Those consequences soon arrived in the form of three Turkish police officers, who showed up at Nissenbaum’s apartment that night. They explained that he needed to pack a bag and come with them. There was no room for discussion. As he was driven away in a police van, Nissenbaum assumed he was going to be deported from the country. He grew alarmed when the van passed the airport and kept on going.
Nissenbaum was taken to a detention center, where he found himself strip-searched and thrown into a windowless isolation cell. For three days, he was denied all contact with the outside world. He played tic-tac-toe with himself and read the one book he’d been allowed to bring, a guide for new parents (he had just become a father).
And then, just as abruptly, he was yanked out of the jail, put in another van, and driven to a gas station parking lot. There waited his Wall Street Journal colleagues, who had been working around the clock to get him released. He didn’t waste any time. Within hours, he and his family were on a one-way flight out of Istanbul.
Afterward, Nissenbaum reflected on the experience. If he could turn back the clock, he admitted, he would do things differently. “The cost of the retweet was so high,” he said, “and the news value of putting it out was modest at best.” There was a broader lesson, he added. Social media was a “volatile political battleground.” What was said and shared—even a hasty retweet—carried “real-world consequences.”
In retrospect, Nissenbaum was lucky. As an American citizen, he had powerful advocates on his side. He also had the power to leave. For thousands of Turks jailed for online speech, as well as tens of thousands more “under investigation,” there was no such protection.
Nissenbaum’s story shows how the internet’s ultrafast and vast reach can spread information as never before. Yet it also shows how written (and unwritten) laws still vest immense power in government authorities, who determine the consequences for what is shared online.
Often, these restrictions are wrapped in the guise of religion or culture. But almost always they are really about protecting the government. Iran’s regime, for example, polices its “clean” internet for any threats to “public morality and chastity,” using such threats as a reason to arrest human rights activists. In Saudi Arabia, the harshest punishments are reserved for those who challenge the monarchy and the competence of the government. A man who mocked the king was sentenced to 8 years in prison, while a wheelchair-bound man was given 100 lashes and 18 months in jail for complaining about his medical care. In 2017, Pakistan became the first nation to sentence someone to death for online speech after a member of the Shia minority got in a Facebook argument with a government official posing as someone else.
Such codes are not limited to the Muslim world. Thailand has strictly enforced its law of “lèse majesté,” promising years of prison for anyone who insults a member of the royal family. The scope can be incredibly expansive. In 2017, unflattering photos of the king wearing a crop-top shirt and (especially) low-rise jeans appeared on Facebook. The government threatened to punish not just anyone who posted the images, but anyone who looked at them.
These regimes are also proactive in searching for online dissent. “We’ll send you a friend request,” a Thai government official explained. “If you accept the friend request, we’ll see if anyone disseminates [illegal] information. Be careful: we’ll soon be your friend.” The regime’s eyes are many, extending into the ranks of the very young. Since 2010, Thai police have administered a “Cyber Scouts” program for children, encouraging them to report on the online activity of friends and family—and promising $15 for each report of wrongdoing.
More than religion or culture, this new generation of censors relies on appeals to national strength and unity. Censorship is not for their sake, these leaders explain, but rather for the good of the country. A Kazakh visiting Russia and criticizing Russian president Vladimir Putin on his Facebook page was sentenced to three years in a penal colony for inciting “hatred.” A Russian woman who posted negative stories about the invasion of Ukraine was given 320 hours of hard labor for “discrediting the political order.”
The state can wield this power not only against users but also against the companies that run the networks. They may seem like faceless organizations, but there are real people behind them, who can be reached by the long arm of the law—or other means. VKontakte is the most popular social network in Russia. After anti-Putin protesters used VK in the wake of the Arab Spring, the regime began to take a greater interest in it and the company’s young, progressive-minded founder, Pavel Durov. When the man once known as “the Mark Zuckerberg of Russia” balked at sharing user data about his customers, armed men showed up at his apartment. He was then falsely accused of driving his Mercedes over a traffic cop’s foot, a ruse to imprison him. Getting the message, Durov sold his shares in the company to a Putin crony and fled the country.
Over time, such harsh policing of online speech actually becomes less necessary as self-censorship kicks in. Communications scholars call it the “spiral of silence.” Humans continually test their beliefs against those of the perceived majority and often quietly moderate their most extreme positions in order to get along better with society as a whole. By creating an atmosphere in which certain views are stigmatized, governments are able to shape what the majority opinion appears to be, which helps steer the direction of actual majority opinion.
Although plenty of dissenters still exist in authoritarian states, like those seeking to circumvent web bans and throttling, they now have to work harder. Their discussions have migrated from open (and easily monitored) social media platforms to secure websites and encrypted message applications, where only true believers can find them.
Yet there is more. Through the right balance of infrastructure control and enforcement, digital-age regimes can exert remarkable control over not just computer networks and human bodies, but the minds of their citizens as well. No nation has pursued this goal more vigorously—or successfully—than China.
“Across the Great Wall we can reach every corner in the world.”
So read the first email ever sent from the People’s Republic of China, zipping 4,500 miles from Beijing to Berlin. The year was 1987. Chinese scientists celebrated as their ancient nation officially joined the new global internet.
Other milestones soon followed. In 1994, China adopted the same TCP/IP system that powered the World Wide Web. Almost overnight, the dour research tool of Chinese scientists became a digital place, popping with colorful websites and images. Two years later, the internet was opened to Chinese citizens, not just research institutions. A trickle of new users turned into a flood. In 1996, there were just 40,000 Chinese online; by 1999, there were 4 million. In 2008, China passed the United States in number of active internet users: 253 million. Today, that figure has tripled again to nearly 800 million (over a quarter of all the world’s netizens), and, as we saw in chapter 2, they use some of the most vibrant and active forms of social media.
Yet it was also clear from the beginning that for the citizens of the People’s Republic of China, the internet would not be—could not be—the freewheeling, crypto-libertarian paradise pitched by its American inventors. China has remained a single, cohesive political entity for 4,000 years. The country’s modern history is defined by two critical periods: a century’s worth of embarrassment, invasion, and exploitation by outside nations, and a subsequent series of revolutions that unleashed a blend of communism and Chinese nationalism. For these reasons, Chinese authorities treasure harmony above all else. Harmony lies at the heart of China’s meteoric rise and remains the underlying political doctrine of the Chinese Communist Party (CCP), described by former president Hu Jintao as the creation of a “harmonious society.” Dissent, on the other hand, is viewed as only harmful to the nation, leaving it again vulnerable to the machinations of foreign powers.
Controlling ideas online has thus always been viewed as a vital, even natural, duty of the Chinese state. Unity must be maintained; harmful ideas must be stamped out. Yuan Zhifa, a former senior government propagandist, described this philosophy in 2007. “The things of the world must have cadence,” he explained. His choice of words was important. Subtly different from “censorship,” “cadence” means managing the “correct guidance of public opinion.”
From the beginning, the CCP made sure that the reins of the internet would stay in government hands. In 1993, when the network began to be seen as something potentially important, officials banned all international connections that did not pass through a handful of state-run telecommunications companies. The Ministry of Public Security was soon tasked with blocking the transmission of all “subversive” or “obscene” information, working hand in hand with network administrators. In contrast to the chaotic web of international connections emerging in the rest of the globe, the Chinese internet became a closed system. Although Chinese internet users could build their own websites and freely communicate with other users inside China, only a few closely scrutinized strands of cable connected them to the wider world. Far from surmounting the Great Wall, the “Chinese internet” had become defined by a new barrier: the Great Firewall.
Chinese authorities also sought to control information within the nation. In 1998, China formally launched its Golden Shield Project, a feat of digital engineering on a par with mighty physical creations like the Three Gorges Dam. The intent was to transform the Chinese internet into the largest surveillance network in history—a database with records of every citizen, an army of censors and internet police, and automated systems to track and control every piece of information transmitted over the web. The project cost billions of dollars and employed tens of thousands of workers. Its development continues to this day. Notably, the design and construction of some of the key components of this internal internet were outsourced to American companies—particularly Sun Microsystems and Cisco—which provided the experience gained from building vast, closed networks for major businesses.
The most prominent part of the Golden Shield Project is its system of keyword filtering. Should a word or phrase be added to the list of banned terms, it effectively ceases to be. As Chinese internet users leapt from early, static websites to early-2000s blogging platforms to the rise of massive “microblogging” social media services starting in 2009, this system kept pace. Today, it is as if a government censor looms over the shoulder of every citizen with a computer or smartphone. Web searches won’t find prohibited results; messages with banned words will simply fail to reach the intended recipient. As the list of banned terms updates in real time, events that happen on the rest of the worldwide web simply never occur inside China.
In 2016, for instance, the so-called Panama Papers were dumped online and quickly propelled to virality. The documents contained 2.6 terabytes of once-secret information on offshore bank accounts used by global elites to hide their money—a powerful instance of the internet’s radical transparency in action. Among the disclosures were records showing that the families of eight senior CCP leaders, including the brother-in-law of President Xi Jinping, were funneling tens of millions of dollars out of China through offshore shell companies.
The information in all its details was available for anyone online—unless you lived in China. As soon as the news broke, an urgent “Delete Report” was dispatched by the central State Council Internet Information Office. “Find and delete reprinted reports on the Panama Papers,” the order read. “Do not follow up on related content, no exceptions. If material from foreign media attacking China is found on any website, it will be dealt with severely.” With that, the Panama Papers and the information in them were rendered inaccessible to all Chinese netizens. For a time, the entire nation of Panama disappeared from search results in China, until censors modulated the ban to delete a post only if it contained “Panama” alongside leaders’ names or related terms like “offshore.”
So ubiquitous is the filter that it has spawned a wave of surreal wordplay to try to get around it. For years, Chinese internet users referred to “censorship” as “harmony”—a coy reference to Hu Jintao’s “harmonious society.” To censor a term, they’d say, was to “harmonize” it. Eventually, the censors caught on and banned the use of the word “harmony.” As it happens, however, the Chinese word for “harmony” sounds similar to the word for “river crab.” When a word had been censored, savvy Chinese internet users then took to calling it “river crab’d.” And, as social media has become more visual, the back-and-forth expanded to image blocks. In 2017, the lovable bear Winnie-the-Pooh was disappeared from the Chinese internet. Censors figured out “Pooh” was a reference to President Xi, as he walks with a similar waddle.
History itself (or rather people’s knowledge and awareness of it) can also be changed through this filtering, known as the “cleanse the web” policy. Billions of old internet postings have been wiped from existence, targeting anything from the past that fails to conform to the regime’s “harmonious” history. Momentous events like the 1989 Tiananmen Square protests have been erased through elimination of nearly 300 “dangerous” words and phrases. Baidu Baike, China’s equivalent of Wikipedia, turns up only two responses to a search on “1989”: “the number between 1988 and 1990” and “the name of a computer virus.” The result is a collective amnesia: an entire generation ignorant of key moments in the past and unable to search out more information if they ever do become aware.
Chinese censorship extends beyond clearly political topics to complaints that can be seen as challenging the state in any way. In 2017, a man in Handan was arrested for “disturbing public order” after he posted a negative comment online about hospital food.
As we’ve seen, many nations muzzle online discussion. But there is a key difference in China: the content of the story is sometimes irrelevant to the perceived crime. Unlike in other states where the focus is on banning discourse on human rights or calls for democratization, Chinese censorship seeks to suppress any messages that receive too much grassroots support, even if they’re apolitical—or even complimentary to the authorities. For example, what seemed like positive news of an environmental activist who built a mass movement to ban plastic bags was harshly censored, even though the activist started out with support from local government officials. In a truly “harmonious society,” only the central government in Beijing should have the power to inspire and mobilize on such a scale. Spontaneous online movements challenge the state’s authority—and, by extension, the unity of the Chinese people. Or, as China’s state media explained, “It’s not true that ‘everyone is entitled to their own opinion.’”
From the first days of the Chinese internet, authorities have ruled that websites and social media services bear the legal responsibility to squelch any “subversive” content hosted on their networks. The definition of this term can shift suddenly. Following a spate of corruption scandals in 2016, for instance, the government simply banned all online news reporting that did not originate with state media. It became the duty of individual websites to eliminate such stories or suffer the consequences.
Ultimately, however, the greatest burden falls on individual Chinese citizens. Although China saw the emergence of an independent blogging community in the early 2000s, the situation abruptly reversed in 2013 with the ascendancy of President Xi Jinping. That year, China’s top court ruled that individuals could be charged with defamation (and risk a three-year prison sentence) if they spread “online rumors” seen by 5,000 internet users or shared more than 500 times. Around the same time, China’s most popular online personalities were “invited” to a mandatory conference in Beijing. They received notebooks stamped with the logo of China’s internet security agency and were treated to a slide show presentation. It showed how much happier a blogger had become after he’d switched from writing about politics to exploring more “appropriate” subjects, like hotel reviews and fashion. The message was clear: Join us or else.
The government soon took an even harder line. Charles Xue, a popular Chinese American blogger and venture capitalist, was arrested under suspicious circumstances. He appeared on state television in handcuffs shortly afterward, denouncing his blogging past and arguing for state control of the internet. “I got used to my influence online and the power of my personal opinions,” he explained. “I forgot who I am.”
The pace of internet-related detentions soon spiked dramatically. Since Xi came to power, tens of thousands of Chinese citizens have been charged with “cybercrimes,” whose definition has expanded from hacking to pretty much anything digital that authorities don’t like. In 2017, for instance, Chinese regulators determined that the creator of a WeChat discussion group was responsible not just for their own speech, but also for the speech of each group member.
In China, it’s not enough simply to suppress public opinion; the state must also take an active hand in shaping it. Since 2004, China’s provincial ministries have mobilized armies of bureaucrats and college students in publishing positive stories about the government. As a leaked government memo explained, the purpose of these commenters is to “promote unity and stability through positive publicity.” In short, their job is to act as cheerleaders, presenting an unrelentingly positive view of China, and looking like real people as they do so.
Where this phalanx of internet commenters differs from a traditional crowdsourcing network is the level of organization that comes from a state bureaucracy, boasting its own pay scales, quotas, and guidelines, as well as examinations and official job certifications. Critics quickly labeled these commenters the “50-Cent Army,” for the 50 Chinese cents they were rumored to be paid for each post. (Eventually, China would simply ban the term “50 cents” from social media entirely.) One early advertisement for the 50-Cent Army promised that “performance, based on the number of posts and replies, will be considered for awards in municipal publicity work.” By 2008, the 50-Cent Army had swelled to roughly 280,000 members. Today, there are as many as 2 million members, churning out at least 500 million social media postings each year. This model of mass, organized online positivity has grown so successful and popular that many members no longer have to be paid. It has also been mimicked by all sorts of other organizations in China, from public relations companies to middle schools.
All of these firewalls, surveillance, keyword censorship, arrests, and crowdsourced propagandists are intended to merge the consciousness of 1.4 billion people with the consciousness of the state. While some may see it as Orwellian, it actually has more in common with what China’s Communist Party founder, Mao Zedong, described as the “mass line.” When Mao broke with the Soviet Union in the 1950s, he criticized Joseph Stalin and the Soviet version of communism for being too concerned with “individualism.” Instead, Mao envisioned a political cycle in which the will of the masses would be refracted through the lens of Marxism and then shaped into policy, only to be returned to the people for further refinement. Through this process, diverse opinions would be hammered into a single vision, shared by all Chinese people. The reality proved more difficult to achieve, and, indeed, such thinking was blamed for the Cultural Revolution that purged millions through the 1960s and 1970s, until it was repudiated after Mao’s death in 1976.
Through the possibilities offered by the Chinese internet, this mass-line philosophy has made a comeback. President Xi Jinping has lauded these new technologies for offering the realization of Mao’s vision of “condensing” public opinion into one powerful consensus.
To achieve this goal, even stronger programs of control lurk on the horizon. In the restive Muslim-minority region of Xinjiang, residents have been forced to install the Jingwang (web-cleansing) app on their smartphones. The app not only allows their messages to be tracked or blocked, but it also comes with a remote-control feature, allowing authorities direct access to residents’ phones and home networks. To ensure that people were installing these “electronic handcuffs,” the police have set up roving checkpoints in the streets to inspect people’s phones for the app.
The most ambitious realization of the mass line, though, is China’s “social credit” system. Unveiled in 2015, the vision document for the system explains how it will create an “upward, charitable, sincere and mutually helpful social atmosphere”—one characterized by unwavering loyalty to the state. To accomplish this goal, all Chinese citizens will receive a numerical score reflecting their “trustworthiness . . . in all facets of life, from business deals to social behavior.”
Much like a traditional financial credit score, each citizen’s “social credit” is calculated by compiling vast quantities of personal information and computing a single “trustworthiness” score, which measures, essentially, someone’s usefulness to society. This is possible thanks to Chinese citizens’ near-universal reliance on mobile services like WeChat, in which social networking, chatting, consumer reviews, money transfers, and everyday tasks such as ordering a taxi or food delivery are all handled by one application. In the process, users reveal a staggering amount about themselves—their conversations, friends, reading lists, travel, spending habits, and so forth. These bits of data can form the basis of sweeping moral judgments. Buying too many video games, a program director explained, might suggest idleness and lower a person’s score. On the other hand, regularly buying diapers might suggest recent parenthood, a strong indication of social value. And, of course, one’s political proclivities also play a role. The more “positive” one’s online contributions to China’s cohesion, the better one’s score will be. By contrast, a person who voices dissent online “breaks social trust,” thus lowering their score.
In an Orwellian twist, the system’s planning document also explains that the “new system will reward those who report acts of breach of trust.” That is, if you report others for bad behavior, your score goes up. Your score also depends on the scores of your friends and family. If they aren’t positive enough, you get penalized for their negativity, thus motivating everyone to shape the behavior of the members of their social network.
What gives the trustworthiness score its power is the rewards and risks, both real and perceived, that underpin it. Slated for deployment throughout China in 2020, the scoring system is already used in job application evaluations as well as in doling out micro-rewards, like free phone charging at coffee shops for people with good scores. If your score is too low, however, you can lose access to anything from reserved beds on overnight trains to welfare benefits. The score has even been woven into China’s largest online matchmaking service. Value in the eyes of the Chinese government thus will also shape citizens’ romantic and reproductive prospects.
Luckily, no other nation has enjoyed China’s level of success in subordinating the internet to the will of the state, because of both its head start and its massive scale of investment. But other nations are certainly jealous. The governments of Thailand, Vietnam, Zimbabwe, and Cuba have all reportedly explored establishing a Chinese-style internet of their own. Russian president Vladimir Putin has even gone so far as to sign a pact calling for experienced Chinese censors to instruct Russian engineers on building advanced web control mechanisms. Just as U.S. tech companies once helped China erect its Great Firewall, so China has begun to export its hard-won censorship lessons to the rest of the world.
Programs like these make it clear that the internet has not loosened the grip of authoritarian regimes. Instead, it has become a new tool for maintaining their power. Sometimes, this occurs through visible controls on physical hardware or the people using it. Other times, it happens through sophisticated social engineering behind the scenes. Both build toward the same result: controlling the information and controlling the people.
Yet the web has also given authoritarians a tool that has never before existed. In a networked world, they can extend their reach across borders to influence the citizens of other nations just as easily as their own.
This is a form of censorship that hardly seems like censorship at all.
“It was difficult to get used to at first,” the young man confessed. “Why was I sitting in a stuffy office for eight hours a day, doing what I did? But I was tempted by easy work and good money.”
On the surface, his story is familiar. A philosophy major in college, he was short on job options and found himself sucked into the corporate grind. But this young man didn’t become a bored paralegal or a restless accountant. Instead, his job was causing chaos on the internet, to the benefit of the Russian government. He did this by writing more than 200 blog posts and comments each day, assuming fake identities, hijacking conversations, and spreading lies. He joined a war of global censorship by means of disinformation.
It is not surprising that Russia would pioneer this strategy. From its birth, the Soviet Union relied on the clever manipulation and weaponization of falsehood (called dezinformatsiya), both to wage ideological battles abroad and to control its population at home. One story tells how, when a forerunner of the KGB set up an office in 1923 to harness the power of dezinformatsiya, it coined the word “disinformation” to sound as if it were of French origin, so the practice could be passed off as a Western invention. In this way, even the origin of the term was buried in half-truths.
During the Cold War, the Soviet Union turned disinformation into an assembly-line process. By one count, the KGB and its allied agencies conducted more than 10,000 disinformation operations. These ranged from creating front groups and media outlets that tried to amplify political divisions in the West, to spreading fake stories and conspiracy theories to undermine and discredit the Soviet Union’s foes.
These operations often used “black propaganda,” in which made-up sources cleverly laundered made-up facts. Perhaps the most notorious was Operation INFEKTION, the claim that the U.S. military invented AIDS, a lie that echoes through the internet to this day. The campaign began in 1983, launched via an article the KGB planted in the Indian newspaper Patriot, which itself had been created as a KGB front in 1967. Its purported author was presented as a “well-known American scientist and anthropologist.” It was given further academic validation by another article in which two East Germans posed as French scientists and confirmed the findings reported in the fake article by the fake author. This subsequent article was the subject of no fewer than forty reports in Soviet newspapers, magazines, and radio and television broadcasts. At this point, the reports began to be distributed into the West through pro-Soviet, left-leaning media outlets and extreme right-wing ones prone to conspiracy theories (such as the fringe Lyndon LaRouche movement). The operation was a remarkable success, but it took four years to reach fruition.
The fall of the Soviet Union brought a seeming end to such initiatives. In article 29 of its newly democratic constitution, the Russian Federation sought to close the door on the era of state-controlled media and shadowy propaganda campaigns. “Everyone shall have the right to freely look for, receive, transmit, produce and distribute information by any legal way,” the document declared.
In reality, the Cold War’s end didn’t mean the end of disinformation. With new means of dissemination via social media, the prospect of spreading lies became all the more attractive, especially after the ascension of Vladimir Putin, a former KGB officer once steeped in them.
By way of crony capitalism and forced buyouts, Russia’s large media networks soon lay in the hands of oligarchs, whose finances are deeply intertwined with those of the state. Today, the Kremlin makes its positions known through press releases and private conversations, the contents of which are then dutifully reported to the Russian people, no matter how much spin it takes to make them credible.
Of course, this modern spin differs considerably from the propaganda of generations past. In the words of The Economist, old Soviet propagandists “spoke in grave, deliberate tones, drawing on the party’s lifelong wisdom and experience.” By contrast, the new propaganda is colorful and exciting, reflecting the tastes of the digital age. It is a cocktail of moralizing, angry diatribes, and a celebration of traditional values, constantly mixed with images of scantily clad women. A pop star garbed like a teacher in a porn video sings that “freedom, money and girls—even power” are the rewards for living a less radical lifestyle, while a rapper decries human rights protesters as “rich brats.” Running through it all is a constant drumbeat of anxiety about terrorism, the CIA, and the great specter of the West. Vladimir Milov, a former Russian energy minister turned government critic, explained it best. “Imagine you have two dozen TV channels,” he said, “and it is all Fox News.”
Milov’s freedom to say this, though, shows another twist on the traditional model. Unlike the Soviet Union of the past, or how China and many other regimes operate today, Russia doesn’t prevent political opposition. Indeed, opposition makes things more interesting—just so long as it abides by the unspoken rules of the game. A good opponent for the government is a man like Vladimir Zhirinovsky, an army colonel who premised his political movement on free vodka for men and better underwear for women. He once proposed beating the bird flu epidemic by shooting all the birds from the sky. Zhirinovsky was entertaining, but he also made Putin seem more sensible in comparison. By contrast, Boris Nemtsov was not a “good” opponent. He argued for government reform, investigated charges of corruption, and organized mass protests. In 2015, he was murdered, shot four times in the back as he crossed a bridge. The government prefers caricatures to real threats. Nemtsov was one of at least thirty-eight prominent opponents of Putin who died under dubious circumstances between 2014 and 2017 alone, from radioactive poisonings to tumbling down an elevator shaft.
Dissent is similarly allowed among the few journalists at news outlets independent of the state, but again, only within certain boundaries. Those who become too vocal or popular will experience a backlash. It might be through low-level harassment to make their life gratingly tenuous (such as by raising their taxes or instructing their landlord to suddenly break their lease). Or it might be through disinformation efforts to undermine their reputation. A favorite tactic is the state-linked media accusing them of being terrorists or arranging “scandals” using kompromat, a tactic whereby compromising material, like a sex tape, is dumped online. There are also more forceful methods of ensuring silence. Since Putin consolidated power in 1999, dozens of independent journalists have been killed under circumstances as suspicious as those that have befallen his political opponents.
The outcome has been an illusion of free speech within a newfangled Potemkin village. “The Kremlin’s idea is to own all forms of political discourse, to not let any independent movements develop outside its walls,” writes Peter Pomerantsev, author of Nothing Is True and Everything Is Possible. “Moscow can feel like an oligarchy in the morning and a democracy in the afternoon, a monarchy for dinner and a totalitarian state by bedtime.”
But importantly, the village’s border no longer stops at Russia’s frontier. After the color revolutions roiled Eastern Europe and the Arab Spring swept the Middle East, a similar wave of enthusiasm in late 2011 inspired tens of thousands of young Russians to take to the streets, mounting the most serious protests of Putin’s reign. Perceiving the combined forces of liberalization and internet-enabled activism as an engineered attack by the West, the Russian government resolved to fight back.
The aim of Russia’s new strategy, and its military essence, was best articulated by Valery Gerasimov, the country’s top-ranking general at the time. He channeled Clausewitz, declaring in a speech reprinted in the Russian military’s newspaper that “the role of nonmilitary means of achieving political and strategic goals has grown. In many cases, they have exceeded the power of force of weapons in their effectiveness.” In contrast to the haphazard way that Western governments have conceived of the modern information battlefield, Gerasimov proposed restructuring elements of the Russian state to take advantage of the “wide asymmetrical possibilities” that the internet offered.
These observations, popularly known as the Gerasimov Doctrine, have been enshrined in Russian military theory, even formally written into the nation’s military strategy in 2014. Importantly, Russian theorists saw this as a fundamentally defensive strategy—essentially a “war on information warfare against Russia.”
Such a power for Russia would arise only through strategic investment and organization, a stark contrast to the Western assumption that what happens on the internet is inherently chaotic and “organic.” A conglomerate of nearly seventy-five education and research institutions was devoted to the study and weaponization of information, coordinated by the Federal Security Service, the successor of the KGB. It was a radical new way to think about conflict (and one we’ll return to in chapter 7), premised on defanging adversaries abroad before they are able to threaten Russia at home. Ben Nimmo, who has studied this issue for NATO and the Atlantic Council, has described the resultant strategy as the “4 Ds”: dismiss the critic, distort the facts, distract from the main issue, and dismay the audience. Just as Western radio and television signals once ranged into the Soviet Union, Russian propagandists began to return the favor—with interest.
The most visible vehicle for this effort is Rossiya Segodnya (Russia Today, or RT), a state news agency founded in 2005 with the declared intention of sharing Russia with the world. Initially, it was a fairly boring, traditional broadcasting outlet. But when Russia reforged its information warfare strategy, the organization’s identity and mission shifted. Today, RT is a glitzy and contrarian media empire, whose motto can be found emblazoned everywhere from Moscow’s airport to bus stops adjacent to the White House: “Question More.”
RT was originally launched with a Russian government budget of $30 million per year in 2005. By 2015, the budget had jumped to approximately $400 million, an investment more in line with the Russian view of the outlet as a “weapons system” of influence. That support, and the fact that its long-serving editor in chief, Margarita Simonyan, simultaneously worked on Putin’s election team, belies any claims of RT’s independence from the Russian government. Indeed, on her desk sits a yellow landline phone with no dial or buttons—a direct line to the Kremlin. When asked its purpose, she answered, “The phone exists to discuss secret things.”
The reach of the RT network is impressive, broadcasting across the world in English, Arabic, French, and Spanish. Its online reach is even more extensive, pushing out digital content in these four languages plus Russian and German. RT is also popular; it has more YouTube subscribers than any other broadcaster, including the BBC and Fox News.
The network’s goal is no longer sharing Russia with the world, but rather showing why all the other countries are wrong. It does so by publishing harsh, often mocking stories about Russia’s political opponents, along with attention-grabbing pieces designed to support and mobilize divisive forces inside nations Russia views as its adversaries (such as nationalist parties in Europe or the Green Party and the extreme right wing in the United States). Any content that grabs eyeballs and sows doubt represents a job well done. Snarky videos designed to go viral (Animated Genitals and Lawnmower Explodes were major hits) are intermingled with eye-popping conspiracy theories (RT has promoted everything from Trump’s “birther” claims about Barack Obama to regular reporting on UFO sightings). As Matt Armstrong, a former member of the U.S. Broadcasting Board of Governors, has explained, “‘Question More’ is not about finding answers, but fomenting confusion, chaos, and distrust. They spin up their audience to chase myths, believe in fantasies, and listen to faux . . . ‘experts’ until the audience simply tunes out.”
After RT’s initial success, a supplementary constellation of Russian government–owned or co-opted outlets was organized, allowing stories and scoops to be shared from one mouthpiece to the next, building more and more online momentum. Sputnik International is a “news service” modeled on savvy web outlets like BuzzFeed, claiming to “[cover] over 130 cities and 34 countries.” Meanwhile, the news service Baltica targets audiences in the Baltic (and NATO-member) nations of Estonia, Latvia, and Lithuania. These well-funded Russian propaganda outlets can often outgun and overwhelm their local media competitors.
This modern network of disinformation can quickly rocket a falsehood around the world. In 2017, for instance, the U.S. Army announced that it would be conducting a training exercise in Europe involving 87 tanks. That nugget of truth was transformed into an online article headlined “US Sends 3,600 Tanks Against Russia—Massive NATO Deployment Under Way.” The first source of this false report was Donbass News International (DNI), the official media of the unofficial Russian separatist parts of Ukraine. That DNI Facebook page article was then distributed through nineteen different outlets, ranging from a Norwegian communist news aggregator to far-left activist websites to seemingly reputable outlets like the “Centre for Research on Globalization.” However, the “Centre” was actually an online distribution point for conspiracy theories on everything from “chemtrails” (the idea that the air is secretly being poisoned by mysterious aircraft) to claims that Hillary Clinton was behind a pedophile ring at a Washington, DC, pizzeria. That second cascade of reports was read by tens of thousands. The reports were then used as inspiration for further reporting, under different titles, by official Russian media like RT, which extended the story’s reach by orders of magnitude more.
This was exactly how Operation INFEKTION worked during the Cold War, except for two key differences. Through the web, a process that once took four years now takes mere hours, and it reaches millions more people.
The strategy also works to blunt the impact of any news that is harmful to Russia, spinning up false and salacious headlines to crowd out the genuine ones. Recall how Eliot Higgins and Bellingcat pierced the fog of war surrounding the crash of flight MH17, compiling open-source data to show—beyond a reasonable doubt—that Russia had supplied and manned the surface-to-air missile launcher that stole 298 lives. The first response from Russia was a blanket denial of any role in the tragedy, accompanied by an all-out assault on the Wikipedia page that had been created for the MH17 investigation, seeking to erase any mention of Russia. Then came a series of alternative explanations pushed out by the official media network, echoed by allies across the internet. First the Ukrainian government was to blame. Then the Malaysian airline was at fault. (“Questions over Why Malaysia Plane Flew over Ukrainian Warzone,” one headline read, even though the plane flew on an internationally approved route.) And then it was time to play the victim, claiming Russia was being targeted by a Western smear campaign.
Mounting evidence of Russia’s involvement in the shootdown proved little deterrent. Shortly after the release of the Bellingcat exposé showing who had shot the missiles, Russian media breathlessly announced that, actually, a newfound satellite image showed the final seconds of MH17. Furthermore, it could be trusted, as the image had both originated with the Russian Union of Engineers and been confirmed by an independent expert.
The photo was indeed remarkable, showing a Ukrainian fighter jet in the act of firing at the doomed airliner. It was a literal smoking gun.
It was also a clear forgery. The photo’s background revealed it had been stitched together from multiple satellite images. It also pictured the wrong type of attack aircraft, while the airliner said to be MH17 was just a bad photoshop job. Then it turned out the engineering expert validating it did not actually have an engineering degree. The head of the Russian Union of Engineers, meanwhile, explained where he’d found it: “It came from the internet.”
All told, Russian media and proxies spun at least a half dozen theories regarding the MH17 tragedy. It hardly mattered that these narratives often invalidated each other. (In addition to the fake fighter jet photos, another set of doctored satellite images and videos claimed to show it hadn’t been a Russian, but rather a Ukrainian, surface-to-air missile launcher in the vicinity of the shootdown, meaning now the airliner had somehow been shot down from both above and below.) The point of this barrage was to instill doubt—to make people wonder how, with so many conflicting stories, one could be more “right” than any other.
It is a style of censorship akin to the twist in Edgar Allan Poe’s “The Purloined Letter.” In the famous short story, Parisian police hunt high and low for a letter of blackmail that they know to be in their suspect’s possession. They comb his apartment for months, searching under the floorboards and examining the joints of every piece of furniture; they probe each cushion and even search the moss between the bricks of the patio. Yet they come up empty-handed. In desperation, they turn to an amateur detective, C. Auguste Dupin, who visits the suspect’s apartment and engages him in pleasant conversation. When the suspect is distracted, Dupin investigates a writing desk strewn with papers—and promptly finds the missing letter among the suspect’s other mail. The very best way to hide something, Dupin explains to the shocked police, is to do so in plain sight. So, too, is it with modern censorship. Instead of trying to hide information from prying eyes, it remains in the open, buried under a horde of half-truths and imitations.
Yet, for all the noise generated by Russia’s global media network of digital disinformation sites, there’s an even more effective, parallel effort that lurks in the shadows. Known as “web brigades,” this effort entails a vast online army of paid commenters (among them our charming philosophy major) who push the campaign through individual social media accounts. Unlike the 50-Cent Army of China, however, the Russian version isn’t tasked with spreading positivity. In the words of our philosophy student’s boss, his job was to sow “civil unrest” among Russia’s foes. “This is information war, and it’s official.”
While these activities have gained much attention for their role in the 2016 U.S. presidential election and the UK’s Brexit vote the same year, Russia’s web brigades actually originated almost a decade earlier in a pro-Kremlin youth group known as Nashi. When government authorities (firmly in control of traditional media) struggled to halt the fierce democratic activism spreading through Russian social media circles after the color revolutions and Arab Spring, the group stepped in to pick up the slack, praising Putin and trashing his opponents. The Kremlin, impressed with these patriotic volunteers, used the engine of capitalism to accelerate the process. It solicited Russian advertisers to see if they could offer the same services, dangling fat contracts as a reward. Nearly a dozen major companies obliged. And so the “troll factories” were born. (In 2018, several Russian oligarchs associated with these companies would be indicted by Special Counsel Robert Mueller in his investigation of Russian interference in the U.S. election.)
Each day, our hapless Russian philosophy major and hundreds of other young hipsters would arrive for work at organizations like the innocuously named Internet Research Agency, located in an ugly neo-Stalinist building in St. Petersburg’s Primorsky District. They’d settle into their cramped cubicles and get down to business, assuming a series of fake identities known as “sockpuppets.” The job was writing hundreds of social media posts per day, with the goal of hijacking conversations and spreading lies, all to the benefit of the Russian government. For this work, our philosophy major was paid the equivalent of $1,500 per month. (Those who worked on the “Facebook desk” targeting foreign audiences received double the pay of those targeting domestic audiences.) “I really only stayed in the job for that,” he explained. “I bought myself a Mazda Six during my time there.”
Like any job, that of being a government troll comes with certain expectations. According to documents leaked in 2014, each employee is required, during an average twelve-hour day, to “post on news articles 50 times. Each blogger is to maintain six Facebook accounts publishing at least three posts a day and discussing the news in groups at least twice a day. By the end of the first month, they are expected to have won 500 subscribers and get at least five posts on each item a day. On Twitter, they might be expected to manage 10 accounts with up to 2,000 followers and tweet 50 times a day.”
The hard work of a sockpuppet takes three forms, best illustrated by how they operated during the 2016 U.S. election. One is to pose as the organizer of a trusted group. @Ten_GOP called itself the “unofficial Twitter account of Tennessee Republicans” and was followed by over 136,000 people (ten times as many as the official Tennessee Republican Party account). Its 3,107 messages were retweeted 1,213,506 times. Each retweet then spread to millions more users, especially when it was disseminated by prominent Trump campaign figures like Donald Trump Jr., Kellyanne Conway, and Michael Flynn. On Election Day 2016, it was the seventh most retweeted account across all of Twitter. Indeed, Flynn followed at least five such documented accounts, sharing Russian propaganda with his 100,000 followers at least twenty-five times.
The second sockpuppet tactic is to pose as a trusted news source. With a cover photo image of the U.S. Constitution, @tpartynews presented itself as a hub for conservative fans of the Tea Party to track the latest headlines. For months, the Russian front pushed out anti-immigrant and pro-Trump messages and was followed and echoed out by some 22,000 people, including Trump’s controversial advisor Sebastian Gorka.
Finally, sockpuppets pose as seemingly trustworthy individuals: a grandmother, a blue-collar worker from the Midwest, a decorated veteran, providing their own heartfelt take on current events (and who to vote for). Another former employee of the Internet Research Agency, Alan Baskayev, admitted that it could be exhausting to manage so many identities. “First you had to be a redneck from Kentucky, then you had to be some white guy from Minnesota who worked all his life, paid taxes and now lives in poverty; and in 15 minutes you have to write something in the slang of [African] Americans from New York.” Baskayev waxed philosophic about his role in American politics. “It was real postmodernism. Postmodernism, Dadaism and Sur[realism].”
Yet, far from being postmodern, sockpuppets actually followed the example of classic Cold War “active measures” by targeting the extremes of both sides of American politics during the 2016 election. The fake accounts posed as everything from right-leaning Tea Party activists to “Blacktivist,” who urged those on the left to “choose peace and vote for Jill Stein. Trust me, it’s not a wasted vote.” A purported African American organizer, Blacktivist, was actually one of those Russian hipsters sitting in St. Petersburg, whose Facebook posts would be shared an astounding 103.8 million times before the company shut the account down after the election.
By cleverly leveraging readers’ trust, these engineers of disinformation induced thousands—sometimes millions—of people each day to take their messages seriously and spread them across their own networks via “shares” and retweets. This sharing made the messages seem even more trustworthy, since they now bore the imprimatur of whoever shared them, be it a distinguished general or a family friend. As the Russians moved into direct advertising, this tactic enabled them to achieve an efficiency that digital marketing firms would kill for. According to a dataset of 2016 Facebook advertisements purchased by Russian proxies, the messages received engagement rates as high as 24 percent—far beyond the single digits to which marketing firms usually aspire.
The impact of the operation was further magnified by how efforts on one social media platform could complement (and amplify) those on another. Russian sockpuppets ran rampant on services like Instagram, an image-sharing platform with over 800 million users (larger than Twitter and Snapchat combined) and more popular among youth than its Facebook corporate parent. Here, the pictorial nature of Instagram made the disinformation even more readily shareable and reproducible. In 2017, data scientist Jonathan Albright conducted a study of just twenty-eight accounts identified as having been operated by the Russian government. He found that this handful of accounts had drawn an astounding 145 million “likes,” comments, and plays of their embedded videos. They’d also provided the visual ammunition subsequently used by other trolls who stalked Facebook and Twitter.
These messages gained even greater power as they reached beyond social media, taking advantage of how professional news outlets—feeling besieged by social media—had begun embedding the posts of online “influencers” in their own news stories. In this, perhaps no one matched the success of @Jenn_Abrams. A sassy American teen who commented on everything from Kim Kardashian’s clothes to the need to support Donald Trump, her account amassed nearly 70,000 Twitter followers. That was impressive, but not nearly as impressive as the ripple effect of her media efforts. “Jenn” was quoted in articles in the BBC News, BET, Breitbart, Business Insider, BuzzFeed, CNN, The Daily Caller, The Daily Dot, the Daily Mail, Dallas News, Fox News, France24, Gizmodo, HuffPost, IJR, the Independent, Infowars, Mashable, the National Post, the New York Daily News, the New York Times, The Observer, Quartz, Refinery29, Sky News, the Times of India, The Telegraph, USA Today, U.S. News and World Report, the Washington Post, Yahoo Sports, and (unsurprisingly) Russia Today and Sputnik. Each of these articles was then read and reacted to, spreading her views even further and wider. In 2017, “Jenn” was outed by Twitter as yet another creation of Russia’s Internet Research Agency.
The Russian effort even turned the social media firms’ own corporate strategies against their customers. As a way to draw users deeper into its network, Facebook automatically steered people to join groups, where they could find new friends who “share their common interests and express their opinion.” The Russian sockpuppets learned to create and then manipulate these online gatherings. One of the more successful was Secured Borders, an anti–Hillary Clinton Facebook group that totaled over 140,000 subscribers. It, too, was actually run out of the St. Petersburg office of the Internet Research Agency. By combining online circulation with heavy ad buys, just one of its posts reached 4 million people on Facebook and was “liked” more than 300,000 times.
Much like the harassment campaigns inside Russia, sockpuppets also targeted Putin critics abroad. The most extreme efforts were reserved for those who investigated the disinformation campaigns themselves. After journalist Jessikka Aro published an exposé of the fake accounts, sockpuppets attacked her with everything from posts claiming she was a Nazi and drug dealer to messages pretending to be from her father, who had died twenty years earlier. When another group of Western foreign affairs specialists began to research the mechanics of disinformation campaigns, they found themselves quickly savaged on the professional networking site LinkedIn. One was labeled a “pornographer,” and another was accused of harassment. Such attacks can be doubly effective, not only silencing the direct targets but also discouraging others from doing the sort of work that earned such abuse.
While the sockpuppets were extremely active in the 2016 election, it was far from their only campaign. In 2017, data scientists searched for patterns in accounts that were pushing the theme of #UniteTheRight, the far-right protests that culminated in the killing of a young woman in Charlottesville, Virginia, by a neo-Nazi. The researchers discovered that one key account in spreading the messages of hate came to life each day at 8:00 A.M. Moscow time. Realizing they’d unearthed a Russian sockpuppet, they dug into its activities before the Charlottesville protests. For four years, it had posted around a hundred tweets a day, more than 130,000 messages in all. At first, the chief focus was support for UKIP, a far-right British party. Then it shifted to pushing Russia’s stance on the Ukraine conflict. Then it pivoted to a pro-Brexit stance, followed by support for Trump’s candidacy. After his election, it switched to white nationalist “free speech” protests. The efforts of these networks continue to this day, ever seeking to sow anger and division within Russia’s foes.
Indeed, a full three years after the flight MH17 tragedy, we tested the strength of the Russian disinformation machine for ourselves by setting what’s known as a “honeypot.” The term traditionally referred to a lure—in fiction, usually a sexy female agent—which enemy operatives couldn’t resist. Think Vesper Lynd’s seduction of James Bond in Casino Royale, or her real-life counterpart, Anna Chapman, the redheaded Russian agent who worked undercover in New York and then, after she was caught by the FBI and deported back to Russia, began a second career as a Facebook lingerie model. We posted something even more enticing on Twitter: one of Bellingcat’s reports. Within minutes, an account we’d had no prior link with reached out, inundating us with images disputing the report as “#Bellingcrap.” The account’s history showed it, day after day, arguing against Russia’s role in MH17, while occasionally mixing things up with anti-Ukrainian conspiracy theories and tweets in support of far-right U.S. political figures. In trying to persuade us, our new online friend had instead provided a window into a fight over “truth” that will likely continue to rage for as long as the internet exists.
Success breeds imitators. Just as some nations have begun to study China’s internet engineering, many others are copying Russian techniques. In Venezuela, the nominally elected “president,” Nicolas Maduro, enjoys an online cult of personality in which loyal (and paid) supporters quickly suppress critical headlines. In Azerbaijan, “patriotic trolls” launch coordinated attacks to discredit pro-democratic campaigners. Even in democratic India, rumors fly of shadowy online organizations that exist to defend the party of Prime Minister Narendra Modi. They applaud each new government policy and circulate “hit lists” to dig up dirt on opponents and pressure them into silence. If no incriminating material exists, they simply invent it.
A 2017 study from Oxford University’s Computational Propaganda Research Project found that, all told, at least twenty-nine regimes have followed this new model of censorship to “steer public opinion, spread misinformation, and undermine critics.” Even more worrisome, in 2017 at least eighteen national-level elections were targeted by such social media manipulation. As more governments become attuned to the internet’s dark possibilities, this figure will only grow.
Perhaps the most pernicious effect of these strategies, however, is how they warp our view of the world around us. It is a latter-day incarnation of the phenomenon explored in Gaslight, a 1938 play that was subsequently turned into a movie. In the story, a husband seeks to convince his new wife that she’s going mad (intending to get her committed to an asylum and steal her hidden jewels). He makes small changes to her surroundings—moving a painting or walking in the attic—then tells her that the things she is seeing and hearing didn’t actually occur. The play’s title comes from the house’s gas lighting, which dims and brightens as he prowls the house late at night. Slowly but surely, he shatters his wife’s sense of reality. As she says of her mounting self-doubt and resulting self-censorship, “In the morning when the sun rises, sometimes it’s hard to believe there ever was a night.”
Since the 1950s, the term “gaslighting” has been used to describe relationships in which one partner seeks control over another by manipulating or even denying the truth. We’re now seeing a new form of gaslighting, perpetrated repeatedly and successfully through social media on the global stage. In the words of writer Lauren Duca, “Facts . . . become interchangeable with opinions, blinding us into arguing amongst ourselves, as our very reality is called into question.” All the while, a new breed of authoritarians tighten their grip on the world.
Yet sinister as they might be, even the strongest dictators cannot force someone to believe that the earth is flat. Nor can the accumulated weight of 100,000 online comments so much as bend a blade of grass unless someone chooses to act on them. There’s another piece of the puzzle still unaccounted for, perhaps the information battlefield’s most dangerous weapon of all.
Our own brains.