ON MARCH 15, 2019, an Australian white supremacist entered the Al Noor Mosque in Christchurch, New Zealand, and murdered fifty-one worshippers. Brandishing multiple automatic weapons and playing military music on a portable speaker, he streamed the entire spree on Facebook Live. It is now a week later, and Monika Bickert is in Washington, DC, in a dimly lit cocktail lounge, wolfing down French fries and fighting back tears.
Bickert’s job is setting the rules for content on Facebook. There has always been tension between Facebook’s free-speech cheerleading and its need to keep the platform safe. But after the election, the scrutiny has been heightened. Bickert doesn’t take it personally. Her mission isn’t connecting the world. It’s trying to stop Facebook from ruining it. Making the job harder is that, after Cambridge Analytica, the whole world is watching. And jumping on her each time her team fails at an impossible job.
BICKERT GREW UP in Southern California. She loved sports and excelled on the volleyball team while taking AP courses. Her high school history teacher was coach of the mock-trial team and urged her to join, and she loved it—the strategizing, the analysis, and especially performing in a virtual court. At Rice University, she studied economics and played volleyball. She had accumulated the credits required for graduation when an injury ended her varsity career after her third year. She went directly to Harvard Law, did a federal clerkship after graduating, and joined the US Attorney’s Office, first in DC, and then in Chicago.
Bickert handled cases like the prosecution of forty-seven members of the Mickey Cobras, a street gang accused of selling heroin and fentanyl in the Dearborn Homes housing project. She put people in jail for government corruption and child pornography. She also fell in love with one of the top lawyers in the Chicago US Attorney’s Office, Philip Guentert, a widower who had two young adopted girls. In 2007, to expose their Chinese-born daughters to life in Asia, they moved to Bangkok, still working for the DOJ. Bickert concentrated on sex-trafficking cases. Then Guentert was diagnosed with kidney cancer. They moved back to the States to address his medical problems. So when Bickert heard that Facebook was looking for someone with government and international experience, “I threw in a résumé and came to visit the campus, not really knowing what to expect,” she says. No amount of prepping could have revealed what would eventually become the nature of her job there.
When she interviewed in 2012, Bickert was taken with the energy of the Facebookers, but more profoundly intrigued by the prospect of untangling some of the legal threads that came from running the biggest social network the world had ever seen. Nobody had ever dealt with those questions before.
Her first role at Facebook was responding to government requests for user data, where she got a sense of the power of the information people shared with Facebook. After about six months, the company tapped her as legal muscle to enforce its data policy with developers.
Because Bickert dealt with troublesome developers, she wound up in a room where Facebook’s policy people were considering a video game that included what seemed like hate speech. The argument was whether it violated Facebook’s rules. Bickert weighed in with an analysis that impressed Marne Levine, then the head of global policy. Levine realized Bickert might be the ideal match for the open position of arbiter of content policy. Bickert started in 2013.
Which is how Monika Bickert’s supposedly low-key job at a tech company transformed into one of its most public and exposed roles: she became perhaps the world’s most powerful arbiter of speech, operating in a fishbowl where every decision was subject to scorn and outrage, and implementing those decisions at a scale where failures were guaranteed.
Those failures could have consequences, particularly overseas, where the company had all too often rushed into countries without understanding the culture or setting up infrastructure to deal with sometimes dangerous abuses of the platform. Organized groups, sometimes actual governments, would post hate speech and inflammatory falsehoods that targeted dissidents or vulnerable minorities. Before such problems became public—and even sometimes afterward—Facebook would pay little attention, despite warnings.
During the Arab Spring movement in the Middle East, Facebook was celebrated as a force for freedom. Users running Facebook pages helped organize the 2010 Tunisian uprising. The Egyptian campaign to overthrow the government was hugely lubricated by Facebook; after a computer programmer was murdered by state police, a Facebook page called “We Are All Khaled Said” galvanized the protest movement that would overthrow the regime. “It felt like Facebook had extraordinary power, and power for good,” former Facebook policy executive Tim Sparapani told Frontline. “I remember feeling elated to see people use this tool, a free tool, to do things that they could never have done before, to organize, to share their world, to show violence that was being foisted on them by people in their government who are trying to prevent this uprising. . . . It can’t get any more real.”
For years, the halo effect from empowering righteous activists would blind Facebook to the potential for abuses in other countries. From Menlo Park, it was hard to envision how the platform’s political mojo that freed people could just as easily be used by those in power to divide and dominate them.
The Growth game plan was to spread Facebook around the world, and its operatives figured that some measure of good vibes and crowdsourced problem-solving would take care of unpleasant consequences of a mass free-speech platform popping up in regions unaccustomed to anything like it.
Maria Ressa, the Philippine journalist who had reported misinformation and hate campaigns to Facebook in 2016, was seeing this firsthand. After the Philippine strongman leader Rodrigo Duterte consolidated power, his followers kept using Facebook to demonize opponents, and then Ressa herself. She kept pushing Facebook for action. Ressa spoke to all the key policy people—Elliot Schrage, Monika Bickert, Alex Schultz, Sheryl Sandberg—and was even part of a small meeting with Zuckerberg at the annual F8 conference in May 2017, where the CEO met with global developers, telling them that fake news was an issue but that it would take time to get it right. But the problem was now! To Ressa, none of those Facebook executives seemed to get it. “For the longest time I felt like not only were they in denial, I would get blank stares,” she says. It wasn’t until early 2018, she says, that Facebook responded emphatically.
Facebook’s situation in Myanmar, formerly called Burma, was even worse. Myanmar was one of the countries where Facebook had rushed in without employing a single speaker of the language. Before things went bad, Chris Cox actually celebrated this approach to me in a 2013 conversation. “As the usage expands, [Facebook] is in every country, it’s in places in the world and languages and cultures we don’t understand,” he said then. Facebook’s solution at that time, he said, was not to hire dozens of people who knew the culture but to double down on algorithms that measured how much people were using it. The more engagement you had, the better it was working! Cox admitted that it was challenging to juggle the various factors around the world where people used the platform differently—like getting their news from it. He told me that a friend in “Burma” told him Facebook was a channel for news there. “It’s like, We have to get the news somewhere!” (To his credit, Cox later became a force for more aggressive action in taking down offensive content. He would often clash with Zuckerberg on the issue.)
When Cox was boasting about Burma, Facebook was already being abused there. “What you’ve seen in the past five years is almost an entire country getting online at the same time,” says Sarah Su, who works on content safety issues on the News Feed. “On the one hand, it’s really incredible to have been a part of that. On the other hand, we realized that digital literacy is quite low. They don’t have the antibodies to [fight] viral misinformation.”
According to both journalistic accounts and UN human rights reports, Myanmar’s president and his supporters used Facebook as a weapon to incite violence against the Rohingya, a Muslim minority group. For instance, on June 1, 2012, the president’s key spokesperson posted a call to action against the Rohingya in a Facebook post, warning of dangerous, armed “Rohingya terrorists,” essentially garnering support for a government massacre that would indeed occur a week later. “It can be assumed that the troops are already destroying them [the Rohingya],” the post said. “We don’t want to hear any humanitarian or human rights excuses. We don’t want to hear your moral superiority, or so-called peace and loving kindness.”
In November 2013, Aela Callan, an Australian journalist on a Stanford fellowship at the time, visited Facebook to alert the company to the situation, meeting with Elliot Schrage. She was told that Facebook had only a single content moderator, based in Dublin, who understood the Burmese language. She told Wired that she felt Facebook was “more excited about the connectivity opportunity” than the violence issues.
In 2014, a woman was paid to make a false report that Rohingya had raped her. The posts spread on Facebook and inspired a riot that led to the deaths of two people. Around that time Bickert hosted some civil society groups, including one from Myanmar, who asked for help. Spurred in part by one of its policy people in Australia who took an interest in the problem, Facebook understood then that it needed more native speakers, but by its own admission was slow to hire enough content moderators who knew the language. An additional problem was the way Burmese is encoded online: sometimes it uses Unicode, the international standard; other times it uses Zawgyi, a local font encoding that is tricky for Facebook’s systems to read. It wasn’t even until 2015 that Facebook translated its Community Standards manual into Burmese.
From the Philippines, Maria Ressa noticed that Facebook’s struggles to grasp that problem resonated with its failure in her home country. “I think it took them a very long time to understand Myanmar,” she says. “It was a combination of willful denial and lack of context. It’s really, really a different world because they don’t live in countries that are vulnerable. I just watched my life get torn apart by the stuff that Facebook allowed. They were looking at their problems within the context of Silicon Valley.”
In June 2016, Facebook introduced Free Basics to Myanmar, making it even more challenging to police hate speech. When violence intensified, Facebook was unable to contain it. Bickert says one reason was that Facebook was hiring what it thought was a reasonable number of native speakers, but when violence breaks out, more people use Facebook and more provocative content gets posted. “We weren’t well positioned,” she says. “When the violence broke out, we didn’t have good technical tools to find content. We were dealing with the font problem, with the report flow rendering incorrectly, and we didn’t have sufficient language expertise.”
Not until August 2018 did Facebook take major steps to take down content in Myanmar, removing eighteen Facebook accounts, one Instagram account, and fifty-two Facebook Pages. It also banned twenty people and organizations, including the commander-in-chief of the armed forces and a television network run by the military. Still, hate speech and provocations to violence continued. When Zuckerberg testified before Congress, Senator Patrick Leahy confronted him with an incident where a Facebook post called for the death of a journalist. Multiple pleas had gone to Facebook to remove the post before the company acted, said Leahy.
“Senator, what’s happening in Myanmar is a terrible tragedy, and we need to do more,” was Zuckerberg’s response.
WhatsApp, which had become the dominant service in Myanmar, presented special challenges because its content was encrypted, and Facebook could not know what was in the text exchanges unless a recipient of a message shared its contents with the company. The WhatsApp founders had decided to build encryption deep into the product, believing that this impenetrability was an unalloyed positive.
“There is no morality attached to technology, it’s people that attach morality to technology,” WhatsApp co-founder Brian Acton told me in 2018, looking back on the controversy. “It’s not up to the technologists to be the ones to render judgment. I don’t like being a nanny company. Insofar as people use a product in India or Myanmar or anywhere for hate crimes or terrorism or anything else, let’s stop looking at the technology and start asking questions about the people.”
Acton was expressing something that most of the company dared not speak out loud, but sometimes muttered in private conversation. Violence had persisted in many regions for centuries, well before there was a thing called Facebook. Naturally, the appearance of a communications platform like Facebook would be exploited by dark forces, just as previous technologies like radio, the telephone, and automobiles had been. In this view, Facebook was just the medium du jour.
But this was not a tenable argument for Facebook to make about a place like Myanmar, where people were using Facebook’s unique properties of viral distribution to spread lies about a minority group, inciting readers to kill them. Facebook contracted with a firm called BSR to investigate its activity in Myanmar. It found that Facebook rushed into a country where digital illiteracy was rampant: most Internet users did not know how to open a browser or set up an email account or assess online content. Yet their phones were preinstalled with Facebook. The report said that the hate speech and misinformation on Facebook suppressed the expression of Myanmar’s most vulnerable users. And worse: “Facebook has become a useful platform for those seeking to incite violence and cause offline harm.” A UN report had reached similar conclusions.
When unveiling the BSR report in November 2018, Bickert announced in a press call, “We updated our policy so that we now remove misinformation that may contribute to imminent violence or physical harm. That change was made as a result of advice we got from groups in Myanmar and Sri Lanka.”
Every reporter on the call probably had the same thought I did: You mean, before 2018 it was okay?
THE DOWNSIDE OF “moving fast” wasn’t limited to Facebook’s international expansion. There was also a carelessness regarding products that Facebook was launching. The real-time program Facebook Live was intended as a feel-good feature, but its creators misjudged the human capacity for mischief, self-destruction, and evil.
It started as a way to help the famous use Facebook to become more famous. Around 2014 a small team working on Mentions, a feature supporting celebrities, began developing a feature to stream video in real time. They convinced their manager, Fidji Simo, to back them. Simo was a hard worker—Facebookers recall with awe that when she was on bed rest in a troublesome pregnancy, she kept up her pace, having teams come to her house for meetings—and she decided to pivot her product team to focus on video.
By the time Facebook Live launched in August 2015, Twitter already had started a live-streaming product of its own called Periscope, and a start-up called Meerkat was also getting buzz. Facebook, unlike the others, not only streamed video live, but let it remain on the page after it was done, allowing users to continue posting comments about it. This allowed for the clips to spread virally over a period of hours or days, creating maximum engagement. Facebook initially limited Live to the certified celebrities that the Mentions team had worked with, but when Zuckerberg saw that people were watching—a Ricky Gervais stream got almost a million users’ attention—he decided to open it to the world.
Facebook Live had huge impact from the start, in large part because the company tweaked the News Feed algorithm to favor the videos. Early on, a joyful thirty-seven-year-old Texas woman streamed herself in a Chewbacca mask and won herself more than 100 million views and a brief period of stardom. News and quasi-news outlets embraced it; when BuzzFeed, a publication built on the back of the News Feed algorithm, streamed the destruction of a watermelon in 2016, it became a national phenomenon, with 800,000 people tuned in live. Harmless stuff.
But there was also harmful stuff.
“We really didn’t know how people would use it,” says Allison Swope, who helped create the product. “We were like, People can post videos right now of horrible things. Is this really that much different than a [prerecorded] video? We tried to think of all the scenarios, but I still don’t know why someone wants to commit suicide live on Facebook.”
Ellen Silver, of Facebook’s Trust and Protect team, insists there was some planning. “We definitely had the team think through potential new abuse factors that would occur, from a policy perspective and the enforcement perspective,” she says. “And it was unfortunate that those behaviors did occur on Facebook Live.”
Nonetheless, Facebook was unprepared. The Live team endured a three-month “lockdown” to deal with suicide videos soon after launch. “We saw just a rash of self-harm, self-injury videos go online,” Neil Potts of Facebook’s public policy team told Motherboard. “And we really recognized that we didn’t have a responsive process in place that could handle those.”
Suicide presented a tough case, but one the company had already been grappling with in an enlightened way, encouraging users to spot warning signs and using artificial intelligence to detect posts indicating an impending attempt. When one was flagged, the company would dispatch helpers: Facebook friends, local authorities, or hotlines. (Later some critics would attack Facebook for doing exactly that, charging that by trying to identify impending suicides, the company was overstepping into the medical realm. Perfect example of a Facebook can’t-win situation.) Video added another complication. The content was disturbing, but a live stream could also alert people in time to intervene. Some people even charged that a suicide could be the fault of Facebook Live—that the temptation of a public exit lured people into the act.
There were murders, too, and Facebook had trouble dealing with those as well. For instance, in June 2016, a twenty-eight-year-old man named Antonio Perkins was streaming on Facebook Live when someone fatally shot him in the head and neck. Since the video did not show gore, Facebook said it did not violate policy and left it up. The murder happened only one day after a young man in France, who had just killed two police officers, ranted on Facebook Live for thirteen minutes. This triggered unease among Facebook employees. Andrew Bosworth addressed the issue with one of his notorious internal memos. He meant it as a conversation starter, but it wound up sounding too much like a manifesto. He titled it “The Ugly.”
We talk about the good and the bad of our work often. I want to talk about the ugly.
We connect people.
That can be good if they make it positive. Maybe someone finds love. Maybe it even saves the life of someone on the brink of suicide.
So we connect more people.
That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies. Maybe someone dies in a terrorist attack coordinated on our tools.
And still we connect people.
The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is *de facto* good. . . . It is literally just what we do. We connect people. Period.
That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do to bring more communication in . . .
I know a lot of people don’t want to hear this. Most of us have the luxury of working in the warm glow of building products consumers love. But make no mistake, growth tactics are how we got here. . . . We do have great products but we still wouldn’t be half our size without pushing the envelope on growth.
“The Ugly” generated hundreds of comments from Facebook’s employees, most of them appalled by the idea that fatalities might be collateral damage of Facebook’s growth. But those objections were tame compared to the reaction when BuzzFeed leaked the memo in 2018. Zuckerberg had to issue a statement: “We’ve never believed that the ends justify the means,” he wrote. Zuckerberg further disowned the Boz memo during congressional testimony, adding that controversial posts were part of Facebook’s tradition of open internal debate.
Even Boz distanced himself from it. “I was putting a stake in the ground at what is the most succinct and extreme formulation about the philosophy we have towards growth,” he says. He carelessly dashed it off to spur a conversation about growth, intentionally overstating the sentiment for the purposes of his thought experiment.
I suggested that maybe the reason his memo had created such a fuss was that it really did present the ugly truth. Wasn’t Chamath Palihapitiya’s obsessive drive to snare the entire Internet population of Earth really a huge risk for populations unprepared for a mass tsunami of sharing?
Bosworth rejected that conclusion, as did key people on the Growth team. But another Facebook executive provided a different perspective. “Mark realized this in 2007—with the first kidnapping, the first rape, the first suicide—that there were going to be consequences,” the official says. “The world is full of bad people. No company in history has ever had to answer to the bad people in the world as much as Facebook has. It’s mentioned in forty percent of divorces!” (It’s not clear where he got that number, but a 2012 study found that Facebook was mentioned in a third of divorces.)
After the 2016 election, Facebook could not blow past those consequences, or minimize them by citing how tiny a percentage they were of the total content on the platform. It had to deal with the ugly. In 2017, Facebook created a group called Risk and Response, to try to get ahead of impending crises. “There had been a lot more interest and scrutiny of the way that Facebook was making decisions around content on the platform,” says James Mitchell, who heads the group. “In that environment, one of the things you can do is say, Well, let’s do a better job internally of trying to find and identify these vulnerabilities.”
If so, Facebook might have done better dealing with the mass murder in Christchurch, New Zealand. Facebook Live had been an integral part of the killer’s social-media strategy. Using proven techniques of effective brand consultants, the terrorist promoted his deadly broadcast in advance of its premiere, using sites like 8chan, Reddit, and more obscure white-supremacist outposts. He knew that he didn’t need many viewers initially, because he could count on hundreds of thousands of people, be they followers or trolls or voyeurs or simply curious, to repost his deadly selfie. The carnage stunned the world. And in a season of constant scandals, it was one more blow to a company whose reputation, it seemed, could not get lower.
Facebook’s job—Bickert’s job—was making sure as few of Facebook’s users as possible watched the horrible video. Her job also required that she view the whole clip herself, no matter how much she would hate it.
She is telling me this after appearing on a panel in the nation’s capital about free speech. On earlier trips to DC, Bickert had appeared several times before committees, often having to defend Facebook content by invoking rules that made sense only in conference rooms in Menlo Park or K Street.
Now we are in a cocktail lounge down the block from the conference and she is recounting Christchurch. The seventeen-minute video that depicted the massacre, including the assassin’s hop from one mosque to a second, had only been viewed live by about two hundred people. Facebook heard about it twelve minutes after it ended, and took down the video. But then the video kept spreading on Facebook, even as the company used a digital fingerprint to thwart uploads. An elaborate cat-and-mouse game broke out where Facebook would block the video and persistent users would alter the file to slip past the censor. Within twenty-four hours, users attempted to upload a version of the video 1.5 million times, and Facebook blocked 1.2 million of those, meaning that 300,000 copies managed to appear on the platform. A week later, people were reporting that one could still find copies of the video.
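Facebook has not described its matching system in detail, but the cat-and-mouse dynamic is easy to see in miniature. An exact cryptographic hash of a file changes completely if a single byte changes, which is why a re-encoded or watermarked copy slips past it; matching systems therefore lean on perceptual fingerprints that tolerate small edits. The sketch below is only an illustration of that idea, with invented filenames and a toy fingerprint, not anything Facebook actually runs.

```python
# A minimal sketch (not Facebook's system) of why exact file hashes fail against
# altered re-uploads, and why platforms use perceptual fingerprints instead.
import hashlib
from PIL import Image

def exact_fingerprint(path: str) -> str:
    """SHA-256 of the raw bytes: changes completely if even one byte differs."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def perceptual_fingerprint(path: str, size: int = 8) -> int:
    """Toy 'average hash' of an image or video frame: shrink to 8x8 grayscale and
    record which pixels are brighter than the mean. Small edits (re-encoding,
    cropping a border, adding a watermark) flip only a few bits."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Usage idea: two re-encodings of the same frame share an exact hash only if the
# bytes are identical, but their perceptual fingerprints stay within a few bits.
# if hamming(perceptual_fingerprint("upload.jpg"), known_bad_fingerprint) <= 5: block it
```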
Why thousands of people saw fit to upload that video was and is a mystery. Just another piece of evidence that connecting the world has a dark downside.
As she describes the experience of watching the video, her voice breaks, and the edges of her eyes begin to moisten. Even for Monika Bickert, former prosecutor of Bangkok sex trafficking, onetime Javert to drug-dealing Chicago street gangs, steadfast arbiter of speech for two and a half billion souls, and icy defender of Facebook against preening legislators, this was too much.
THE FRONT LINES of Bickert’s efforts are the content moderators that Facebook began hiring around 2009, when it established its first international center in Dublin. These were the successors to the customer-support people who in Facebook’s early days blocked nude photos from parties, dealt with lactating activists, and frantically hired colleagues as the task became overwhelming. By now Facebook directed a cast of thousands—more than tripling post-election, to 15,000 by 2019. “A lot of that is because we feel like we under-invested earlier,” says Bickert. The moderators work around the globe, sifting through millions of pieces of content that are either reported by users as improper or identified by artificial-intelligence systems as potential violations. And they quickly decide whether these posts are indeed violations of the Facebook rules.
Yet the vast majority of these moderators have little contact with the engineers, designers, and even the policy people who set their rules. Most aren’t even employees. Since 2012, when Facebook started centers in Manila and India, it has used outsourcing companies to hire and employ the workers. They can’t attend all-hands meetings and they don’t get Facebook swag.
Facebook is not the only company to use content moderators: Google, Twitter, and even dating apps like Tinder have the need to monitor what happens on their platforms. But Facebook uses the most.
While a global workforce of content moderators was slowly building to tens of thousands, it was at first a largely silent phenomenon. The first awareness came from academics. Sarah T. Roberts, then a graduate student, assumed, like most people, that artificial intelligence did the job, until a computer-science classmate explained how primitive AI was back then.
“That crystallized the problem for me,” she says. “The only remedy had to be a massive, basically underclass of workers. In 2010, the firms were not admitting that they did this.” Roberts and others who clued in to the phenomenon identified a new kind of worker—one without the elite degrees and engineering backgrounds that the tech companies preferred, but still essential to their operations. They were also a reminder that the twenty-first-century Internet had veered from the idealism that marked its previous era. Though Mark Zuckerberg might have begun Facebook with expectations that a light hand would be required, his underlings in support roles figured out early that humans would be spending their days sifting through Facebook content to protect the masses from offensive and even illegal content. It was a natural evolution to put them in factories. They became the equivalent of digital janitors, cleaning up the News Feed like the shadow workforce that comes at night and sweeps the floors when the truly valued employees are home sleeping. Not a nice picture. And this kind of cleaning could be harrowing, with daily exposure to rapes, illegal surgery, and endless images of genitals. The presence of all that stomach-turning content was an uncomfortable fact for Facebook, which preferred to keep its armies of scrubbers out of sight.
A subgenre of journalism emerged, exposing the conditions of the moderation centers. Though Facebook says that the stories were exaggerated, some details were cross-confirmed by multiple articles and academic studies: The moderators are almost always employed by outsourcing companies like Accenture and Cognizant, and their wages are relatively low, generally in the $15 an hour range. They view an astounding amount of horrifying content at a brisk pace. The rules they use to determine what will stay up or be taken down are deceptively complicated. And the job messes with their minds. A series of stories by The Verge’s Casey Newton introduced some Dickensian elements: pubic hair and fingernails among the desks, lines to use the restrooms, and even a temptation to embrace the toxic conspiracy theories that were constantly posted on Facebook.
When I visited the Phoenix office, I spotted no pubic hair. I didn’t have to get in line to use the bathroom. The space was clean, and a colorful mural greeted employees when they entered. There was no way the office matched the buzzy cacophony of an actual Facebook office, but it didn’t have the dingy oppression of the boiler-room operation that some stories implied. The workstations consisted of display screens on long black tables; since moderators don’t have assigned spaces, there were no personal items. That, along with the venue’s status as a “paperless office,” gave the unoccupied areas a sense of abandonment. At peak, I was told, four hundred moderators would be there; the office was staffed 24/7.
My guide was the Cognizant executive who set up the office. His expertise was outsourcing, not content policies. All the rules and their execution came from “the client,” which is the way he referred to Facebook.
I met with a group of moderators who had volunteered to speak to me. About half had college training. They had all calculated that this particular job was superior to alternatives at this time in their lives. We went over the details of their work. Facebook expects moderators to make about 400 “jumps” a day, which means an “average handle time” of around 40 seconds for them to determine whether a questionable video or post must remain, be taken down, or, in rare cases, be escalated to a manager, who might send the most baffling decision to the policy-crats at Menlo Park. Facebook says that there is no set time limit on each decision, but reporting by journalists and several academics who have done deep dives on the process indicates that pondering existential questions on each piece of content would put one’s low-paying moderation career at risk. One of the moderators I spoke to seems to be waging a personal war against this unwritten quota: his personal goal is 200 jumps a day. “When you do it too fast, you miss little details,” he says, adding that he hopes that his high accuracy will provide him cover if his low average handle time is questioned.
How many errors are made? It’s hard to say, but one indicator is the number of times that a user appeal of a decision was upheld. In the first three months of 2019, Facebook removed 19.4 million pieces of content. Users appealed 2.1 million of those decisions and were upheld—that is, the original decisions were wrong—a little under a fourth of the time. Another 668,000 removed posts were restored without an appeal. In other words, though most of the time the calls are correct, there are still millions of people affected by an inability of moderators to get things right in that stopwatch atmosphere.
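A rough back-of-the-envelope calculation, assuming “a little under a fourth” works out to about 24 percent, puts numbers on how many of those removals were reversed:

```python
# Back-of-the-envelope only; the 24 percent figure is an assumption standing in
# for "a little under a fourth."
removed = 19_400_000                     # posts removed, first quarter of 2019
appealed = 2_100_000                     # removals that users appealed
reversed_on_appeal = appealed * 0.24     # ~500,000 original decisions overturned
restored_without_appeal = 668_000        # removals Facebook reversed on its own

restorations = reversed_on_appeal + restored_without_appeal
print(f"{restorations:,.0f} restorations, "
      f"about {restorations / removed:.0%} of all removals in the quarter")
# -> roughly 1,170,000 restorations, about 6% of all removals
```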
Part of the challenge comes from trying to match questionable content to their playbook: the Community Standards, the successor to the one-page document that Paul Janzer referred to and that Dave Willner began to expand in Facebook’s early days. Moderators learn how to interpret the guide first in classroom sessions, then alongside a veteran before they’re allowed to solo. Facebook made the guide public in 2018, after multiple partial leaks.
The Community Standards are a testament to the complexity of the task. The same set of rules applies all over the globe, despite variations in cultures as to what is deemed permissible. The standards apply to all of Facebook’s properties, including the News Feed, Instagram, the Timeline on the profile page, and private messages on WhatsApp and Messenger.
The rules can venture into confounding, Jesuitical flights of logic. Some things are fairly straightforward. There are attempts to define levels of offensiveness in subjects like exposure to human viscera. Some exposure is okay. Other varieties require an “interstitial,” a warning on the screen like the one before a television show that might show a glimpse of buttocks. Outright gore is banned. It takes a judgment call to fit a given bloodbath into the right box.
“If you rewind to Facebook’s early, early days, I don’t think many people would have realized that we’d have entire teams debating the nuances of how we define what is nudity, or what exactly is graphic violence,” says Guy Rosen of the Integrity team. “Is it visible innards? Or is it charred bodies?”
To be sure, the twenty-seven-page document hardly covers every example. Facebook has created a vast number of non-public supplementary documents that drill into specific examples. These are the Talmudic commentaries shedding light on Facebook’s Torah, the official Community Standards. A New York Times reporter said he had collected 1,400 pages of these interpretations. A cache of training documents leaked to Motherboard showed cringe-worthy images where anal sphincters were photoshopped into a picture of Taylor Swift, swapped for her eyes. The training slide says such defacement is permissible, because Swift is a celebrity. Doing this to someone in your high school class would be bullying, and disallowed. But the altered image of Kim Jong Un with his mouth swapped for an anus with a sex toy inserted is to be removed.
The toughest calls come with hate speech. Facebook doesn’t allow it but has understandable difficulty in defining it crisply. “Hate speech is our most difficult policy to enforce, because of the lack of context,” says Bickert. The same words used to joke with a friend are regarded much differently when directed at a vulnerable stranger or acquaintance. One case that found its way into the press was a post from a comedian that said “Men are scum.” This got her a suspension. The rule says no blanket insults of a protected group. Genders are protected groups.
Monika Bickert and her team understand that it’s not the same to say “men are scum” as it is to say “Jews are scum.” But they feel it would introduce too much complication to distinguish between vulnerable groups and privileged ones. As it is, moderators have difficulty enough determining what hate speech is, according to Facebook.
Take the example of someone on Facebook sharing a racist quote from a celebrity. If the user frames it as “Mr. Celebrity said this, isn’t it shocking?” Facebook will allow that, she says. It is information that helps people assess the celebrity. If the user cites the same quote and says, “That’s why I love this person!” Facebook would remove the post, because it affirms racism. “But what if I just give that racist quote and I say ‘said by’ and then I say the celebrity?” asks Bickert. “Am I saying he’s great or am I saying it’s bad? It’s not clear.”
Hate speech is so complicated that Facebook has laid it out in several tiers. Tier 1 includes calling men scum, as well as likening a group to bacteria, sexual predators, or “animals that are culturally perceived as intellectually or physically inferior.” Tier 2 covers insults that imply inferiority, like calling someone or a group mentally ill or just worthless. Tier 3 is a kind of political or cultural insult, including calls for segregation, admissions of racism, or straight cursing. Penalties are commensurate with the tier.
Hate speech was one of the topics considered when I sat in on the Content Standards Forum meeting in 2018. In a building just down Route 84 from the Gehry structure, Bickert convenes this gathering every two weeks to consider changes to the rules. There are about twenty people in the room, and video connections to Dublin, DC, and other locations around the globe. They discuss either a “heads-up” issue, identifying a potential problem and deciding whether to investigate, or a “recommendation” issue, where the team makes a decision on an investigation. Such inquiries, which usually get input from experts in the given field (civil rights, psychology, terrorism, domestic violence, etc.), involve weeks of data analysis, cultural studies, and feasibility consideration. In this meeting, a hate-speech issue similar to “men are scum” came up for discussion. The question was whether a hateful comment about a powerful group, like men or billionaires, should be treated as harshly as a slur against a protected group, identified by gender or race. The conclusion was interesting: the report said that the best outcome would be to make that distinction and let people vent against the group in power. But that course was rejected, because it would require the moderators to make overly complicated decisions.
The moderators themselves say they are ready for responsibility. Not surprisingly, these contractors hunger to ascend to employee status. Before I visited Phoenix’s moderators, I spoke with Facebook’s blessing to one who made the leap (I am permitted to refer to him only as “Justin”). He confirmed that it isn’t easy, since the “skill sets” of moderation differ from those useful to create or market Facebook products.
Justin said that, somewhat unexpectedly, some of his more harrowing duties dealt with content that came into question not because of bad behavior but because the user had died. Facebook’s algorithms often wind up surfacing dead people’s accounts on the feeds of loved ones, and it can have the effect of a drowned corpse rising to the surface. Facebook now has elaborate “memorialization” protocols for dead members. “Memorialization was really stressful,” he says.
But not the most stressful. “The worst video I ever saw was a man cutting his own penis off with a serrated knife,” says Justin. “It was not a great time.” By the time he saw that can’t-unsee-it vignette, around 2016, Facebook was providing therapists. (When he’d started at the job, in 2015, there was no counseling provided.) He now goes once a week.
The moderators in Phoenix seem to regard exposure to disturbing images as an unpleasant but tolerable part of the job. Sometimes things trigger them, and they go to the therapist. One told me that a “hit me bad” post involved an animated video of animals and people having sex, being slaughtered, and defecating, among other things. “That had me for maybe a good two weeks,” he says. But with support, he says, he got past it.
Facebook regularly reviews the decisions of its moderators, and when significant errors occur, it does postmortems for improvement. But the dynamics of time and money dictate that those decisions will be made at a speed that ensures frequent errors. “If everyone reviewed one thing a day, we probably wouldn’t miss things,” the former moderator known as Justin had told me earlier.
It’s Facebook’s key dilemma: it keeps hiring moderators, but the volume of content they must review is still so great that they have to move too fast to do the job right. People notice. When someone mistakenly has a picture removed, they go to social media to complain. When someone reports offensive content and it’s not taken down, more complaints. The media notices: the journalistic equivalent of shooting fish in a barrel is following up on a report that Facebook blew a call that now looks terrible, but was probably the result of an overworked and perhaps traumatized moderator. Zuckerberg himself acknowledges this. “Nine out of ten issues that we have that are public are not actually because we have a policy that people broadly disagree with, it’s because we’ve messed up in handling it,” he says.
Despite the time pressure and the exposure to the worst of humanity, the people I spoke to said that, as jobs go, moderating for Facebook wasn’t bad. They see themselves as unsung first responders, protecting the billions of Facebook users from harm. “I’ve had interactions with reviewers who have helped save somebody’s life—someone who was trying to attempt suicide and they reported it to the law enforcement authorities,” says Arun Chandra, whom Facebook hired in 2019 to lead the moderation effort. “The sense of satisfaction and pride in this work was a pleasant surprise.”
Sarah T. Roberts told me that she had found in her work that moderators’ best moments are often dampened by realizing that they are cogs in a machine where their employer—or the company that contracts their employer—isn’t really listening to them. “If they are ever a part of the feedback loop, it’s rare,” she says. Once a moderator told her about a suicide threat that was resolved positively. “We never stopped to ask ourselves,” said the moderator, “to what extent the crap that people see on our platform leads them to feel like they would want to self-harm.”
The Phoenix moderators I interviewed had an unpleasant surprise a few months after I talked to them. In October 2019, Cognizant decided it no longer wanted to be involved in Facebook moderation. Facebook announced it would close the Phoenix office, and those digital first-responders would be out of a job.
FACEBOOK REALLY ISN’T pleased that it requires tens of thousands of people working in office parks to police its content at a rate of 400 posts a day each. But it has a long-term solution that could greatly improve its track record and reduce the number of relatively low-paid workers who will require therapy to deal with the images posted by Facebook users. What if Facebook were to assess and remove all its troublesome content before people saw it, and not wait until someone reported it?
The company believes the answer is artificial intelligence.
“Ultimately, the way we’ve been thinking about all of this space is how we move from a world where our approach to content is more reactive to one that’s proactive,” says Guy Rosen. “How do we keep building more and more AI systems that can proactively find more kinds of that content?”
That was the long-term solution to the seemingly intractable moderation issues. While Zuckerberg consistently warned that the content problems would never go away—angering those who felt that even a tiny percentage of missteps on Facebook’s part meant hundreds of thousands of false or harmful posts left untouched—he fervently believed that salvation would come in the form of robots, perpetually patrolling the alleys of the News Feed like friendly local cops.
The company had been building its AI muscle for years, but not for that purpose. In the earliest days Facebook did hire some people adept in AI, and both the News Feed and the ad auction were fueled by learning algorithms. But beginning in the mid-2010s, one particular approach to machine learning began to accumulate amazing results, suddenly putting AI to use in a number of practical cases. This supercharged iteration of machine learning was called deep learning. It worked by training networks of artificial neurons—working somewhat like the actual neurons in the human brain—to rapidly identify things like objects in images, or spoken words.
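To make the mechanics concrete, here is a toy text classifier, nowhere near Facebook’s production systems, with an invented eight-word vocabulary and invented labels. It shows the basic move the paragraph describes: a small network of artificial neurons adjusts its internal weights to fit labeled examples, then scores content it has never seen.

```python
# A toy illustration of learning from labeled examples instead of hand-written rules.
# The vocabulary, example posts, and labels are all invented for this sketch.
import torch
import torch.nn as nn

VOCAB = ["refund", "winner", "click", "friend", "dinner", "free", "prize", "tonight"]

def featurize(post: str) -> torch.Tensor:
    """Bag-of-words: one input neuron per vocabulary word."""
    words = post.lower().split()
    return torch.tensor([float(w in words) for w in VOCAB])

# Tiny labeled training set: 1.0 = violating (here, spam-like), 0.0 = benign.
posts = [
    ("click here free prize winner", 1.0),
    ("free refund click winner", 1.0),
    ("dinner with a friend tonight", 0.0),
    ("see you tonight friend", 0.0),
]
X = torch.stack([featurize(p) for p, _ in posts])
y = torch.tensor([[label] for _, label in posts])

# One hidden layer of artificial neurons, one output score.
model = nn.Sequential(nn.Linear(len(VOCAB), 4), nn.ReLU(), nn.Linear(4, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(300):            # training loop: nudge the weights to fit the labels
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

# The trained model scores an unseen post; a high score would flag it for review.
score = torch.sigmoid(model(featurize("free prize tonight"))).item()
print(f"violation score: {score:.2f}")
```

Swap the eight words for millions, the four training posts for billions of human moderation decisions, and the handful of neurons for billions of parameters, and you have the rough shape of the systems Facebook was betting on.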
Zuckerberg felt that this was another moment like mobile, where the winners would be those who had the best machine-learning engineers. He wasn’t thinking about content moderation then, but rather improvement in things like News Feed ranking, better targeting in ad auctions, and facial recognition to better identify your friends in photographs, so you’d engage more with those posts. But the competition to hire AI wizards was fierce.
The godfather of deep learning was a British computer scientist working in Toronto named Geoffrey Hinton. He was like the Batman of this new and irreverent form of AI, and his acolytes were a trio of brilliant Robins who individually were making their own huge contributions. One of the Robins, a Parisian named Yann LeCun, jokingly dubbed Hinton’s movement “the Conspiracy.” But the potential of deep learning was no joke to the big tech companies who saw it as a way to perform amazing tasks at scale, everything from facial recognition to instant translation from one language to another. Hiring a “conspirator” became a top priority.
Zuckerberg pursued Yann LeCun the same way he hunted and bagged Instagram and WhatsApp. In October 2013, he called LeCun. “We’re just about to turn ten years old and we need to think about the next ten,” he said. “We think AI is going to play a super-important role.” He told LeCun that Facebook wanted to start a research lab—not something designed to get better ad placements, but to develop mind-blowing creations like virtual assistants that could understand the world. “Can you help us?” he asked.
LeCun presented a list of requirements that Facebook would have to meet if he were to set up a lab. It would have to be a separate organization, with no ties to product groups. It would have to be completely open—no restrictions on publishing. The results they came up with would have to be open-source so they would benefit everyone. Oh, and LeCun would retain his NYU post, working there part-time, and base the new lab in New York City.
No problem! said Zuckerberg, and the Facebook Artificial Intelligence Research lab, or FAIR, is now centered in New York City, on the edge of NYU’s Greenwich Village campus. It is the horizon-exploring partner to the company’s Applied Machine Learning team, which directs its AI work to products.
LeCun says that the integration worked superbly. The applied group imbued the product with machine learning, and the research group worked on general advances in natural-language understanding and computer vision. It often worked out that those advances helped Facebook. “If you ask Schrep or Mark, like, how much of an impact FAIR has had on product, they will say it’s much larger than they expected,” says LeCun. “They told us, Your mission is to really push the state of the art, the research. When things come out of it for a product impact, that’s great, but be ambitious.”
LeCun gave this rosy description of the relationship between FAIR and AML in late 2017. But only a few weeks later, Schroepfer created a new post, a vice president of artificial intelligence, who would lead both the research and applied branches of Facebook AI. The job went to Jérôme Pesenti, a French scientist who had worked for IBM. LeCun professed to be delighted at this move, which freed him from management tasks so he could concentrate more on actual science.
But after people turned on Facebook post-election, the company needed the whole field of AI to take a step forward, to produce algorithms and neural nets that would far exceed the capabilities of human beings to identify unsavory content, illegal content, hateful speech, and state-sponsored misinformation. The goal was that these could work proactively, finding rogue content before anyone reported it, maybe even before anyone saw it.
Pesenti says that AML now has a dedicated team in Integrity Solutions helping to address the company’s issues with toxic content. However, the state of the art falls far short of what Zuckerberg is promising, and Facebook needs more from FAIR. Its scientists have to invent breakthroughs that might be as good as or better than humans in dealing with things like hate speech. But because LeCun set up FAIR as a research organization, Facebook can’t order the scientists to focus their studies on specific domains. “One challenge we have is to map product problems to research,” says Pesenti. “We haven’t solved that, actually.”
There have been some successes. It turns out that terrorist content was fairly easy for AI systems to identify, and Facebook would come to claim better than a 99 percent success rate in taking down such posts, even before users had the opportunity to view them. But the current state of AI can’t really deal with complicated issues like hate speech. Even humans struggle to police the speech of 2 billion people with one set of rules that applies all over the globe, across wildly disparate cultures.
“A lot of work has gone into building and training these AI systems, understanding how they manifest across different languages,” says Rosen. He cites a project, begun in 2017, to make Facebook’s hate-speech systems work in Burmese. It has helped increase the percentage of hate-speech posts Facebook blocks proactively—before anyone reports them—from 13 percent to 52 percent. Critics will note that this means about half of the hate-speech posts in that dangerous region still go up and remain viewable until someone reports them.
Facebook also looks to AI to deal with another persistent problem: the mind-boggling number of fake accounts on the system. Not surprisingly, these are a huge source of fraud, hate speech, and misinformation. People were stunned when Facebook revealed that between January and March 2019, it blocked 2 billion attempts to open fake accounts—almost as many as actual users on the system. Overwhelmingly, these are clumsy, though persistent, attempts to create phony Facebook identities in bulk. As Alex Schultz told The New York Times, “the vast majority of accounts we take down are from extremely naïve adversaries.” But not all are so naïve. Despite AI, or anything else that Facebook could throw at the problem, the company concedes that around 5 percent of active accounts are fake. That’s well over 100 million.
That’s Facebook’s dilemma: its scale is so massive that even when it makes improvements, the scope of what’s left is staggering. And those motivated to make the posts will learn to adjust to Facebook’s tactics. In 2018, for example, Facebook proudly announced that its AI teams had learned to read the content of messages embedded into graphics. Previously, its systems could only read words when they were stored as text. That shortcoming had allowed Russian operatives to slip their inflammatory ads on immigration, racism, and Hillary Clinton’s identity as Satan past Facebook’s digital monitors.
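The underlying technique is optical character recognition: pull the words out of the pixels, then treat them like any other text. A minimal sketch of the idea, using the open-source Tesseract engine through the pytesseract wrapper (not anything Facebook runs) and an invented filename:

```python
# A minimal sketch of optical character recognition on a meme-style image.
# Facebook's 2018 system was its own large-scale model, not this library;
# "meme.png" is a stand-in filename.
from PIL import Image
import pytesseract

image = Image.open("meme.png")                  # an image with words baked into the pixels
extracted = pytesseract.image_to_string(image)  # pull the embedded words out as plain text

# Once the words exist as text, the same classifiers that scan ordinary posts
# (hate speech, ad policies, known-false claims) can be run over them.
print(extracted)
```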
In other words, Facebook had figured out a defense for a war that it had lost the last time around. Who knows what tactics its foes will adopt in the future?
Meanwhile, it’s left to the 15,000 or so content moderators to actually determine what stuff crosses the line, forty seconds at a time. In Phoenix I asked the moderators I was interviewing whether they felt that artificial intelligence could ever do their jobs. The room burst out in laughter.
THE MOST DIFFICULT calls that Facebook has to make are the ones where following the rule book creates an outcome that seems just plain wrong. For some of these, moderators “elevate” the situation to full-time employees, and sometimes off-site to the people who sit in the Content Moderation meetings. The toughest ones are sometimes elevated to Everest, to the worktables of Sandberg and Zuckerberg. Even then, the decisions are hard to make. There are times when the rules of offensive content come in conflict with what is going to look best for Facebook. They can involve essentially politically charged decisions, with powerful supporters on each side of the argument. No matter what Facebook decides on these, it loses.
As with many of its problems, Facebook didn’t really confront this until the election year of 2016. That September, a Norwegian writer named Tom Egeland posted on Facebook a story he had written about six photos that “changed the history of war.” One of them was an iconic image that would have been familiar to anyone who lived in America during its tragic fiasco in Vietnam. The picture had won the 1973 Pulitzer Prize for photography. It was known as “Terror of War” or “Napalm Girl.” It showed a group of children running down a road screaming in pain because of napalm burns. Behind them were South Vietnamese soldiers in uniform. The child framed in the center of the photo, Kim Phúc, was naked.
For Facebook’s moderators—especially since the case seems to have been handled outside the United States, where the photo wasn’t familiar—this was a no-brainer. The rulebook clearly bans nude images of children past infancy, and so Facebook quickly removed the image. Egeland was annoyed, and tried to repost the photo. Facebook suspended his account. By then, the issue had reached Menlo Park. Bickert’s team now understood that Facebook was censoring a photo of historic value, but supported the takedown. If you make exceptions for one naked kid, where do you stop?
Then the story broke wide open. Egeland had been writing for the most popular newspaper in Norway. His furious editor wrote a front-page editorial, with huge letters saying DEAR MR. ZUCKERBERG. . . . It claimed that Facebook—“the world’s most powerful editor”—was acting as a censor. Norway’s prime minister reposted the photo, only to have Facebook take it down again. Other news outlets picked up the story. The comms people got flooded with queries.
This caused a crisis in Facebook’s policy world. For years, Facebook had been dodging complaints about what it left up—a year earlier, it had left up Donald Trump’s anti-Muslim posts. Now it was under fire for what it took down. Could naked children also qualify for an exception? Plenty of people felt that, Pulitzer or no, there was no room for Kim Phúc’s terror on Facebook.
Inside the policy team, the scramble was chaotic. “We were all in this together trying to fix it, but we just don’t know how to fix it,” says someone involved in the discussion. What made it a major decision for Facebook was not the choice itself, but the outrage generated by Facebook’s adherence to its own rules. Interpretations that seemed logical when the rule book was written often could look outrageous when exposed to public scrutiny.
“That photo got posted all the time,” says Dave Willner, who had taken a job at Airbnb doing similar work by then. “If you do not know that it is a nonconsensual nude image of a child who has had a war crime committed against her—if it were not a Pulitzer Prize–winning photo—everyone would lose their damn minds had Facebook not censored it.” Another keep-it-down advocate was Andrew Bosworth. “I would have said, Hey if you want that picture up, change the laws in your country. Like, listen, buddy, I think that is a tremendously important photograph, historically, but I can’t have it on the site, not legally. Change the laws!”
But that’s not what Zuckerberg thought. Ultimately he and Sandberg had to sign off on the decision. From that point on “newsworthiness” was a factor in determining exceptions to the general rules. Napalm Girl, in all her shocking nakedness, was back on Facebook.
Facebook’s heads of policy, Elliot Schrage and Joel Kaplan, saw the incident as a watershed. “That was—internally—the clearest example that our impact and influence in America had changed,” says Schrage. “Facebook was no longer about sharing interesting and relevant information; we shaped larger cultural conversations too.”
Monika Bickert puts it another way. “We learned that it is okay to make exceptions to the letter of the policy to maintain the spirit of the policy,” she says.
From that point, the pageant of exposure, pressure, and correction would play out on a regular basis. The most striking examples came in criticism of Facebook’s handling of fringe right-wing content that seemed to violate Facebook’s rules. Nearly every time a Facebook representative would testify before Congress, GOP legislators would rant about the conspiracy of Menlo Park liberals to suppress conservative speech. Their complaint was not only that in some cases Facebook took down posts from extremists—the Republicans believed that Facebook cooked up algorithms that favored liberal content. The data didn’t prove it, and it wasn’t even clear whether the legislators actually believed it or were just working the refs.
As a result, Facebook had a torturous time with trash-talking provocateurs from the right. When the white-nationalist conspiracy monger Alex Jones repeatedly posted comments that seemed to violate Facebook’s hate-speech rules, the company was loath to ban him. Complicating matters was that Jones was an individual, and his Facebook page, InfoWars, was an operation staffed by several people.
The situation was radioactive. Fringe as he was, Jones had a huge following, including the president, who had been a guest on the InfoWars radio show. Did Jones’s newsworthiness make him a figure like the president, worthy of a hate-speech free pass? During the summer of 2018, the controversy raged, as reporters kept citing hateful posts. Ultimately, it was the pressure that decided it. Within hours after Apple took down his podcast, Zuckerberg himself pulled the plug on InfoWars. Jones was suspended for thirty days, and eventually Facebook would ban him as “dangerous.” It did this in tandem with the expulsion of the fierce-tongued Nation of Islam leader Louis Farrakhan, in what looked like an unmistakable play for balance.
When I pressed Zuckerberg in early 2018 about Facebook’s delicacy in handling GOP complaints, he bent over so far backward in respecting their point of view that I worried his chair would hit the floor. “If you have a company which is ninety percent liberal—that’s probably the makeup of the Bay Area—I do think you have some responsibility to make sure that you go out of your way and build systems to make sure that you’re not unintentionally building bias in,” he told me. Then, ever balancing, he mentioned that Facebook should monitor whether its ad systems discriminated against minorities. Indeed, Facebook would commission studies of each of those areas.
Part of Zuckerberg’s discomfort arises from his preference for less oversight. Even while acknowledging that content on Facebook can be harmful or even deadly, he believes that free speech is liberating. “It is the founding ideal of the company,” he says. “If you give people a voice, they will be able to share their experiences, creating more transparency in the world. Giving people the personal liberty to share their experiences will end up being positive over time.”
Still, it was clear that Zuckerberg did not want the responsibility of policing the speech of more than 2 billion people. He wanted a way out, so he wouldn’t have to make decisions on Alex Jones and hate speech, or judge whether vaccines caused autism. “I have a vision around why we built these products to help people connect,” he said. “I do not view myself or our company as the authorities on defining what acceptable speech is. Now that we can proactively look at stuff, who gets to define what hate speech is?” He hastened to say that he wasn’t shirking this responsibility, and Facebook would continue policing its content. “But I do think that it may make more sense for there to be more societal debate and at some point even rules that are put in place around what society wants on these platforms and doesn’t.”
As it turns out, Zuckerberg was already formulating a plan to take some of the heat off Facebook for those decisions. It involved an outside oversight board to make the momentous calls that were even above Mark Zuckerberg’s galactic pay grade. It would be like a Supreme Court of Facebook, and Zuckerberg would have to abide by the decisions of his governance board.
Setting up such a body was tricky. If Facebook did it completely on its own, the new institution would be thought of as a puppet constrained by its creator. So it solicited outside advice, gathering a few hundred domain experts in Singapore, Berlin, and New York City for workshops. After listening to all these great minds, Facebook would take whichever parts of the recommendations it saw fit and create a board with what it considered the right amounts of autonomy and power.
I was one of 150 or so workshop participants at the NoMad Hotel gathering in New York City’s Flatiron district. Sitting at tables in a basement ballroom were lawyers, lobbyists, human rights advocates, and even a couple of us journalists. For much of the two-day session we dug into a pair of individual cases, second-guessing the calls. One of them was the “men are scum” case that had been covered a few times in the press.
A funny thing happened. As we got deeper into the tensions of free expression and harmful speech, there was a point where we lost track of the criteria that determined where the line should be drawn. The Community Standards that strictly determined what stood and what would be taken down were not some Magna Carta of online speech rights but a meandering document that had evolved from the scribbled notes of customer-support people barely out of college.
The proposed board would be able to overrule something in that playbook for the individual cases it considered, but Facebook provided no North Star to help us draw the line—just a vague standard touting the values of Safety, Voice, and Equity. What were Facebook’s values? Were they determined by morality or dictated by its business needs?
Privately, some of the Facebook policy people confessed to me that they had profound doubts about the project.
I could see why. For one thing, the members of this proposed body—there will be forty members, chosen by two people appointed by Facebook—can take on only a tiny fraction of Facebook’s controversial judgment calls. (In the first quarter of 2019, about 2 million people appealed Facebook content decisions.) Facebook would have to abide by the decisions on individual cases, but it would be up to Facebook to determine whether the board’s decisions would be regarded as precedent, or simply limited to the individual pieces of content ruled on, because of expedience or because they were lousy calls.
One thing seems inevitable: an unpopular decision by a Facebook Supreme Court would be regarded just as harshly as one made by Zuckerberg himself. Content moderation may be outsourced, but Facebook can’t outsource responsibility for what happens on its own platform. Zuckerberg is right when he says that he or his company should not be the world’s arbiter of speech. But by connecting the world, he built something that put him in that uncomfortable position.
He owns it. Christchurch and all.