BRIANNA WU HAS BEEN tormented by online trolls for three years. It started in 2014, when Wu spoke up to defend women in the gaming industry, only to find herself plunged into a roiling controversy called Gamergate that turned her life upside down. The threats of rape and murder hurled at her online became so scary that she and her husband fled their home. To this day, they live in a new location under an alias. But sometimes the trolls still manage to track her down, and online harassment becomes an off-line ambush.
“They found our address and smashed a window of my house. Threw a brick right through it,” Wu told me in April 2017, when I reached her by phone at a number she instructed me never to share. At the time we spoke, the window was still shattered.
The online attacks, like the one perpetrated against Wu, began and gathered force on sites like 4chan, Twitter, and Reddit, the largely unmonitored town halls of the web. All of these sites allow or encourage anonymity and pseudonymity, as well as a laissez-faire approach to free speech, in keeping with the long-standing libertarian ethos of the internet. All of them have tolerated years of online harassment of women.
It should come as little surprise at this point that the sites that harbor the most vicious trolls—4chan, Twitter, and Reddit included—were all started and led by white men, who aren’t usually the targets of the most vicious online harassment. Would these sites be as hostile to women today if they had been built and run by women, or at least included a meaningful number of women leaders early on?
Few places on the internet are more troubled by misogynistic trolling than the world of online gaming. Gaming is a multibillion-dollar business—much bigger than movies and rivaling TV—including an exploding generation of mobile and social games, classic PC and console games, and rising categories like e-sports and virtual reality. Yet the gaming industry is also saddled with a long-standing history of violence toward and degradation of women, allowing gamers to play out any number of dark fantasies. One of the earliest rape-simulation games, Custer’s Revenge, was released in 1982 by the game maker Mystique. The goal was to rape a Native American woman tied to a cactus, with points awarded for every thrust. More than three decades later, our most popular games feature similar scenes. In Take-Two Interactive’s monster hit Grand Theft Auto (whose fifth iteration is one of the bestselling video games of all time), players can sleep with a prostitute, then murder her. Take-Two’s CEO, Strauss Zelnick, has defended the game, saying, “It is art. And I embrace that art, and it’s beautiful art, but it is gritty.”
Like the broader tech industry, gaming has systematically excluded women for decades. In 2016, the International Game Developers Association (IGDA) reported that women make up just 22 percent of game developers, with men vastly outnumbering them in management and in powerful technical roles such as programming, software engineering, and technical design. Interestingly, the IGDA also found that men were much less likely than women to believe that diversity in the industry, and in the games it produces, was important. Indeed, women have also been poorly represented in the games themselves. As the report dryly puts it, “Women have long experienced derogatory representations of their gender in videogame content as well as a general invisibility within the wider videogame culture.”
It’s not surprising, then, that the most notorious case of online trolling sprang out of the gaming industry and that women were the targets. Gamergate was sparked in 2014 by the peevish post of an unhappy former boyfriend of one of the gaming industry’s few female developers. “This is written almost entirely in shitty metaphors and bitter snark,” wrote Eron Gjoni, a coder in the industry. “It’s a post about an ex.” Gjoni alleged that his former girlfriend, the game developer Zoe Quinn, had slept with other people in the gaming industry while she and Gjoni were dating.
For reasons that are still hard to explain, the missive unleashed a volcanic explosion of hate, all of it directed at Quinn, who was a feminist critic of mainstream video games. Though Gjoni never called for any kind of campaign against Quinn, a certain subset of gamers took his nine-thousand-word, she-done-me-wrong post and turned it into a rallying point from which to defend their sacred, mostly male gaming territory. They derided Quinn’s game development as basic, simplistic girl work and claimed she used sexual favors to get good reviews.
Gjoni’s post was put up on 4chan (not by him, he would later attest in a note on his original post), an online community founded in 2003 by a then-teenager named Christopher Poole. Today 4chan claims some twenty million monthly visitors, including a large population that seems to delight in wreaking havoc online. Its users were particularly vicious when attacking Quinn and other women in the gaming industry.
With the 4chan members engaged in the fight, accounts sprang up across Twitter and Reddit to attack Quinn and spread the #Gamergate hashtag. The trolling expanded to target other female game developers on the premise that there was a conspiracy of women trying to ruin the industry by promoting more gender equality in the games themselves and in the studios where the products are produced. The Gamergaters created and shared lists of industry women to target and torment, including Anita Sarkeesian, a media critic who rose to prominence by calling out sexism in the video game industry. The trolls even created a game called Beat Up Anita Sarkeesian, which enabled users to punch her virtual face. The game’s creators wrote, “There’s been a disgusting large imbalance of women who get beaten up in games. Let’s add a lady . . . She claims to want equality: Well, here it is.”
Once riled, many internet trolls have no shame. They often compete with each other to see who can be the most cruel. And as Quinn and Sarkeesian found out, they don’t limit their attacks and threats to a single individual. They will threaten family members, including children. They will also instantly direct their bile toward anyone who comes to the target’s defense.
That is where Brianna Wu enters the story. About two months after Gjoni’s post, Wu, an established game designer, spoke out against the #Gamergate campaign, sarcastically tweeting a meme suggesting that the trolls were saving everyone from an “apocalyptic future” where women might have slightly more influence in the industry.
That’s when all hell broke loose. Shortly after responding to the trolls on Twitter, Wu was inundated with violent, disturbing threats on her life. One series of tweets in particular stands out: “Guess what bitch, I now know where you live”; “Your mutilated corpse will be on the front page of Jezebel tomorrow”; and “If you have any kids, they’re going to die too.” As the threats piled up, Wu and her husband fled their home, crashing on friends’ couches and hiding out in extended-stay hotels. They didn’t have children to worry about, but they did spend an inordinate amount of money boarding their dog while they were on the run. Wu had a choice to make: speak up for what was right or be silenced. She chose to talk back.
“I was angry. I was scared. I was terrified. But within all of that I was trying to reach inside myself and find that bravery to really change the industry for women,” Wu told me. She spent days documenting the dozens of death threats against her so she could provide them to law enforcement officials. At the height of the online vitriol, she hired a full-time staff member to help collect information on her harassers to share with police, but none of that was enough to bring the perpetrators to justice.
Wu wasn’t alone. Others who spoke out in support of Quinn or even mildly criticized the trolls or the gaming industry were similarly attacked. Kellee Santiago, a female game developer who now works at Google, likened Gamergate to a witch hunt or public stoning, telling me, “It was really shocking to discover that I live in a time and place in which such animosity toward women existed.”
Wu would later understand that she had been the victim of the trolling playbook, an extremely effective technique for silencing women with whom anonymous social-media denizens disagree. “Find the woman and identify something in her past to distort her with,” Wu explains. “If she’s gay, attack her on that. Larger than size 12? Attack her on that. Transgender? Attack her on that. Find the spot where the woman feels the most vulnerable and make her feel unsafe until the cost of speaking out is too great.”
Use of the trolling playbook is not limited to fans and members of the gaming industry.
Online harassment is now one of the most disturbing problems plaguing the internet at large. Can such widespread cyber hating be chalked up to the dark side of human nature, which is simply finding a new expression on this medium? Or have the internet’s most popular sites exacerbated the problem by building their networks in a way that allows, even encourages, bad behavior rather than good? And if the latter, has so little care been given to protecting users because most of these networks have been built and run by men?
To Brianna Wu, the answer is obvious. “If we had more women in positions of authority at Twitter, and in the gaming industry, I don’t think our industry would be so complacently terrible.”
Evan Williams, the CEO of Medium who is most famous for co-founding Twitter, has been building websites to allow people to express themselves on the internet since the late 1990s. “Trolling used to be seen as a fringe activity,” he told me. “Many of us that built these systems are surprised there are that many people out there who are that terrible. It’s disheartening about humanity. And I don’t think anyone would have predicted whatever psychological feedback is encouraging these people.”
I asked Del Harvey, the woman in charge of Twitter’s Trust and Safety division, whether it mattered that Twitter had been designed primarily by men.
“It may have been a factor,” Harvey says. “There are, absolutely, aspects of being male that offer you more privilege and shelter. If you are not a member of a marginalized community in any form, you are less likely to think of those things,” she said. Not that Twitter was specifically created to transmit hate, she was quick to point out. “But it was designed by people who tend to be really optimistic—cheerful people who are thinking about really fun, optimistic things to be done with their product. They aren’t thinking, ‘How can I create a product that will allow people to send death threats really easily?’” (Del Harvey is not her real name, and she won’t say much about her own identity—expressly to minimize becoming a target herself.)
Product managers, especially those who design consumer products, will tell you they try to be empathetic and do as much user research as possible, but at the end of the day, building these products requires making choices based on their own opinions. In tech, these choices are made mostly by men.
Early Twitter investor Bijan Sabet believes that the relative cluelessness about the potential of online harassment has a direct connection to the industry’s gender imbalance. “These dudes aren’t getting it,” he says, “because they’re not getting harassed.”
When I pressed Williams on this, he conceded. “We weren’t thinking about it enough,” he acknowledges. “Had we had more women on the team, maybe we would have known better.”
By the time Dick Costolo was promoted to CEO of Twitter in 2010, Twitter’s harassment problem was already out of control. Costolo, who was brought in to make Twitter more attractive to a mainstream audience, saw curbing harassment as one of the main routes to achieving that. But he didn’t make much progress.
“I would bang my head against the wall for days and days . . . and then I would move on to other things,” he says. “Twitter had lots of reasons why it wouldn’t be a good idea to restrict speech to prevent harassment. I was always on the side of ‘We should prohibit more things,’ and I always got pushback.”
The pushback came because attempts to implement specific antiharassment policies bumped up against the principles the company had been founded upon. Twitter’s founders had felt it was important that users be allowed to use pseudonyms. This was partly a way of differentiating Twitter from Facebook, where real names are required. Twitter’s network is built on mostly public profiles that anyone can follow, while Facebook is built on connections that require mutual consent. More important, Williams told me the founders wanted Twitter to be a safe communication platform for political and human rights activists around the world. This was so embedded in Twitter’s DNA that free speech proponents within the company were able to resist executives’ attempts to enact policies that would infringe on that freedom.
But while pseudonyms protect free speech, they also liberate users to behave as badly as they like, without consequence. And because tweets are public, Twitter’s design can actually amplify harassment. Even if you block someone who’s hurling insults at you, others can still see those tweets and pile on, allowing an attack to pick up at breathtaking speed. And Twitter’s rules for what it does and does not tolerate—and how it implements those rules—have been consistently inconsistent.
“If I could go back in time, I would go back to a meeting in 2010 and say I don’t care what you present me, I want this changed tomorrow. I would totally change the way I did it,” Costolo says.
Trolling is the modern version of hateful language that has long been directed at prominent or outspoken women. Well-known suffragettes, fighting for women’s right to vote, often received vulgar, threatening letters from anonymous men.
Trolling as we know it began in the 1980s, just as email was becoming a popular business tool. People immediately began to notice that there was something about communicating via computers that seemed to undermine the good manners and social norms that govern most face-to-face encounters. In 1984, a New York Times article chronicled the rise of “emotional outbursts” in “electronic mail.” “Scientists are documenting and trying to explain the surprising prevalence of rudeness, profanity, exultation and other emotional outbursts by people when they carry on discussions via computer,” the Times declared. Scientists who were interviewed observed that electronic communications “convey none of the nonverbal cues of personal conversation—the eye contact, facial expressions and voice inflections that provide social feedback and may inhibit extreme behavior.”
“It’s amazing,” Carnegie Mellon professor Sara Kiesler told the Times, “we’ve seen messages sent out by managers—messages that will be seen by thousands of people—that use language normally heard in locker rooms.”
Over thirty years later, vulgar discourse and threats of violence now pervade the most prominent sites on the web, including mainstream social networks. Thanks to the anonymity provided by many social-media sites, there are usually zero consequences for the offenders, either reputational or economic.
Perhaps more remarkable still, trolling behavior has been, for many, a path toward fame and power. To take one example, we need look no further than Twitter-happy Donald Trump. In the midst of a petty feud with MSNBC hosts Joe Scarborough and Mika Brzezinski, the president tweeted that he once saw “low I.Q. Crazy Mika . . . bleeding badly from a face-lift.” Like many online trolls, Trump directed one of his most offensive verbal assaults to date at a woman, and other trolls parroted his remarks with glee.
To be clear, men do get harassed online, but women experience the more extreme forms, such as rape threats, death threats, and stalking. Studies show that men are far more likely to be called out or belittled because of their sports affiliations, while women are far more likely to be attacked simply because of their gender. Girls are also disproportionately the victims of cyberbullying. Young women, particularly those aged eighteen to twenty-four, are three times as likely to be sexually harassed online. As one feminist researcher put it, “Rape threats have become a sort of lingua franca—the ‘go-to’ response for men who disagree with something a woman says.” Perhaps that “go-to” aspect is why law enforcement doesn’t take such threats very seriously. But women do: 38 percent of women who have been harassed online describe their experience as “extremely upsetting,” as opposed to 17 percent of men.
For many women, especially those in the public eye, the hate being thrown around means the internet at large has become a place where they feel unwanted. Marissa Mayer told me she took a monthlong break from Twitter while she was running Yahoo because “it was just so negative.” In the summer of 2016, the Saturday Night Live star Leslie Jones tweeted, “I feel like I’m in a personal hell,” after she was swamped with racist and sexist attacks sparked by her appearance in the all-female Ghostbusters remake. She also took a break from Twitter, but before leaving, she wrote, “Twitter I understand you got free speech I get it. But there has to be some guidelines . . . You can see on the profiles that some of these people are crazy sick. It’s not enough to freeze Acct. They should be reported.”
The message of these negative, upsetting episodes is this: Women, you’re not welcome here. And if you choose to show up anyway, be prepared to live with any harassment that comes your way.
I know this from personal experience.
As a journalist, I regularly use social-media sites such as Twitter and Facebook to promote my stories and interviews. They are invaluable platforms for distribution and constructive feedback. However, I often find myself on the receiving end of messages that are obnoxious, dirty, and sometimes downright frightening. One user, who stalked me on Twitter for several months, suggested taking me to a warehouse “for a whipping” and eating his “high-quality sperm,” and tweeted a hard-core pornographic video at me with the words “Submission Time.” He also mentioned my husband by name and suggested they have sex with me together. “Any boy that penises you gets my support,” the troll wrote. And when I was pregnant, the user tweeted, “Obeying me is a good thing, looks like you’re pregnant with my lil girl in belly.” The cherry on top: when the troll responded inappropriately to a tweet in which I had tagged IBM CEO Ginni Rometty after an interview I had conducted with her, Rometty herself was alerted with several cheerful notifications from Twitter.
I’ve developed the requisite thick skin, and I use a common tactic for dealing with trolls: ignoring them. I quickly scroll past the vitriolic direct replies to my Twitter account, and I never, ever use Reddit. Once an interview I conducted with Apple’s co-founder Steve Wozniak ended up on Reddit, and the response was worse than unnerving. (For the same reason, many women in tech avoid using Hacker News, the official bulletin board of the prominent start-up incubator Y Combinator, which has become one of the industry’s leading message boards; the trolls are there too.) Most important, I don’t respond to the haters. This is accepted wisdom among many female users: the worst way to deal with a troll is to poke it. Though sometimes the words disturb me, I do my best not to let them make me feel like any less of a journalist, a person, or a woman.
But the internet shouldn’t just be for people with thick skin. And being a woman online shouldn’t be accompanied by routine threats of sexual assault.
I reported my personal troll to Twitter in March 2017, after the company claimed, yet again, to have improved its harassment controls. Just twelve hours after filing my report, however, I received this message from Twitter: “We reviewed your report carefully and found that there was no violation of Twitter’s Rules regarding abusive behavior.” Twitter’s “rules” state that “you may not incite or engage in the targeted abuse or harassment of others.” If telling me to eat his “high-quality sperm” and inviting me to a whipping doesn’t count as harassment, what does? Sure, I can mute the account or block it, but all of these tweets are still visible to the public, and this troll can easily set up a new account and start attacking me again. It appears that my troll hasn’t tweeted from this particular account for some time. When I asked Twitter for more information about why, the company told me it doesn’t comment on individual cases. All of the offensive tweets I have referenced still live online. It feels as if Twitter is telling me, “Just deal with it” or, worse, “You’re not worth fixing this.” Apparently, I, like so many other women, am not worth alienating one extremely offensive user over.
In a telling example of just how crudely Twitter’s rules can be applied, actress Rose McGowan’s account was suddenly suspended in October 2017 as she was in the midst of tweeting allegations that Hollywood heavyweight Harvey Weinstein had raped her. In keeping with its policy not to comment on individual accounts, Twitter did not explain why, then faced an epic backlash. Actress Anna Paquin called for women to boycott Twitter, and countless women rallied behind the cause.
Twitter later broke its own rule and explained that McGowan’s account had been temporarily locked because she had tweeted a private phone number. (The number was contained in the signature of an email image McGowan had tweeted as proof that others at the Weinstein company were aware of his behavior.) While Twitter often seems reluctant to act on behalf of users who have been abused, this is one prominent case in which it was remarkably quick to censor someone who was trying to out an abuser. These seemingly inexplicable decisions might be explained in part by a closer look at how offensive content is handled once it is reported. The social networks, including Twitter, outsource most content moderation to contractors around the world. While there’s hope that technology, with the help of artificial intelligence, might be able to enforce rules more consistently in the future, for now the task is up to humans. The contractors faced with the difficult job of filtering and flagging disturbing content on these networks generally don’t last long and must constantly be retrained, yet they wield an inordinate amount of power when it comes to deciding what stays up and what comes down. Their decisions, informed or not, greatly impact people’s lives, whether the target is me, Brianna Wu, Leslie Jones, or Rose McGowan.
McGowan’s account was reinstated and Twitter promised to be more transparent about how it makes such decisions in the future. “Today we saw voices silencing themselves and voices speaking out because we’re *still* not doing enough,” CEO Jack Dorsey tweeted.
Over the years, many social-media executives might have assumed that combating trolls could be bad for the bottom line. To be seen as silencing free speech can be a rallying cry for boycotts and cyber attacks. After all, traffic from trolls is still traffic; who wants to drive users away? However, we may be at an inflection point. It seems increasingly likely that not combating harassment might be even worse for business.
Today, both Reddit and Twitter are fighting to attract not only new users but advertisers, who have become wary of being associated with less-than-mainstream content. The most famous example: When big-name companies including Mercedes-Benz, Johnson & Johnson, Verizon, and JPMorgan discovered, early in 2017, that some of their YouTube ads were running next to neo-Nazi and jihadist videos, they all suspended or pulled advertising from Google. Most returned, but only after the company made changes, including doing a much better job of flagging offensive content by hiring more people and deploying “machine learning tools” (a form of artificial intelligence) to deal with the problem. Ad crisis averted, but the market spoke clearly: hate is bad for business. Google’s actions spoke clearly too: they showed that companies can indeed change, when sufficiently motivated.
As Twitter has sought to gain broader adoption, it too has tried to change—not always successfully. In 2015, Costolo stepped down as CEO of Twitter. He was replaced by Twitter co-founder Jack Dorsey, under whose leadership additional steps have been taken to reduce harassment. The network rolled out a new filter that would prevent users from seeing offensive or threatening content, and says it works harder to identify accounts that were obviously spawned to harass others. It’s added tools to mute and report hateful speech, tweaked search to hide abusive tweets, and says it is cracking down harder on repeat offenders. In 2017, Twitter said it was taking action against ten times more accounts than it had the year before (although, somehow it seems, my troll is not one of them).
Meanwhile, though Twitter and Reddit still have fairly large user bases, they have been left in the dust by the behemoth that is Facebook. While Facebook was inspired, in part, by the sexist “Hot or Not” ratings site, it has gone on to become a social-networking site that is, by comparison, friendly to a diverse range of users. In the process, Facebook has attracted over two billion users and, along with them, billions of advertising dollars.
Don’t get me wrong: Facebook isn’t perfect. The social network has a long way to go to combat online hate, both on the main site and on Instagram, which it owns. But cyber hate is a far bigger, more visible problem on Reddit and Twitter than it is on Facebook and Instagram. Because my Facebook account is private and I have to accept friends before they can interact with me, I almost never see hurtful comments. Even when I do post publicly, the responses are rarely vile. Perhaps that has something to do with the “real names” requirement. But it also has to do with the way the site has been architected to balance product and business concerns.
Facebook insiders say that Sheryl Sandberg, who joined the team in 2008, was critical in transforming the hugely successful start-up into an equally enormous business. Part of that involved developing policies to ensure that Facebook was a safe, hospitable place for both users and advertisers. Sandberg’s influence at Facebook goes some way to answering the question of whether major social-media sites might have benefited by having more diverse and inclusive leadership.
“Sheryl is, and I think Mark would agree, probably the most important decision he ever made,” former Facebook mobile director Molly Graham tells me. When Sandberg showed up in 2008, the social network, which then had just 66 million users, was having a dark year amid a storm of privacy issues and a nearly nonexistent business model.
“Facebook’s success was not an inevitability,” Graham says. Not only did Sandberg help compel immediate changes to company culture; she also took strong stands on the side of user privacy and protections. This was about the same time that Mark Zuckerberg was becoming obsessed with Twitter’s growing user base and was considering a series of changes that would have taken Facebook down a very different path.
In the months after Sandberg came aboard, the fledgling Twitter was dominating live conversation on the web and getting international traction. “He was trying to decide why Twitter was so successful,” a former Facebook employee tells me. “He got obsessed with openness and how much data they had and why are they owning real-time news? He fixated on this idea that people are actually willing to share more openly than we think they are.” Zuckerberg proposed several product tweaks designed to push Facebook users to be even more open in the hopes of driving engagement.
After Facebook introduced location tagging, for example, users could tag others at the same location, but those tagged users couldn’t untag themselves. I could have said that Mark Zuckerberg was in Las Vegas, but he wouldn’t have been able to say, “Uh, no, I’m actually in Palo Alto.” Zuckerberg wanted to extend the same rules to photos, such that if someone tagged another user in a photo, that second user couldn’t then untag himself or herself. Several other Facebook executives, including Sandberg and the head of product, Chris Cox, were against these changes, feeling they were unfriendly not only to users in general but to women especially. “The obvious example is you’re a woman, someone tags you in something offensive or not related to you, and you can’t untag yourself,” the former Facebook employee tells me. “It was a huge fight inside the company. Massive, teardown walls.” Ultimately, Zuckerberg’s photo proposal never came to fruition, and the ban on location untagging was lifted. “Before, when bad things happened, nobody had anybody to go to when weird decisions were made,” the former Facebook employee says. “Sheryl made every voice that was diverse stronger because now they had a place to go.”
To his credit, Zuckerberg—though he might have had a few harebrained ideas—was also willing to listen. “More than anyone I’ve ever met, he has this infinite capacity to learn and change. He has as many flaws as anybody, but I’ve never met anyone who is so open to change,” former Facebook CTO Bret Taylor tells me. The key, employees say, is that Zuckerberg made as much space for Sandberg as she made for him.
Both Zuckerberg and Sandberg cared very much about making sure Facebook was a free but safe community, but they approached difficult decisions on content differently. Zuckerberg was more likely to consider how issues might affect the broader platform, whereas Sandberg encouraged employees to think about the effect on individuals. “She came at it from more of an individual person’s perspective, the empathetic, how might this person feel? These are human beings, in their bedrooms, in their dorm rooms, reacting to something that causes them emotional trauma,” a former Facebook executive told me.
Sandberg oversaw the operations team as they refined a detailed set of policies that would guide Facebook’s stance on certain kinds of content on a massively complex scale—what gets left up, and what gets taken down—everything from Holocaust deniers to the Arab Spring to offensive satire. “There were a couple of content issues like rape jokes and violence toward women—any sort of content directed at women—it’s obviously a topic she cares a lot about,” the former Facebook executive said. “And she had a really big impact on helping the company and helping us get to better decisions on that stuff. Sheryl’s job was to push on us when she felt we didn’t get it right.”
Facebook still faces an uphill battle against offensive and disturbing content, a problem that is only getting bigger as the site becomes more influential. In 2017, it added three thousand people to the forty-five hundred already employed to moderate content worldwide. This came as Facebook was roiled by the launch of its video service, Facebook Live, which soon became home to broadcasts of real-time rapes, beatings, suicides, and incidents of police brutality. But the biggest reckoning came later that year, when Facebook revealed that Russians had bought thousands of ads on the social network in an attempt to cause political turmoil around the 2016 US election. Twitter and Google were quickly roped into the scandal. All three companies were called on to testify before Congress about their policies concerning not just political advertising but also fake accounts and fake news. Facebook announced that it was hiring an additional ten thousand people to handle safety and security. In an interview with Axios, Sandberg apologized to the American people but also reasserted Facebook’s commitment to free speech and its view of itself as a tech company, not a media company.
Facebook, Twitter, and Google, via YouTube, profit off the content that the public provides. This content includes everything from fake news to postings that might be hateful or abusive. But 2017 might well be seen as the turning point, the moment when these internet juggernauts began to take greater responsibility for the substance of the messages, ads, and news they facilitate. No doubt there are heated internal debates happening within Facebook at this very moment, about how to continue to build an online community where people feel both safe and free. The question remains how the company rises to that responsibility.
Culture change can’t be guaranteed by simply putting a woman at the top of the org chart. Changing the moral tone and community standards at a social media company can be particularly tricky because users feel ownership over the site—and rightly so, as they are producing the content. Case in point is the story of Ellen Pao’s second epic setback in Silicon Valley. In 2014, just months before she would lose her famous sex discrimination case against Kleiner, Pao was appointed interim CEO of Reddit, where she swiftly tried to crack down on harassment, only to resign, under pressure from users, after just eight months on the job.
Reddit, the so-called front page of the internet, was founded in 2005 by Steve Huffman and Alexis Ohanian, two young male entrepreneurs. They tell me they started the site as a place to have “authentic conversation.” If the optimism of that statement reminds you of Twitter’s free-speech philosophy, here’s something else the two sites share: user pseudonymity. Reddit quickly became a popular destination for discussing everything from puppies to politics, and today it draws more than 330 million monthly users, known as Redditors. But like Twitter, Reddit also became a haven for users spewing misogyny, racism, homophobia, and xenophobia, which made it difficult for the network to develop into a legitimate business. Ellen Pao agreed to take on the challenge of leading Reddit in the hope of making it—and the internet—“a better place for everyone.” At the same time, Ohanian, who had left the company in 2010, returned to help, with the title of executive chairman.
Pao (who endured plenty of up-close-and-personal attacks from trolls during her suit against Kleiner) made it her top priority to clean up the site, first committing to remove revenge porn—explicit photos, usually of ex-girlfriends, posted without the subjects’ consent. She also shut down several of Reddit’s nastiest sub-forums, including antitrans and antiblack communities as well as one called “Fat People Hate,” in which users mocked the overweight.
Redditors staged a full-on revolt, attacking Pao and claiming their right to free speech. While that argument might suggest the users were high-minded, their attack tactics showed otherwise. Trolls attempted to post private information about Pao and threatened her life. When one of the site’s popular employees—responsible for managing Reddit’s famous Ask Me Anything series—was fired, over 200,000 people signed a petition calling for Pao to be fired, and the site’s moderators took its most popular sections private—essentially holding Reddit hostage. Days later, Pao resigned.
“The trolls are winning,” Pao wrote in an op-ed in the Washington Post following her resignation. “The foundations of the Internet were laid on free expression, but the founders just did not understand how effective their creation would be for the coordination and amplification of harassing behavior. Or that the users who were the biggest bullies would be rewarded with attention for their behavior . . . No one has figured out the best place to draw the line between bad and ugly—or whether that line can support a viable business model . . . I’m rooting for the humans over the trolls. I know we can win.”
Steve Huffman, Reddit’s original co-founder, came back as the company’s CEO, while Ohanian remained executive chairman. I sat down with both of them in 2016 in the aftermath of Pao’s resignation. Did they believe they took online harassment seriously enough in the company’s early days? “Alexis and I grew up in the generation on the leading edge of the internet, as teenagers, as boys going through puberty,” Huffman told me. “That comes with a certain desensitization.” Translation: they simply got too used to it. After seeing what Pao had endured, however, they insisted they were more committed than ever to cleaning up Reddit for good.
One of Huffman’s first efforts was to continue what Pao had started. He strengthened the company’s anti-hate policies and shut down a few more nasty sub-forums including “Coontown” (a favorite for racists) and “Raping Women” (a favorite for, well, aspiring rapists). With each ban, users protested; some even turned on him. “I have said many times I thought the way [Pao] was treated on Reddit was despicable,” Huffman said. “The changes we made to r/all [shorthand for the Reddit home page] would have mitigated some of the harassment, and I regret we didn’t make those changes years ago.”
After she left the company, Pao told me, “It is very difficult to change a community, once it has gone in a certain direction. What I hope other internet companies realize is, when you have problems, they scale with your company, and it becomes very hard to revise the approach you have taken once the genie is out of the bottle.” However, there is hope.
In a new study, researchers at Georgia Tech found that the changes Pao started at Reddit made a meaningful difference. After Reddit banned forums like “Fat People Hate” and “Coontown,” more Redditors than expected left the site entirely. Those who stayed behaved better; their use of hate speech dropped by at least 80 percent. “Perhaps existing community norms and moderation policies within these other, well-established subreddits prevented the migrating users from repeating the same hateful behavior,” the researchers suggest. Some users undoubtedly flocked to other, more permissive sites (you can now find “Fat People Hate” and “Coontown” on a newer Reddit alternative called Voat). But the exercise shows that companies can change, if their leaders understand the problem and are willing to make hard choices.
But what about free speech? That’s a red herring, Pao says: “The purpose of free speech is to allow everybody to have a voice, to have these conversations, and if one group is pushing everybody else off you can have this free speech platform, but there are not many voices or opinions being represented.”
Pao strongly believes that if more women, especially women of color (she says they get the worst of the online harassment), had been involved at the creation of networks like Reddit and Twitter and Facebook from the start, the internet would be a very different place. “I think people would have invested more in tools, would have invested more in community management, would have had different rules, would have taken down more content faster and banned more people in a more consistent way.”
It’s an alternative reality we can only imagine.
Could the reforms that helped Reddit have an effect even in the troll haven of online gaming?
Benchmark-backed Riot Games, maker of one of the most popular multiplayer games in the world, League of Legends, has attempted to curb online harassment without alienating the game’s players.
In League of Legends—which boasts 100 million active players monthly—teams of gamers battle in an online arena using magical powers. As with many multiplayer online games, vitriolic, abusive, and misogynistic comments slung at other users have been a long-standing problem. In fact, research shows female gamers receive three times as many negative comments as male gamers. One player, who goes by the gaming pseudonym Jenny Haniver, keeps track of such comments on a blog; entries include “Shut that fucking bitch up” and “Shut your mouth girl before I put my dick in it!” Just another day in the life of a female gamer.
In 2012, however, the game makers behind League of Legends noticed that a significant number of players were quitting due to these obnoxious comments. In response, Riot Games put together a “player behavior” team including experts on psychology and neuroscience to examine the issue more closely.
Reporting on what they found, Wired’s Laura Hudson wrote, “If you think most online abuse is hurled by a small group of maladapted trolls, you’re wrong. Riot found that persistently negative players were only responsible for roughly 13 percent of the game’s bad behavior. The other 87 percent was coming from players whose presence, most of the time, seemed to be generally inoffensive or even positive. These gamers were lashing out only occasionally, in isolated incidents—but their outbursts often snowballed through the community. Banning the worst trolls wouldn’t be enough to clean up League of Legends, Riot’s player behavior team realized. Nothing less than community-wide reforms could succeed.”
Riot experimented with several ways to accomplish this, such as turning off the chat function by default but allowing players to turn it on when they wanted to. It saw not only a 30 percent decrease in negative chat but also a nearly 35 percent increase in positive chat. By creating hurdles to bad behavior that were minimal but real, it was able to change that behavior dramatically.
Riot also decided that when it kicked players off the game for negative remarks, it would give those users more details about exactly why they were being penalized. When those players returned, the company found, their incidents of bad behavior dropped significantly. Riot also instituted a system called the Tribunal that enlisted a jury of players to vote on reports of bad behavior. It turns out the Tribunal gave players a greater sense of responsibility to do their part in creating a more positive environment.
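To make the Tribunal’s mechanics concrete, here is a minimal sketch of how such a community-review queue might be wired together: jurors vote on a reported chat excerpt, a verdict requires a minimum number of votes and a supermajority, and any penalty arrives with the exact message that triggered it. The names and numbers here (ReviewCase, a five-juror minimum, a 70 percent threshold) are hypothetical illustrations, not Riot’s actual system.

```python
from dataclasses import dataclass


@dataclass
class ReviewCase:
    """A report of in-game chat, queued for review by a jury of players."""
    player_id: str
    chat_excerpt: str
    votes_punish: int = 0
    votes_pardon: int = 0

    def record_vote(self, punish: bool) -> None:
        """Register one juror's vote."""
        if punish:
            self.votes_punish += 1
        else:
            self.votes_pardon += 1

    def verdict(self, min_votes: int = 5, threshold: float = 0.7):
        """Return 'punish', 'pardon', or None while the jury is still out.

        min_votes and threshold are hypothetical tuning knobs, not
        Riot's real parameters.
        """
        total = self.votes_punish + self.votes_pardon
        if total < min_votes:
            return None
        return "punish" if self.votes_punish / total >= threshold else "pardon"


def penalty_notice(case: ReviewCase) -> str:
    """Attach the exact chat that triggered the penalty, since Riot found
    players who were told why they were punished reoffended less often."""
    return f'Your account was penalized for this chat: "{case.chat_excerpt}"'


# A jury of five votes on one reported player.
case = ReviewCase(player_id="player123", chat_excerpt="[reported message]")
for vote in (True, True, True, True, False):
    case.record_vote(punish=vote)
if case.verdict() == "punish":
    print(penalty_notice(case))
```

The design choice worth noticing is the explanation attached to the penalty; as described above, pairing punishment with the specific offending message is what Riot found changed behavior when players returned.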
Riot Games found that it could decrease negative behavior by enforcing certain consequences, and that it could also create more positive norms of behavior. This supports the theory that at least one reason online harassment has proliferated is that there have been no consequences. To put it another way: when you threaten to rape or murder someone in person, you typically face repercussions from the community; online, all too often, you don’t.
At least one League of Legends troll found out that being a model citizen has its rewards. A player who was banned from competitive play for an entire year because of his highly negative behavior later said, “It took Riot’s interjection for me to realize that I could be a positive influence, not just in League but with everything. I started to enjoy the game more, this time not at anyone’s expense.”
If any of the internet’s managers need more convincing, there’s this: League of Legends had 67 million active players per month when Riot unveiled its efforts to reshape online behavior. Two years later, there were 100 million. As Riot waged war on negativity, growth soared.
The Riot example reminds us that the past does not have to be prologue. The internet has created unparalleled opportunities for human beings to act out their most aggressive, sexually predatory, and emotionally hurtful impulses. Perhaps none of the founders knew that this would happen. But now they do, and it’s worth asking whether they are doing enough to reestablish, encourage, or enforce the codes of behavior that people accept in other environments.
They can’t control everything—there will always be those who flout social expectations—and we already know that communicating through screens and keyboards creates social distance. However, when these companies fail to do everything they can to create virtual environments that encourage respectful behavior, they should accept at least some responsibility for the impulsive, hostile, and antisocial outcomes.
How these companies decide to proceed really matters, because the internet revolution has only just begun. The future is coming, and there is little doubt we will soon be spending more and more of our lives in sophisticated virtual worlds, for both work and play. Those worlds will be dark and disturbing places—unless cyberspace is fundamentally reshaped.
In 2016, Jordan Belamire, another female gamer who plays under a pseudonym, was visiting her brother-in-law when she decided to try out a new VR game he owned, QuiVr, in which players shoot arrows at zombies and demons in a snowy, medieval world. Belamire put on the headset and played by herself for a while. But soon after she entered multiplayer mode, another gamer, identified as BigBro442, used his avatar to start groping her avatar’s chest. Players could hear each other’s voices (that was the only way BigBro442 could have known she was a woman, because all the players’ avatars were identical), so she said, “Stop!” He didn’t. She moved away, but when she did, BigBro442 started chasing her, then shoved his hand toward her virtual crotch and started rubbing. The virtual groping, Belamire later wrote in a Medium post, felt disturbingly real: “You’re not physically being touched . . . but it’s still scary as hell . . . As VR becomes increasingly real, how do we decide what crosses the line from an annoyance to an actual assault?”
Upon hearing of Belamire’s experience, QuiVr developers Aaron Stanton and Jonathan Schenker tweaked the game to include a new superpower, one that enables players to surround themselves with a personal bubble that shields them from any kind of virtual assault. But as the industry moves forward, not every developer may act so responsibly. In fact, there’s good reason to think that some won’t.
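Conceptually, a personal bubble like QuiVr’s is just a distance check applied before other players’ avatars are drawn. Here is a minimal sketch of the idea in Python; the radius, names, and positions are all hypothetical, and QuiVr’s actual implementation is not public and may work quite differently.

```python
import math

# Hypothetical personal-space radius, in meters; an illustration only.
BUBBLE_RADIUS = 1.5


def distance(a: tuple, b: tuple) -> float:
    """Euclidean distance between two (x, y, z) positions."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))


def visible_avatars(me: tuple, others: list, radius: float = BUBBLE_RADIUS) -> list:
    """Drop any avatar that has entered the player's personal bubble.

    If an avatar inside the bubble is never rendered, unwanted virtual
    contact never appears on the player's screen at all.
    """
    return [o for o in others if distance(me, o["position"]) > radius]


me = (0.0, 1.6, 0.0)
others = [
    {"name": "BigBro442", "position": (0.4, 1.5, 0.2)},        # inside the bubble
    {"name": "friendly_archer", "position": (4.0, 1.6, 3.0)},  # outside it
]
print([o["name"] for o in visible_avatars(me, others)])
# Prints: ['friendly_archer']
```

In this sketch the filter runs on the player’s own view, so anyone who crosses the boundary simply disappears from her screen; the shield travels with her rather than depending on the other player’s good behavior.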
Engineers are now working to make virtual reality even more real with the help of new technologies such as haptic feedback, which enables players to physically feel it when they are punched or kicked. Augmented reality promises to further integrate our real and online worlds. These are spaces in which we will increasingly live, work, and play, and they will have dramatic physical and psychological effects. The norms of behavior for these new virtual and augmented worlds are being laid down right now, so right now is the time for the makers of VR and AR technology to build respect and safety into their products.
Let me be clear: Our shared goal shouldn’t be to remove all bad behavior, hateful comments, or potentially disturbing imagery from the internet. That is not only impossible, it’s undesirable too, because it involves a slippery slope of deciding whose judgment will determine what’s objectionable. A more reasonable goal would be to create an online social landscape that roughly mirrors that of a healthy city. In most cities, you can, if you desire, find a dive bar, a strip club, or a rough neighborhood where rude or obnoxious behavior is allowed or even encouraged. But in most public spaces—the parks, restaurants, museums, and theaters—you should be fairly confident that neither you nor your children will be verbally assaulted or followed around by some creep shouting threats. To make the internet a version of that safe and welcoming city, women should be involved in building the next generation of products from the start, from new social networks to the video games of the VR future.
Brianna Wu tells me her heart races when she hears these stories of women being stalked, threatened, and groped in virtual worlds. For her, it feels like déjà vu, just on another platform. Running for Congress in Massachusetts in 2018, she has made these campaign promises: to help get more women into high-tech fields and to write stronger cybersecurity and antiharassment laws. Her goal: to make sure that women on the internet of the future don’t fall victim to the mistakes of its past.