On March 25, 2021, Dorsey, along with Facebook chief executive Mark Zuckerberg and Google CEO Sundar Pichai, dialed in to testify virtually in front of the U.S. House of Representatives’ Energy and Commerce Committee for a hearing on combating disinformation and online extremism. It was the fifth time Congress had summoned Twitter’s leader to such a hearing, and he was sick of it. The hearings felt theatrical, with lawmakers grandstanding on the culture war topic du jour.
Dorsey attended the hearing—still virtual because of COVID concerns—from his Sea Cliff mansion kitchen, calling in from an iPad stacked on top of a pile of books. His dishes, glassware, and a blockchain-based clock could be seen in the shot framed behind his buzzed head and graying beard.
He did his best to explain how Twitter had arrived at its content moderation decisions, trying to walk the lawmakers through the process. But they kept interrupting him, demanding that he give yes or no answers that left little room for nuance. Annoyed, Dorsey tweeted out a poll. “?,” he wrote, asking his followers to vote yes or no.
Word of the tweet quickly reached the representatives who were questioning him.
“Mr. Dorsey, what is winning, yes or no, on your Twitter account poll?” Representative Kathleen Rice, a Democrat from New York, asked, raising her eyebrows behind thick tortoiseshell glasses.
“Yes,” Dorsey replied, the corners of his lips poking upward from behind his beard as he stifled a smile.
“Hmm, your multitasking skills are quite impressive,” she replied.
But Dorsey wasn’t looking to multitask for much longer.
>>> No legislation ever came out of the hours-long grilling sessions. The press pilloried Dorsey and Twitter, regardless of what he said. And most lawmakers seemed like they wanted to ask only about specific content moderation decisions: Why was this taken down? Why was this left up? Why was my account being shadowbanned? They didn’t appear to care about the future of technology or solving the problems they complained about.
Dorsey firmly insisted that world leaders should not be subject to the same rules as other Twitter users and refused to allow their accounts to be suspended when they posted threats that would get a regular person booted off the platform. He also staunchly defended Alex Jones, the far-right podcaster and Sandy Hook school shooting truther, after he was banned from Facebook, YouTube, and Apple’s podcast network, saying that Jones hadn’t violated any of Twitter’s rules. Eventually, as pressure mounted and Jones used Twitter to tell supporters to prepare their “battle rifles” against the mainstream media and harass a reporter, Dorsey relented, banning the conspiracy theorist and his company, Infowars.
Vijaya Gadde took on Twitter’s critics with a firmer hand. She had joined Twitter in 2011, while Dorsey was still trying to find his way back into the company’s day-to-day operations. When he secured his second CEO term in 2015, harassment was becoming rampant on the platform. Under his watch, she crafted Twitter’s rule book, drafting rules with her team and securing Dorsey’s sign-off. To other Twitter executives, Gadde, like Leslie Berland, was one of the few trusted voices who could speak Dorsey’s language. She could convince him to adopt her suggestions, and translate his desires to lawyers and investors who might otherwise find him inscrutable.
Born in India, Gadde moved to the United States with her family when she was three. She grew up in Beaumont, a small city in southeastern Texas where her father had to ask permission from local Ku Klux Klan leaders before he went door-to-door to sell insurance. After high school, she studied law at New York University. She had worked for a decade at the Silicon Valley legal powerhouse Wilson Sonsini Goodrich & Rosati, where she focused on complex acquisitions and corporate governance matters. At Twitter she avoided the limelight, and when she was forced to speak publicly, her voice would sometimes crack. She was fiercely protective of her connection with Dorsey, and some colleagues complained that she didn’t give them enough insight into his movements and opinions.
In March 2020, Gadde realized that Twitter would need new rules banning misinformation about COVID so it could take down tweets that promoted ineffective treatments or spread conspiracy theories like the idea that 5G technology caused the virus. But there was a problem—Twitter had no framework for tackling misinformation and no staff responsible for policing it.
Unlike at other social media companies—which would simply swoop in to remove a post without explanation, and then retroactively justify the decision—at Twitter, there had to be a rule first, and the rule had to be public. Sometimes, Twitter published drafts of new rules and then gathered feedback over months—even years—before finalizing and enforcing them. It was this bureaucratic approach that let Jones stay on Twitter long after other platforms had removed him. Twitter had waited for Jones to break enough of its rules to merit a ban.
But COVID changed things. Gadde turned to Yoel Roth, one of her deputies who had helped identify and root out Russian disinformation accounts after the 2016 election.
Roth, an earnest, baby-faced man, had spent most of his career at Twitter, fighting misinformation. He joined the company in 2015, soon after completing a PhD focused on online communication and communities, and had written the closest thing Twitter had to a misinformation policy not long before COVID struck. The new rules prohibited users from sharing photos and videos that had been altered by artificial intelligence. The imagery, known as “deepfakes,” included fake videos of politicians making fictional statements and artificial pornography that placed someone’s face on a different person’s body, and was becoming more common as AI and video editing software grew more widely available and easier to use. After consulting with his team, Roth decided that people who shared deepfakes on Twitter would be banned in the most egregious cases, but if users created them to parody someone, Twitter would simply label the tweet, warning viewers it contained manipulated media.
Twitter executives were taken with the idea of labeling, no one more so than Dorsey, who was fed up with the pressure from Congress and the public over misinformation. His company had no business becoming an arbiter of truth, he thought, and it was in an untenable position in a polarized political climate where the idea of truth itself was contested. Labels allowed Twitter to avoid taking down anyone’s speech wholesale. Every day felt like another fire drill, and Dorsey was becoming inured to it.
The approach of labeling some risky tweets let Twitter protect its reputation—it was doing something about misinformation—without needing to fact-check every single questionable post or overstepping into censorship of unverified content. Dorsey worried about censorship and wanted to err on the side of leaving tweets online.
Then the pandemic struck. Usage of Twitter spiked as people sheltered at home around the globe, totally reliant on their phones for contact with the world, and Twitter became a hub for doctors to share tips on avoiding the virus, for scientists to write long thread breakdowns of the latest research, and for an army of armchair experts to swap theories about the virus’s origin, debate the efficacy of masks, or make their own predictions about the length and severity of the pandemic.
Twitter flooded with misinformation about COVID, and its team of content moderators was overwhelmed. The only tools they had were deleting tweets and suspending users, and they tried to reserve those punishments for the most dangerous posts—like ones encouraging fake “cures” that were, in fact, harmful.
Among the do-your-own-research experts on Twitter was Musk. On March 6, he tweeted, “the coronavirus panic is dumb.” Two weeks later, on a day when the U.S. reported 2,000 known COVID cases, he posted that the country would be “close to zero new cases” by the end of April and that “kids are essentially immune” from the disease. (Children were not immune from the virus, and the U.S. averaged more than 20,000 daily COVID cases by the end of April.) Armed with seemingly little information aside from what he read in his Twitter timeline and the random links he clicked on, Musk argued with virologists and doctors on the site, cast doubt on the country’s virus positivity rates, and advocated for hydroxychloroquine, an anti-malarial drug that had no verifiable impact on the disease.
In some ways, Musk was simply mimicking the means by which he learned about rocketry or automobile development, fields in which he had little prior experience. He consumed publicly available information, made quick judgments, and posited in his typical contrarian fashion that he could do better than the experts. With rockets and cars, he had been wildly successful. So why couldn’t he be right again, about COVID?
Musk also had a financial incentive to doubt the severity of the virus. A pandemic would destabilize the economy and, more important, the operations of Tesla and SpaceX. The people who worked on his manufacturing lines would be forced to stay home. Musk had always required his employees to show up for in-person collaboration and hands-on work, and COVID lockdowns would prevent that. On March 13, Musk sent out a company-wide email encouraging SpaceX’s workers to continue coming in to the office because he had seen data that he claimed showed the illness “is *not* within the top 100 health risks in the United States.
“As a basis for comparison, the risk of death from C19 is *vastly* less than the risk of death from driving your car home,” he wrote. “There are about 36 thousand automotive deaths [per year], as compared to 36 so far this year for C19.”
While SpaceX stayed open—declared an essential business because of its government contracts—Tesla was obligated to follow California’s stay-at-home orders, sending Musk into a rage. The shelter-in-place orders meant his workers weren’t in the factory. Musk became fed up, tweeting at 11:14 p.m. on April 28, “FREE AMERICA NOW.” He then replied to a far-right activist, who had declared that the scariest thing about the pandemic was how easily Americans “bow down & give up their blood bought freedom to corrupt politicians,” and called her assessment “true.”
Musk continued his tirade on a Tesla earnings call the next day, when he admitted the company was worried about not being able to quickly resume production of cars in Fremont. “The extension of the shelter-in-place, or frankly I would call it ‘forcibly imprisoning people in their homes against all their constitutional rights’—that’s my opinion—and breaking people’s freedoms in ways that are horrible and wrong and not why people came to America or built this country,” he said. “What the fuck? Excuse me. Outrage. It’s an outrage.
“If somebody wants to stay in their house, that’s great,” Musk continued. “They should be allowed to stay in their house and they should not be compelled to leave. But to say they cannot leave their house and they will be arrested if they do, this is fascist. This is not democratic. This is not freedom. Give people back their goddamn freedom.”
>>> At Twitter, the tsunami of tweets about the pandemic was unlike anything Gadde and her team had dealt with before. New details about the virus emerged daily, so a tweet posted today could be misinformation tomorrow.
Roth appealed to Dorsey. Twitter could use the labels it had built for deepfakes to add warnings to misleading tweets about the virus, he said. There was only one problem: the message cautioning viewers about manipulated media was hard-coded into the label Twitter had built, so it could only be used for one thing. Dorsey declared a “code red” and demanded the company’s product and engineering teams build a label that could include any kind of warning Twitter’s content moderators desired. Roth seized the new tool and quickly began adding labels to COVID misinformation.
The company did remove thousands of tweets under a policy that assessed “demonstrably false or potentially misleading content” that had “the highest risk of causing harm” but left up hundreds of thousands of others, including those from Musk, which questioned the official response to the pandemic but didn’t cross Twitter’s line into harmful misinformation.
With no end to the pandemic in sight, Dorsey doubled down on the label strategy. Heading into the 2020 U.S. presidential election, the company was under fire to do something about President Trump’s endless cascade of reckless tweets. It had a long-standing policy of ignoring rules violations if they came from world leaders, arguing that the public interest outweighed potential harm. But maybe Twitter didn’t need to delete messages from Trump, Roth thought. Maybe his tweets could be labeled, too. Dorsey gave the go-ahead.
Trump had started to rail against the electoral process, warning of fraudulent results and complaining about mail-in ballots, which were expected to fall in favor of his opponent, Joe Biden.
Donald J. Trump
@realDonaldTrump
There is NO WAY (ZERO!) that Mail-In Ballots will be anything less than substantially fraudulent. Mail boxes will be robbed, ballots will be forged & even illegally printed out & fraudulently signed. The Governor of California is sending Ballots to millions of people, anyone….
8:17 AM · May 26, 2020

Donald J. Trump
@realDonaldTrump
…. living in the state, no matter who they are or how they got there, will get one. That will be followed up with professionals telling all of these people, many of whom have never even thought of voting before, how, and for whom, to vote. This will be a Rigged Election. No way!
The intervention Twitter undertook was small—just a link underneath Trump’s tweet that suggested users “get the facts about mail-in ballots” by reading a CNN article. The reaction, however, was massive.
Republicans decried “censorship” and Trump blasted Twitter in more tweets later that day. The company was “interfering” in the election and “completely stifling FREE SPEECH,” he ranted. While Dorsey took most of the heat, Trump’s supporters also started to target other employees. Kellyanne Conway, one of the president’s most senior advisers, trained her ire on Roth, after people started to dig into his old tweets looking for any proof of left-wing bias among Twitter’s employees.
They quickly found what they were looking for. “I’m just saying we fly over those states that voted for a racist tangerine for a reason,” Roth had tweeted in November 2016, when he was still a junior employee. In a 2017 post, he wrote there were “ACTUAL NAZIS IN THE WHITE HOUSE.” The tweets were screenshotted and spread across the site, making Roth, one of the main people at Twitter who helped handle abuse campaigns, the direct target of one. In interviews, Conway decried Roth as a censor, and Trump piled on by posing for photos with a copy of the New York Post, which featured a story about Roth and his supposed censorship on the cover.
Twitter, of course, had seen abuse campaigns before, but few employees besides Dorsey had been targeted at this scale: Roth was suddenly an ant under the beam of light through a magnifying glass. As Twitter did damage control, Roth went dark. Twitter assigned a security guard to sit outside his home as death threats streamed in online. Dorsey also attempted to step in. “There is someone ultimately accountable for our actions as a company, and that’s me. Please leave our employees out of this,” he tweeted on May 27.
Twitter’s labels emboldened Trump to tweet more, leading Gadde and her team to build a process for labeling Trump’s tweets. Only a few people could make the final call to slap a label on the president’s posts, including Gadde herself; Del Harvey, the head of safety; and Roth. When Twitter’s content moderators flagged a tweet from Trump that seemed to violate Twitter’s rules, Harvey and Roth would receive an urgent page. If they didn’t respond, Gadde would receive one next.
The executives lived in the Bay Area, near Twitter’s headquarters, and were routinely jolted awake in the wee hours by another alarming message from Trump, who habitually tweeted from his bedroom in the White House before beginning his day. The process quickly shifted from a monumental one of historical consequence—a Twitter employee deciding to curtail statements by the leader of the free world—to a minor irritation squeezed in between morning alarms, coffee, and getting kids ready for school. Sean Edgett, the company’s general counsel, was added to pager duty for Trump’s wild tweets, but Dorsey was purposely left off the list. His global travel and routine absences sometimes made him tough to reach.
To some of his employees, Dorsey seemed increasingly disillusioned with Twitter and its daily free-speech fights. Over time, Twitter had created a strike policy that booted users off the platform after they received several warning labels. Dorsey began to wonder if Gadde was going too far by banning people, making Twitter into a censor again—the thing he had hoped labels would prevent. Dorsey also hated the requests from government officials, including the FBI and White House, who flagged tweets to be taken down that they believed were in violation of the company’s policies. It felt like the company was doing the bidding of too many outsiders and deviating from its mission to be an open public square.
Dorsey’s disagreements with his top policy deputy burst into public in October, just weeks before the U.S. presidential election. Ahead of the November vote, the FBI had repeatedly warned Twitter and other Silicon Valley companies to be prepared for hack-and-leak campaigns similar to those seen in 2016, when Hillary Clinton’s emails had been published in part on Twitter by a hacker.
As Gadde’s team received more warnings from the FBI, they steeled themselves for something similar. Two years earlier, as the company took stock of Russian meddling on its platform, it had instituted a new rule forbidding users from sharing information gleaned by hacking. If hacked materials surfaced again, Twitter would block the ill-gotten documents or information from appearing on the platform and freeze any accounts that tried to share them.
It didn’t take long for that policy to be put to the test. On October 14, 2020, the New York Post published an explosive article featuring emails extracted from a laptop belonging to Hunter Biden, presidential candidate Joe Biden’s son. The messages showed the younger Biden had brokered a meeting between his father and a Ukrainian executive he worked with, in contradiction to Joe Biden’s claim that he never involved himself in his son’s business dealings. There were also nude images of Hunter, a violation of Twitter’s policy forbidding explicit pictures shared without the subject’s consent, and images of him using drugs.
The provenance of the emails was murky. The Post reported that Hunter’s laptop had been turned over by a repair shop.
To Twitter’s executives, the story seemed to have all the telltale signs of a hack-and-leak: embarrassing emails leaked from an unknown source, just in time to tip the scales of a presidential election. Gadde quickly made the decision to block the link to the article from being shared on Twitter. She also greenlit the suspension of the New York Post’s official Twitter account, a move that blocked the publication from sharing any other stories until it deleted its tweet that featured the Hunter Biden article.
The backlash was immediate and furious. Republican lawmakers and Trump campaign officials accused Twitter of censorship, and even some Democrats questioned whether Twitter, by cracking down on a mainstream media outlet, had overstepped. Dorsey himself objected. “Our communication around our actions on the @nypost article was not great,” he wrote. “And blocking URL sharing via tweet or DM with zero context as to why we’re blocking: unacceptable.”
His statement seemed confusing. Who was Dorsey criticizing? Internally, however, employees knew where the message was being directed. While Dorsey had been a proponent of empowering the executives he put into positions of power to make their own decisions, he was now effectively admonishing Gadde. She took his tweet personally, particularly because he was more direct and harsh in what he said publicly than the criticism he shared in private, she told people close to her.
On October 16, Twitter narrowed the hacked materials rule so that it applied only to hackers themselves sharing information they had stolen, and began adding contextual labels to other tweets about the Biden emails, warning viewers that they came from an unknown source. The Post would once again be able to share its links, but the controversy continued for weeks.
>>> On election day, November 3, Twitter had a team tracking misinformation and voting results around the clock. As it became clear that Biden was going to beat Trump, it monitored attempts to undermine trust in the electoral process. The company labeled some 300,000 tweets over a two-week period covering the election and its aftermath. Nearly 40 percent of Trump’s election tweets in the four days after the election received labels, warning that their content “might be misleading about an election or other civic process.”
Dorsey had gone along with the labeling or the outright removal of misleading information about COVID in the pandemic’s early days. But with the advent of newly created vaccines from major pharmaceutical companies like Johnson & Johnson and Pfizer, he balked at the idea of taking action against tweets that questioned the efficacy of the shots. He had given Gadde free rein to make decisions previously, but by the spring, he began micromanaging the moderation policy process around vaccine and pandemic content.
He demanded to be added to an internal email thread that sent moderators an automated message whenever a tweet was removed for violating the COVID misinformation policy, and often forwarded those messages to Gadde, second-guessing the moderators’ decisions. Twitter’s CEO later objected to labels applied to the account of Alex Berenson, a former New York Times reporter who became a vocal opponent of the vaccines. Dorsey also questioned the removal of tweets that discussed vaccination, bickering over whether they truly violated Twitter’s policy against spreading misinformation. Dorsey’s emails, often just a few words long, landed in Gadde’s inbox on a weekly basis, a steady source of paper cuts to her relationship with her boss.
She and others couldn’t understand why he had suddenly involved himself. Still, Gadde tried not to make decisions about COVID policy without speaking to Dorsey. “I need to talk to Jack about this, and we’ll hear from him when we hear from him,” she would tell her team. Plans about how to handle incendiary tweets about the virus often stagnated.
Some employees speculated that Dorsey was skeptical of the vaccines. Over time, Dorsey’s closest staffers would assume he was unvaccinated, although because Twitter’s offices remained closed and Dorsey did not come in, they never knew for sure.
>>> It was around noon on January 6, 2021, in the middle of the South Pacific, when Dorsey got the call from Gadde, recommending he put the president of the United States in time-out. He was on Tetiaroa, a pristine atoll in French Polynesia, in a luxury resort called the Brando. It was from here, on volcanic-formed land, shielded from the wild open ocean by a natural fortification of living coral, that Dorsey had decided to run his two businesses as most of the world remained closed during the pandemic.
Most employees had no idea where Dorsey was and some executives tried to keep his whereabouts secret. It would be horrible for morale if workers—many of whom were cooped up working from their beds or couches—knew their leader was on an island with thirty-three private villas, kayak tours, and a vast spa with personalized massage treatments. A few months earlier, Kim Kardashian had spent $1 million to rent the whole resort for her fortieth birthday, drawing widespread condemnation for partying in the pandemic.
Working from the Brando, three hours behind San Francisco headquarters, Twitter’s chief executive dialed into a Twitter leadership meeting to go over the company’s goals for the year. It was at that moment that executives started to get alerts about a large group of Trump supporters amassing in Washington, DC. They had been drawn there by the president with the promise of an epic rally. Trump had tweeted repeatedly that his vice president, Mike Pence, could overturn the outcome of the election.
Dorsey had spent more than four years resisting the idea of booting Trump off Twitter, digging into his position the more he was criticized for it. Twitter’s leaders decided to finish the goals meeting, though some kept one eye on their platform and developing news as Trump’s supporters made their way to the U.S. Capitol.
To Twitter’s trust and safety employees, the potential for violence was glaring. Harvey and Roth had warned for months that Trump would use his account to stir up trouble and argued for his removal from the platform. A few weeks after the election in November, Roth had drafted a document he titled “Post-Election Protests and Calls for Violence” that outlined the kind of activity Twitter might see in the wake of Trump’s loss, and how the company should respond.
If there were tweets calling for protests based on misinformation—that the election had been rigged or ballots had been altered—those tweets would be labeled. Calls for violence from extremist groups would be removed. Roth also wrote code that scanned Twitter for the coded phrases Trump had used to rile up supporters, like “locked and loaded.” Tweets that contained the phrase would be triaged by the company’s content moderation team within thirty minutes, a rapid response in case they contained specific election-related threats.
As the riot unfolded, however, Roth found there was little he could do. He kept the television in his home office on mute, aghast as the mob flooded through the Capitol, and tried to focus on Twitter. But few in the crowd were tweeting about their exploits in real time. Most of the posts about the riot were coming instead from journalists, news outlets, or terrified online observers.
Roth also watched @RealDonaldTrump. After giving a speech near the White House to his supporters, the president retreated to his residence to egg them on from there as they proceeded to the Capitol. “Mike Pence didn’t have the courage to do what should have been done,” he tweeted, raging that his second-in-command had not stopped the certification of the vote and that the election was being stolen from him. His “sacred landslide election victory” had been “unceremoniously & viciously stripped away,” he wrote.
Roth and his manager, Harvey, agreed that it was time to suspend Trump. He was clearly undeterred by the hundreds of warning labels that had been added to his tweets. His post about Pence posed a threat to the vice president’s safety and warranted a permanent ban, Roth argued. He and Harvey drafted an extensive letter outlining their rationale and presented it to Gadde in a video call.
But the idea of banning a sitting president was too much for Gadde, and she worried Dorsey wouldn’t approve even if she went along with the suggestion. For regular users, there was always an escalatory process before they got banned. They would get warnings, and then be blocked from tweeting for a “cooling off” period. If they continued to break the rules after that, they were finally suspended. “We haven’t done that yet,” Gadde argued during the call. “We haven’t timed him out yet.”
After Gadde signed off, Roth and Harvey sat in silence for a moment, just staring at each other. There was nothing to say.
Roth and Harvey rewrote the recommendation for Gadde, asking for Trump to get a temporary time-out instead. “Any further policy violations will result in suspension,” Roth wrote. He stared at the words on his screen, then went back and bolded, italicized, and underlined them to hammer home his point:
“Any further policy violations will result in suspension.”
As Trump’s supporters ransacked the Capitol, he released a video on Twitter, repeating his claims that he had been cheated out of the presidency. Each of his tweets seemed to manifest more violence in the real world. As long as he had his favorite megaphone, there wasn’t an end in sight. The safety executives expanded their document to include Trump’s video.
Gadde took the document to Twitter’s most senior executives, who threw their support behind it. Beykpour took notice of Roth’s emphatic text. “Does this mean we would be suspending him for anything else?” he asked. “Good.”
“This is the right thing to do,” Matt Derella, the company’s chief customer officer, commented at the top of the document. Roth watched as the executives’ comments appeared in the margins—but he didn’t see anything from Dorsey.
From his villa, Dorsey watched the events unfold on Twitter and joined various emergency calls that sought to deal with the madness. Gadde called him to tell him the news. The steady refrain of false claims—and the violence that stemmed from them—was finally enough. On the phone, Gadde told Dorsey that she had decided to lock Trump’s account for twelve hours. From his retreat halfway around the world, Dorsey assented.
Dorsey sent a company-wide email to his employees, saying it was important for Twitter to follow its own rules and allow any user, even Trump, to return after a temporary suspension.
Roth felt shell-shocked and confided in Gadde. “I feel like I have blood on my hands,” he told her.
“I think you’re taking too much of this on yourself,” Gadde responded. “It’s not just your decisions, it’s not blood on your hands. We are making these decisions together.”
Employees were livid. Their leaders had not done enough. Many had long believed that Trump had no place on the platform. They watched as their peer companies, like Facebook, unilaterally banned the president. Meanwhile, Twitter was giving him back his favorite megaphone after a night’s sleep.
More than three hundred employees signed a letter directed to Dorsey and other company leaders. “Despite our efforts to serve the public conversation, as Trump’s megaphone, we helped fuel the deadly events of January 6th,” read the letter, which was delivered to executives on the morning of January 8. “We must learn from our mistakes in order to avoid causing future harm. We play an unprecedented role in civil society and the world’s eyes are upon us. Our decisions this week will cement our place in history, for better or worse.”
The letter called for Trump to be immediately suspended. But some senior engineers at the company went even further. They began discussing what actions they would take if Dorsey refused, and decided they would stop working by the end of January if Trump remained on the platform—either holding a strike or resigning en masse.
Trump returned from his twelve-hour time-out unrepentant. He declared he would not attend Biden’s inauguration. In another tweet on January 8, he referred to his supporters as “great American Patriots” whose voices would be heard long into the future. “They will not be disrespected or treated unfairly in any way, shape or form!!!” he wrote.
Harvey believed the tweets were an invitation for violence at the inauguration. Gadde and Edgett were resistant—the messages were somewhat coded, and could be read as benign. Several lawyers at the company leaned on Edgett to change his mind, while Harvey showed Gadde the responses to Trump’s tweets. His supporters were interpreting them exactly as Harvey feared—as an incitement to attack.
That afternoon, Harvey and Roth sat down again to write a new recommendation: Trump had to go. As they drafted it, they stayed on the phone and argued their position with Gadde, who eventually caved. As they continued to work, Gadde signed off to call Dorsey and Twitter’s board.
The board agreed with Gadde’s recommendation. But Dorsey had one demand—if Trump was going to be removed, Twitter had to publish its reasoning publicly for the world to see.
Suddenly, the document that Harvey and Roth were throwing together had to serve as the official explanation for the historic decision to silence America’s leader, cutting him off from his 88 million followers.
They quickly cleaned up the document for publication. World leaders “are not above our rules entirely and cannot use Twitter to incite violence,” the two policy executives wrote. Trump’s new tweets “are likely to inspire others to replicate the violent acts that took place on January 6, 2021, and there are multiple indicators that they are being received and understood as encouragement to do so.”
Harvey had no time to contemplate the decision. She was already worried about the consequences for Twitter itself. Removing an account like Trump's, which occupied a large chunk of Twitter's social graph—the web of following and follower connections between users—could easily crash the entire service. Freezing the account was only part of the process. It also had to be removed from other users' follower and following lists, and from the thousands of block lists of people who had gotten fed up with seeing Trump in their feeds. She called several engineers who worked on the social graph and warned them that Trump was about to go, and that he might take the entire site with him.
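The cleanup Harvey was worried about can be sketched in miniature. This is a purely illustrative toy, not Twitter's actual code: it assumes an in-memory graph of follower, following, and block-list sets, and shows why a heavily connected account touches so many other records when it is purged.

```python
# Hypothetical sketch of what a permanent suspension implies for the
# social graph. All names and data structures here are illustrative.

def perm_suspend(user, followers, following, block_lists):
    """Remove `user` from every structure that references them.

    followers:   dict mapping account -> set of accounts that follow it
    following:   dict mapping account -> set of accounts it follows
    block_lists: dict mapping account -> set of accounts it has blocked
    """
    # Every fan who followed the user must have that edge removed.
    for fan in followers.get(user, set()):
        following[fan].discard(user)
    # Every account the user followed loses the user as a follower.
    for target in following.get(user, set()):
        followers[target].discard(user)
    # Block lists referencing the user are cleaned up too.
    for blocks in block_lists.values():
        blocks.discard(user)
    # Finally, drop the user's own entries from the graph.
    followers.pop(user, None)
    following.pop(user, None)
    block_lists.pop(user, None)
```

At Twitter's scale, an account with tens of millions of followers means that many edge deletions fanning out across sharded storage, which is why Harvey feared the purge could take the site down with it.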
Having faced the backlash of Trump supporters once before, Roth was worried about safety. He moved the document he and Harvey had written to a burner account and took out the names of all the other employees who had worked on the draft. If it were leaked, the employees who had made the decision to ban Trump would be anonymous.
Then, just after 3:00 p.m., it was time.
On the internal dashboard that governed all Twitter accounts, there was a big red button: “PERM-SUSPEND.” If clicked, it would permanently terminate a user’s account, erasing their social graph. Harvey decided she would be the one to click it.
Roth stood up from his desk and walked upstairs to his living room, where his husband sat watching the news. “Something is about to happen,” Roth said. Moments later, the news broke: @RealDonaldTrump was gone.
The days of collective deliberation didn’t stop Dorsey from once again second-guessing Gadde and her team in public. Dorsey painted the Trump decision as one he wasn’t responsible for and didn’t wholeheartedly defend. “I believe this was the right decision for Twitter. We faced an extraordinary and untenable circumstance, forcing us to focus all of our actions on public safety,” he tweeted on January 13. “That said, having to ban an account has real and significant ramifications. While there are clear and obvious exceptions, I feel a ban is a failure of ours ultimately to promote healthy conversation.”
In the aftermath of the Trump ban, Harvey grappled with anxiety. She had seen the underbelly of the internet, but the Capitol riot weighed on her more heavily as some commentators rushed to trivialize the event. She believed a political game was being played with people’s lives, one that glossed over a societal failure. She scheduled a leave of absence from Twitter for that fall. In June, when the Nigerian president Muhammadu Buhari tweeted a threat to regional separatists, invoking the Nigerian Civil War and promising to treat his opponents “in the language they understand,” Harvey acted quickly, temporarily suspending Buhari’s account and requiring him to delete the call for violence.
When Gadde realized what Harvey had done, she was frustrated that she hadn’t been consulted. The Nigerian government responded by banning Twitter in the country, an embargo that would take months to resolve and cause tension between Gadde and Harvey. Trump cheered the decision in a statement, saying, “Congratulations to the country of Nigeria, who just banned Twitter because they banned their President.” In October, just one day shy of her thirteenth anniversary working at Twitter, Harvey resigned.
To some close to him, the Trump ban also broke Dorsey. It was the red line he had spent nearly half a decade promising not to cross. Twitter wasn’t an idyllic, free, and open town square. “It was like he was the kid who built the robot, which then destroyed the world,” said one former Twitter executive.