FACEBOOK’S “M TEAM” consists of around forty of its top leaders, the people who make its biggest decisions and are responsible for executing them. They gather a few times a year in a large room on the Classic campus. The July 2018 meeting was one of the first of these held after Cambridge Analytica.
It started out as usual. In M-team meetings, the executives all do a brief check-in, saying what’s on their minds both in business and in life. It can get pretty emotional: my kid’s sick . . . my marriage ended . . . When it was his turn to talk—he always goes last—Zuckerberg made a startling announcement.
He had been reading a book by venture capitalist Ben Horowitz, the partner of Facebook board member Marc Andreessen. Horowitz defined two kinds of CEOs: wartime and peacetime. A good CEO, he wrote, has to interpret circumstances in a given time and decide which to be. “In wartime, a company is fending off an imminent existential threat,” he writes. Wartime CEOs must be ruthless in confronting those threats.
Since Facebook had been under siege for the past two years, this made a big impression on Zuckerberg. In earlier times, Zuckerberg told the group, he had the luxury of being a peacetime CEO. (This was a debatable self-definition for the Cicero-quoting leader who went into lockdown mode to thwart perceived challenges from Google, Snapchat, and Twitter.) He told the group to hereafter consider him a wartime CEO.
He emphasized one shift in particular. Horowitz put it this way: “Peacetime CEO works to minimize conflict. Wartime CEO neither indulges consensus building nor tolerates disagreements.” Zuckerberg told his management team that as a wartime CEO he was just going to have to tell people what to do.
Some in the room thought that he was saying from that point their role was to shut up and obey his directives. Zuckerberg resisted that characterization when I later brought it up to him. “I basically said to people this is the mode that I think we’re in,” he says of the declaration. “We have to move quickly to make the decisions without the process of bringing everyone along as much as you would typically expect or like. I believe that this is how it needs to be to make the progress that we need right now.”
I wondered whether he found the role of wartime CEO more stressful or more fun.
A Zuck silence. Eye of Sauron.
“You’ve known me for a long time,” he finally said. “I don’t optimize for fun.”
Zuckerberg’s internal announcement reflected the huge amount of thought he’d been devoting to how Facebook might negotiate its woes now that he and Sandberg had turned on the apology fire hose. Zuckerberg felt that the company had backed up its mea culpas with a flood of new products and systems that addressed the flaws and vulnerabilities that had provided opportunities for misinformation, election tampering, and violations of data privacy. Facebook had made changes in time for the 2017 French election and had avoided some of the worst effects that the US and the Philippines had seen earlier. It was embarking on a labor-intensive strategy to get through the 2018 midterms in America.
But though Zuckerberg had declared that his primary job was now fixing Facebook, he wasn’t going to curb his ambitions in a unilateral surrender to competitors. Facebook had to move forward as well.
On the eve of the 2018 F8 that May, I spoke with Zuckerberg about both what new products he’d be announcing and the thought process behind announcing anything. He knew he’d be obligated to show penance and talk about fixing things. But Facebook also had to keep introducing new products. “On the one hand, the responsibilities around keeping people safe—the election integrity, fake news, data privacy, and all those issues—are just really key,” he said. “And then on the other hand, we also have a responsibility to our community to keep building the experiences that people expect from us.”
With an engineer’s logic, his keynote speech for the conference would begin with fifteen minutes of trust-building, and then an equivalent amount of time on new products. The “going forward” part.
Facebook was proceeding with caution on some fronts. It had developed a product called Portal, a display screen with a camera and microphone to enable video connections with friends and family. But Facebook’s cooler heads realized that introducing what could be seen as a home surveillance device might not be a smart move only weeks after the Cambridge Analytica disaster. Zuckerberg did have another product to announce at the event, though: called Dating, it would create an entirely new, and very personal, dossier about Facebook’s users.
I asked him at the time if he was sure that it was a good idea to introduce something involving such intimate information at what appeared to be the nadir of people’s trust in the company.
He replied that Facebook focused on meaningful relationships—what could be more meaningful than people you date? He also ran through the privacy protections in the new feature. The conversation drifted elsewhere, but then he abruptly returned to the concern I’d expressed about the Dating product. “Obviously you’re asking this question,” he said, “but do you think that this is a bad time to be talking about this?”
Well, yes, I said.
“This is the threading of the needle that we talked about up front,” he said. “I’m curious if you think that moving forward on new products will feel like we’re not taking the other stuff as seriously as we need to. Because my top priority is making sure that we convey that we are taking these things seriously.”
He wasn’t going to kid himself. Winning back people’s trust was a long process. It would probably take three years. But he felt that the rebuild had started.
Despite bad press, Facebook Dating rolled out in several smaller markets that year and hit the United States in September 2019.
And Facebook began selling its Portal before the end of 2018. Reviewers thought it was a good product but advised against buying it because, they said, no one could trust Facebook.
ZUCKERBERG WAS RIGHT to worry about the reaction from users and developers. In 2018, the year he determined to win back trust, Facebook’s trustworthiness had tanked. Even as Facebook worked to improve its products, a relentless stream of headlines kept dragging down its reputation. First came the revelations that Facebook’s cutback on data-gathering—the one that supposedly took full effect after the one-year grace period that started in 2014—had not been uniformly applied. Some major companies, like Airbnb, Netflix, and Lyft, were white-listed, allowing them to continue accessing information. (Also on the white list was Hot or Not, the inspiration for Zuckerberg’s 2003 folly, Facemash.) Especially embarrassing: some of these revelations came out in a lawsuit from a company, Six4Three, that actually was blocked from receiving user data. Facebook had quite sensibly denied access to Six4Three’s app, Pikinis, which allowed users to find posts of friends in bathing suits and other forms of undress. Facebook’s reward was litigation that would wind up revealing a trove of damaging emails.
That was only one example of how things were spilling out. Every day dozens of reporters, working for top-tier newspapers or philanthropically funded investigative units, woke up in the morning and began digging up dirt on Facebook. It wasn’t hard.
Sometimes exposing Facebook could be as simple as using its ad product to discover a shocking flaw, like targeting an ad toward “Jew haters.” That was one of many questionable categories algorithmically generated by Facebook’s self-service ad product when one typed in the word “Jew” to start the process. Investigative reporters at ProPublica found 2,274 potential users identified by Facebook as fitting that category, one of more than 26,000 provided by Facebook, which apparently never vetted the list itself. “I know exactly how this happened,” says Antonio García Martínez, a former ad product manager who helped launch this feature. “Facebook feeds a bunch of user data into what pages you’ve liked, profile data, whatever. I used to call it Project Chorizo, like the sausage maker, because that’s what it was. You put in all this data, and it would pop out topics.” Essentially, to work at scale, Facebook had built a system where an AI with little understanding of what was offensive to humans was empowered to create those categories like . . . Jew haters. Facebook later removed the categories. “We know we have more work to do,” said Rob Leathern, a Facebook executive working on ad integrity.
Other scandals stemmed from Facebook’s own panicky attempts to address its precarious reputation. In November 2018, The New York Times revealed that Facebook’s policy team had hired a firm called Definers Public Affairs to impugn its competitors—and even to cast aspersions on financier George Soros, who had criticized Facebook in a Davos speech. Adding pungency to the claim was that Soros was also a favorite target of anti-Semitic hate speech, including attacks on Facebook. (It is exceedingly odd that a company headed by Jewish executives frequently found itself in situations that involved alleged anti-Semitism.)
Policy head Elliot Schrage, whose own family suffered losses in the Holocaust, publicly claimed responsibility. Observers saw this as Schrage taking the fall for his boss, Sheryl Sandberg, who insisted she knew nothing about it. Then emails emerged showing that Sandberg might have known something about it after all. This particular episode turned out to be overblown—companies often would hire outside mouthpieces to impugn competitors, and Definers didn’t hide its connection to Facebook—but no one was giving Facebook the benefit of the doubt in 2018.
Other wounds were totally self-inflicted. One might think that the last thing Facebook’s head of global policy would want to do was drag the company into the middle of the utterly radioactive controversy regarding Supreme Court nominee Brett Kavanaugh, whose angry response to a charge of a youthful sexual assault polarized the nation. But right there on television, sitting behind the nominee, was Joel Kaplan, taking a day off to support his friend from the Federalist Society. The outrage at Facebook was widespread. Kaplan was forced to publicly apologize at an all-hands a week later. Not for helping his friend, but for the failure to give Facebook a heads-up. Someone at the meeting later told Wired that Kaplan looked to be in shock, like “someone had just shot his dog in the face.” The apology’s sincerity came under question only a day later, when Kaplan threw a celebratory party for Kavanaugh after the Senate voted to confirm him to the Supreme Court.
Within weeks of that disaster, Facebook announced that it had discovered that hackers had exploited flaws in its infrastructure to get access to the information of 50 million users, including that of Sandberg and Zuckerberg. Unlike Cambridge Analytica, this was a literal breach. The intruders had exploited a vulnerability that had been exposed for more than a year. A few months earlier, as part of his post-Cambridge apology tour, Zuckerberg had planted a stake in the ground: “We have a responsibility to protect your data,” he wrote, “and if we can’t then we don’t deserve to serve you.” It’s hard to imagine a metric that wouldn’t indicate that his promise was broken. In a rare audio press conference, he was twice asked if he thought that his own resignation was appropriate. The answer was, both times, no.
SHERYL SANDBERG WAS fighting too, not just for Facebook, but for the personal brand she had built over a lifetime. In addition to her leadership role in the company’s rehabilitation, she still took time to support the extensive Lean In organization that had arisen from her first book. She believed that she had helped women and was proud of it. Facebook’s woes had affected the movement—it must have hurt Sandberg when Michelle Obama, speaking at Brooklyn’s Barclays Center, said, “It’s not enough to lean in, because that shit doesn’t work all the time.” But Sandberg kept her head down, still striving for the A+.
Every now and then, Sandberg would do a Facebook Live session in Building 20, in talk-show format. It was in her role as the leader of the Lean In movement rather than her day job. She would sit across the table from her guest, often someone promoting a newly released inspiring tome of her own, each sipping from coffee mugs with the Lean In logo, and conduct a gentle interview. You could tell it was the best part of her day. Behind them, visible through the glass walls of her conference room, you could see Facebookers bustling past, some surely creating new pixels to dampen whatever crisis was blowing up in that minute. Inside her conference room, dubbed Only Good News, Sandberg would be asking her subject the final question she always posed: What would you do if you weren’t afraid?
What if you were asked that question, I said to Sandberg in 2019. It was our final interview, a two-hour extravaganza that I begged for after our previous meetings had ended just when things were getting good. “What I would do if I wasn’t afraid is try to be the Facebook COO and grow this business and say I’m a feminist,” she says. People forget, she adds, but when she wrote Lean In, it was a risky and unpopular thing for a female corporate leader to declare.
Sandberg had recently been before Congress. She had been on a panel with Jack Dorsey; the committee wanted Google CEO Sundar Pichai to attend as well, but he declined. A piqued Senate committee left an empty chair at the table, before a folded paper card bearing his name.
Sandberg had prepared for the ordeal with her usual zeal, blocking off entire days for rehearsal. Every detail was considered, down to her interaction with her fellow panelist—she decided that she and Dorsey should not engage in their usual friendly hug, as it might suggest collaboration. She also determined not to bash Pichai, as it would look bad to attack him in his absence.
Sandberg’s testimony was successful. Nobody asked her, as they had asked Zuckerberg, what hotel she was staying at. Her former government work had taught her how to pay proper deference to showboating politicians, some of whom she had visited in the days beforehand to pitch her case personally.
It also helped that, besides Dorsey’s hipster demeanor, other distractions drew attention from any gaps in her replies. Flitting around the hearing room was the conspiracy artist Alex Jones, newly banned from Facebook. “Beep-beep-beep-beep—I am a Russian bot,” he yelled, giving credibility to any social network that chose to shut down his divisive meme machine.
During our long session, Sandberg denied what virtually everyone assumed was her career goal: public elected office. Definitely not, she said. But she would have taken an appointed office. Circumstances thwarted her. One logical exit point might have been after the IPO in 2012. She had promised Zuckerberg five years, and 2013 would mark the spot. But the company had taken more than a year to struggle back to its opening price. It was a bad time to leave.
Her husband’s death was in a class of disaster by itself, she says, “a cataclysmic moment.” After that, she wasn’t taking anything on, except being with her kids and working her way back to Facebook. And then came 2016. “I was hitting a decade [at Facebook],” she says. “But I knew after the election and Russia started happening, and the fake news, we were in for a rough ride. And now I feel tremendous responsibility to stay and make sure this gets to a better place. Mark and I are the most likely people to fix what needs to be fixed.”
I ask her about what I thought might have been the roughest of all recent moments: her meeting with the Congressional Black Caucus in October 2017. Sandberg had gone to meet with them with Facebook’s global diversity officer, Maxine Williams. Basically, the caucus schooled her. They were outraged that Facebook had hosted Russian propaganda that fueled white prejudice against black people. And there was more. Facebook had been hit with civil rights complaints because it allowed advertisers to discriminate against African Americans. Its workforce had too few people of color, and it had no black directors on its board. The members also zeroed in on Williams—why was she the head of diversity and not a C-level chief diversity officer? (Facebook later addressed each of these, giving Williams the title change and appointing former American Express CEO Kenneth Chenault to its board.) As each member vented, Sandberg kept repeating, almost like a mantra, “We will do better,” promising answers to their queries. After the meeting, Representative Donald Payne of New Jersey expressed his dissatisfaction, telling The New York Times that he once had an uncle who hated when people said they were “gonna” do this or that to fix their messes. “He used to say, Don’t be a gonna,” said Payne. “And that’s what I said to her—Don’t be a gonna.”
“It was one of the hardest meetings ever,” Sandberg says. “I listened through the whole thing. I took really careful notes. I walked out of there saying we, and I, have a lot of work to do. Over the next couple months I called every single member who was there and even others, and now I am personally leading our civil rights work.”
What made it especially hard is that those were her people—Democrats, human-rights supporters, fighters for justice. “I’m very progressive, I am a big donor, I am a big funder. Look, people were upset we missed it. Like, we’re upset we missed it.”
Sandberg was getting very emotional reliving that meeting and needed a few seconds to pull herself together. Then it all came out with the tears: the frustration, the people-are-not-getting-it frustration, the pain of the past two years. She’d said earlier, quite wisely, that after losing her husband, how bad could dealing with Facebook’s troubles be? But she had been battered by how people had turned on Facebook, had seen her reputation questioned, and here in her conference room she was addressing it without notes or talking points.
“I mean, look, this is a big deal,” she says, emotion still saturating her voice. “This company got built by Mark, by me, by all of us because we really believe. I believe so much in people having a voice. I started my career working on leprosy in India and I have been to villages and homes where there is no electricity. And I watched the head of leprosy of India step over a patient as if she was nothing. I know what it is to not be connected.
“And so for me, coming to Facebook, we connected people and gave people voice. And that voice is everyone from Hillary Clinton to Donald Trump to . . . to people all over. And so the fact that those same tools . . .”
She catches her breath. The unfinished part of the sentence seems to be those same tools that we use at Facebook are used for evil.
“I remember when the Arab Spring happened, we thought that was incredible,” she continues. “We didn’t do it—it was just a tool. But people were connecting. And for this to happen. Our election was and is a super big deal. And it’s not just that other people were upset. I think maybe because we’re not good at expressing it or maybe because I’m not this honest and open and this is probably the most I’ve ever shared or maybe I haven’t had the forum, but I’m upset. I don’t need the board to be . . . Like, I’m upset. Erskine [Bowles] and I are close; he’s upset, I’m upset, we’re all upset.”
It’s getting close in the room, and the PR person staffing the meeting—a recently hired reinforcement to replace a recent escapee—has stopped the incessant typing that PR nannies usually do throughout these meetings, and her eyes are so wide they look like those googly stickers on Messenger. “I knew this was going to be a really hard time at Facebook and I was hitting ten years,” says Sandberg. “But I am here to do the hard work.”
In these doldrums, it seemed almost as if weeping were part of the job description. In an interview for The New York Times, CTO Mike Schroepfer teared up while describing the inability of Facebook’s artificial intelligence to stop the spread of the murder video from Christchurch, New Zealand. I heard secondhand that one Facebooker remarked—I could never confirm this—that on some days all stalls in the women’s bathrooms would be occupied by crying employees; those coming in to weep had to queue up for their turn.
“That’s a terrible story,” says Sandberg when I share it with her. “I mean, I cry. Cry right at your desk!”
ONCE FACEBOOK WAS Silicon Valley’s prime raider of talent. Now its competitors were preying on the Facebook workforce. Employees found it a good time to leave for start-ups. A computer-science teacher at one of the big AI schools told me that Facebook used to be the top employment choice. Now he guesses that about 30 percent of his students won’t consider it, for moral reasons.
Inside Facebook there were doubts too. Facebook regularly surveys its employees, and The Wall Street Journal managed to see the results of a poll of 29,000 Facebookers taken in October 2018. (Even the fact that the poll leaked was a sign of trouble in the ranks.) Only a little more than half the workforce expressed optimism about the company, a drop of 32 points from the previous year. The belief that Facebook was good for the world had likewise fallen to a bare 53 percent majority, meaning thousands of workers had abandoned that sentiment in a single year.
The doubts had even reached the highest levels of the company. One executive described to me a meeting of the very top leaders, known as the small group, around mid-2018. “I don’t think everyone believes the stuff we’re working on,” Zuckerberg told them. He asked them to write down Facebook’s big efforts on a piece of paper, and rate them, one to ten.
The results were disheartening.
“Basically, everyone was like, Everything we’re doing is not good,” says that executive. “Why are we trying to compete with Google with Search? Why are we doing Watch? Why Oculus?” (Another person in the room contends that it wasn’t all so negative, but confirms the episode.) Zuckerberg was unfazed. He told his team that all big efforts face skepticism at first. He had always done well by blowing past the doubters.
Up until then, Zuckerberg’s magic had always been the ability to make the right call. Sam Lessin, the Harvard classmate who later worked as a Facebook executive and remains a close friend, says that multiple times he would be in a room where Zuckerberg made a decision that conflicted with the opinion of everyone else. His view would prevail, and he would be right. Time and again. After a while, people came to accept it.
Now some of those decisions didn’t look so good. Maybe even a wartime CEO might take objections more seriously. “It’s within every leader’s right to make edicts,” says someone in the room for many Zuckerberg decisions. “But leaders fail when they convince themselves that everyone disagreeing with them is a signal for them being right.”
Zuckerberg’s loyalists stuck with him. On his thirty-third birthday, he posted a celebratory photo with twenty of his closest work friends, who presented him with a cake depicting different cuts of meat. Among those gathered around their leader with huge smiles were people now defined by their ties to Zuckerberg: Sandberg, Bosworth, and his empath alter ego, Chris Cox.
But there was a growing number of disaffected former loyalists. Roger McNamee, the investor who wrote Zuckerberg and Sandberg about fake news in 2016, had been only the first of a vocal covey of apostates who paused in their glamorous lives to publicly condemn the company that made them wealthy. In a public interview at the National Constitution Center in Philadelphia, Sean Parker attacked Facebook’s addictiveness. “The thought process that went into building these applications, Facebook being the first of them . . . was all about: How do we consume as much of your time and conscious attention as possible?” he said. “The inventors, creators—it’s me, it’s Mark, it’s Kevin Systrom on Instagram, it’s all of these people—understood this consciously. And we did it anyway.” Justin Rosenstein, who co-invented the Like button, now was condemning the “bright dings of pseudo-pleasure” that come from clicking on the thumbs-up sign.
Perhaps the most bruising critique came from Chamath Palihapitiya. Speaking at the Stanford Graduate Business School in December 2017, the man who drove Facebook’s growth said, “I think we have created tools that are ripping apart the social fabric of how society works.” He cited an incident in India where the WhatsApp rumor mill spread news about a kidnapping that never happened—and seven people were lynched in the outrage that followed. “It’s just a really, really bad state of affairs,” he said. Though some good does come from Facebook, he added, he personally avoided the service and his children “aren’t allowed to use that shit.”
That could not stand. Sheryl Sandberg got in touch with him. Neither party will disclose what was said, but Palihapitiya publicly clawed back his statement.
DESPITE THE PUMMELING Facebook was taking, its business had never been better. Its core advertising strategy—which merged the voluminous data it gathered with outside information to help advertisers reach the most promising audiences—was proving unbeatable. After years of refining its techniques and calculating metrics that proved its value, Facebook was the undisputed leader in advertising built on what was known as PII, or personally identifiable information.
Marc Pritchard, P&G’s chief brand officer, remembers a conversation that he had years earlier with Sandberg about cookies, the little data markers that websites plant on your computer when you visit. “I remember very clearly,” he says. “Sheryl said, Cookies are going to die and the future is going to be PII data. The difference is with PII data, you got a lot more to manage than cookie data. Cookie data is anonymous. The future being PII data was right.”
As journalists and regulators began to expose how much Facebook knew about its users and how skillfully it was packaging the information to deliver ads, the company made some mild concessions to transparency. That hardly slowed down momentum. For one thing, the ad system was so complicated that even Zuckerberg didn’t understand all the intricacies. When he went before Congress, he deferred a number of questions about Facebook’s ad practices. It wasn’t part of his preparation. “I was expecting the congressional testimony to be primarily about Cambridge Analytica and maybe to some extent about Russian interference,” he told me soon after testifying. “I figured that the other product questions that came up, I’d basically be able to answer because I built our product.” He punted the questions, and on his plane ride home vowed to look into it himself. “I actually felt like I didn’t understand all the details around how we were using external data on our ad system, and I wasn’t okay with that,” he said.
What Zuckerberg found was a system so fortified with information that even seemingly significant changes would essentially make no difference. Just before Zuckerberg testified, Facebook had ended one of its most controversial practices, called Partner Categories. Until then, Facebook matched its own information with the extensive files that data brokers (including the giants like Equifax and Experian) compiled on consumers, so advertisers could target individuals more accurately. If, for instance, a publication wanted to reach its own subscribers on Facebook—or those subscribing to a competitor—it could use the combined data to hit them directly.
When I asked an ad executive a few months later if this change had any effect on the business, the Facebooker laughed. None! was the answer. While Facebook stopped buying data from brokers itself, its policy stated clearly, “Businesses may continue, on their own, to work with data providers.” Facebook made it easy for advertisers to plug that purchased data into its system, just as before. The only difference was that the advertisers now paid the data brokers directly.
While the Europeans imposed relatively strict privacy regulations, everywhere else Facebook was free to participate in the ongoing bonanza of Internet tracking, a widespread practice in which every website people visited and every search term they used was routinely logged and used to sell things to them. US legislators kept talking about privacy laws that might roll that back but never seemed to come up with any. No one took more advantage of this than Facebook, because its own invisible pixel was on millions of sites. If you lit on a page for a brand of sneakers, or checked out a car, or, heaven forbid, vetted an over-the-counter drug, you could reliably count on an ad for what you had just perused popping up in your News Feed. The creepiness of it made people shudder.
The phenomenon gave rise to a widely shared suspicion that Facebook was somehow listening to everyone’s conversations. Senator Gary Peters spoke for many Americans when he asked Zuckerberg about it during testimony. “I hear it all the time, including from my own staff,” he said. “Yes or no: Does Facebook use audio obtained from mobile devices to enrich personal information about its users?”
“No,” said Zuckerberg.
The truth was that Facebook didn’t need to spy on people’s audio. It already had all the PII data it needed to help advertisers hit the mark, not just on the kinds of audiences they wanted to reach but on the exact individuals who would be in those audiences.
As a result, Facebook was a must-buy for advertisers, as digital advertising headed toward the majority of all ad spending in the United States, a mark it would actually reach in 2019. Its only serious competition, particularly in the dominant mobile field, was Google; the two companies combined had around 60 percent of all digital advertising, and more than two-thirds of the mobile market.
All during Facebook’s tumble from grace, no matter which story about the company’s missteps was leading the news, one could expect its earnings call to tell a different story: either Sandberg or its chief financial officer, David Wehner, would say, “We had a very good quarter,” more often than not reporting record revenues. The company that Mark Zuckerberg started with a thousand dollars from a classmate was now raking in more than $50 billion a year, and its Wall Street valuation was more than $500 billion.
One call, however, did not go so well, and that was to report second-quarter earnings in July 2018. As always, after the stock market closed, Zuckerberg, Sandberg, and Wehner trudged to a conference room on campus to report the results and take questions from analysts. This time they had bad news.
For months Zuckerberg had been promising to hire thousands more people to work on safety and security, and some of that spending was now affecting profits. That wasn’t new. “As I’ve said on past calls,” Zuckerberg read from his notes, “we’re investing so much in security that it will significantly impact our profitability.” What did make an impact was Facebook’s indication that the momentum of its current ad model had slowed: sponsored stories in the News Feed might no longer be the future. Facebook did have a replacement in mind: ads placed in the strip of clips known as Stories, which had started on Instagram and had since spread to Facebook, WhatsApp, and Messenger. But Facebook had yet to figure out how to make those ads equally profitable, and its advertisers were still learning how to use them. Facebook was confident, though, that all of this would happen. Just not right away. And for multiple quarters, the gap would affect revenue.
It was like someone had yelled Fire! in a jam-packed nightclub. Investors panicked, selling shares in the after-hours market. When Zuckerberg and his team left the conference room, Facebook’s stock price had fallen 20 percent, losing $120 billion in value. Zuckerberg himself had lost $17 billion in the hour that the call took.
“I think we had the biggest stock drop in the history of the world or something,” Zuckerberg told me later. “That was a very big correction based on trying to reset expectations about how we were going to run the company.”
But even that setback was temporary. Facebook’s users weren’t going anywhere. And neither were its revenues. “What is clear is that people are still using Facebook and Instagram,” says Pritchard. “And people are still advertising on Facebook and Instagram.”
FACEBOOK WAS DOING hard work, led by the Integrity team that had morphed out of the Growth organization. According to Guy Rosen, the thinking was that since Growth had built Facebook to more than 2 billion users, it would be best suited to fixing safety and security at scale. “And the Growth team is very analytical in how the work is approached and how things are measured,” he says.
The Integrity team had a motto for this process: “Remove, Reduce, Inform.” And it seemed to be having an impact. Three independent reports studying Facebook between 2016 and 2018 concluded that the company was making progress on fake news. A study by researchers at the University of Michigan estimated that what it called “iffy content” had been reduced by half.
Citing such statistics didn’t move the public much, since the headlines were all about the repercussions of Facebook’s previous sins, which regulators were pursuing with zeal. None more zealously than the Federal Trade Commission. It now appeared that Facebook had not lived up to the promises it made in the 2011 Consent Decree. One was that Facebook must inform users in advance if their data were to be handed over to other companies. Since that’s exactly what happened to at least 50 million users in the Cambridge case, Facebook had the very difficult task of explaining why it failed to notify those people, and did nothing while CA pelted Facebook users with ads that may have used targeting based on Aleksandr Kogan’s personality profiles.
Compliance with the order took two paths. One was with the FTC itself: when Facebook launched something new, it would brief staffers on the commission, noting the privacy protections of the product or feature. Sometimes Facebook would even take guidance on tweaking the product to protect users more than in the original design. Facebook also retained, as the order demanded, an outside auditor, in this case PwC (formerly PricewaterhouseCoopers), one of the “Big Four.” (At the time Facebook hired PwC, Joel Kaplan’s wife had been its partner in charge of public policy, a post she kept until 2016.) Periodically, a PwC team would hear from Facebook lawyers and policy people about how the company was complying with the order, and then shuffle back to their offices to prepare a report for the FTC. Apparently the auditors did not flag the issue that Facebook had failed to inform 50 million users that a developer had violated its terms of service and handed their data over to political consultants funded by the far right. People learned about this from reporters, not Facebook.
The FTC felt rightfully burned by Facebook’s behavior. A new investigation found the company in violation of its 2011 agreement. Its sins included “deceptive privacy settings, failure to maintain safeguards over third-party access to data, serving ads by using phone numbers provided to Facebook for the purpose of account security, and lying to certain users [when it said] its facial recognition technology was off by default, when in fact it was on.” The complaint was a devastatingly detailed description of a company that seemed to have earned the sobriquet of “digital gangsters” that the UK Parliament had bestowed on it in February 2019. What made things worse was that all of the deception and trickery occurred while the company was supposedly on good behavior after its previous perfidies.
The finding kicked off a protracted settlement negotiation with Facebook, a complicated game of chicken in which the commission tried to extract the most punishment it could without Facebook rejecting the deal and taking the issue to a trial, whose outcome would not be known for years. One of the key points of contention was the personal responsibility of Zuckerberg and Sandberg. Many observers expected they would be named, as it was up to them to uphold the previous settlement, and they’d failed spectacularly.
The FTC blinked first. On July 24, it announced its settlement, without naming Zuckerberg or Sandberg. They had not even been deposed, as is common in such investigations. As expected, it hit Facebook with a $5 billion fine, by far the largest the agency had ever levied. (The previous high had been $100 million.) Even so, two of the five commissioners dissented, feeling that the settlement went too easy on Facebook. Bolstering their claim was that Facebook’s stock hardly budged at the announcement. In an earnings call soon after the settlement, Facebook reported $17 billion in revenue that quarter. The term used most often in the reports of the settlement was “slap on the wrist.”
IN JUNE 2018, longtime VP of communications and public policy Elliot Schrage resigned (though he would remain at the company in an advisory role). After a long search, Sandberg began pursuing former UK politician Nicholas Clegg, who had once been deputy prime minister before suffering two humiliating defeats, losing his cabinet post and then his seat. Since then he had been looking at what was happening in the tech world. “The more I looked at the rhetoric and language of backlash against technology, in particular social media, the more alarmed I became that the backlash would kind of throw the baby out with the bathwater,” he says. This clearly resonated with Facebook.
Clegg was reluctant to take another public beating by representing the fattest piñata of the tech world, but Sandberg convinced him to fly to California and meet with Zuckerberg and Chan. “When Sheryl has a target in mind, she is pretty implacable and pretty remorseless,” says Clegg. He warned Sandberg that he would be blunt. Indeed, on meeting the CEO, he told him, “Your fundamental problem is that people think you’re too powerful and you don’t care.”
“Yes, totally understandable, I get that,” said Zuckerberg. Clegg later would say the reply surprised him, but Zuckerberg had been absorbing criticism for two years, with no tears whatsoever dripping from his unblinking eyes. Clegg got the job.
Clegg’s arrival came at a time when internal tension at Facebook had been so high that it welcomed anything that seemed like a breather. Whether it was simply fatigue or a feeling of genuine improvement, morale was stabilizing at Facebook, and Clegg contributed to that. Some months earlier, Zuckerberg had handed down several edicts that fit with his wartime CEO stance. No longer would anyone be given a C-level title. (Zuckerberg says that this was not a “broad company effort,” but a decision that came from the fact that high-ranking executives like Olivan were C-deprived, while others with no more power got that perk.) It was a surprisingly easy order to enforce because most people besides Sandberg who had a C—the chief security officer, the chief marketing officer, and the CEOs of Instagram, WhatsApp, and Oculus—were already gone or headed out the door. Another was that Facebook would not cooperate with any media profiles of its executives. Clegg’s view was looser. He allowed a long, thoughtful Vanity Fair story on content moderation that focused on Monika Bickert’s role.
Late in the year Clegg was the final speaker at an all-hands meeting, following Zuckerberg’s assurances that the company was making progress, and Guy Rosen’s presentation on how it was proceeding on Integrity issues. Clegg’s straightforward, encouraging style resonated. “I said that [while] some of the coverage might be unfair, you can’t con your way out of what is true,” he told me afterward. The good news, said Clegg, was that while Facebook had screwed up, it was now on the road to redemption. “It’s just true that this company is now trying to retrofit onto its extraordinary creations and inventions a bunch of stabilizers, seat belts, and the like.” One person at the meeting—someone who only months earlier had made scathing remarks about the company—seemed reassured. “The coverage had gotten so over the top, to the point of caricature,” says the employee. “Even people internally who had their doubts about the company felt like, Hang on a second, this is not right—we are better than this. That all-hands was one of the best I’ve seen them do. I heard from a lot of people how good it made them feel.”
The public view of Facebook was still brutal. But the company was making changes and the new PR regime promised at least not to make things worse.
“It’s a shared view that we’ve turned the corner and that we now have confidence that we can not only address problems that come up but we can systematically get ahead of them in the future,” Andrew Bosworth told me in late 2018.
Not all of them, it turned out.
While Facebook’s critics were abundant, one in particular seemed to get under Zuckerberg’s skin: Apple’s CEO, Tim Cook. As Facebook’s problems became more public after the election, Cook began to voice reservations about social media, and Facebook in particular. Apple’s business model, Cook noted at every opportunity, was based on a straightforward exchange: you pay for the product and use it. The Facebook business model, Cook would note, provides a service that seems free but actually isn’t, as you are paying with your personal information and constant exposure to ads. With a touch of his native Alabama in his tone, Cook would say, “If you’re not the customer, you’re the product.” He implied that Apple’s model was morally superior.
Years after the death of Apple’s fabled CEO and co-founder Steve Jobs, the company still had the aura of the elite operation in Silicon Valley. Zuckerberg had gotten along well with Jobs, and seemed to have been a willing mentee. Jobs recognized Zuckerberg’s intelligence and seemed to get a kick out of his brash approach. They would often go on walks together, with the older executive sharing his pointed insights.
Cook and Zuckerberg’s relationship was chillier. Cook disagreed with Zuckerberg’s comments about privacy, and did not use Facebook personally. Basically, Cook didn’t seem to trust Zuckerberg as a partner, and didn’t go out of his way to hide it. Complicating matters was the dramatic pivot of the press and government, and to some degree the public, against the giant tech companies that suddenly seemed to dominate everyday life. Insiders referred to it as the “Techlash.” Of the West Coast behemoths being lashed against, Facebook was the biggest source of scorn and concern, with Zuckerberg seen as the one most responsible for losing the halo that once hovered over the tech world.
Just as leaders of great national powers would summit despite their hostilities, Zuckerberg and Cook would generally set aside time to talk at the annual Herb Allen summer gathering. In 2017, Zuckerberg had been upset at a remark Cook had made in a commencement speech; the Apple CEO had told the graduates not to measure their worth with Likes, and Zuckerberg had taken that personally.
Tim Cook wasn’t about to run his speeches past Zuckerberg. By then Cook was promoting privacy as a pillar of Apple’s deal with its customers. The targets of his jibes were Google and Facebook, but only Google was a direct competitor, and Zuckerberg felt sideswiped. After Cambridge Analytica, Cook was asked what he would do if he were in Zuckerberg’s place. “I wouldn’t be in that situation,” he said. In an interview soon afterward, Zuckerberg called the comment “extremely glib.”
In mid-2018, Zuckerberg arranged a CEO sit-down at Apple Park, the company’s exotic spaceship-like headquarters. Once again, Zuckerberg complained about Cook’s remarks. And once again, Cook brushed him off.
Zuckerberg says he can’t get into Cook’s head but is disappointed that he hasn’t convinced Apple’s leader that Facebook’s business model is as valid as Apple’s is. “It’s widely understood that a lot of information businesses or media businesses are supported by ads, to make sure that the content can reach as many people as possible to deliver the most value,” he says. “And there is a certain bargain there, which is, you’re going to be able to use this service for free and there will be a cost, which you’ll pay with your attention, and advertisers will want to target ads to the type of people who are using whatever that service is.”
On January 30, 2019, the tension between Apple and Facebook exploded into a hot war. The escalation began when Apple looked into an app called Onavo Protect. It was the successor to the application created by the Israeli spyware company Onavo, purchased by Facebook in 2013. The application followed Onavo’s original plan of providing a free service to consumers and sucking the hell out of the data to do business analysis. The app promised users a secure network connection and used the Facebook name to signal trustworthiness. Once users installed it, it protected their information from everyone but Facebook, which aggregated all the data from Protect users to figure out what people did with their phones.
This approach violated Apple’s terms of service. Onavo Protect, Apple concluded, was a surveillance tool marketing itself as a secure VPN, and harmful to users. It told Facebook to withdraw the app, or Apple would ban it.
Facebook did withdraw it, in August 2018. But it was not ready to give up its data. In fact, it already had a tool that used similar VPN technology to monitor users’ activity, called Facebook Research. Facebook paid subjects to use it and was transparent that it would gather data. That still put it in violation of Apple’s terms, but in this case, Facebook had a plan in mind to bypass the rules. Since it paid the app users a paltry sum, but still a sum, Facebook now considered them contractors. (Those users also included thousands of teenagers, a practice that possibly ran afoul of laws that protected the privacy of minors.) This enabled Facebook to include the app in Apple’s “enterprise” program. Since apps in the enterprise program weren’t available to the public—most often they were used for pre-release prototypes or utilities limited to employees only—they didn’t have to go through the usual Apple certification.
Then Apple discovered the repackaged app and decided that it was abusing the enterprise program. So Apple decided to turn off Facebook’s access to the entire program. Without warning. In terms of internal applications, this was like cutting off a company’s electricity. Not only was the Onavo app rendered dysfunctional, but all the test versions of programs in development stopped working. In addition, a set of helpful services for people who worked at Facebook, like the one that listed the menus in various campus cafés, also suddenly stopped working. Facebook’s employee shuttles, which employees widely used to get around the sprawling headquarters complex, also relied on an internal app, which went down as well.
The cutoff coincided with a quarterly earnings call. Zuckerberg, Sandberg, and David Wehner entered the conference room to do the call. They had good news. The year just ended, 2018, had been the company’s best yet. “Full-year 2018 revenue grew 37 percent to $56 billion, and we generated over $15 billion of free cash flow,” said Wehner.
Zuckerberg boasted about how much Facebook had risen to his trust challenge. “We fundamentally changed how we run this company,” he said. “We’ve changed how we build services to focus more on preventing harm. We’ve invested billions of dollars in security, which has affected our profitability. We’ve taken steps that reduced engagement in WhatsApp to stop misinformation, and reduced viral videos in Facebook by more than 50 million hours a day to improve well-being. . . . I feel like we’ve come out of 2018 not only making real progress on important issues but having a clearer sense of what we believe are the right ways to move forward.”
As he spoke, people on Facebook’s campus could not test new products and were canceling meetings because they could not get the shuttle.
It was Facebook’s split-screen moment, symbolizing the disconnect between the erosion of its reputation and the robustness of its business. Facebook’s campus had come to a standstill, a direct result of its dicey privacy practices. But the money kept pouring in.
THE DISCONNECT EXEMPLIFIED Facebook’s difficult 2018. Its leaders felt that the company was making progress, but in the harder-to-measure market of reputation, its stock had bottomed out. People will remember Facebook’s 2018 for Cambridge Analytica, a huge data leak that was, whatever Facebook called it, actually a breach, and maybe a hundred other mistakes and violations. But Facebook would prefer that people recall its Election War Room.
This was a conference room set aside to prepare for various plebiscites in the summer and fall of 2018, prime among them the American midterm congressional elections. I visited it twice, and once, after much prodding, I was allowed to drop in for a few minutes on the day that voters went to the polls. But tours of the War Room were common in the weeks leading up to the election. In fact, the pride in the War Room aroused suspicion that the whole thing was a charade, as phony as the Facebook pages set up by the Russians during the 2016 election.
The twenty-four people staffing the room, a spokesperson told me, were backed up by the 20,000 people now working in safety and security at Facebook. (A year later, Facebook would be reporting a higher number: 35,000!) Inside, the room was like a security Ginza, with hundreds of screens displaying dashboards that reported real-time results. Other screens conferenced in Integrity workers from around the globe, including Brazil, where an election was also being held. Even with the AI, the War Room was an expensive, labor-intensive solution. But whether or not the physical facility was actually required, or was essentially a showroom for the press, Facebook got through the 2018 midterms without searing consequences. Facebook’s head of civic engagement says the system actually did block tampering, citing an example of a false-information attack from Pakistan (or Macedonia, he doesn’t remember which) directed at Wyoming voters.
The fact that an election was held without Facebook having a role in screwing up the outcome was now considered a victory for the social network.
“I’d like to have our track record of the 2018 election for 2016, for sure,” Sandberg told me. “In 2016 we had never thought of this form of interference, we didn’t know what it was, no one in the government knew what it was, no one in any administration told us anything about it, before or after, at all.” (Actually, Maria Ressa told them.)
Still, Facebook was making progress. But reading newspapers (if anyone still did that) or checking the online news, you would never know this. Scandals kept popping up. It became a joke among journalists, a parody of the sign in factories listing how many days since the last industrial accident, with the daily number ideally reaching triple figures, at the least. With Facebook, the number seldom reached double figures, and often reset at one or two. The reporters kept digging (or wrote up what fell into their laps), the regulators kept investigating, the courts kept deposing, and the public was still thinking about whether it should #deletefacebook.
Enough with those tweaks, people seemed to be saying. The question was whether the next crisis might be big enough to ruin the company—and if Mark Zuckerberg was really plotting change, fundamental change.
And it turns out he was.