CHAPTER 26

THE WEB HITS BOTTOM

“Which Ousted Arab Spring Ruler Are You?” “You Might Be Cleaning Your Penis Wrong,” “37 Things Conservatives Would Rather Do Than Watch Obama’s State of the Union Speech,” “29 Cats Who Failed So Hard They Won.”

Here was BuzzFeed, at its height in the 2010s, the undisputed king of clickbait and grandmaster of virality. As a cofounder of The Huffington Post, Jonah Peretti had gained a measure of success, recognition, and personal wealth. But it wouldn’t be long before he lost interest in the operation, which had begun to run itself, and felt compelled to return to his original passion: the pure art and science of harvesting attention with “contagious” or “viral” media. He was still at The Huffington Post when he began to conceive the endpoint, or perhaps the punch line, to his long obsession: a site whose mission would be nothing but to build pure contagion and launch it into the ether.

BuzzFeed billed itself as the “first true social news organization,” which meant it was designed for a post-Facebook and post-Twitter world, where news gained currency by being shared on social networks, through newsfeeds, Twitter feeds, and the like. It was also designed to be read on the now ubiquitous mobile platforms; by 2015, 60 percent of its traffic came via phones and other wireless devices (including 21 percent from Snapchat), and the key to success now lay in getting people to share content socially from their phones.

By the time Peretti built BuzzFeed, viral media were not an occasional phenomenon but were reaching the public like successive waves crashing on a metaphorical shore; depending on the context, they rivaled and complemented existing means of capturing attention. It was a time when a random picture of a grumpy-looking cat (Grumpy Cat) posted on the online bulletin board Reddit made a viable career for its owners; when a ridiculous dance video like “Gangnam Style” amassed more than 2.4 billion online views (while the 2014 World Cup, the most watched event in human history, reached about 1 billion).

As nothing but a pure embodiment of Peretti’s techniques, BuzzFeed did without even the pretense of a public mission, the only goal being to amuse viewers enough to trigger their sharing. With content often nearly devoid of any meaningful communication, the medium truly was the message. And while this might sound like unprecedented cynicism vis-à-vis the audience, the idea was to transfer creative intention to them; they alone would “decide if the project reaches 10 people or 10 million people.”1 To help them decide, BuzzFeed pioneered techniques like “headline optimization,” which was meant to make the piece irresistible and clicking on it virtually involuntary. In the hands of the headline doctors, a video like “Zach Wahls Speaks About Family” became “Two Lesbians Raised a Baby and This Is What They Got”—and earned 18 million views. BuzzFeed’s lead data scientist, Ky Harlin, once crisply explained the paradoxical logic of headlining: “You can usually get somebody to click on something just based on their own curiosity or something like that, but it doesn’t mean that they’re actually going to end up liking the content.”
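The chapter doesn’t describe the mechanics of headline optimization, but in practice such optimization is usually run as a split test: each candidate headline is shown to a random slice of visitors, clicks are tallied, and the variant with the higher click-through rate wins. The sketch below is purely illustrative, in Python; the two headlines are the ones quoted above, while the traffic numbers, function names, and the 10 percent “explore” rate are invented assumptions, not anything BuzzFeed has published.

```python
import random

# Hypothetical impression/click tallies for two candidate headlines.
# The headlines are quoted from the text above; the numbers are invented.
tallies = {
    "Zach Wahls Speaks About Family": {"impressions": 10_000, "clicks": 180},
    "Two Lesbians Raised a Baby and This Is What They Got": {"impressions": 10_000, "clicks": 1_450},
}

def click_through_rate(stats):
    """Clicks divided by impressions: the number a headline test tries to maximize."""
    return stats["clicks"] / stats["impressions"]

def pick_winner(tallies):
    """Return the headline with the highest click-through rate so far."""
    return max(tallies, key=lambda headline: click_through_rate(tallies[headline]))

def serve_headline(tallies, explore_rate=0.1):
    """Mostly serve the current winner, occasionally try an alternative to keep testing."""
    if random.random() < explore_rate:
        return random.choice(list(tallies))
    return pick_winner(tallies)

for headline, stats in tallies.items():
    print(f"{click_through_rate(stats):.1%}  {headline}")
print("winner:", pick_winner(tallies))
```

Note that nothing in such a test measures whether readers liked what they clicked on, which is exactly the gap Harlin describes: curiosity gets the click whether or not the content delivers.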

BuzzFeed also developed the statistical analysis of sharing, keeping detailed information on various metrics, especially the one they called “viral lift.” Take, for example, a story entitled “48 Pictures That Capture the 1990s,” which garnered over 1.2 million views. BuzzFeed would measure how many people read it (views), and of those, how many went on to share it, whether on Twitter, Facebook, or other sites. If, say, for every reader who found the story on BuzzFeed’s own pages, twenty-two more arrived by clicking a shared link, the story would be said to have a viral lift of 22x. Such data would help BuzzFeed’s experts refine their understanding of what gets shared, and what doesn’t.
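Read that way, viral lift is simply a ratio: views arriving through shared links divided by views originating on the site itself, with the share count tracked alongside. BuzzFeed’s actual formula and instrumentation aren’t given in the text, so the following Python sketch is an assumption for illustration; the traffic figures are invented, scaled only so that the example works out to roughly 22x.

```python
from dataclasses import dataclass

@dataclass
class StoryStats:
    """Invented traffic counts for one story, split by how readers arrived."""
    title: str
    seed_views: int    # readers who found the story on the site itself
    shared_views: int  # readers who arrived by clicking a shared link
    shares: int        # readers who went on to share the story themselves

    @property
    def viral_lift(self) -> float:
        """Share-driven views per direct view, under the assumed definition above."""
        return self.shared_views / self.seed_views

    @property
    def share_rate(self) -> float:
        """Fraction of all readers who passed the story along."""
        return self.shares / (self.seed_views + self.shared_views)

# Hypothetical numbers loosely keyed to the "48 Pictures" example above.
story = StoryStats(
    title="48 Pictures That Capture the 1990s",
    seed_views=55_000,
    shared_views=1_210_000,
    shares=40_000,
)

print(f"viral lift: {story.viral_lift:.0f}x")   # roughly 22x
print(f"share rate: {story.share_rate:.1%}")
```

Tracked across thousands of stories, ratios like these are what would let BuzzFeed’s analysts see which formats reliably earned their audience from sharing rather than from promotion.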

Collectively BuzzFeed and its rivals—Mashable, Upworthy, and in time parts of the mainstream media—began to crack the code; eventually they could consistently make content go viral. Much of what they discovered validated Peretti’s original theories—particularly about the necessity of stimulating “pleasure in the social process of passing” something along and of ensuring that the contagion “represent[s] the simplest form of an idea.”2 But the “pleasure” of sharing did not necessarily mean that viewing the content had been pleasurable. The urge to share was activated by a spectrum of “high-arousal” emotions, like awe, outrage, and anxiety. A story with a headline like “When All Else Fails, Blaming the Patient Often Comes Next,” or “What Red Ink? Wall Street Paid Hefty Bonuses,” or “Rare Treatment Is Reported to Cure AIDS Patient” would trigger one of these emotions—or even better, several at once.

Naked plays for attention always draw scorn, and as BuzzFeed’s fortunes rose in the 2010s, it was no exception. As Ben Cohen, founder of the journalism site The Daily Banter, wrote: “I loathe BuzzFeed and pretty much everything they do….It could well trump Fox News as the single biggest threat to journalism ever created.”3 When BuzzFeed presented the Egyptian democratic revolution as a series of GIFs from the film Jurassic Park, Cohen fulminated: “To say this is childish, puerile bullshit would be a massive understatement….Doing funny GIF posts about cats and hangovers is one thing, but reducing a highly complex political crisis into 2 second moving screen shots of a children’s dinosaur movie is something completely different. If BuzzFeed really is the future of journalism, we’re completely and utterly fucked.”4 Indeed, by 2012, the scramble for eyeballs against forces like BuzzFeed seemed to bring news media to a new low. When Fox News broadcast a video of a man committing suicide and BuzzFeed reposted the link, the Columbia Journalism Review was compelled to ask, “Who’s worse? @FoxNews for airing the suicide, or @BuzzFeed for re-posting the video just in case you missed it the first time?”5

BuzzFeed was indeed proving the envy of all other online attention merchants, in traffic at least. By 2015, its 200+ million monthly unique visitors exceeded those of most of its competitors, and 75 percent of its traffic was coming from social media. Ultimately its techniques were widely copied, not just by its direct competitors like the Daily Mail or Cracked.com but by The Huffington Post, Peretti’s previous venture, and more obliquely, by magazines like Slate as well as newspapers like The Washington Post. Even venerable magazines like The Atlantic and The New Yorker got in on the act. BuzzFeed thus became the reference point, the gold standard, for attention capture on the web.

Not that BuzzFeed was terribly profitable. It lost money for most of its early years, only began to turn a profit in 2013, and its profits never exceeded $10 million (while hardly a fair comparison, Apple’s iTunes store alone, also in the content business, and not considered highly profitable, was estimated to clear $1 billion in profit per year). Its fortunes reflected the still-low price of digital ads; BuzzFeed’s annual ad revenues of roughly $100 million were still far less than, say, People magazine’s (about $1 billion). Nonetheless, BuzzFeed was still growing, and as the decade reached its midpoint, was pegged at $850 million in value; then, over the summer of 2015, the cable giant Comcast bought a stake that valued the company at $1.5 billion.

Comcast’s investment in BuzzFeed was at last a consummation of the union between old and new media of the kind that Microsoft and AOL–Time Warner had once contemplated, though now involving far less money than in those headier days. For comparison’s sake, though, it is worth remarking that The Washington Post, with its forty-seven Pulitzer Prizes, was purchased by Jeff Bezos, Amazon’s founder, for $250 million in 2013—old media valuations clearly weren’t what they used to be, either. And yet even if BuzzFeed had attracted real dollars, the deal with Comcast nonetheless seemed to diminish the new media in some way. Blogging and the other forces that Jeff Jarvis and others had predicted would demolish the establishment had eventually yielded to BuzzFeed, and BuzzFeed had then sold a stake to old media for what amounted, by the standards of the great mergers, to chump change. So much for all of that.

Peretti had never been less than forthright and consistent about the objectives of his work: it was attention capture for its own sake. But the entry of contagions, clickbait, and even social networks into the ecosystem of content-driven media inevitably had a degrading influence on the latter. Mark Manson well described the state of the web in the 2010s:

Last week, I logged onto Facebook to see a story about a man who got drunk, cut off his friend’s penis and then fed it to a dog. This was followed by a story of a 100-year-old woman who had never seen the ocean before. Then eight ways I can totally know I’m a 90’s kid. Then 11 steps to make me a “smarter Black Friday shopper,” an oxymoron if I ever saw one. This is life now: one constant, never-ending stream of non sequiturs and self-referential garbage that passes in through our eyes and out of our brains at the speed of a touchscreen.6

Within twenty years of having been declared king, content seemed to be on the road to serfdom.

Once a commons that fostered the amateur eccentric in every area of interest, the web, by 2015, was thoroughly overrun by commercial junk, much of it directed at the basest human impulses of voyeurism and titillation. To be sure, there were exceptions, like Wikipedia, a healthy nonprofit; Reddit, still a haven for some of the spirit of the old Internet; small magazines like the Verge, Vox, Quartz, and the Awl; even some efforts to reboot blogging, like Medium. Likewise, faced with an existential crisis of relevancy, traditional news media, so long allergic to the Internet, dramatically improved their online content over the decade. But these bright spots were engulfed by the vast areas of darkness, the lands of the cajoling listicles and the celebrity nonstories, engineered for no purpose but to keep a public mindlessly clicking and sharing away, spreading the accompanying ads like a bad cold. As the clicks added up, what was most depressing perhaps was that all this was for the sake of amassing no great fortune, but in fact relatively paltry commercial gains, rounding errors in the larger scheme of commerce. The idealists had hoped the web would be different, and it certainly was for a time, but over the long term it would become something of a 99-cent store, if not an outright cesspool. As with the demolition of Penn Station, a great architectural feat had been defaced for little in return. But as so often in the history of attention merchants, when competition mounts, the unseemliness soars and the stakes plummet.

And that was just the content; the advertising, meanwhile, was epically worse. By the mid-2010s the average reader on news sites like the Boston Globe’s boston.com would be subjected to extraordinary surveillance methods, with only the barest degree of consent. Such operations would be invisible to the user, except for the delays and solicitations they triggered. Online tracking technologies evolved to a point that would have made a Soviet-era spy blush. Arrival at NYPost.com would trigger twenty or more urgent “tracking” messages to online ad agencies, or “ad networks,” advising them of any available intelligence on the user, in addition to specifying what stories he or she was reading. Attention merchants had always been ravenous for attention, but now they were gobbling up personal data as well. Perhaps the oversharing on social media had simply lowered the standard of privacy. Perhaps the Internet, with its potential to capture every turn of our attention, made this inevitable. Whatever the case, several commercial entities were now compiling ever more detailed dossiers on every man, woman, and child. It was a more thoroughly invasive effort than any NSA data collection ever disclosed—and one of even more dubious utility.

The automation of customized advertising was intended, in theory, to present you with things more likely to seize your attention and make you click. It must be seen as a continuation of the search we have described for advertising’s Holy Grail: pitches so aptly keyed to one’s interests that they would be as welcome as morning sunshine. The idealists foresaw a day when ad platforms would be like a loyal valet who detected his master’s needs before he was aware of them, who suggested a new pair of shoes as a reasonably priced replacement for those you hadn’t noticed were wearing out. Perhaps he would remind you of your mother-in-law’s birthday while offering to send an appropriate gift at a one-day discount.

But the gap between this theory and its execution was wide enough to march Kitchener’s Army through it. Google’s CEO Eric Schmidt had once said that the ideal was to “get right up to the creepy line and not cross it.”7 Unfortunately, by the mid-2010s, that line was being crossed constantly. What was promised as “helpful” or “thoughtful” was often experienced as “intrusive” and worse. Some ads seemed more like stalkers than valets: if, say, you’d been looking at a pair of shoes on Amazon, an ad for just those shoes would begin following you around the web, prodding you to take another look at them. What was meant to be “relevant” to your wishes and interests turned out to be more of a studied exploitation of your weaknesses. The overweight were presented with diet aids; the gadget-obsessed plied with the latest doodads; gamblers encouraged to bet; and so on. One man, after receiving a diagnosis of pancreatic cancer, found himself followed everywhere by “insensitive and tasteless” ads for funeral services. The theoretical idea that customers might welcome or enjoy such solicitations increasingly seemed like a bad joke.

To make matters worse, the technology of behavioral advertising added layers of complexity to the code of any website, causing the system to slow or freeze, and sometimes preventing the page from loading altogether. According to a New York Times study in 2015, even as nearly every other technology had grown faster, some websites were now taking five seconds or more to load; and the situation was even worse on mobile phones, with their slower connections.8 Videos had a way of popping up and starting to play unbidden; and the user looking for the stop button would find it was the tiniest of buttons, and often oddly located. And something of a ruse as well: if you missed hitting it directly, yet another website would open, with yet more ads.

In nearly every possible way, ad tech was terrible for consumers and, to compound the pity of it all, not particularly lucrative for advertisers either. As the programmer Marco Arment lamented in 2015, “In the last few years…web ad quality and tolerability have plummeted, and annoyance, abuse, misdirection, and tracking have skyrocketed. Publishers don’t have an easy job trying to stay in business today, but that simply doesn’t justify the rampant abuse, privacy invasion, sleaziness, and creepiness that many of them are forcing upon their readers.”9 Even the tech people managed to draw a short straw, for all of this mischief took a surprising amount of programming talent to accomplish. “The best minds of my generation are thinking about how to make people click ads,” commented the data scientist Jeff Hammerbacher. “That sucks.”10

Ultimately, the problem was as old as the original proposition of seizing our attention and putting it to uses not our own. It is a scheme that has been revised and renewed with every new technology, which always gains admittance into our lives under the expectation it will improve them—and improve them it does until it acquires motivations of its own, which can only grow and grow. As Oxford ethicist James Williams put it,

Your goals are things like “spend more time with the kids,” “learn to play the zither,” “lose twenty pounds by summer,” “finish my degree,” etc. Your time is scarce, and you know it. Your technologies, on the other hand, are trying to maximize goals like “Time on Site,” “Number of Video Views,” “Number of Pageviews,” and so on. Hence clickbait, hence auto-playing videos, hence avalanches of notifications. Your time is scarce, and your technologies know it.11

In this game of trackers and profile builders, as in so many others, Google and Facebook, de facto diarchs of the online attention merchants, reigned supreme. By design, both firms had acquired the best data on just about every consumer on earth and possessed the best tools for collecting more of it, which by the 2010s both were prepared to exploit as far as possible. Never mind that each had originally been hesitant even to allow advertising to pollute its pages or interfere with the user experience. That was then. Since those years of initial hand-wringing, as investors and Wall Street demanded their quarterly increases in revenue, there was little choice but to turn up the heat, intensifying the reach of ads while hoping that their respective market positions were secure enough to prevent too many users from bolting. The essential bind of the attention merchant began tightening even on those Faustian geniuses who thought they had beaten the Devil.

YouTube, now a Google subsidiary, offers perhaps the starkest example of the change. Once entirely ad-free, by the mid-2010s many of its videos required users to watch a fifteen- or thirty-second commercial to see a few minutes of content, making television’s terms look almost respectful by comparison. That priceless impression of getting great stuff for free, the attention merchant’s most essential magic trick, was losing its charm. As with any bit of legerdemain, once the actual workings are revealed, the strings made visible, it becomes ugly and obvious, drained of its power to enchant.

Targeting and tracking were not the only innovations in web advertising over the 2010s. Trying to stay ahead of the growing disenchantment, sites like BuzzFeed brought forth their own inventions, if such ideas could merit the term. One was known as the “advertorial,” or “native advertising”: ads designed to look like content native to the site, aping its form and functionality. The idea was that if it didn’t look like an ad it might get past more users’ defenses. “We work with brands to help them speak the language of the web,” said Peretti of this Trojan horse approach, uncharacteristically compromising the purity of his shameless love of contagion. “I think there’s an opportunity to create a golden age of advertising, like another Mad Men age of advertising, where people are really creative and take it seriously.”12

In practice this supposedly new Mad Men age consisted of BuzzFeed-style stories written at the behest and expense of corporations. Consider “The 14 Coolest Hybrid Animals,” a series for Toyota’s Prius, or “11 Things You Didn’t Know About [the Sony] PlayStation” joined with “10 Awesome Downloadable Games You May Have Missed.” Since BuzzFeed also wrote “real” news stories about the Prius and the PlayStation, it was sometimes awfully hard to tell which content was sponsored, not that it mattered much in BuzzFeed’s case.

“Maybe I’m old-fashioned but one core ethical rule I thought we had to follow in journalism was the church-state divide between editorial and advertising,” wrote Andrew Sullivan, the prominent blogger and former journalist, about this approach.13 Nonetheless, by mid-decade native advertising had become commonplace and was even heralded as a potential solution to journalism’s woes. It would be embraced by media companies as reputable as The New York Times and Condé Nast, which now, like BuzzFeed, had in-house units aping their own editorial style on behalf of advertisers. “The ‘sponsored content’ model is designed,” Sullivan observed, “to obscure the old line as much as possible.”

The world was slow to turn on the web, still, after all, the fountain of the new. Whether for reasons of politics or politesse, the web would suffer a lot of ruin before many critics, who’d fallen in love with its openness, would admit that things had gone awry. Even so, by the mid-2010s, more and more ordinary users had begun to see the emperor’s new clothes for what they were. Perhaps the first sign of elite revolt was the idea best articulated by Nicholas Carr that the web was making us stupider. Maybe it was the growing talk of an “information glut,” or Jaron Lanier’s argument, in his manifesto You Are Not a Gadget, that the culture of the web had resulted in a suppression of individual creativity and innovation. Even the incredibly powerful tools of sharing and communication—email, Twitter, YouTube, Facebook, Instagram—once employed by entities like BuzzFeed, didn’t seem so magical, having collaborated in building an attentional environment with so little to admire. For in its totality the web seemed to be bobbing in the crosscurrents of an aggressive egotism and neurasthenic passivity. Thus trapped, it was suddenly vulnerable to capture by anyone with a promise of something different or better.