We are living through an industrial revolution in attention.
In the century between the 1870s and 1970s, the United States underwent an industrial revolution in food (the invention of the refrigerator), light (the mainstreaming of electricity), travel (the triumph of the automobile and the airplane), and even in the anatomy of homes, with the modernization of gas, sewers, and running water. As the economic historian Robert Gordon has cleverly argued, a time traveler visiting a 1970s house would feel quite at home in its kitchen, bathroom, living room, or bedroom.
But if this decade-hopper wanted to watch cable on a giant flat-screen television, or stream music from a million-song library, or look up something on the Internet, he would feel lost. In the 1970s, there were no phones capturing their owners’ gaze like Narcissus at the water, no headphone cords snaking up from every person’s pocket, no libraries of information stored in tiny plates of glass. In the last forty years, the most visible technology changes have been in the realm of attention and its sub-kingdoms of entertainment, communications, and information.
This book is about that revolution, and it offers several lessons to readers who seek something more from culture than an afternoon’s diversion—meaning, emotion, a deeper truth about life, and, perhaps, even a sense of genius.
Several tactics behind pop culture hits can be dangerously seductive outside the arena of entertainment. For example, repetition is the God particle of music. Without it, music might dissolve into a cacophonous thrum; with the right amount, even a written sentence can sing. But in political rhetoric, modes of repetition like anaphora and antimetabole often dress ugly ideas in catchy language. Musical rhetoric creates a kind of cognitive anesthesia that numbs audiences’ capacity for deeper thinking. National debates about important issues would be improved if more people were savvy to musical sloganeering, and pundits and commentators would be of greater service if they distinguished “a great speech” from “a great musical-rhetorical performance with vacuous or dangerous ideas at the center of it.”
The same goes for stories. Heroic myths have served as the favorite narrative structure for raconteurs going back many centuries. The journey from ordinary underdog to uplifting champion offers a deeply satisfying arc that teaches audiences a lovely lesson: that failure, mediocrity, and dissatisfaction are temporary way stations en route to a happier destination. But it is precisely because great stories are persuasive that we should be cautious about which narratives to let into our hearts. The storytellers in our lives, from Hollywood kingpins to garrulous grandparents, are all subtle instructors of cultural expectations. A great story can teach audiences that racial bias is right or wrong, that a war is necessary or abominable, that women are subservient sex objects or worthy self-determining heroes. Narrative drama is not always a moral attribute. It is more like a mercenary feature, equally adept at selling prejudice and empathy. Above all, a great story should be an invitation to think, not a substitute for thinking.
Another major theme of this book is the tension between neophilia and neophobia. Many people crave new products, ideas, and stories, provided that they are just like the products, ideas, and stories that they already know.
The digitization of content has yielded a world of algorithms that theoretically serve both the neophilic and the neophobic self. They organize the world of songs, shows, and articles around our prior preferences, allowing us to discover only those items that are “optimally new.” There is a thrill that comes from reading a brilliant essay that makes a point the reader already agrees with or hearing a joke that elegantly summarizes one’s worldview. It can provide an intellectual freshening, a kind of cognitive therapy.
But one of the downsides of people’s deep preference for familiar ideas is that they avoid stories and arguments when they expect they’ll disagree. Social media and algorithms, which shrink the aperture of news to a handful of stories favored by our peers, make it easier for people to avoid discomfiting ideas, or even to never learn of their existence. Rather than connect the world, these technologies can create millions of cults whose worldviews are airbrushed by a commercial interest in surrounding people with ideas that mirror their own.
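To make the idea concrete, here is a minimal, purely illustrative sketch of a feed ranker that favors the “optimally new”: items similar to a user’s tastes, but not identical to them. This is not any platform’s actual algorithm; the tag sets, item names, and the 0.6 “sweet spot” are all invented for illustration.

```python
# Toy "optimally new" feed ranker. Each item is a set of topic tags;
# a user's taste is the set of tags from their listening history.

def similarity(item_tags, taste):
    """Jaccard overlap between an item's tags and the user's taste (0 to 1)."""
    union = item_tags | taste
    return len(item_tags & taste) / len(union) if union else 0.0

def optimally_new_score(item_tags, taste, sweet_spot=0.6):
    """Score peaks when an item is mostly familiar but slightly novel."""
    return 1.0 - abs(similarity(item_tags, taste) - sweet_spot)

taste = {"indie", "rock", "lofi", "guitar"}
feed = {
    "clone_of_your_playlist": {"indie", "rock", "lofi", "guitar"},
    "familiar_with_a_twist":  {"indie", "rock", "synth"},
    "totally_alien":          {"opera", "baroque"},
}

# Rank the feed: the mostly-familiar item wins, the pure clone comes second,
# and the entirely unfamiliar item sinks to the bottom.
ranked = sorted(feed, key=lambda k: optimally_new_score(feed[k], taste),
                reverse=True)
```

The scoring function is the whole argument in miniature: a ranker built this way never surfaces the genuinely alien, which is precisely the aperture-narrowing effect described above.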
“I don’t like that man,” Abraham Lincoln said. “I must get to know him better.” It would be nice to treat ideas the same way. For example: to make a list of ideas you don’t like or understand and read something new on the topic every month to get to know them better. Content platforms like Facebook could purposefully offer disfluent feeds, so that a Jewish Connecticut liberal could see the media diet of an evangelical Texas conservative and vice versa. Music and theater are often meant to be cathartic, but information isn’t supposed to feel like therapy. Sometimes learning about the world should hurt.
Soon after the surprisingly popular debut of American Graffiti, George Lucas told an interviewer he was working on a western set in outer space. The interviewer paused uncomfortably. “Don’t worry,” Lucas said. “Ten-year-old boys will love it.”
Is it meaningful that the most significant secular myth of the twentieth century was designed to appeal to fifth-grade boys? Perhaps the lesson is that ten-year-olds are Hollywood’s demographic goldmine. Perhaps it’s that adults are more like children than most people think. Or perhaps there is no lesson: Culture is chaos, after all, and there are a million ways that this movie for middle schoolers might have flopped (particularly if Lucas had filmed one of his horrid early drafts).
But I’ve come to see that there is a separate wisdom in Lucas’s throwaway comment. The paradox of scale is that the biggest hits are often designed for a small, well-defined group of people. Star Wars was for children of a magical age—old enough to appreciate movies and young enough to love medieval histrionics in space without irony or embarrassment. Facebook was initially designed to appeal to the friends of Harvard undergrads, not to connect the whole world. Vince Forrest found that his bestselling buttons have the most amusingly strange and specific messages. Johannes Brahms wrote his world-famous lullaby for one mother. Narrowly tailored hits are more likely to succeed, perhaps both because of their inherent qualities—they are focused works—and because of their network qualities. People are more likely to talk about products and ideas that they feel unusually attached to. High school cliques, California cults, and ideological clusters are all defined by their differences from an outward mainstream. These groups have existed for a long time, but a digitally connected world of commerce means it is easier to profit from cultish hits—to be profitably “weird at scale.”
Last year, I was discussing this book with a friend of mine, a music and television critic for a prominent national magazine. He asked a mischievous question: “But do you account for genius?”
I did not have an immediate answer. But I recalled that, in conversations with academics, one artistic achievement had come up more than any other. “You would be surprised how often professors want to talk to me about Kid A,” I said. From the British band Radiohead, Kid A is quite possibly the strangest album to ever sell one million copies. It belongs to no genre, there are practically no choruses, and on some songs there is hardly what any human would recognize as singing. Jazz trumpets trade riffs with electric guitars, walls of anxious static are punctuated by robotic beep-boops, and the title track sounds not unlike an alien dying of asphyxiation. And yet, because of or despite these musical heresies, it is unspeakably beautiful. It is the very definition of MAYA—music for a more advanced species, perhaps the one after ours.
But the psychologists and sociologists I consulted didn’t talk about the album’s sound. Rather, several of them made the same point, which was that there is no way that an album as hostile to melody as Kid A could have gone platinum if it had been a band’s debut. Kid A was genius, they granted. But it was acceptable only because Radiohead’s previous work had already bought the audience’s acceptance.
Coming after several massively successful LPs, Kid A was Radiohead’s fourth album. In this manner, I thought, Kid A’s success seemed to fit within a broader pattern. Led Zeppelin’s unnamed fourth album is its mythic masterpiece. Born to Run was Bruce Springsteen’s third studio album, Sgt. Pepper was the Beatles’ eighth, Thriller was Michael Jackson’s sixth, My Beautiful Dark Twisted Fantasy was Kanye West’s fifth, and Lemonade was Beyoncé’s sixth. I thought of Beethoven’s Fifth and Ninth Symphonies, Seinfeld’s fourth through seventh seasons, Stanley Kubrick’s eighth feature film, Virginia Woolf’s fourth novel, and Leo Tolstoy’s sixth book.
It is self-evident that a person’s best work might emerge after years of practice, as artists refine their skill. But there is something more at play here: These artists and teams produced their most resonant work after they had already passed a certain threshold of fame and popularity. Perhaps genius thrives in a space shielded ever so slightly from the need to win a popularity contest; it comes after the game has been won, after the artist can say, essentially, “Now that I have your attention . . .”
Raymond Loewy’s MAYA theory accounted for success at the frontiers of taste. Perhaps genius is the name we give to that limit, and the greatest work comes from creators who seek something beyond acceptance, who push forward the frontier.
• • •
There were no printed books in 1400. There were no modern public museums in 1700. There were no nickelodeon movie theaters in 1900, no radio news programs before 1920, no color television before 1950, and no Facebook, Twitter, or Snapchat in 2000. This is a ragtag group of institutions, publications, and apps. But they are all inheritors of a common tradition, which is the democratized distribution of information and entertainment.
Each new entry in the marketplace of attention has threatened to obliterate the status quo. But despite many warnings to the contrary, the printed word didn’t kill the art of writing, movies didn’t kill books, radio didn’t kill the news, television didn’t kill the movies, the Internet hasn’t killed television, and video didn’t kill the radio star.
The landscape of pop culture is geologically active and forever growing. It is the story of new ideas slipping over the folded plate boundaries of old technologies. Although monks predicted that Gutenberg’s device would obliterate the written word, the printing press increased literacy and brought millions of new writers into the marketplace of letters. Before 1960, the highest-grossing films of all time—Gone with the Wind, The Ten Commandments, and Snow White—were all based on books, and yet more than half of eighteen- to thirty-four-year-olds are still reading for pleasure. There are movies based on mobile games, and video games based on movies, and there will soon be broadly popular virtual reality video games based on both.
Several years ago, McKinsey published an estimate of time spent consuming messages since the start of the twentieth century. In 1900, messages came through just a few channels. People read pulp—books and newspapers—but most communication was face-to-face. In the next century, households fell in love with the radio set, the television set, their computers, and their mobile devices. Theoretically, the most important boundary for media is time; there are still just twenty-four hours in a day. Yet each new generation spends more hours talking, reading, watching, and listening. New plates collide, and the mountain of media grows.
It is tempting to always see technological change as an agent of cultural death. In 1906, John Philip Sousa predicted that the invention of the phonograph and records would obliterate song composition and music education in America. “These talking machines are going to ruin the artistic development of music in this country,” he wrote to the United States Congress. “The vocal cord will be eliminated by a process of evolution, as was the tail of man when he came from the ape.”
Sousa, a white man, did not foresee that cheap music technology would give black Americans like Aretha Franklin and N.W.A. a global microphone; or that a collective audience the size of a thousand concert halls would hear recordings of his “Stars and Stripes Forever” over the next century, making him far more famous than he could have ever hoped to be in the 1800s; or that this wide availability of music would make it easier for all artists to share the influences and allusions that enrich musical creativity. The vocal cords of modern musicians reverberate in speakers and headphones around the world because of the very revolution in reproducibility that Sousa feared.
Just as some people are too willing to see death in every new thing, some technologists see a simple exponential line stretching toward utopia. But while it is easier to talk, listen, share, and watch, ease of access is not purely virtuous. Facebook is a global glue, binding companies, consumers, families, and friends together; and yet, social media makes some people feel lonelier by shining a light on the happiness that they’re missing. The digital revolution in music has made songs more abundant, but it has also driven down the price of recorded music, so that many bands receive massive exposure but paltry revenue. The digitization of music has made the rare hits more valuable than ever. In 2014, the top 1 percent of bands and solo artists earned 77 percent of all revenue from recorded music, and the ten bestselling tracks commanded 82 percent more of the market than they did in the previous decade.
This book is about the psychology of hits and the economics of media, but there is a broader lesson in these chapters, which is about humans and history. If I had to capture this metatheme in a sentence, this is the one I would choose: Technology changes faster than people do.
For the past fifty years, progress has marched to a hurried tempo known as Moore’s law. In 1965, Gordon Moore, the cofounder of Intel, was invited to write an article for Electronics magazine to predict the next decade of semiconductor technology. He predicted that the number of transistors that fit on a microchip would double every year or so. For the last half century, his prediction has proved almost exactly correct. But while technology races at the exponential velocity of Moore’s law, humans plod along at a leisurely Darwinian pace.
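The compounding in Moore’s law can be expressed in one line of arithmetic. A minimal sketch, assuming an idealized doubling every year (Moore’s original 1965 figure; he later revised the period to roughly two years):

```python
# Illustrative arithmetic for Moore's law: exponential doubling.
# The doubling period is a simplifying assumption, not a precise model.

def transistors(start_count, years, doubling_period=1.0):
    """Projected transistor count after `years`, doubling every `doubling_period` years."""
    return start_count * 2 ** (years / doubling_period)

# Ten annual doublings multiply the count roughly a thousandfold;
# fifty doublings multiply it by more than a quadrillion.
thousandfold = transistors(1, 10)
```

That quadrillion-fold gap over fifty years is the contrast the paragraph draws: exponential technology against roughly constant human nature.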
People’s basic needs are complex, but old. They want to feel unique and also to belong; to bathe in familiarity and to be provoked a little; to have their expectations met, and broken, and met again.
Technology offers new tools for old jobs. In the 1950s, the television became the most popular and fastest-growing consumer product in history. It threatened to replace movies as the sole source of video entertainment, the printed word as the prime source of news, and radio as the standard piece of living room furniture.
But while television did contribute to these industries’ relative decline, it also may have made them all better and more distinctive. Movies adapted to the rise of television by spending more on production to distinguish big-picture films from small-fry TV. Magazines and newspapers continued to produce excellent journalism while taking a lesson from television by adding more photography and graphics. Radio responded in perhaps the most interesting way: In 1940, the car radio was a rare feature, but within thirty years 95 percent of U.S. vehicles included one. “Radio has become a companion to the individual instead of remaining a focal point of all family entertainment,” the cultural historian J. Fred MacDonald wrote. The television fired radio from its old job, serving as a hearth of the home, but freed it to roam and follow users as they moved throughout the world.
Today, old-fashioned television is the technology in existential crisis, forced to answer for itself the question it once posed to the rest of media: What can I do best? For many years, the cable bundle was dominant in its ability to provide immediate news and information, to delight living rooms with original storytelling, and to open a universal window to sports. But today, the Internet provides more immediate news and information. Facebook provides more convenient and atomized escapism. Netflix, Hulu, and Amazon Video offer moving and meaningful storytelling. Virtual reality will soon offer more immersive experiences. It is as if by cutting the cord, young people released TV particulates into the air that are being absorbed into all media, and everything is becoming television.
The future of hits will be a global stage with many narrow spotlights. This book has focused on Western culture, from European lullabies and impressionism to New York music labels and Hollywood. Some might find the Western focus objectionable, but I think it’s defensible on the grounds that, in the last few centuries, the West has been the world’s chief cultural exporter of blockbusters and superstars. But this will change. In 2015 and 2016, at least ten films grossed $100 million worldwide with more than 99 percent of their audiences outside the United States. Perhaps the indispensable nation is becoming dispensable.
Meanwhile, there is little doubt that the future of attention is moving to mobile, where fame is fleeting, and so is infamy. On Facebook and Instagram, pride, awe, and outrage blink in and out of existence like quantum particles. Writers used to call each fad a “nine days’ wonder.” In the 1960s, Andy Warhol predicted that everybody would have just fifteen minutes of fame. The half-life of notoriety is shrinking. In the new mayfly media world, where a thousand hearts and likes can flock to an ordinary person’s photo or comment and then move on in the next minute, millions await their brush with the feeling of fame, their sixty seconds of celebrity.
Cultural change is impossible to plot along a straight line, because culture is Newtonian. The strongest actions provoke opposite reactions. The rise of e-books should have destroyed the smallest print-book shops. But the number of indie bookstores is up 35 percent since 2009. The growth of digital music should have destroyed physical recordings. But vinyl albums, while niche, are growing almost as fast as streaming. The frictionless publishing platforms of the Internet have allowed many news organizations to forgo subscriptions and finance themselves through advertising, making news totally free to consumers. But some of my favorite individual writers, like Andrew Sullivan and Ben Thompson, have made a living by ditching advertising and asking readers to pay with their income as well as their attention.
This final paradox is the one I find most interesting. The future of hits will be a heyday for both breadth and depth. Tomorrow’s empires of entertainment can be bigger than ever. But the independent artists can be stronger, too. My last two stories are about these two futures of hits, large and small—empire and city-state.
• • •
The Walt Disney Company is a global empire of media. Like every historical empire, it is best thought of not as a single state organization, but rather a power distributed across a diverse set of global properties. In addition to the animated movies of animals and princesses that made it famous, Disney owns Star Wars, Marvel Comics, and Pixar. It operates ESPN and ABC with partnership stakes in A&E and Hulu. It owns eight of the ten most popular amusement parks in the world. It is not a movie company, as the name might suggest, but rather the world’s most successful theme park company, attached to the country’s most profitable television company, connected to the world’s most famous movie company. In the pantheon of hit makers, Disney is Zeus.
But in the beginning, Disney was not the profit-gushing king of entertainment. In his early years, Walt Disney’s movies had decent cash flow, but Walt was an artist who preferred to spend every penny he could touch on his next film. His company rarely operated with strong and steady earnings in the 1920s—and those were the boom years for the U.S. economy. Then the Great Depression hit. Soon World War II would destroy the movie theaters of Europe. To go from artist to empire in the Depression, Walt Disney needed a heroic sidekick. He found one in a man named Kay.
“Kay” Kamen was born Herman Samuel Kominetzky in Baltimore, Maryland, on January 27, 1892. He was the youngest of four children in a Jewish family that had emigrated from Russia. His young life did not predict fame, fortune, or even moderate competence. He dropped out of high school and spent time in a juvenile penitentiary as a teenager. In his twenties, he finally got steady work selling mink hats in Nebraska.
Kamen quickly proved himself to be a preternaturally gifted salesman, even though his physical appearance was, on a good day, unsightly. He was a stocky gentleman with a broad face, squashed nose, round eyeglasses, and a severe middle part bisecting his thick black hair. Even his colleagues did not rise to his defense on this count. “Kay Kamen was one of the homeliest men I had ever seen,” Jimmy Johnson, a Disney veteran, wrote in his memoir. “But looks are only skin deep. Kay was one of the warmest, most charming persons I ever met.”
In his thirties, Kamen found success at a Kansas City marketing firm, where he specialized in developing products based on movies. His ambitions stretched westward. In 1932, he saw a Mickey Mouse cartoon and recognized that the mouse could be a star outside the cineplex. He placed a call to Walt and Roy Disney in Los Angeles with a simple request: Let me sell your cartoon mouse. The Disney brothers invited Kamen to drop by their Hyperion Avenue lot the next time he happened to find himself in Los Angeles. About forty-eight hours later, Kay Kamen was sitting in Walt’s office. He had, by one account, cashed in his life savings, sewn the bills into his suit jacket, and boarded a two-day westbound train. For fear that somebody might steal his jacket, he did not sleep for the entire forty-plus-hour voyage.
Kamen presented his plan to sell Mickey. “Kamen’s philosophy was that Disney needed to move Mickey Mouse out of the ten-cent store and into the department store, because that’s where consumers were moving,” said Thomas Tumbusch, perhaps the nation’s leading researcher on Disney merchandising history. Kamen signed a deal that made him solely responsible for licensing Disney character merchandise worldwide.
Kamen’s winning insight was simple and somewhat Cassandran: Hollywood thought that toys were advertisements for movies. Hollywood was wrong; the opposite was true. The films were proofs of concept. The future of the movie business was everything outside the movie theater.
In particular, the future of movies was in stores. Families were streaming from farms to cities, and the stores followed. In 1920, there were no Sears department stores in the United States. By 1929, there were three hundred. Annual sales of Disney merchandise went from $300,000 in 1930 to $35 million in 1935.
Kamen’s most famous achievement was the Mickey Mouse watch, which made its international debut at the Chicago World’s Fair in 1933. This was the nadir of the Great Depression. The U.S. economy had shrunk by a third since the late 1920s, and unemployment screamed past 20 percent. Many families hardly had the means to buy food in 1933, much less toys. But the Mickey Mouse watch was an instant and astonishing hit. Its manufacturer, the Ingersoll-Waterbury Company, was rescued from the verge of bankruptcy by horology, increasing the number of its factory workers from three hundred to about three thousand within the year to keep up with demand. Macy’s landmark New York City department store sold eleven thousand Mickey watches in one day. In two years, Disney sold more than two million. The watch was then the greatest financial success in the history of the Walt Disney Company, and it wasn’t even Walt Disney’s idea.
Kamen infested the world with cartoon rodents. The New York Times described a pop culture landscape “bustling with Mickey Mouse soap, candy, playing cards, bridge favors, hairbrushes, chinaware, alarm clocks, and hot-water bottles, wrapped in Mickey Mouse paper, tied with Mickey Mouse ribbon, and paid for out of Mickey Mouse purses.” The Cleveland Plain Dealer described an idyllic child in 1935:
In his room, bordered with M.M. wall paper and lighted with M.M. lamps, his M.M. alarm clock awakens him, providing his mother forgets! Jumping from his bed where his pajamas and the bedding are the M.M. brand, to a floor the rugs and linoleum upon which are M.M. sponsored, he puts on his M.M. moccasins and rushes to the bathroom [to] the soap made in the Disney manner, as are also his toothbrush, hair-brush and towels.
It is strange now to imagine Mickey Mouse as a symbol of anything but innocent charm and harmless rambunctiousness. But abroad he was a complex symbol, both beloved as art and derided as propaganda. The Soviets claimed he symbolized the pathetic timidity of the capitalist workforce, while the Russian movie director Sergei Eisenstein praised Disney’s work as “the greatest contribution of the American people to art.” In Nazi Germany, a similar divide opened between public derision and private delight. “Micky Maus is the shabbiest, miserablest ideal ever invented,” declared one Nazi propaganda newspaper in 1931. But Adolf Hitler must not have hated the character as much as he let on. In December 1937, three months before the invasion of Austria, the Nazi leader received eighteen Mickey Mouse films as his Christmas present. Incredibly, the gift was from Joseph Goebbels, the minister of propaganda.
Back in Los Angeles, Kamen’s empire of fantasy gave Walt Disney the confidence to make the first full-length animated film in history, Snow White and the Seven Dwarfs. “Without Kay Kamen,” Tumbusch said, “there would be no Snow White.” When the movie came out in December 1937, its reception was nothing short of rapturous—not just among children, but also among the industry’s most discerning grown-ups. Charlie Chaplin, who attended the world premiere, declared Dopey the Dwarf “one of the greatest comedians of all time.” Within a few years, it became Hollywood’s highest-grossing sound film.
Still, the movie’s box office couldn’t keep pace with Kamen’s juggernaut. In the two months after its 1938 premiere, the movie made $2 million from the sale of toys—more than the actual film made in the United States that entire year. There were Snow White caramels, coloring books, candy boxes, cooking ware, Christmas tree ornaments, carnival chalk figures, combs, crafts, and crayon sets, and that’s just the merchandise starting with the letter c.
Nobody in the film industry or beyond had ever seen something like this before—a film slipping off the silver screen and promiscuously impressing itself on every product category imaginable. “The picture has virtually developed a new industry from its by-products,” the New York Times declared in an editorial from May 1938. The Times predicted that Disney had invented a new business, “industrialized fantasy,” that could save the U.S. economy from the Great Depression.
They were wrong. Industrialized fantasy was not the future of the economy. It was, however, the future of entertainment. Disney had developed the perfect movie-merchandising symbiosis. Snow White, produced with the profits from Kamen’s licensing business, poured fresh fuel right back into his merchandising machine. The movies inspired the toys and the toys paid for the movies.
Disney might not have been a born businessman, but he absorbed Kamen’s lesson: The art of film is film, but the business of movies is everywhere. Disney described the strategy as “total merchandising.” A movie was more than a movie. It was also a shirt, a watch, a game—and, soon, a television show.
In the 1940s, much of Hollywood greeted the dawn of television as newspaper publishers did: by shielding their eyes and waiting for it to go away. Disney, however, saw television for what it could be: a movie theater in every living room and a living room advertisement for the movies. For several years, he wanted to build an amusement park, a “Disney Land” for children, based on his animated characters. He was also interested in developing a television show for one of the major broadcast networks. The stroke of genius was to unite these two dreams. He told his brother Roy to sell a Disney TV series to a network only if it was willing to invest in his park, too. NBC balked. CBS, too. But ABC, the runt of the big three broadcasters, jumped at the idea. In 1952, it agreed to make a Disney TV show and to invest in one third of the park. In a move deemed “one of the most influential commercial decisions in postwar American culture,” Disney insisted that the television show and park have the same name: Disneyland.
Disneyland the show became the first ABC program to appear among the year’s ten highest-rated programs. About 40 percent of the nation’s twenty-six million TV households tuned in each week, even though it was often little more than an advertorial, or a sneaky blend of original content and advertising. One episode, “Operation Undersea,” provided a behind-the-scenes look at the film 20,000 Leagues Under the Sea just one week before Disney released it in theaters—an elaborate movie preview. The film went on to be the second-highest-grossing movie of 1954, after White Christmas. “Never before have so many people made so little objection to so much selling,” one ABC executive said of the Disneyland television show. Disney’s strategy also benefited from a demographic windfall. His key market was children sitting in front of the television, and in the postwar baby boom the number of kids between five and fourteen grew by 60 percent between 1940 and 1960.
On July 17, 1955, the Disneyland theme park opened. It was such a disaster that employees referred to it as “Black Sunday.” Several of the rides didn’t work. There weren’t enough water fountains to serve the attendees due to an earlier plumbers’ strike. One-hundred-degree heat melted the fresh asphalt, which clung to the heels of mothers, like octopus ink from Ursula’s lair.
But first impressions aren’t everything. In its first six months, one million paying customers passed through Disneyland’s gates, and the park accounted for one third of the company’s revenue for the year. ABC had made the park possible, but several years after it opened, the network sold its stake back to Disney. In retrospect, this was a horrifyingly bad idea, not unlike selling a band of rebels the weapons they’ll eventually use to sack your city. In 1995, the Walt Disney Company bought ABC for $19 billion, with many thanks to the profits from the very amusement park business that ABC had once financed.
By midcentury, the Walt Disney Company was no longer a movie business. Even in the 1950s, the studios produced stories that reached their largest audience through television. Disneyland the show constructed a mythology that families could truly inhabit only at Disneyland the amusement park, which made much of its profit thanks to something else: the sale of Disney merchandise. The Disney empire is predicated on the principle that audiences both want to lose themselves in a fairy tale and map their lives onto it.
One might cynically say Disney’s movies are proofs of concept for TV shows, which are advertisements for its theme park, which serves as a loss leader for capturing merchandising sales. But really, there is no unidirectional line of commercialization. Disney’s empire is an ouroboros, an infinite nostalgia loop in which everything is selling everything else.
Like the future of the world economy itself, Disney’s ambitions stretch eastward. The company’s most important new creation isn’t a new movie for America but an amusement park for China. Shanghai Disney Resort, which opened in 2016, is a $5.5 billion project that was twenty-five years in the making. Three hundred million Chinese people—about 90 percent of the population of the United States—live within a three-hour commute of the park by car or train. Just as Disneyland was about more than selling tickets, Shanghai Disney Resort isn’t just about theme park revenue, or even selling merchandise on the park grounds. It’s about building an infinity loop of awareness around Disney movies and products in China. “As Walt did with Disneyland in the fifties, enabling Disneyland to really grow the Disney brand in the United States, we believe we will have some really interesting opportunities to do the same in China,” Disney CEO Bob Iger said in 2009.
The Walt Disney Company is the quintessential hit empire for three commercial reasons. First, with its television and marketing channels, it has great power to build exposure and awareness for its new and riskiest art. Second, the company is rich enough to buy the most popular franchises in the world, like Star Wars and Marvel, and produce lavish, familiar surprises with new chapters of old stories. Third, it converts happy audiences into high-paying devotees at its amusement parks and stores. Children, ages four through one hundred and twenty, are invited to own the fantasies that Disney has conjured, and the dolls, bed sheets, and costumes they bring into their homes become the most powerful advertisement for the next installment of the fantasy. Umberto Eco called Disneyland “the quintessence of consumer ideology,” because it “not only produces illusion,” but also “stimulates the desire for it.”
This is one future of hits: “total merchandising.” Disney is larger, more influential, and more ever present than any company could have been in 1932 when Kay Kamen arrived by train in Los Angeles and proposed to turn Mickey Mouse into a watch. But still it operates by a business philosophy that is pure Kamen: Every channel of distribution is an opportunity for exposure and commerce. Far beyond the darkness of a movie theater, Disney’s stories inhabit cable and broadcast television; its movies stream on Netflix; its franchise films cover billboards and taxis every year; and eight of its amusement parks attract at least ten million annual visitors. Disney’s empire expands even to the stage, that most ancient and beautiful arena of entertainment. Eighteen years after the debut of the musical version of The Lion King, its productions have grossed more than $6.2 billion worldwide—more than any film—with more than eighty-five million tickets sold.
Total merchandising is powerful not only because it pushes a company’s content through all available channels, but also because companies can pull insights from these same channels. BuzzFeed, a young media company that might have the best chance to become a Disney for the twenty-first century, was born as a website. But, like Disney, it is a promiscuous vine that can live and grow in any climate. In 2016, 80 percent of BuzzFeed’s audience discovered its content somewhere other than its website—on social networks like Facebook and Snapchat, publishing partners like Apple News and Yahoo, and messaging apps like WeChat. For some consumers, BuzzFeed is a digital newspaper. For others, who find it on Snapchat, it is more like a TV company programming content for different cable networks.
Content flows out and information flows in. BuzzFeed uses its distribution channels to collect information on what audiences read, watch, and share and turns those lessons into content for some other channel. Like Arthur Conan Doyle’s famous description of the crime lord Moriarty, BuzzFeed is like a spider sitting at the center of a vast web, heeding the radiations of a thousand threads. “If we see something works well on Instagram, it can be adapted for Snapchat,” founder and CEO Jonah Peretti once said. “If we see something works well as a post, it can be adapted for video. If we see something works in the UK, it can be adapted for Australia. The nodes of the network are very autonomous, but they share learnings back with the larger network.”
The legacy of Kamen runs through Disney and BuzzFeed. It is total merchandising—an infinity hit loop in which everything is a test of, and proof of concept for, everything else. This is one vision of the future of hits. But there is another.
The free distribution of the Internet should also empower individuals, untethered from the old gatekeepers that once controlled distribution, marketing, and hit-making. These individuals or small companies may not challenge Disney’s domain, but they can achieve cultural renown and commercial success on their own terms, by using the Internet to build networks and reach audiences. They are not like empires, but rather like city-states.
• • •
I met Ryan Leslie—rapper, hip-hop producer, network science nerd, and start-up founder—in the Financial District of Manhattan, the southern nub of the island where the shadows of tall buildings cut demented rhombuses of sunlight in the tiny streets. It was ten a.m. when we met in the lobby of a giant luxury residential tower. Leslie had been up until four a.m. that morning at his studio, a few blocks away, working on several new songs. His bottom teeth were encased in a gold grill, and he was dressed in a long cotton T-shirt and light blue skinny jeans with holes in the knees.
We rode the elevator to his floor and opened a heavy door to enter his screening room, with beige soundproofed walls and several rows of matching beige chairs. Then Leslie told me his life story.
“My parents are Salvation Army officers, straight off the rip,” he began. When Ryan was born in 1978, his parents sent him to South America to live with his grandparents and the Salvation Army in Suriname. When he returned to the United States, the family moved constantly, but music anchored his home life. His mother sang and played piano. His father sang and played trumpet. Leslie attended four high schools in three years: in Richmond and Fredericksburg, Virginia; in Daly City, outside San Francisco; and in Stockton, California.
By the age of fourteen, Leslie had nearly exhausted his high school’s slate of advanced courses, and a guidance counselor encouraged him to move on to community college in California. Leslie took the SATs. He scored a 1600.
He applied for a Rotary International grant and gave a speech: “To do what you believe in and believe in what you do is the key to a life of fulfillment,” he said with perfect antimetabole. He won the scholarship and applied to Stanford and Harvard. He chose Harvard.
Leslie arrived in Cambridge, Massachusetts, as a fifteen-year-old freshman. Hoping to honor his parents’ and grandparents’ example of sacrifice, he was prepared to become a doctor. Then he discovered Stevie Wonder. That was that. Leslie had to be a musician.
“My dad was confused,” Leslie said. “My parents wanted to protect me from the personal and financial risks associated with pursuing a career with so many uncertainties that requires a stroke of luck and magic sprinkle dust.”
He was obsessive, performing in several singing groups, like the Kuumba Singers and the Harvard Krokodiloes, and camping out for all-nighters at Quad Sound Studios, in the basement of Pforzheimer House, to work on beats and lyrics. When he needed a cheap way to contact recording companies in Los Angeles, he took an ad sales job with the Harvard Guide and secretly made free long-distance phone calls to scouts and producers.
Although his friends told him he was crazy and he spent much of college on academic probation, he graduated on time and delivered the commencement speech for Harvard’s class of 1998. But unlike the cap-and-gowned graduates in the audience, Leslie hadn’t spent his senior year setting up postgrad jobs in the consulting, banking, or corporate corridors of the economy. He graduated from Harvard University with no job, no income, no savings, and no home. His most valuable possession was a working campus key that allowed him to sneak into vacant dorms over the summer to sleep and take showers.
Leslie’s early career was a series of false starts. He sold beats to gangs who wanted to anthologize their lives in rap songs and made so little money he had to live and record in a storage area behind a barbershop in Boston.
Finally, a bit of magic sprinkle dust fell. At twenty-four, on the verge of giving up and applying to law school, he was invited to participate in a one-month “beat camp” in the Bronx with the producer Younglord. The internship led to an interview with Puff Daddy, who immediately recognized Leslie’s prodigious talent and provided the money and stars to nurture it. Soon he was writing beats for Beyoncé. Leslie signed a publishing deal with Universal Records for half a million dollars. He fell in love and started dating a young model—black, Filipino, Mexican, and “atomically sexy,” according to New York magazine.
Her name was Cassie. He wrote her a song that she could give to her mother: “Me & U.” When his label heard the track, they insisted on releasing it and turning Cassie into a star. The record was a commercial smash, selling more than a million digital downloads in half a year, to become one of the biggest hip-hop songs of 2006. Critics hailed Cassie as the century’s next Janet Jackson.
But as quickly as everything seemed to come together, it all fell apart. Cassie started dating Puff Daddy, who now calls himself Diddy. Leslie released two solo albums, and neither achieved blockbuster success. In 2010, he filed for bankruptcy.
Leslie paused here. He had been talking for about an hour straight. He leaned back and looked up for a while, as if the next part of the story were written on the ceiling. “All I needed,” he said, and stopped. He gathered the words and rapped a verse:
All I needed was five racks to get up out of the projects.
All I needed was some contacts but I wasn’t getting no callbacks.
So I put it on my back. That’s my grind and my time
So when I’m counting these millions, man, that’s my money and my shine.
Leslie was dismayed by the economics of the music label business. For every dollar earned by a recording artist, the label often earns between three and eight times more. As a result, labels are built to live on scale, even when their artists could make a meaningful living on niche. A million dollars here and there won’t make or break any label’s year. But for an independent artist, a million dollars can change a life.
Leslie doesn’t belong to a label anymore. Instead, he built a smartphone application that helps him stay in closer touch with a smaller audience that pays him directly for his music. The app, called SuperPhone, is like an advanced messaging service. Leslie has given his direct phone number out to more than forty thousand people. He uses the app to text them directly when he has a new song, when he’s performing, or even when he wants to invite them to a party. He sold out a New Year’s Eve celebration in 2014 with $1,700 tickets and sold several copies of a special July 4 album for $5,000.
In total, Leslie has sixteen thousand paying customers he can contact on SuperPhone. He knows their names, their numbers, the music they’ve bought, and what they paid. (The average contribution is around $100 per year.) Fewer than twenty thousand buyers is not nearly enough scale to become a major artist within the label system. But Leslie doesn’t need a label. For an expense of about $3,000 in text messaging fees, he was able to generate $589,000 in 2014, without a label, manager, or marketing staff.
Building SuperPhone has also taught Leslie about the mechanics of success: why some talented people make it and others fail. Leslie has the numbers of Kanye West and Ludacris on his phone. He has famous rappers, singers, and hip-hop producers in his pocket. An ordinary twenty-two-year-old has none of that. Indeed, much of what initially seemed to him like “magic sprinkle dust” turned out to be real and quantifiable. So often, the difference between success and failure, he decided, was the quality of the people surrounding the artist.
Take two young men with the same talent. One is a good-looking kid from the Great Plains with a great voice. His five closest friends are his parents and classmates. The second kid is from London, Canada. His top five includes Usher, one of the biggest pop stars in the world, and Scooter Braun, one of the biggest talent managers in music. The first kid is a gifted nobody; the second kid is Justin Bieber. The difference isn’t the face or the falsetto. It’s the quality of the top five, the power of the network.
The science snaps into the lyrics: All I needed was some contacts . . .
“I get chills thinking about this stuff,” Leslie told me. “If you want to be a pop star, you need a pop star’s top five. If you want to be a politician, you need a politician’s top five. Your network needs to match the quality of Obama’s inner circle, or Clinton’s, or a Bush. If you want to be the best tennis player in the world, the five tennis people in your life have to be better than the five people around Serena Williams.”
Leslie was bouncing up and down on the couch and rubbing his arms from shoulder to elbow. Now I could see them, goose bumps on his forearms.
“My thesis is simple,” he said. “Your network is your power.”
• • •
Most hits bear the indelible imprint of not only their maker, but also some forgotten enabler along the way. Would I recognize a Monet painting if he had never met Manet, or Paul Durand-Ruel, or Gustave Caillebotte? Could hundreds of millions of people around the world hum Bill Haley’s “Rock Around the Clock” if not for the music tastes of a California fifth-grader, Peter Ford? There would be no Fifty Shades without a fan fiction network, just as countless apps could not have achieved instant virality without piggybacking on networks of college campuses. I knew and loved Johannes Brahms’s lullaby not only because of its economy of melody, but also because of a lineage of German Americans whose ancestors fled Europe at the right time. This cascade of musical influence bloomed out of Austria and Berlin, and its tendrils touched my mother’s home in the suburbs of Detroit.
I found myself mapping Ryan Leslie’s observations onto the lessons of Duncan Watts—the rapper and the network theorist, artist versus scientist, growing up on opposite ends of the world, following divergent passions and goals, yet arriving at the same conclusion. Hits are morsels of meaning passed from one network to another—forged in the cluster of creators and delivered to a million little cults.
None of this is new. From ancient lullabies to modern memes, new hits serve old purposes: to fill the time, to familiarize the strange, to estrange the familiar, to infect with emotion, to create meaning. What’s different today is the means—the ability of small players, like Leslie, to amass large audiences, and the power of large companies, like Disney, to achieve global omniscience.
Ryan Leslie is not the biggest rap star on the planet. He probably never will be. There is too much talent—and too little listening time—for each worthy artist, creator, or entrepreneur to claim a seat in the pantheon of stardom. So he forged another path, and now perhaps his most durable legacy won’t be any one song, but an invention that allows artists to find paying audiences without interference from the old gatekeepers that once stood between them.
Leslie doesn’t know if his next thing is going to be a hit. Nobody can know that. To be a maker in this world is to sacrifice certainty for love at the altar of art. But it’s that tantalizing uncertainty that keeps him up until four in the morning. My grind and my time . . . That’s all anybody can hope to control. The rest is magic sprinkle dust.