The tricky thing about studying popularity and why people like what they like is that at least three inextricable factors are always getting in the way: choices, economics, and marketing.
Choices: If I were writing this book in 1918, a year when the Model T car came only in black, it would be obvious to note that everybody likes black automobiles. The idea that people might prefer more polychromatic cars would have been hard to defend, and there would have been little evidence to support the idea. But today’s cars come in hundreds of models, colors, styles, and sunroof options, and the streets flow vividly with nonblack vehicles in a variety of sizes. Choices changed people’s tastes in ways that would have made 1918 “taste explainers” quickly seem silly.
Economics: In the summer of 2007, when Abercrombie & Fitch was one of the most successful fashion retailers in the United States, it would have seemed prudent to hold up the company as the code breaker of teenage fashion. By the end of 2008, however, the United States had fallen into deep recession. Teenage unemployment spiked and parents who lost, or were at risk of losing, their jobs closed the allowance spigot. Abercrombie’s stock fell by more than 80 percent in a year, and Time magazine named it the country’s “worst recession brand” as many high schoolers moved on to less conspicuously labeled clothes and discount stores. Abercrombie’s style didn’t change, but America’s economy did. Economics changed the definition of cool.
Marketing: In 2012, Super Bowl XLVI set the record for the most watched U.S. television program ever (although it would soon hand off the record to another Super Bowl). There is nothing quite like the National Football League’s championship game as a universal marketing blast in an otherwise fragmented media environment, and indeed this game cemented at least one historic hit. A jaunty Chevy commercial featured the five-month-old song “We Are Young” by New York indie pop band Fun. The very next week, the song climbed thirty-eight spots to number three on the Billboard Hot 100 and eventually hit number one, where it remained for six weeks. The next year, Billboard named “We Are Young” one of the one hundred best-performing songs in music history. The song’s sudden success wasn’t about economic circumstances, since its price and availability hadn’t changed. It was just about marketing—the power of the right song, in the right place, with the right product, in the middle of the Super Bowl, that supreme emperor of advertising platforms.
So choices, economics, and marketing are always shaping tastes. But what if you could study popularity in a market without any of those things—in a store with infinite options, no prices, and no advertising?
For example, imagine a national clothing outlet that carried every size and design of shirts, pants, and shoes. But this national chain had no labels or ads to promote one style over another. Every possible article of clothing simply existed, and they were all the same price. This chain would be a social scientist’s dream. Researchers could use it to study why certain fashions rise and fall without trying to control for the brutal power of advertising and distribution.
In fact, such a marketplace exists. It’s the marketplace for first names.
• • •
Choosing a first name is like shopping at an infinity store where every product costs zero dollars. First names are often a cultural product, like music or clothes. Parents select them both for deeply personal reasons (“Maria is my grandmother’s name”) and for more aesthetic ones (“Maria sounds nice”). News still carries influence—in the 1930s, the name Franklin soared in popularity while Adolf vanished—but there is nothing like direct advertising for names. No organization or company benefits from more sons named Michael, Noah, or Dmitri.
The weird thing about first names is that, even though they’re free and infinite, they follow the same hot-and-cold “hype cycle” as many other products that do have finite choices, diverse prices, and lots of advertising. Just like clothes, first names are a fashion. Some names are cool today (Emily) while some once-popular names now sound out of date (Ethel), even though the names Emily and Ethel are as Emilyish and Ethelian as they’ve always been. Nothing about the quality of the names has changed—just their popularity.
In the last decade of the twentieth century, the top three baby girl names were Jessica, Ashley, and Emily—none of which were in the top one hundred a century earlier. Meanwhile, the popular girl names from the early 1900s have all but disappeared. Ruth, Marie, Florence, Mildred, Ethel, Lillian, Gladys, Edna, Frances, Rose, Bertha, and Helen: All of these names were in the top twenty at the turn of the twentieth century, and none were still in the top two hundred by the century’s end.
It wasn’t always this way. For hundreds of years, first names were more like traditions than fashions. Parents would select names from a small pool of options and often recycle the same ones across generations. Between 1150 and 1550, practically every English male monarch was named Henry (eight of them), Edward (six), or Richard (three). Between 1550 and 1800, William, John, and Thomas accounted for half of all English men’s names. Half of England’s women went by Elizabeth, Mary, or Anne.
The trend was transatlantic. In the Massachusetts Bay Colony of the mid-1600s, half of the baby girls were named Elizabeth, Mary, or Sarah, and the roster of Sir Walter Raleigh’s Roanoke colony, established in 1587, shows that forty-eight of its ninety-nine men were named William, John, or Thomas. It was not just an English-speaking tradition. Germany, France, and Hungary had a similar concentration of popular names. São Paulo baptismal records from the late 1700s show that half of all girls were named Maria, Anna, or Gertrude.
Then, quite suddenly, in the mid- to late-nineteenth century, the list of most popular names embarked on a period of accelerating turnover, in both Europe and the United States. Girls’ names, in particular, cycle in and out of popularity faster than summer dress styles. Emma and Madison, two of the most popular names of the last decade, weren’t in the top two hundred just thirty years ago.
This leads to two mysterious questions about names, whose answers carry broad implications for just about any trend—cultural, economic, or political. First, how does something go from being a tradition, where the distinction between old and new hardly exists, to a fashion, where new customs are constantly pushing out old ones? Second, how do things become cool—even in markets with infinite variety, no prices, and no advertising?
• • •
The uptick in first name turnover started in England and spread through the Western world in the middle of the nineteenth century, Stanley Lieberson finds in his marvelous book on names, A Matter of Taste. This is a familiar trajectory. Something else started in England and spread around the world in the 1800s. It was the Industrial Revolution.
There are several possible connections between industrialization and first names. First, factories encouraged workers to move from small rural farms to dense urban centers, and urbanization introduced them to new names. Second, education rates soared in the early twentieth century, and literacy exposed people to an even wider variety of names in books and international news reports. Third, as people moved from single-family farmsteads to cities, the ties between nuclear families and extended family networks weakened. The melting pot of denser cities put a new emphasis on individualism. On a small family farm, having a familial name made you a part of the family, but in the city a name set you apart from other cultures, ethnicities, and classes.
This period of change didn’t just erase a batch of old names and replace them with a fresh batch. It forever changed the way people thought about names as identities, creating a virtue around newness where formerly none had existed. The turnover rate of first names soared not only in the United States and the UK, but also in Hungary, Scotland, France, Germany, and Canada. A fashion was born.
This sudden metamorphosis of fashion has historical parallels. For most of human history, people didn’t change the way they dressed from year to year, or from millennium to millennium. In Europe, men covered themselves in long tunics extending to the knees from Roman times to the 1200s.28 As late as the Middle Ages, the concept of clothing “fashion” didn’t really exist in most of the world. In India, China, Japan, and across Europe, clothing and costume styles were frozen in time. Even in seventeenth-century Japan, a shogun’s secretary claimed with pride that the empire’s clothing style had not changed in one thousand years.
But by the 1600s, fashion was a central part of European culture and economics. King Louis XIV of France strutted around Versailles in high heels, while his finance minister claimed that “fashions were to France what the mines of Peru were to Spain”—not just a game for peacocking royals, but an economic stimulus and an international export. At one point during Louis’s reign, the clothing and textile industries employed one third of Paris workers, according to the fashion historian Kimberly Chrisman-Campbell.
When did clothes become fashionable, and why? The historian Fernand Braudel said that trade churned the “still waters” of ancient fashion:
The really big change came in about 1350 with the sudden shortening of men’s costume, which was viewed as scandalous by the old . . . “Around that year,” writes the continuer of Guillaume de Nangis’s chronicle, “men, in particular noblemen and their squires, and a few bourgeois and their servants, took to wearing tunics so short and tight that they revealed what modesty bids us hide” . . . In a way, one could say that fashion began here. For after this, ways of dressing became subject to change in Europe.
Historians don’t agree on why the 1200s and 1300s were an inflection point. One possibility is that trade and travel exposed Europe to more styles, which gave nobles new ideas for outfits. Another theory is that the growth of the textile industry made clothes cheaper. When more Europeans could afford to dress like aristocrats, the aristocrats had to change their outfits more often to stay ahead of the plebes. In any case, Renaissance Europe was a tournament of styles, with Italy’s fussy and colorful embroidery going up against the tight black Spanish doublets and capes.
Fashion is governed by a neophilic rule with a neophobic catch: New is good and old is bad (but very old is good again). There is a theoretical benchmark for how attitudes toward a fashion shift with the passage of time, called Laver’s law after its originator, James Laver, a British fashion historian. It goes like this:
Indecent: 10 years before its time
Shameless: 5 years before its time
Outré (Daring): 1 year before its time
Smart: Current fashion
Dowdy: 1 year after its time
Hideous: 10 years after its time
Ridiculous: 20 years after its time
Amusing: 30 years after its time
Quaint: 50 years after its time
Charming: 70 years after its time
Romantic: 100 years after its time
Beautiful: 150 years after its time
One can quibble with the precise language (I’d say “dowdy” is too matronly a term for a look that’s only twelve months old). But the larger lesson of Laver’s law is that there is no such thing as universal and timeless good taste in clothes, names, music, or perhaps anything. There are only present tastes, past tastes, and slightly ahead-of-the-present tastes. Like financial investing, fashion is a matter of both taste and timing. It doesn’t profit, in either profession, to have the correct opinion too late or to be prescient long before the market is ready to agree with you.
• • •
How do cool names suddenly become uncool—and then, perhaps, cool again? It’s no mystery what happened to Adolf, and few mourn the name’s demise. But what did Edna or Bertha ever do to anybody? That is a mystery that Freda Lynn, a sociologist at the University of Iowa, investigated with Stanley Lieberson. They noticed something interesting about siblings.
Parents tend to pick similarly popular names for their older and younger children. A couple that picks a unique name for its first baby is much more likely to pick a similarly unique name for its next child. Indeed, if you meet a family where the children are named Michael, Emily, and Noah, it’s rare for the fourth child to be named something exotic like Xanthippe. But if you meet siblings Xanthippe, Prairie Rose, and Esmerelda, you might be surprised to meet their younger brother Bob. This suggests that parents have a particular “taste for popularity,” as Lieberson and Lynn write.29 Some parents choose or reject names on the basis of their popularity.
Taste for popularity is a powerful idea in culture. A straightforward example can be seen in one of music’s biggest stars. Some people like Taylor Swift because she’s popular. Some people like Taylor Swift and don’t really pay attention to her popularity. And some people look for things to dislike about Taylor Swift because her popularity sends the equivalent of a warning that she might be fake, dreck, or both. All three groups can agree on what a Taylor Swift song sounds like. Yet something outside the actual sound of the music—Swift’s status as a star—can send a range of signals, from pure appeal to deep skepticism.
Popularity as a taste might apply to many categories—music, food, arts, housing, clothing, hairstyles, and political ideas. Some people are drawn to things because they’re hits. Some people shun things because they’re hits. You can imagine a spectrum from bandwagoners (“I only tried it because it’s popular”) to hipsters (“I don’t like it anymore now that it’s popular”). Although it’s possible that one’s disposition for newness or independence holds across items—e.g., that people who like blockbusters also shop at the Gap and eat chocolate ice cream—the more likely scenario is that an individual’s taste for popularity differs across categories. For example, I have an innate skepticism toward political ideas that seem too popular, but I also let photo spreads in mainstream magazines tell me how to dress.30
If we imagine that most Americans are in the middle of the taste-for-popularity spectrum for names, it would suggest most parents are looking for Goldilocks names—pretty common, but neither odd nor ubiquitous. But one million families cannot perfectly coordinate their baby naming decisions. It’s common for parents to think they’ve picked a moderately unique name for their girl only to learn, on her first day of school, that several other kindergarteners share the same one.
Samantha was the twenty-sixth most popular name in the 1980s. That level of popularity pleased so many couples that parents named 224,000 baby girls Samantha in the 1990s, making it the decade’s fifth most popular name. At that level of popularity, however, a name appeals mostly to the minority of adults who actively seek out extremely common names. And so a top five name tends to peak and then decline for a long stretch. Indeed, the number of Samantha babies has fallen by 80 percent since the 1990s.31
This taste-for-popularity spectrum maps onto one of the first ideas in this book: that familiarity underlies popularity, although people have varying tastes for familiar products. Some people like weird names, others prefer common names, and many parents are Goldilocks, choosing from that large swath of names that is neither overexposed nor proudly weird, but rather a little surprising and yet instantly recognizable.
Individually, these parents are just picking names they like. Collectively, their choices create a fashion.
• • •
One of the most important concepts in social psychology is “social influence” or “social proof”—the idea that other people’s tastes often become your tastes. In his classic book on persuasion, Influence, Robert Cialdini defines the principle of social proof: “the greater the number of people who find any idea correct, the more the idea will be correct.”
This theory is widely accepted in media and marketing: Here is the most popular thing, so you’ll like it. It means that “number one bestseller” is a universally alluring descriptor. It conflates “most read article” with “most interesting article.” It means you’re drawn to videos with more YouTube plays or Facebook likes. The truism even encourages some publishers and authors to artificially inflate book sales to land on the bestseller lists, and pushes game designers to inflate download counts so their games appear in demand.
Manipulating popularity can work. But consumers are not infinitely clueless. There is a limit to how much you can trick people into liking something.
First, as song-testing sites from the first chapter show, you can put lipstick on a dead pig, but that’s not the same as creating a market for it. Lady Gaga’s third album tested abysmally on the British music-testing site SoundOut. But her label still pushed it down the throats of DJs and marketers and into the ears of radio listeners. Despite this massive marketing effort, the album sold considerably worse than her previous album. Quality might be a tricky thing to define, but people seem to know bad when they hear it. Distribution is a strategy to make a good product popular, but it’s not a reliable way to make a bad product seem good.
Second, raising awareness that something is popular might have unintended negative consequences. In the paper “The Paradox of Publicity,” researchers Balazs Kovacs and Amanda J. Sharkey compared more than thirty-eight thousand book reviews on Goodreads.com. They found that titles that won prestigious awards got worse reviews than books that were merely nominated for the same awards. In a perfect social-influence world, this would make no sense. If an authority figure tells you a book is good, you ought to internalize the advice and adore the book.
But the real world is more complex than that, and there are several intuitive reasons why a book award might lead to worse ratings. Prizes naturally raise expectations, and heightened expectations often go unmet. What’s more, prestigious prizes attract a larger and more diverse audience, and this broad composition will include people who have no taste for the book’s genre or style and are reading it only for the award sticker. These readers will dependably leave worse reviews. Meanwhile, a book that’s merely nominated for the same prize might not attract the same motley coalition of readers, so its ratings won’t suffer so much.
But the researchers’ most interesting explanation is that prizewinners attract lower ratings because of a backlash among the book’s readers. “Consistent with work in the area of fads and fashion, we found that growth in audience size, or popularity, can itself be seen as distasteful or a reason to give a lower evaluation,” the authors concluded. Popularity as a taste has a cousin: renown as a taste. Some people are seduced by prestigious books, some people don’t care about prestigious books, and others are excited about disliking acclaimed works, because they look forward to forming a counterintuitive opinion about a book that people are talking about.32
In his chapter on social proof in Influence, Robert Cialdini begins with the example of the laugh track. TV executives glommed onto fake laughter in the early years of TV comedies because research showed that laugh tracks made people laugh. It initially seemed that hearing other people laugh counted nearly as much as a joke’s actual humor.
But the history of the laugh track is not a simple story about social influence. It is a history of an invention that created a trend, a trend that triggered a backlash, and a backlash that created a new mainstream. It is, in other words, a story about fashion.
• • •
In the 1960s, the biggest star in American television wasn’t Mary Tyler Moore or Andy Griffith. By pure screen time alone, the TV talent most present in American living rooms wasn’t an actor at all. It was an electrical engineer who never appeared in front of the camera, but whose work behind the scenes was influential enough that you could hear him almost every minute on about forty shows a week. At one point, he was so powerful, and his work so private, that he was called the “Hollywood Sphinx.” His name was Charles Douglass, and he invented the laugh track.
Douglass was born in Guadalajara, Mexico, in 1910 and his family moved to Nevada when he was a child to escape political unrest. He wanted to study electrical engineering like his father, an electrician with a Nevada mining company. But when he found himself in Los Angeles after World War II, the hot new media industry for a technophile like Douglass was television. He took a job as a sound technician with CBS.
Situation comedies in the 1950s tended to be shot on simple sets in front of live audiences. Entertainment often shoehorns past habits into new formats, and indeed 1950s television was basically live radio or theater in front of a camera. But when actors forgot a line or messed up their blocking, the second or third takes of the same jokes wouldn’t elicit many laughs. Weak chortling made a show seem flat when it was broadcast to audiences sitting at home. This led to the practice of “sweetening” laughs by extending or amplifying the sound of merriment in postproduction.
Douglass was interested in a bigger solution to the problem: He wanted to invent a machine to simulate laughter. This way, shows would never be fully defeated by awful writers, worse actors, dead audiences, or the vagaries of a live recording. For several months in the early 1950s, he listened to audio of laughs, gasps, and applause from theatrical performances and television broadcasts.33 He recorded his favorite sounds of mirth on analog tape, which he could play back with keys he took right off a typewriter.
The “Laff Box,” as his invention came to be known, looked like a gangly, bastardized typewriter, but Douglass played it like an organ. The laugh keys could be pressed together like chords to create more than a hundred variations of audience amusement. In his private studio, Douglass knew how to layer laughter for the right moment during postproduction. As a sitcom gag worked its way toward a ridiculous climax, Douglass would play early chuckles, crescendo to hearty guffaws, and finally leave the invisible audience screaming with delight. Layering in the laughs was an art, and Douglass’s was the only game in town.
Douglass’s technology faced considerable antagonism in its early days (and high-minded doubters throughout its existence), but eventually networks realized that canned laughter had several advantages. First, it allowed directors to shoot first and add the audience later. Showrunners began to film television more like movies—inside and outside, with several takes and multiple camera angles. By 1954 Douglass had so many clients that he quit his job at CBS to work full-time with his Laff Box. He owned a monopoly on mechanical mirth, but he was a benevolent monopolist, scoring a single episode for about $100.
The second reason why laugh tracks eventually caught on requires a deeper understanding of why people laugh in the first place—of what makes something funny.
Plato proposed that laughter was an expression of “superiority” over a person or character in a story. Superiority is clearly at work in physical humor and Borscht Belt jokes. “My doctor said I was in terrible shape. I told him I needed a second opinion. ‘All right,’ he said. ‘You’re also quite ugly.’”
But the theory of superiority fails to explain puns, which are funny, at least in theory. “Two atoms are walking down the street. One of them turns to the other and says, ‘Hold up, I think I lost an electron.’ ‘Are you sure?’ the other asks. ‘Yes, I’m positive!’” This joke has nothing to do with power. The last word of the story arrives as a small yet meaningful surprise. But to explain what makes it funny, a broader theory is needed.
In 2010, two researchers proposed what might be the closest thing that social science has to a universal theory of humor. It’s called “Benign Violation Theory.” Peter McGraw, now the director of the Humor Research Lab, and Caleb Warren, now an assistant professor of marketing at the University of Arizona, proposed that nearly all jokes are violations of norms or expectations that don’t threaten real harm or distress.
“If you look at the most universal forms of laughter shared across species, when rats laugh or when dogs laugh, it’s often in response to aggressive forms of play, like chasing or tickling,” Warren told me (and, yes, rats can laugh). “Chasing and tickling are both the threat of an attack, but without an actual attack.” By this theory, a good comedian chases with impropriety and tickles with wordplay, but does not deeply wound the audience’s social mores.
Any mainstream system—social behavior, manner of speaking, identities, even logic—can be threatened or violated. But people laugh mostly when they sense that the violation is benign or safe. And what makes something seem benign or safe? When lots of other people are laughing with you. That was the magic of Douglass’s box: It was an effective tool of safe public conformity. Hearing people laugh gave audiences license to chuckle, too.
But if the laugh track is such a universally effective tool, why is it disappearing? Influence was published in 1984, the year the Emmy for Best Comedy went to Cheers, with its riot of audience howls. Every comedy nominated for an Emmy in 1984 had live laughs or laugh tracks, too. So did every show to win the Emmy for Best Comedy going back to the early 1970s, including All in the Family, M*A*S*H, The Mary Tyler Moore Show, Taxi, and Barney Miller.
But in 2015, none of the comedies nominated for an Emmy had laugh tracks.34 The last time a show with a laugh track won an Emmy for Best Comedy was Everybody Loves Raymond in 2005.
With their three walls and proscenium style, television shows in the 1960s and 1970s looked much more like filmed plays. But by the early twenty-first century, many television shows looked and felt like movies. Since there are no artificial guffaws in film, laugh tracks eventually seemed anachronistic, a poor fit for the genre. A 2009 study called “The Language of Laughter” found that laugh tracks decreased the “mirth behavior”35 of TV shows that were more intricate, narratively rich, and movielike. For TV that resembles “traditional motion pictures as opposed to simplified theatrical presentations,” the authors wrote, “the laugh track appears to be an impediment to humor and audience enjoyment”—a middlebrow marker, unfit for prestige television. Many comedies therefore signaled their highbrow separateness by dropping canned laughs entirely. By helping television become more like the movies, Douglass created the conditions for his invention’s ultimate demise.
This is the life span of the laugh track: It was conceived in controversy, grew up to become a social norm, and is dying a cliché. In other words, the laugh track was a fashion. The sound of other people laughing, which used to make people laugh, now makes many people cringe.
As I write this book in 2016, we are in the midst of several cultural trends—the glut of prestige television, the ubiquity of superhero franchises, the hegemony of hip-hop, the emerging dominance of Facebook—that are so absolute that they feel invincible, even eternal. But for decades, television executives assumed the laugh track, too, was the final free joke, a prosthetic tickle that would never fail. Its magical power was so absolute that it was hailed in one of the most read books about social psychology ever written. But its history suggests something subtler. The laugh track’s social power wasn’t anything like an iron law. It was more like a summer dress fashion, or a knock-knock joke. It worked once. Then it got old.
Culture doesn’t stop surprising us. In fact, culture doesn’t stop at all. Anything can be a fashion.
• • •
What will be the next tradition to turn into a fashion, the way first names did? What industry or custom, long governed by tradition, is now seeing an explosion of choice? Consider one of the most basic human activities in the world: talking.
The Homo sapiens species has been around for about two hundred thousand years, but the oldest examples of prehistoric art date to about 50,000 BCE, which means modern humans have spent far more time wandering the earth without any recorded expression than we have spent surrounded by art and writing. After ancient cave paintings and pictograms, it took tens of thousands of years for humans to develop anything approaching an alphabet. Cuneiform in Sumer and hieroglyphics in ancient Egypt both date to about 3000 BCE. In some parts of the world, writing slowly evolved from ideograms, where shapes represented ideas, to phonetics, where letters represented sounds. But these early phonetic alphabets were typically all consonants, which forced outsiders to guess at the sounds between the letters. (When I studied Hebrew for my Bar Mitzvah, I was disappointed that I had to memorize the vowel sounds, as the temple’s Torah was written exclusively in consonants.) It was the ancient Greeks who finally introduced the vowel, which unlocked something previously impossible: the ability of anybody to pronounce any series of sounds by deciphering scribbles.
There is a kind of magic in the idea that humans can express quasi-infinite ideas and emotions with a code consisting of twenty-six funny-looking shapes. But this enchantment was slow to form. Thousands of years passed before civilization moved from symbols representing ideas to symbols representing sounds.
Although vowels made language easier, writing remained specialized, and even controversial, for millennia. Plato, who died in the fourth century BCE, disparaged written expression in the Phaedrus, suspecting that writing drained one’s memory.36 Whatever the opposite of “going viral” is, that’s what writing did for most of its time with humans. It steadfastly refused to catch on. Literacy rates in European countries like France did not cross 50 percent until the 1800s, and half of the world could not read and write as late as 1960.
The true democratization of written language required a technology to cheaply distribute written words. But another 4,500 years passed between the first hieroglyphics and Johannes Gutenberg’s invention of the printing press. The printing press, too, caused a scandal. Monk scribes were aghast at the thing, owing at least partly to the fact that it competed with their monopoly on producing books. In the pamphlet In Praise of Scribes, the fifteenth-century abbot Johannes Trithemius wrote, “He who ceases from zeal for writing because of printing is no true lover of the Scriptures.” In the final analysis, the apparent blasphemy of the machine did not outweigh its convenience. In a piece of perfect irony, Trithemius’s cri de coeur was ultimately printed, making use of the same press he demonized.37 Thus writing, once slandered and then sacred, gave way to book printing, once slandered and then sacred.
After 1500, inventions, systems, and organizations to facilitate the spread of written language came at a breakneck speed, relatively speaking. In 1635, the Royal Mail in Britain was made available to recipients who could pay the postage, marking Europe’s first public mail service. Two centuries later, a painter named Samuel Morse received a letter by mail notifying him of his wife’s tragic death. He immediately left Washington, but by the time he arrived in New Haven, she had already been buried. The loss reportedly inspired him to invent a faster mode of communication—the telegraph. Morse sent his first long-distance message in 1844, from Washington to Baltimore. Alexander Graham Bell placed the first phone call thirty-two years later.
Let’s pause here to acknowledge that, by 1900, humans had existed for two hundred thousand years and communication was still, in many ways, an ancient custom, just as first names were until the 1800s. People talked. Sometimes they sang. People read books, mostly religious texts. Some families wrote letters, and news might be transmitted by telegraph. But even the telephone seemed like a curious intrusion into the tradition of talking, and Americans appeared to have no idea what to do with it for years. It took less than ten years each for cars, radios, color TVs, VCRs, cell phones, and the Internet to go from niche to mainstream—10 percent to 50 percent penetration—in the United States. It took the telephone almost forty years to make the same journey to the center of the mainstream.
The 1990s saw a Cambrian explosion of communications technology. The first text was sent and received in 1992 (“Merry Christmas,” it said); eight years later, half the country owned a cell phone. In 1995, six in ten U.S. adults said they had never heard of the Internet or weren’t sure what it was; five years later, half the country was online.
As communication choices abounded, modes of talking became fashionable—and then anachronistic. Corded and cordless phones connected teenagers in the 1990s, and by the early 2000s online chatting was the norm. Then the social media revolution erupted, with Friendster in 2002, MySpace in 2003, Facebook in 2004, Twitter in 2006, WhatsApp in 2009, Instagram in 2010, and Snapchat in 2011. These platforms were joined by other inventions, like Vine and Yik Yak, and by modern twists on pre-alphabet pictograms, like emojis, GIF keyboards, and stickers. Each of these apps and features is fundamentally pictures and words in boxes. But each has a distinct patois and cultural context, and each represents a fashionable improvement on, or a purposeful divergence from, the previously dominant technology.
Communication, once a simple custom that did not change for millennia, is now so fraught with new choices that it is becoming something more like a fashion, where preferences in how we talk to each other, what technology we use, even what “talking” means, are constantly changing. MySpace and Facebook helped to make it acceptable to post private friend-to-friend messages publicly. Instagram created a massive social network strictly around images. Snapchat Stories allow anybody to create mini-movies about their lives for their friends to watch. None of these protocols is much like talking on the phone, and yet to their millions of users they all feel quite naturally like talking—and sometimes even better.
Communication-as-a-fashion is one reason why today’s marketers are so embarrassingly bad at glomming onto the newest memes and strategies: By the time they get their message out, the fashion has changed. The 2013 Super Bowl suffered a rare power blackout, and Oreo created a sensation when it filled the dead space with a tweet inviting people to dunk their cookies in the dark. It was a legitimately surprising and clever move for a company to behave like a Twitter-savvy kid. But that was 2013. Within a few years, on-the-nose social media messages were mostly condemned as embarrassing and forced, like a dad misquoting a new teen movie in an attempt to look hip. Advertising firms are still catching up to the fact that the fashion cycle of slang moves faster than their copy desks.
Clothing, once a ritual, is now the definitive fashion. First names, once a tradition, now follow the hype cycle of fashion lines. Communication, too, now bears the hallmarks of a fashion, where choices emerge and preferences change, sometimes with seeming arbitrariness, as people discover new, more convenient, and more fun ways to say hello.