Arts: Long Live the Dead Buffalo
Creativity is just connecting things. When you ask creative people how they did something, they feel a little guilty because they didn’t really do it, they just saw something. It seemed obvious to them after a while. That’s because they were able to connect experiences they’ve had and synthesize new things.
—Steve Jobs
The 2008 animated movie WALL-E, about the little trash compactor robot that discovers love, certainly deserved its Academy Award for Best Animated Feature. It offered proof that man and machine are capable of amazing creations when they work together. With its jaw-dropping computer-generated graphics, WALL-E was the perfect blend of technology and art.
But while it tells a positive story about how life and love can flourish in the darkest and most unexpected places, it’s also quite cynical. In the movie, humans have abandoned life on Earth in favor of a giant spacecraft not unlike the mammoth pleasure cruise ships that currently prowl the oceans. They’ve left the planet behind because their pollution has rendered it incapable of sustaining life. WALL-E, the eponymous robot, is one of many machines tasked with cleaning up the mess while the people spend their days watching movies and playing video games in comfy hover chairs aboard their space cruiser. Prosperity and automation have reached their ultimate conclusions, so no one needs to do a lick of work, which is why everyone is pleasantly plump, their limbs stumpy and weak from disuse. In one of the funniest scenes in the movie, a roly-poly human gets up from his chair and tries to walk, only to stumble about like a youngster taking his first steps.
Funny, yes, but optimistic about our future? Not at all. The carefree, blob-like people offer us a cautionary tale, a warning that if we continue down our current path, we’ll destroy the environment and end up as invalids, incapable of even taking care of ourselves. If that’s not a cynical outlook on our future, I don’t know what is.
But aside from its environmental and social commentaries, what I’ve always thought WALL-E got most wrong was its portrayal of our cultural consumption. Its humans have become invalids through incessantly watching TV and movies and playing video games, which is an easy and even tired criticism to make. Who hasn’t heard the complaint that kids aren’t reading or playing outside anymore, that they’re spending all their time glued to screens? On top of that, the movie suggests that people will increasingly become zombies, presumably under the control of whoever is piping out all those films and games that they’re slavishly consuming.
Both suggestions run counter to what’s actually happening. In the future, big, evil, faceless corporations or even governments won’t be shoveling all that empty, distracting content into the trough of human consumption. We’ll be doing it ourselves. We’re not just becoming hoggish consumers of content; just as with our self-created jobs, we’re also becoming prodigious producers of it.
BEER NOT PHOTOS
In 1999, grunge music was all but dead, my hair had long been short, and everyone feared that the Y2K bug would destroy the world and everything in it. The digital revolution was just beginning; the Internet was starting to take off in a meaningful and life-changing way, and people were slowly discovering that all these newfangled electronics were much better than their old analog forebears. “Hey, look honey, the picture on this DVD really is much sharper than the VCR!”
For my first backpacking trek across Europe, aside from a guidebook and extra underwear, I’d also packed my Nikon SLR camera. Being in my mid-twenties and not made of money, I’d budgeted seven rolls of film, thirty-six exposures each, figuring that would be enough to get me through a month-long visit to some of the most amazing sights of the Old World. (For younger readers: film was a chemically treated plastic that we put in little canisters. Light etched the chemicals into images that we transferred onto treated paper, which ultimately gave us photo prints. Yes, it sounds like witchcraft to me too, now.)
So: 252 photos to cover Paris, Berlin, Rome, and Switzerland. No problem, right? Even though I was extraordinarily stingy with what I actually photographed, developing the film cost me just about that number of dollars. But that wasn’t even the worst part. My photography skills bordered on nonexistent, so my efforts yielded perhaps a dozen decent images. I got a good Arc de Triomphe, a pretty decent St. Peter’s Basilica, and a bunch of blurry Swiss Alps. The photos I kept cost an average of about twenty dollars each. That seems insane now.
French inventor Nicéphore Niépce created the first analog photographs in the 1820s. The first mass-market camera, the Kodak Brownie, wasn’t released until nearly a century later, in 1900. The best estimates figure that inventors, tinkerers, and professionals took a few million photos in those intervening eighty years. Cameras got cheaper and better and finally went mainstream around 1960. About half of all photos taken that year were of babies, a good proxy for judging whether the technology had reached critical mass with consumers.1
As we saw in chapter one, the first electronic camera—where the image was created and stored in digital format rather than replicated through chemicals on film—was invented at Kodak in 1975, but again, Moore’s Law had to kick in. Kodak’s device weighed nine hefty pounds and took twenty-three seconds to record a single black-and-white photo onto a cassette tape, which was then played back on a television screen with the aid of an additional console. Obviously, the technology was nowhere near ready for consumers.
Sony made a big breakthrough in 1981 when it developed a two-inch floppy disc that replaced the tape, and digital cameras continued to improve throughout the decade. Even by the early nineties, film’s photo quality remained far superior, and electronic cameras still cost around twenty thousand dollars. But the ground really started moving in 1990, when Switzerland’s Logitech released the first true digital camera, a device that converted images to the ones and zeros of binary code, connected to a computer, and stored photos on a memory card.
That link to computers finally gave photographers an easy way to transfer pictures to their machines, where they could transmit them to others over the Internet. Meanwhile, Moore’s Law took hold of other relevant components, including memory cards and image processors, so cameras quickly got smaller, better, and cheaper. Japan’s Nikon kicked off digital photography for professionals with the D1 SLR in 1999. People had already taken a good number of photos—around eighty-five billion in 2000, or about twenty-five hundred every second. In 2002, 27.5 million digital cameras were sold, or about 30 percent of all cameras. The digital revolution in photography took hold fully in the mid-2000s, and the film market all but died by 2007, when 122 million digital cameras were sold.2 By the end of 2011, an estimated 2.5 billion people had digital cameras, whether SLRs, compacts, or the ones in their phones.3
With billions of cameras in existence, people took an estimated 375 billion photos in 2011 alone. As analyst and blogger Jonathan Good puts it, “Every two minutes today we snap as many photos as the whole of humanity took in the 1800s. In fact, ten percent of all the photos we have were taken in the past twelve months.”4
The growth is perfectly logical. With the costs of film and development suddenly gone—no more twenty-dollar photos—people started shooting like crazy. Rather than carefully picking a shot, waiting for the perfect light, and framing it just right, they snapped away without a second thought. We used to get upset—sometimes violently so—if someone walked through one of those meticulously crafted shots just as we pushed the button, but suddenly it was no longer a big deal. (To think: I could have spent that $250 in Europe on beer instead—okay, more beer—and still come home with more usable pictures than I did.)
Yet, this paradigm shift had a downside in its first few years. Many of the photos being taken were going straight into limbo. After all, what were we supposed to do with them? We could look at them on a computer, but who wanted to do that? It’s one thing for a family to gather on the living room couch to pore over a physical photo album; it’s quite another to cram around a computer screen. It just wasn’t the same. Intermediate solutions came along. Digital photo frames, for example, proudly displayed all those images we took, but by this point, a sort of WALL-E laziness had prevailed. Once, we had been perfectly content to finish a film roll, haul it down to the drugstore, wait a week for it to be developed, and then go back to pick it up. But somehow the act of transferring photos to a digital frame proved too wearying. As a result, the majority of those billions of photos never left the memory cards on which they resided; the memories they represented were doomed never to see the light of day.
But then the second part of the revolution kicked in. Along came Facebook, the social network that opened its doors to the general public in 2006. The website offered the perfect repository for all those photos because it actively connected friends, family members, classmates, and co-workers. These were the people with whom we actually wanted to share our photos, and, importantly, Facebook made the connections passive from the user’s perspective. You didn’t have to search out a friend’s photo album to see if you were in it, as had been the case on Friendster and MySpace. Your friend tagged you, you found out more or less instantaneously, and shortly thereafter you checked out the photos in question. Digital photography and Facebook thus evolved into a sort of symbiotic relationship. The website grew dramatically because users could see their friends’ photos, while the number of photos on Facebook exploded because of the rapidly growing number of people on it.
The stats are almost overwhelming. In 2011, users shared some seventy billion photos on Facebook—about 20 percent of the total pictures taken in the world that year—bringing its total repository to 140 billion, or ten thousand times bigger than the photo catalog at the Library of Congress.5 The number keeps growing ridiculously: In 2012, the website announced it was receiving three hundred million photo uploads every day, amounting to 109 billion for the year.6 Plus, that’s just Facebook. It’s the largest driver of online photo sharing, but it’s also just one of many photo-focused social networks. Flickr, Pinterest, Twitter, Facebook’s Instagram, and Google’s Picasa all contribute their share as well. As more and more people come online every day around the world, the exponential growth is only going to continue.
The photo explosion helps us understand how technology is changing and shaping human culture itself. For much of the past hundred years, photography was a communication medium, hobby, and profession reserved for those who had the money to engage in it—twenty-dollar photos!—as well as the patience to learn its techniques properly. Technological advances eventually removed the cost and skill needed to take decent photos, then made it easy to distribute and share those images with their intended audiences. Now, with the barriers gone, photography is no longer the domain of a small elite group of people. It’s a communication medium on a grand global scale.
Sure, we can talk, e-mail, and text, but like the cliché says: A picture is worth a thousand words. An image can convey much more than simple text can, which is why people share photos of sunsets, concerts, and the artisanal ham sandwiches they had for lunch. Some people dump every photo they take onto social media, though most deliberate on their choices a little more. Each image we share, after all, does say something about us. Even the humble ham sandwich can tell viewers about its sharer: that he doesn’t keep kosher, that she has a fetish for gourmet comfort food, or that he’s lonely and wants someone to pay attention to him. Whether intended to amuse, entertain, challenge, or inspire, most shared photos are a conscious form of human expression. It may not be art, or even any good, but every shared photo conveys a series of messages.
THE MESSAGE IN THE MEDIUM
Photography is the most obvious medium to benefit from technological advance, but it’s hardly alone. Virtually every means of human expression and communication has grown significantly, especially in the past few decades, because of ongoing advances in production and distribution.
In the Middle Ages, literacy was reserved for the rich and privileged. Prior to the 1450s, when a German goldsmith named Johannes Gutenberg came along with his printing press, a small handful of monks safely tucked away in their monasteries jealously guarded the sum of human knowledge from the filthy masses. Gutenberg’s invention paved the way for mass literacy, and, over the next few centuries, more and more people became readers. Published books expanded from religious texts to great works of literature, then eventually to the likes of Dan Brown, Stieg Larsson, and Fifty Shades of Grey. By 1990—at which point about three-quarters of the world’s people could read—publishers were pumping out nearly a million books a year to meet demand.7
While total world population has grown by a third since the early nineties, the number of books published has exploded by more than 150 percent in the same time frame. Continued growth in literacy—the global rate climbing to 84 percent by 2010—fueled some of that increase, but globalization has played a major role as well. The United States, United Kingdom, France, and Spain have become especially important because they publish in languages used internationally in former colonies, and most major multinational publishers are consequently based in those countries. The United Nations Educational, Scientific and Cultural Organization (UNESCO) says those four countries “constitute the main international centers of publishing and have considerable influence beyond their borders.”8 Naturally, the number of publishers in those countries has proliferated. In the seventies, about three thousand publishers were operating in the United States; by the end of the twentieth century, that number had increased to sixty thousand. Pessimists blithely claim that no one reads anymore, but the number of titles published has grown dramatically worldwide, to about three million in 2010, a tripling in just twenty years.9
The revolution in self-publishing, meanwhile, is only just beginning. As with the rise of digital photos, two entwining arms of technology have made it easy for anyone to write a book. The first arm is the personal computer, which we’ve had at our disposal for the better part of thirty years. The second arm is micro-distribution, in the form of print-on-demand technology and controlled electronic publishing. In the latter instance, the necessary advances have emerged only recently. Online retailer Amazon has been the biggest player in that area so far, having launched its Kindle Direct Publishing platform in 2007. Not only does the system allow authors to sell their work directly to consumers without having to go through a publisher, it typically lets them keep most of the proceeds as well. With the simultaneous proliferation of e-readers and tablets, a new world has opened for both established and would-be writers, many of whom have jumped at the chance to reap more of their own rewards. Not surprisingly, self-published books—once the exclusive domain of losers who couldn’t get a “real” publishing deal—are skyrocketing. In the United States alone, the number of self-published books grew nearly 300 percent between 2006 and 2011, to a quarter-million titles. In the summer of 2012, four self-published authors had seven novels on the New York Times bestseller list. As author Polly Courtney, who went back to self-publishing after three novels with HarperCollins, put it: “It feels as though the ground is shifting at the moment . . . It’s quite liberating. Some sort of transition was overdue.”10
But books aren’t the only medium for writing. The Internet has made it possible for anyone to start an online diary, commentary pulpit, or even news organization. The number of blogs has mushroomed from thirty-six million in 2006 to more than 180 million in 2011. The activity is proving particularly popular among women and young people, and a lot of people are clearly reading all these blogs. More than a quarter of all Americans online regularly check out the top three blogging websites: Blogger, WordPress, and Tumblr.11 More people are writing, and more people are reading. Nearly a billion people in the world have yet to crack open a book or read a blog, but literacy rates are improving.12 Reading is far from a dying pastime; its future looks bright.
INDIE GOES MAINSTREAM
It’s a little trickier to track growth in the music industry. Until relatively recently, no one could record performances of music, so the history of the music industry was the history of music publishing, from the huge tomes of religious music copied in the Middle Ages to the sheet music printed and published into the nineteenth century.
Recording sound first became possible in 1857, when Édouard-Léon Scott de Martinville came up with what he called the phonautograph. The machine tracked sound waves moving through the air and traced them with a stylus onto a sheet of soot-coated paper. It couldn’t play the sounds back, but a device that could followed two decades later. Thomas Edison patented the phonograph, or “record player,” in 1878, and so was born the era of popular music distribution. By the turn of the twentieth century, three companies—Edison, Victor, and Columbia—were selling about three million records a year in America alone.13 Decades before Moore observed the phenomenon, the technology advanced, improved, and got cheaper, to the point where an industry formed around it. Music became a business, with recordings moving from wax cylinders and discs to vinyl albums, cassette tapes, compact discs, and ultimately to digital files.
During the first half or so of the twentieth century, hundreds of record labels sprang up around the world to help produce and distribute music recordings for profit. In the latter half of the century, many of those independent operations consolidated into a few mega-players. By 1999, five major corporate entities essentially controlled the industry: Germany’s Bertelsmann AG, Britain’s EMI Group, Canada’s Seagram/Universal, Japan’s Sony, and US-based Time Warner. The concentration of power had its benefits. It lowered infrastructure and transaction costs, gave musicians greater access to the sorts of skilled and specialized workers they needed—studio technicians, producers, marketers, and the like—and introduced the ability to monitor and learn from the competition.14 The major labels controlled every aspect of music production and distribution, from the recruitment and development of artists, to legal services, publishing, sound engineering, management, and promotion. On the downside, it became incredibly hard and rare for an individual to produce and distribute music without having to go through one of the big gatekeepers.
As such, the big companies enjoyed tremendous success. In the United States alone, music sales soared 160 percent between 1987 and 1997 to $12.2 billion, surpassing every other medium. As researcher Brian Hracs puts it, “In 1997, the recorded music sector stood on top of the entertainment pyramid, surpassing domestic sales in the motion picture industry, as well as DVDs, video games, and the Internet.”15
Then the so-called MP3 crisis hit, sending the industry into a tailspin. The new format compressed music into small digital files that could easily be transmitted over the Internet. A file-sharing gold rush ensued. Napster in particular capitalized on the phenomenon. In 2000, half a million people were logged into the file-sharing service at any given time, and a year later that number had ballooned to sixty million. With people copying and acquiring music for free, industry revenues plummeted. Global music sales fell by 5 percent in 2001, then by a further 9 percent in 2002. One estimate pegged the decline in consumer spending in Canada between 1998 and 2004 at a whopping 40 percent.16 For the music industry, the sky was falling.
Yet, despite nearly a decade of MP3s, file-sharing, and iPods, music production continued to rise. Between 2000 and 2008, the number of new albums released in the United States more than doubled to 106,000, according to tracking firm Nielsen.17 Releases took a dip after that—to seventy-five thousand in 2010, partly explained by the global recession—as record labels finally adjusted to the new musical reality. It took a while for the full effects to take root, but the MP3 decade forced labels to become considerably more risk-averse. Now they release albums mainly from proven money-drawing acts. Labels have transformed from the developers and producers of music they were in their lucrative heyday into brand-led marketing companies akin to Coca-Cola or Nike. Artist development, which they used to oversee, now falls to others, primarily the artists themselves.
With the big boys taking fewer risks, the business has become much more independently driven. In the United States, the number of label-employed musicians declined by more than three-quarters from 2003 to 2012.18 In Canada, the share of musicians estimated to be without a recording contract, and therefore technically indie, is a whopping 95 percent.19 Those figures suggest that the traditional means of tracking music production and distribution—primarily Nielsen’s album releases and sales—no longer have much relevance, at least not for gauging the real state of the art.
Some estimates peg the number of indie albums released in the United States at around ninety thousand, more than the number released by actual labels.20 Even counting albums has become something of an outdated measure, since single songs have become much more popular in the digital age.
It’s difficult to track and estimate exactly how much music is being created and distributed, but with musicians having access to continually improving tools to make and disseminate recordings—whether it’s editing software such as Pro Tools or online stores such as iTunes—the medium has likely experienced the same big production growth as photography and writing. “With computers and file sharing, a whole lot more people are making [music],” says Anthony Seeger, professor of ethnomusicology at the University of California Los Angeles. “They can post their own latest composition, their own latest song. It’s stimulus to compose and write and get it out there and it’s also a mechanism that allows more people than ever possible in the past to do so.”21
OPTIMUS IPHONE
The same is happening in the movie industry. Here too the dual forces of globalization and technological advance have had a major effect both on production and distribution. With a potential worldwide audience, studios have been increasingly cranking out feature films, nearly tripling their production between 1996 and 2009, from 2,714 a year to 7,233. That growth also covers both developed and developing countries.22 In the United States, for example, 429 films were produced in 1950 versus 734 films in 2009. The comparative numbers in India were 241 versus 1,288.23 Many people would consider Hollywood the world capital of moviemaking, but it’s not. Both India and Nigeria surpass the United States in total films produced, with Japan and China rounding out the top five.
Just as with the music industry, technology is driving a major shift in how movies are made and distributed. With bigger and better televisions and increasingly affordable home theater systems, Hollywood studios are undertaking larger-scale productions that need to be seen on giant screens to be enjoyed fully. Big blockbusters therefore are getting more expensive to make. The 1994 Arnold Schwarzenegger action flick True Lies was the first movie to cost $100 million, but that seems cheap considering that thirty films between then and 2013 cost double that or more. Conversely, cheaper cameras, sound and lighting gear, and editing tools are making it easier to make movies on the low end. If a budding director can’t afford the already cheap, professional-quality Red camera, which cost only twenty-five thousand dollars in 2013, he or she could always enter the iPhone Film Festival, where the only major outlay is generally a two-year cellphone contract.
More important than the hardware advancements is the proliferation of alternative distribution options. Cable TV, YouTube, Netflix, and a host of other online services are giving filmmakers more and cheaper ways to showcase their work. “Cinema is escaping being controlled by the financier, and that’s a wonderful thing,” says Academy Award–winning director Francis Ford Coppola. “You don’t have to go hat-in-hand to some film distributor and say, ‘Please will you let me make a movie?’ ”24
Not surprisingly, indie production—just as in music—is booming. The Sundance Film Festival saw submissions increase by 75 percent to more than four thousand between 2005 and 2011, while over the same time frame Cannes saw its submissions balloon by 184 percent to 4,376 a year.25 Filmmakers whose pictures major distributors aren’t picking up are finding willing buyers in cable companies’ video-on-demand services, Netflix, and a host of other online streaming operators.
But, as with writing, big industry isn’t the only outlet for budding filmmakers. YouTube has emerged as a platform on which creators can display their work and even create a profitable business. The growth there has been even more astronomical. In 2012, the site saw one hour of video uploaded by users every second, up from “only” ten hours every minute in 2008. It registers four billion views a day and three billion hours of video watched every month. More video was uploaded to YouTube in one month than the three major US networks created in sixty years.26 The reason for this growth is obvious: People like to communicate with each other, and video has become one of the more popular mediums on the Internet for doing so.
YouTube has also become an important launching pad for undiscovered talent. Patrick Boivin’s route to success is remarkable. As a teenager in the 1990s, he was pretty typical. He didn’t like school, but he did love drawing comics. Whenever he thought about his future prospects, though, he resigned himself to the likelihood of waiting tables or working some similarly low-paid drone job to support his passion. But he was enterprising, so he got together with a few like-minded friends and shot some sketch-like short videos on VHS tape that spotlighted their artwork. They showed the videos in bars on weekends, and the clips proved popular. The tapes drew the attention of a small local television network, which hired Boivin and his crew to create a series called Phylactère Cola, a sketch show that parodied and otherwise skewered movies, television, and society in general. The show aired in Quebec in 2002 and 2003 and served as a sort of school for Boivin, who used it to learn the ropes of filmmaking. When it wrapped, he bounced around jobs, making commercials for various clients.27
Around this time, YouTube began its meteoric rise. Eager to draw the millions of viewers that some uploaders were getting, Boivin posted some of his short films on the site. But the big audiences didn’t come. He wasn’t completely disheartened, though. He studied what made popular videos go viral and set out to do the same. Some people were doing well by reconstructing scenes from the Transformers movie with the toys. The movie was hot, and people were talking about it. He figured that he could take advantage, given his background. He created a stop-motion video of a fight between two of the robots, and when that went viral he made more. Further success got him thinking about how he could make a living from this sort of work, and he came up with the idea of selling his services to toy companies. “Eventually, it worked,” he says. He’s been making viral videos as a full-time job ever since, with YouTube serving as a sort of audition tape for employers.
“It became a different way of making a living as an artist, which is amazingly rare because you usually don’t have many options, especially as a filmmaker,” he says. More importantly, there’s the opportunity of exposure. Before such widely available platforms existed, filmmakers often didn’t act on the ideas they had because they weren’t sure if they would ultimately be worth it. “It became an incredible motivation because the hardest thing I experienced as a movie maker—all the movie makers I know experienced the same thing before YouTube—to find an audience was so hard that most of the time you didn’t really care about doing something because you thought it would only be seen by a couple hundred people,” he says. “When you know that doing something, when it’s good, will be seen by thousands or millions of people, now that’s something. The days we are living in now are special because you know that what you do can be seen by the whole world.”
In 2012, Boivin uploaded his first feature film, the sixty-eight-minute Enfin l’Automne (Fall, Finally), just to see if he could do it. When we spoke, he said he was on the verge of signing a deal with a Hollywood studio to do a proper film. In 2013, he was tapped to direct Two Guys Who Sold the World, a sci-fi comedy that won a lucrative film pitch prize at the Berlin International Film Festival. Like the millions of people of varying levels of creativity who have uploaded and shared photos, e-books, blog posts, songs, and videos, Boivin owes his livelihood and success to the technology that made it possible. He speaks like a pitchman for YouTube, but he’s actually selling the virtues of technological advancement. “As a French Canadian, it’s an opportunity that didn’t exist before. And it’s all thanks to YouTube.”
The same has held true for me, albeit in a textual rather than a visual medium. I started a blog in 2009 to promote my first book, Sex, Bombs, and Burgers. Over the years, it morphed into a broader canvas where I shared my thoughts on technological trends, news, and events. After a few years, in which I gained something of a following, I signed a deal with a pair of magazines to syndicate it, which provided me with a steady source of income. More importantly, the blog has served as my online portfolio and chief marketing tool and has landed me numerous jobs and opportunities. It’s an incredibly important platform that makes possible my own efforts at entrepreneurialism, as such platforms do for many others who write for a living.
AN EXPLOSION OF PLAY
Critics often deride video games as passive or even mindless, but anybody who actually plays them knows that they’re anything but a medium devoid of cultural value. Games are at least as valuable an art form as movies, and they engage players in ways that no other medium does. In games with narratives, for example, players make choices that affect the outcome, which means they become more invested in the characters and stories they create than if they were only passively watching. Games often take the brunt of the blame for lulling children into sitting in front of a screen for hours—perhaps to become fat blobs as in WALL-E—but they’re no worse at doing that than similar activities, like watching movies. Video games have become the scapegoat for every social ill, from overweight children to school shootings, largely because of their relative newness as a medium. The first commercial video games became available only in the early 1970s, making the medium a child relative to other forms of entertainment, all of which went through similar growing pains during their respective infancies. Radio and television were likewise demonized for spurring social ills in their early days.
The negative attitudes are starting to ebb as better and cheaper technology is expanding the appeal of the medium beyond its initial audience of younger people. Games used to be played primarily by kids on computers and consoles in dark basements, but in recent years they’ve proliferated onto smartphones and tablets. Many adults, some of whom grew up with Nintendo or Atari consoles, now while away their commutes by playing bite-sized games on their phones and tablets. Video games are no longer just ultra-violent, forty-hour shooters with incredible graphics that cost sixty dollars; they’re also quick distractions that can be had for ninety-nine cents or less. The medium is maturing, and gamers are getting older on the whole. The average age is now about thirty.28 The raw number of people playing worldwide has risen from 250 million in 2008 to 1.5 billion in 2011.29
In the early 1980s, the predominant way to play video games was on the Atari 2600 console, which had a total catalog of just over four hundred titles.30 Since then, platforms and distribution methods have proliferated. Now, designers can get their games onto a multitude of devices via an Internet connection, whether it’s home broadband or cellular. They don’t necessarily have to go through a gatekeeper, which in the past meant console makers such as Atari, Nintendo, Sony, or Microsoft. By way of comparison, the number of games available for Apple’s iPhone alone hit more than six thousand in 2009, less than a year after the company launched its App Store, and climbed to ninety thousand by the end of 2011.31 The growth has been similar to that of photos: roughly as many games are now released for the iPhone every few days as were created for the entire run of the Atari 2600.
The number of game makers is correspondingly exploding. In the United States alone, eighteen thousand game-related companies were operating in 2008, or eighteen times the number that existed just three years prior.32 Just as in film, the budgets of the biggest games continue to climb, but it’s now equally possible for smash hits to be created on a veritable shoestring. Minecraft, developed almost single-handedly by Swedish programmer Markus Persson, sold more than a million copies before its full version was even released in 2011, netting its designer more than thirty million dollars. It was only after the game saw major success on its own that Microsoft asked Persson to make it available on the Xbox 360 console.
Even with its staggering growth, video game production remains very much the domain of a relatively small elite: the highly skilled computer programmers who know how to make games work. But that is already changing. The next step in this proliferation of production is for game creation itself to go mainstream. A growing number of titles are making so-called “user-generated content” a key feature, letting players themselves dictate large portions of the game. It started with the ability to customize the looks of in-game characters—a different-colored hat here, a different shade of skin there—but has evolved into creating entirely new characters, levels, and worlds, as in Minecraft. The idea is for players to share their creations the same way they would share photos or music files, which can greatly increase the replayability of a game and therefore its value. It’s happening because, once again, the tools to do so are becoming cheaper, better, and easier to use. Games are increasingly allowing their players to become designers without having to know the first thing about computer programming.
BIG LITTLE PLANET
One of the best examples is LittleBigPlanet, a game released for Sony’s PlayStation 3 in 2008. Starring a little puppet known as Sackboy, the game was cut from the same cloth as rival Nintendo’s Super Mario titles, which typically require players to jump from one platform to the next while gathering various treasures and power-up items along the way. LittleBigPlanet’s twist on the traditional formula came from giving players the ability to design and share their own levels online. With a straightforward, easy-to-use toolset, LittleBigPlanet became a phenomenon. Not only did the game sell more than four million copies in its first two years and win all sorts of critical accolades, it also saw more than seven million levels published online by 2012.33 Players who bought the game theoretically could play it forever. The only limit to the number of levels was their own imaginations.
You’d think the game’s maker, Media Molecule, would be something of a legend in Guildford, England, the small town about thirty miles southwest of London that it calls home, but you’d be amazingly wrong. The building by the train station does house some multimedia companies, but nobody there has ever heard of Media Molecule. The receptionist does some quick Google searching and sends me off to the center of town. The address she gives me, however, turns out to be a parking lot. An older gentleman notices my confusion and offers to help, but he hasn’t heard of the company either. Nor has a nearby police officer, who surely must know all the important local businesses, right? Nope, he has no idea who or what a Media Molecule is. He points me to the nearby offices of Electronic Arts, one of the biggest video game makers in the world, and I breathe a sigh of relief. A fellow game company undoubtedly will know where its competition is . . . which is why it’s so astonishing that no one there does. The receptionist has never heard of the rival developer either, nor have a handful of EA employees who overhear our conversation. By this point, I’m convinced I’ve journeyed to the wrong town.
Finally, one helpful fellow—probably noticing my growing exasperation—comes over to the reception desk and reveals that, yes, he has heard of the LittleBigPlanet maker. He goes upstairs to his desk and retrieves a business card, which has a phone number. We call and—eureka!—finally discover the long-lost land of Media Molecule. With the correct directions in hand, I walk the three blocks to the studio and let fly a stream of curses upon discovering that it is right by the train station.
Despite creating some highly successful games and being acquired by Sony in 2010, Media Molecule has purposefully stayed small. The comedy of errors in finding the place is ultimately worth it, with the studio turning out to be uniquely eccentric. The carpet running the length of the main floor is hot pink, which is enough to startle tired employees into a state of readiness as they trudge into work in the morning. The common areas—meeting rooms on the first floor, and kitchen and recreational room on the second—ooze relaxation with a funky, seventies decor. Green and pink throw pillows sit atop mismatched patterned blue and floral-print couches, and the beanbag wouldn’t look out of place in a kindergarten classroom.
With only forty employees, it’s a fraction of the size of giant operations such as Ubisoft Montreal, which employs thousands. The smallness, along with the low profile, explains why almost no one in town has heard of the place. Each employee therefore has lots to do; there are no single-task jobs here, like “background artist” or “debugger.” Everyone wears a number of hats, which keeps them busy and ensures their workspaces lie buried in clutter. During my visit, the team is ramping up production on Tearaway, a new game for the handheld PlayStation Vita system that lets players make their own character creations, then print them out as paper models. The office is strewn with prototype creations.
David Smith, the studio’s co-founder, technical director, and lead designer, greets me in the meeting room. He looks like the stereotypical video game maker: He’s wearing the standard uniform of jeans, sneakers, and a gray T-shirt, with glasses and a pulled-back ponytail. He probably would have played Dungeons & Dragons with me when I was younger, a feeling he corroborates when he starts talking about how he loves both art and math. He tells me he’s lucky that he gets to work in video games, the intersection of the two subjects. “I get to scratch both those itches: the geeky technology where new things are always happening, but also the new experiences that we can make and that make us laugh,” he says. “I really do feel truly blessed to be part of this surging wave, this crazy evolution that’s going on right now. If I had been born ten years earlier, I’d probably have gone down an engineering route and probably felt that the creative side of me was a little bit suppressed.”34
He nods knowingly as I explain the premise of Humans 3.0. I tell him about the chapter I’ve just finished writing, about how entrepreneurialism is on the rise. He’s not surprised. “It’s another form of playfulness,” he says. Business creativity coincides with the growth of art, communication, and expression, and all of these pursuits speak to people’s desire to create, experiment, and share. All of them also draw on the combinatorial nature of creativity to generate ideas continually and exponentially.
Humanity is at a watershed moment in its history, he says. We’re emerging from a prolonged age of darkness, and he uses the analogy of playing games—to be expected, given his line of work. As children, we used to spend all day playing, constantly experimenting and making up new games. I immediately remember how my friends and I came up with our favorite game, a combinatorial variation of tag, tennis balls, and prefabricated playgrounds. Every child everywhere has thought up similar creations, yet at some point we all stop doing it because of the encroaching commitments and responsibilities of adolescence and then adulthood. As we age, we acquire bills to pay and dependents to support. “You can’t afford to mess around,” Smith says. “You have to be the bread winner and kill the buffalo.”
Yet, if all the creation, sharing, entrepreneurialism, and game-playing by adults are any indication, a profound shift is taking us back to that childhood experimentation. Our increased prosperity and longer lives are giving us the means and the time to take more chances, so we don’t necessarily have to focus all of our energies on killing the metaphorical buffalo. The realities of the changing job market—in which we’re all likely to change careers several times thanks to those damn robots—also mean that we have to keep learning longer into life. Figuring out new skills and capabilities means experimenting. “You can’t get a job at the bank and know that fifty years later, you’ll die at the same place or be pensioned out,” Smith says.
The financial matters that are often tied to discussions of this explosion in sharing become irrelevant in this larger context. Who gets paid and which copyrights are violated don’t seem nearly as important as the large-scale shift toward rediscovering our ability to experiment and create. At the same time, the huge explosion in the supply of art—music, writing, photos, and movies—seems to have depressed its value. Many people are less willing to pay for the good stuff when free so-so stuff will do. But this is only a temporary, technological fluctuation that society will eventually level out, Smith says. Having gone through the creative process themselves and acquired an understanding of what it requires, people may in fact become more willing to pay for really good art. “What you can’t cheat is that it takes time to make something that is unique and good. If it took you only ten minutes to make something, I really think you can’t make something that sounds truly new.”
Smith feels at once a sense of accomplishment and a sense of community about LittleBigPlanet, knowing that what he made is now more important to other people than it is to him. The bean counters who sign the checks may not share that view, but it’s a historical force in unstoppable motion. “Even though I’d spent a lot of my energy making this creative environment, it wasn’t really something that I happily sat inside. There’s all these people who love that space—it’s now theirs. It’s out of my hands,” he says. “There’s a certain switch-over point where we realized that the lunatics are running the asylum.”
The future of art, entertainment, communication, and expression, meanwhile, is dazzlingly bright. The flood of content is ever increasing and therefore competing with itself, which may make it harder for any one individual to get attention, but history suggests that this won’t discourage people from trying. For artists and communicators, the need to express themselves is what drives them in the first place.
“I wouldn’t be bothered to make something cool unless I thought someone else would see it, even though I like to think I create things for their own value,” Smith says. “On some level, I kind of think this will help me communicate with someone or someone will see that and there will be some kind of human expression. If that was thrown away, I don’t think I could do that. It’s not something you consciously think about, it’s just part of being human.”
Which confirms my suspicion. The dual explosions of entrepreneurialism and personal expression indicate that technological advance is bringing people closer to their true natures as far as creativity is concerned. The need to create and collaborate flows in our blood. That doesn’t apply to just a segment of the population but rather to all of us. If some individuals aren’t doing that yet, it’s only because it’s not yet easy enough for them to do so. Like rice on a chessboard, it’s only a matter of time.