For many years the conviction has grown upon me that civilization arises and unfolds in and as play.
—JOHAN HUIZINGA,
Homo Ludens
A history of play and wonder would be justified even if it focused exclusively on the pleasure those experiences have brought us. The fact that our lives are surrounded by institutions designed with the specific purpose of bringing us happiness and amusement has to count as one of the undeniable achievements of civilization, however “uncivilized” many of those amusements first seemed. (And still seem, to some.) It would be entirely sufficient to write a history that just tracked the “pleasuring-grounds” and toys on their own merits. The world is a more interesting place because there are coffeehouses and national parks and IMAX theaters in it; we should celebrate the people behind those institutions the way we celebrate and study high-tech innovators or political revolutionaries.
Yet, still: Turn your mind’s eye away from all those wonderlands of illusion and delight and think about the utilitarian story. Ignore the pleasure those institutions generated, and focus on the innovations or historical sea changes they helped bring about: public museums, the age of exploration, the rubber industry, stock markets, programmable computers, the industrial revolution, robots, the public sphere, global trade, probability-based insurance policies, the American Revolution, clinical drug trials, the LGBT rights movement, celebrity culture. Think, too, of all the tragic consequences that descend from our endless quest for delight: slavery and exploitation and conquest. The sheer magnitude of this influence is remarkable. How odd it is that slacking off on one’s “lawful calling and affairs” would set off so many commercial and scientific aftershocks. The pleasure of play is understandable. The productivity of play is harder to explain.
Making sense of this mystery requires that we peer into the inner workings of the human brain, drawing on recent research in neuroscience and cognitive psychology—research that, fittingly, began by studying games. In the 1950s, inspired by Alan Turing’s musings on a chess-playing computer, a computer scientist at IBM named Arthur Samuel created a software program that could play checkers at a reasonable skill level on an IBM 701. (Legend has it that when IBM CEO Thomas Watson saw an early draft of the program, he predicted that the news of the checkers game would cause IBM stock to jump fifteen points.) As his work developed, Samuel grew increasingly interested not simply in teaching the computer how to play checkers but in having the computer learn on its own, through experience. This line of inquiry led to the self-learning algorithms for more complicated games like chess and backgammon that were developed in the 1960s and 1970s, but more importantly, it led to a model of learning that has come to shape our understanding of the human mind itself. The model has several variants, each with its own adherents and detractors; they go by names such as “temporal difference learning,” the “Rescorla-Wagner model,” and “reward prediction error.” Beneath these distinctions, though, the model suggests a common principle: humans—and other organisms—evolved neural mechanisms that promote learning when they have experiences that confound their expectations. When the world surprises us with something, our brains are wired to pay attention.
The early checkers and backgammon applications relied on this principle to bootstrap themselves into a high level of play. The software would begin with a rough model of what successful strategy looked like, making predictions about the consequences of each of its moves. Over time, it learned by paying careful attention to the difference between its predictions and the actual outcome. Based on those constructive errors, the software would then alter its model for the next game; after thousands of iterations, the software learned a high-level strategy without any expert player advising it directly. In a way, the AI researchers had programmed an appetite for surprise into the software.
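To make that learning loop concrete, here is a minimal sketch in Python of the “temporal difference” update at the heart of this family of models. It illustrates the general principle only; it is not a reconstruction of Samuel’s checkers program, and the state names and parameter values are hypothetical.

```python
# Minimal temporal-difference (TD) learning sketch: the learner keeps a
# table of predicted values, and every update is driven by "surprise",
# the gap between what it predicted and what actually happened.

ALPHA = 0.1   # learning rate: how strongly each surprise revises the model
GAMMA = 0.9   # discount factor: how much future outcomes count right now

values = {}   # predicted value of each game position, initially empty

def td_update(state, reward, next_state):
    """Nudge the predicted value of `state` toward the observed outcome."""
    predicted = values.get(state, 0.0)
    observed = reward + GAMMA * values.get(next_state, 0.0)
    surprise = observed - predicted            # the prediction error
    values[state] = predicted + ALPHA * surprise
    return surprise

# Example: a position that unexpectedly led to a win gets revised upward.
td_update(state="midgame position", reward=1.0, next_state="win")
```

Notice that when a prediction turns out exactly right, the surprise is zero and nothing changes; the model is revised only when the world confounds its expectations, which is precisely the principle neuroscientists would later find wired into the brain.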
Psychologists have long understood that this appetite is an integral part of the human mind. Countless studies of newborn infants have shown that before we can crawl or grasp or communicate, we seek out surprising phenomena in our environment. But it wasn’t until the 1990s that scientists recognized that the surprise instinct is heavily regulated by the neurotransmitter dopamine. Because drugs like cocaine and nicotine activate the dopamine system as well, popular accounts of the neurotransmitter often make the mistake of referring to it as the brain’s “pleasure drug.” But this shorthand description is misleading: dopamine on its own doesn’t trigger feelings of pleasure the way, for instance, endorphins do. Rather, dopamine seems to help steer the attention and motivation systems of the brain. A new theory proposes that dopamine release creates a “novelty bonus” that accompanies the perception of some new phenomenon or fact about the external world. By heightening your mental faculties, making you more alert and engaged, the “novelty bonus” encourages you to learn from new experiences. (The computer scientist Jürgen Schmidhuber developed a similar process for machine learning, built around a “curiosity reward” that encouraged the software to explore data that yielded surprising results and to ignore predictable regions.) The surge of dopamine that accompanies a novel event sends out a kind of internal alarm in your mind that says: Pay attention. Something interesting is happening here.
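Schmidhuber’s formal proposals are considerably more sophisticated, but the flavor of a “curiosity reward” can be conveyed with a simple count-based novelty bonus, a common simplification in the machine-learning literature; the function names and the square-root decay below are illustrative assumptions, not details drawn from his work.

```python
import math
from collections import defaultdict

visit_counts = defaultdict(int)   # how often each situation has been seen

def novelty_bonus(state, scale=1.0):
    """A reward that decays as a state grows familiar: surprise pays best."""
    visit_counts[state] += 1
    return scale / math.sqrt(visit_counts[state])

def shaped_reward(state, external_reward):
    # The learner chases both real payoffs and sheer newness, so it is
    # pulled toward unexplored territory and away from the predictable.
    return external_reward + novelty_bonus(state)
```

A state pays its full bonus on the first visit and only a tenth of it by the hundredth, a computational echo of the way novelty wears off in the brain.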
Bone flutes, coffee, pepper, the Panorama, calico, Babbage’s dancer, dice games, the Bon Marché—beneath all the surface differences between these objects, one common characteristic unites them all: they were surprising when they first appeared. We were drawn to them compulsively because they offered novel experiences, tastes, textures, sounds. Illusions took our visual predictions about the spatial arrangement of objects in the world and confounded those expectations in startling ways. Spices offered exotic new flavors that our tongues had never experienced before. One of the defining characteristics of games—as opposed to, say, narrative—is precisely the fact that they turn out differently every time we play them; games are novelty machines. That’s what makes them fun (and sometimes addictive). All these forms of escape and amusement provided a “novelty bonus” to the brains that first experienced them.
On the one hand, our understanding of the dopamine system helps us understand why human beings became so obsessed with seemingly frivolous things like nutmeg or the Phantasmagoria. It is in our nature to seek out things that surprise us. But the “surprise instinct” also helps us answer a more complicated riddle: the innovative power of play, the way in which play compelled us toward new cultural institutions that had little to do with our biological drives. A long tradition exists of intellectuals butting heads over the boundaries of nature and nurture; some scientist proposes a biological instinct for some facet of human behavior, and inevitably a counterforce of humanities professors argues that the behavior is rooted in cultural adaptations, not some genetic destiny. But an appetite for surprise complicates those easy oppositions. Genes tend to steer us toward predictable goals, or away from predictable threats: seek out energy-rich carbohydrates; avoid intense cold or heat; find a mate and sexually reproduce with him or her. This is one reason why Darwinian interpretations of society or art tend to be less enlightening than they promise: you don’t need to be an evolutionary biologist to understand that people like to fall in love or care for their children. In a way, those genetic drives are conservative in their effects. They steer us back to predictable patterns: family, shelter, food.
But the surprise instinct propels us in the opposite direction. Its object is by definition undefined. It rewards you not for finding a mate or bonding with your child or consuming energy-rich food—it rewards you for having a new experience. It rewards you for breaking out of your usual habits, for stumbling across something that confounds your expectations. This appetite has an inevitable tendency toward expansion. Hunger or the need for social bonding can be satisfied by a reliable source of food or close friends. But surprise requires new blood: one generation’s miracle is the next generation’s old news. (As we have seen, that expansion was often geographical as well as conceptual; we live in a global economy today in large part because of the novelty bonus of pepper, cotton, and coffee.) Sometimes cultural change happens because important ideas build on each other, one insight unlocking the door to further insights. Sometimes change happens out of necessity, out of the drive to satisfy our basic survival needs. But just as often cultural change happens because human beings are bored with the old experiences and hunger for something new. This is the strange paradox of play and its capacity for innovation: play leads us away from our instincts and nature in part because of our instincts and nature.
Because new things are strange and not immediately applicable to life’s most pressing issues, they are not taken seriously. But we underestimate their ultimate significance at our peril. The drive for novelty puts us into unexpected situations, or exposes us to new materials: taverns and coffeehouses, rubber balls and magic lanterns. Once exposed, we end up using those spaces and those devices as platforms for the ideas and revolutions of traditional history. Toys and games, as Charles Eames said, are the prelude to serious ideas. So many of the wonderlands of history offered a glimpse of future developments because those were the spaces where the new found its way into everyday life: first as an escape from our “lawful calling and affairs,” and then as a key element in those affairs.
Think back to Charles Babbage, staring into the eyes of that automated doll in Merlin’s attic, two centuries ago. That encounter was, quite literally, child’s play, but the ideas and technologies that were stirring beneath the surface of that meeting are still transforming society as I write. Today we worry about dystopian futures where the machines become so physically dexterous that they take over our manufacturing workforce, or so intelligent that they become our masters. But perhaps, knowing the history, we have been focused on the wrong fears. Perhaps we have been wrong to worry about what will happen when the machines start thinking for themselves. What we should be really worried about is what will happen when they start to play.