Ten years after their emergence as a consumer product and public amusement, video games were no longer unfamiliar objects, and their novelty value was being transformed. They were increasingly present in young people’s everyday lives, whether through consoles, computers, or arcade cabinets, as objects of pleasure and fascination. Although adults also played them, in the early 1980s, video games were typically seen as a form of youth culture, defining a generation in distinction to its parents. No longer were they the “new trick your TV can do” or a “space-age pinball machine,” construed in a way that related the new medium to familiar technologies. Advertising no longer addressed families so much as teenage boys. The novelty now was that games had become an intense object of interest seeming to demand a bottomless supply of quarters from kids devoted, even addicted, to them. This happened only after the introduction of games like Space Invaders and Pac-Man, which rewarded obsessive repeat play and earned handsome sums for game operators.
In the early 1980s, such passion was an occasion for widespread concern among parents, teachers, community leaders and government officials, and a legion of experts in psychology, health care, and education. Some of this concern would seize on what was perceived as a serious social problem. Municipalities tried to restrict or ban video arcades. Parents and teachers were frightened by the presence in their communities of spaces believed to be corrupting youth. Video games were represented as a threat to childhood development and to the health and well-being of players. Concern about them often had strong moral overtones. However appealing they were, games were accused of being bad for kids, and bad for society. Many adults evidently subscribed to this notion wholeheartedly.
But the reaction to video games becoming ingrained in young people’s lives was not only or even primarily negative. Many looked at games as a productive pursuit acquainting young people with advanced technology and preparing them for future engagement with computers, especially in the workplace. For every complaint about video games leading to crime, sex, or addiction, one might hear a very different kind of expression indicating the potential for electronic play to inculcate useful skills and habits, to be educational or therapeutic. Video games were seen as a herald of the computer age and the information society. While some feared or disdained the computer as an instrument of control and dehumanization, many others saw its inevitable transformation of work and leisure in welcome and productive, even utopian, ways, as we saw in chapter 4. They expected young people to benefit from familiarity with computers along with other kinds of electronics and telecommunications.
What these two kinds of reaction to video games, the fearful and the hopeful, had in common was a conviction that the emerging medium would have a profound and long-term impact on the young players so enamored of it. Whether harmful or productive, the games were sure to affect their young aficionados. The novelty of video games as a generation’s defining pastime brought with it a common belief that games were sure to shape young people’s whole lives.
Whatever effects they may have had, video games could not have been as powerful as they were given credit or blame for being. The ways they were understood in the 1980s were products more of imagined than real effects. As they became commonplace and enormously popular, video games also became sources of intense fascination and fear. Instead of seeing what kids were actually doing with and around video games—having fun, competing with peers, learning mastery, socializing—many unsettled adult observers used them as objects on which to project fantasies of endangerment or improvement. As Carolyn Marvin has argued, “new media embody the possibility that accustomed orders are in jeopardy.”1 New media seem to speak to a culture’s hopes but also its anxieties, prompting ambivalence about the structure of society and its future prospects.2 Decades later, the panic and outrage over video games, and efforts to regulate and proscribe them, may figure more into popular historical memory, but at the time this was only one facet of the response to electronic play. Another, equally prominent in public discourse, was much more optimistic. Both the fearful and hopeful reactions can be seen as expressions of the same underlying concerns about changing social roles and shifting expectations about family, leisure, and work. They were both ways of coping with the novelty of electronic play in the lives of a generation coming of age in a new world.
One public venue for expressions of hopes and fears about video games was the popular press, including print and broadcast news. News programs and stories often presented the impact of games as a matter of ongoing controversy and debate. Following the convention of much North American journalism, news on video games was framed as a controversial issue with two opposed and equivalent sides. Representing one side might be a municipal office holder, PTA representative, teacher or parent, speaking of the harms inflicted on the town or liable to afflict it. For the other might be an arcade proprietor, social scientist, or real-life video kid testifying that the games were at worst benign fun but also possibly a useful form of leisure. Unlike some moral panics stoked by fear mongering in the popular press, video games were often shown in a somewhat balanced fashion, with proponents and opponents lining up against one another. The unfolding of court battles, such as the case of Mesquite, Texas, whose regulation of coin-operated games went all the way to the Supreme Court in 1982, meant that reporting could easily follow a he-said/she-said format.3 This might still have given substantial credence to the con side, whose fears were quite a bit overblown and often wanting for evidence and logic. But reports also countered it with a pro point of view, or at least a position skeptical of the harms attributed to the medium.
As an example of the journalistic framing of video games as a hot topic with paired sides, consider one trope applied on several occasions in discussions of video game effects: the notion that video games were prompting new “trouble in River City.”4 This was an invocation of The Music Man, the 1957 Broadway musical adapted to film in 1962. Now, the news might report, the trouble isn’t “with a capital T that rhymes with P that stands for pool,” as in the famous show tune, but with a V. Readers or audiences were thus reminded of the fictional all-American place where people were easily persuaded by the fast-talking traveling salesman Harold Hill that the presence of a new pool hall posed a threat to the town’s young people. This new threat was associated also with youth styles of dress and speech (knickerbockers too low, “swell”), and was sure to lead to all manner of vice. The trouble could be removed by interesting youngsters in the band instruments and uniforms that Hill was selling, a more wholesome and communal form of cultural expression.
Perhaps the readers and audiences of news stories mentioning “trouble in River City” would not have recalled the whole scenario of The Music Man. But those who knew it might remember that the trouble was totally concocted, that Hill was a flim-flammer out to take River City’s money and then slip out of town, and that the nostalgic representation showed that citizens were too credulous in accepting claims that pool posed a danger. The Music Man does not stoke the audience’s fears of children’s newest amusements. Actually, it affectionately mocks popular panics by showing bygone small-town mores, congratulating the audiences of the 1950s and ’60s for their greater sophistication and modernity. So by referencing The Music Man, news items indicated not so much that video games were as harmful as hard drugs or as likely to lead to criminal activity, as that they were an object of concern among cautious and worried ordinary good folks who might too easily be convinced of their danger. That is how things happened in River City.
Harold Hill’s crusade in The Music Man revives a familiar scenario of youth in danger: “keep the young ones moral after school,” the song goes. Parents naturally feel concern for their children’s well-being, and often too readily accept that some novelty that catches their interest must be a hazard. The history of moral panics around young people’s leisure-time interests and behaviors shows a remarkable continuity across generations. This follows a pattern of historical amnesia, as one generation forgets the same expressions of fears articulated by its parents not so long before about the novelties of its own youth.5 Sometimes these panics are not media specific. But their representation in the media not only brings them to consciousness, it also stokes their fire, helping them to spread and grow. Sometimes the objects of these panics are themselves new media technologies, like motion pictures or the Internet. Sometimes they are media genres, like dime novels or comic books. Media panic, a subtype of moral panic in which technologies, formats, or genres of media are at the center of public outrage, describes the widespread feelings of fear and anxiety around video games in the 1980s.6
Moral and media panics alike identify a form of youth culture seen as deviant and dangerous, whose threat is blown out of proportion if not invented entirely by the guardians of virtue. The object in question functions as the scapegoat or folk devil to which responsibility can be assigned.7 A panic is a moment in which generational tension bursts open. It exposes the pain of parents losing the world of their youth, and finding their own children’s lives unfamiliar. This tension and trauma spurs moralists to emotional excess and polarized, panicked expressions. It pits reasonable adult authority against a heedless youth gone wild, against their own children overcome by the effects of new media. Of course the reverse is also true: panicked reactions to new media in the name of reason tend to be emotionally excessive, and children’s defense of their fun can be remarkably cool-headed. In such cases, logic tends to be on the side of the young.8
Many observers of media panic see the specific object placed under intense scrutiny as a substitute for matters of deeper societal concern. Kirsten Drotner argues: “Panics are deeply implicated in political issues beyond their immediate causes.”9 Dmitri Williams regards the panic over video games as an expression of concern over changing family dynamics as women entered the workforce and children’s unsupervised leisure time seemed to be a problem in relation to emerging sexual roles in the family.10 We can additionally see that the upscaling of coin-operated amusements into plush family fun centers and suburban game rooms carried along unwanted old working-class connotations, and that arcades might be seen as schools of vice, unsuitable for middle-class offspring (see chapter 1). Frequently, as in The Music Man, a low-culture form of amusement is placed in opposition to high culture or at least middle-class culture, and the low form is presented as an enemy of the people, in effect as a proxy for fears of failure to reproduce social class. Complaints that games were addictive or crime-related also functioned as a cover for economic and cultural anxieties.
Adults also fastened onto the video game as an example of rampantly advancing technology, a form of computing with which children but not their parents would become intimately familiar. They would have experienced this as an incursion of the technology of the new information society, so commonly associated with advanced electronics, into their children’s leisure. The world their sons and daughters were inheriting was in some ways markedly different from the postwar baby boomers’ environment. It was of course uncertain whether exposure to video games would teach young people how to work effectively with technology, or whether the technology so touted as beneficial might instead turn out to be harmful to their mental health, sucking them into microworlds and isolating them from society, influencing them to become bellicose and short-tempered, transforming them into zombies unable to function outside of the highly mediated psychic environment of the game. There was no shortage of scary scenarios and potential threats.
The media panic around video games in the early 1980s sometimes spread word of vague harms. There was a general feeling of contempt expressed in the usual kind of condemnations of new media, as in one letter to the editor of the New York Times asserting that video games were “cultivating a generation of mindless, ill-tempered adolescents.”11
But such thinking could also be quite detailed in portraying the trouble at hand. Among the specific dangers attributed to video games in publications of the time were both minor and major problems, including physical ills such as “Space Invaders Wrist,” skin, muscle, tendon, or joint problems in the hands, and damage to the eyes.12 Squandered lunch money was a frequent point of concern, which suggested that some children were starving themselves for Centipede. Worse yet: to have your lunch money stolen by a Frogger fiend.13 Good kids might be so seduced by electronic games that they would resort to theft to feed their habit.
Adults sometimes feared the clusters of teenage video game players (mainly boys) hanging around stores and shopping centers, who might be perceived as obnoxious or menacing, and who might sometimes harass passersby. A CBS News broadcast on July 9, 1982, quoted a municipal official in Boston describing youth congregating in a laundromat by the video game machine, offering “fast remarks” as adults passed by and terrorizing customers. In popular media discussions of the day we find expressions of fear about video games being linked not only to this irritating or hostile behavior like loitering or harassment, but also to truancy, panhandling, vandalism, gambling, loansharking, and taking or selling drugs. In some instances, adults feared possible pathways from video games to illicit sex. Children might be committing crimes to finance their habit, and some adults wondered where all those quarters came from (“Just how much help do the neighbors need each week?”).14 As one newspaper reporter summed up the generational divide over games, “To many parents, the glowing, beeping machines are Molochs to whom their children are lost. They conjure up fears of purse-snatching, truancy, and ‘42d Street drug dealing.’”15
Young patrons of the arcade were standardly described as mesmerized addicts, and sometimes their endless appetite for video games was likened to other addictions: caffeine, cigarettes, alcohol, heroin. Video games were routinely called addictive as if that were already a matter of established fact.16 Children also adopted this rhetoric in quotes to the press by calling themselves video game junkies or addicts, though their usage could be playful and ironic rather than alarmed. This addiction would explain the truancy and petty theft supposedly caused by games: only addicts would go so far. An item from Mesquite, Texas, an epicenter of video game panic, reported that addicted kids would break into cigarette vending machines to feed their cravings, pocketing the quarters and leaving behind the nickels and dimes.17
Television, another form of popular media blamed for negative effects on children, and likewise seen as analogous to drugs, was a reliable comparison. Video games replaced TV in some players’ leisure time, and were played in the home using a TV set. Some observers wondered if video games were even more addictive than TV owing to their interactive, participatory nature. A letter from a psychotherapist published in Psychology Today in 1983 was given the matter-of-fact headline “Addictive Video Games.”18 Just like TV, it was feared, video games could turn out to be an impediment to children’s success in school and life.
Even if the children were not becoming victims or perpetrators of crimes thanks to video games, many adults worried about the new medium’s effects on young players’ cognitive and emotional development. In informal comments following a speech on the subject of violence in the family he delivered in Pittsburgh on November 9, 1982, the US Surgeon General, C. Everett Koop, asserted that children were becoming addicted to video games “body and soul,” and suffering adverse physical and mental effects. His comments made international news and stirred controversy. He walked them back the following day, clarifying that video games were no threat to children, but fears had by then been aroused.19
The attention commanded by this controversy and reversal suggests that the violent content of popular games was occupying many adults’ thoughts. Just as television violence was believed to encourage young people to act out real-world violence, video game violence similarly raised worries about aggression. So many of the popular games of the time involved shooting and killing in heroic scenarios of invasion or war. The difference again between TV’s perceived passivity and video games’ interactivity was a further impetus to fear the impact of the new medium as a stronger and more damaging force than the more familiar small screen. Like TV, video games could also be seen as an isolating influence, a substitute for socializing. As an even more engrossing technology, games might be even more threatening to childhood sociability than television. Their association with a medium already often viewed with alarm and outrage in the late 1970s and early ’80s framed video games as a social problem.20
The video arcades sprouting up all around the United States and many other countries in the 1970s and ’80s became magnets for negative attention in numerous communities. The campaign to do something about the video games found in these arcades, and their putative harms, was waged in local municipalities as well as in national or international media. Some of the citizens of a city or town might view the novel presence or the proposed opening of a video arcade as an invitation to alarm. Video games would likely have already been present not only in homes but in the many public places that added arcade cabinets in the 1970s and ’80s such as convenience stores, supermarkets, bowling alleys, pizza parlors, and bus depots, but arcades were viewed with particular suspicion as hazardous spaces for children. Citizens who saw arcades as a threat to the youth of their locale might have summoned any available evidence and rhetoric to mobilize for their campaign.
Their crusade led to the renewed regulation of public amusements, such as ordinances barring children under eighteen from playing during school hours or requiring their accompaniment by a parent or guardian. In some municipalities citizens tried to prevent new arcades from opening at all by applying zoning by-laws, limiting the number of machines allowed in a single establishment, or passing ordinances to regulate electronic or coin-operated game machines. Some of the municipal ordinances covering electronic games already existed for coin-operated amusements such as pinball. These various efforts quite often led to local conflict over video game regulation, including a number of legal cases between municipalities and business owners.21 Often appearing in items in the newspaper or on television, these cases made video games newsworthy, providing a hook for a story about the “two sides” of the newly popular medium.22
Efforts to ban video games, or at least to protect children from them, looked for authority to educators, law enforcement officials, business leaders, mental health professionals, and other concerned members of the community. Anyone with a degree of social prominence or power might give weight to public statements opposing the games. Such figures acted as moral entrepreneurs: prominent citizens whose concern over what they perceive to be deviance from community values leads officials of the state to make or adapt rules proscribing or forbidding objectionable behaviors. No mere ordinary members of the community, they claim superior knowledge and powers of judgment; they recognize a hazard, want it to be managed or stopped, and speak out forcefully. These moral entrepreneurs often gave interviews to the media or presented themselves publicly as voices troubled by video games and their potential to harm their players.23 The media panic over games drew for its energy and credibility on these public persons defending the social order and status quo against a threatening, corrupting newcomer.24 They expressed an essentially conservative social ideal in which the community would remain unchanged by developments in technology and popular culture, and would protect its established ways against the seductive appeals of dangerous novelty. Video arcade moral entrepreneurs in American towns were often associated with schools or government, and some became outraged beyond all reason. Despite the authority they claimed, these antigame activists typically offered no compelling evidence in support of their position, but rather hysterical fantasies of youth gone wrong and overwhelming technology.
By 1982, the economic success of video games led to the proliferation of arcade cabinets in public places. Installing new arcade games was often seen as a legitimate get-rich-quick scheme, or at least as a smart business venture. The regulatory battles against video games were often waged in middle-class suburbs like Centereach, New York (on Long Island), Morton Grove, Illinois (north of Chicago), and Mesquite, Texas (in the Dallas-Fort Worth metroplex). On one side were entrepreneurs or big companies like Bally, which ran the Aladdin’s Castle arcade chain, along with the teenagers of the town who felt persecuted by crusaders against their fun. On the other side were the parents, moral entrepreneurs, and municipal officials. Trade in video games was unlikely to be regulated at the federal or state level, and by the early 1980s game consoles were finding their way into millions of homes alongside the television set. But municipal governments could adopt regulations to control the placement and use of video games in public, and it was around these proposed or adopted ordinances that the most public frictions occurred. As a visible battle pitting two camps, each with its own form of power (often political power versus economic or cultural power), the struggle to regulate or not regulate video games in these American municipalities made for a conventionally framed news item.
Some common forms of ordinance adopted in the early 1980s included restrictions on the number of coin-operated electronic game machines one establishment could have, restrictions on the age of patrons in game rooms (and perhaps variations in this policy by time of day), restrictions on children entering game rooms unaccompanied by a parent or guardian, and restrictions on game rooms opening within a certain proximity of a school. By adopting zoning regulations, a municipality could keep arcades away from certain parts of town. Unless the games were kept out of the city altogether, as happened in Marlborough, Massachusetts, and Coral Gables, Florida, these policies regulated environments of play and ages of patrons. They could not, however, prevent children and teens from playing video games altogether. Local ordinances were an effort to curb and curtail the attraction of games and their presence in young people’s lives. They had other perhaps more significant functions, in particular regulating the behavior of teenagers in public places. Still, passing ordinances to regulate game rooms was often portrayed as necessary to counteract the harmful effects of the medium on children. Naturally, these effects were merely asserted rather than shown to have the support of empirical research, and in one legal case, an expert for the defense testified that there were no such demonstrated effects.25 Whether there actually were harmful or helpful effects, each side was invested in a set of ideas about video games that served their interests.
Moral entrepreneurs agitating against games accused them of getting kids hooked, and if not causing full-on physical addiction, then at least hypnotizing young people in a way that would make them uninterested in anything else.26 Often this focus on the strength of electronic games’ appeal was a way of expressing fears that children’s fascination with games would spell failure in school. Education under threat was a frequent theme in the fearful discourses of the antigame crusaders. Kids were believed to be ditching school to go play video games. They dropped their lunch money, meant for the school cafeteria, into the slots. Fears of truancy caused by arcades led to ordinances barring children from game rooms during school hours, and requiring that any arcade be 1,000, 2,000, or even 2,500 feet from a school. But even outside of the school day, kids’ inability to stop playing was seen as an impediment to proper development; a fifteen-year-old child out playing video games at 10:30 on a school night might be unlikely to do well the next morning, causing concern for his parents.27 Without any evidence that players of video games fared any differently in school than their peers, the members of city councils and other local governments restricted their use in the name of protecting students from failure.
While the games themselves could be considered an addiction, they were also seen as a gateway to substance abuse. The mayor of Bradley, Illinois, a town that made national news for its video arcade regulations, claimed that hundreds of kids had been seen smoking marijuana at a game room nearby. Public arcades were, in many parents’ fears, the place where children on the straight-and-narrow path would be given drugs.28 The National PTA, along with many concerned citizens and local governments, worried in particular about public game spaces in which supervision and regulation might be much too lax.29 As hubs of teenage socializing, such unruly locations were seen as likely gang hangouts, trading posts for drug dealers, and “dens” in which children would find access to booze and dope. The successful movement to ban video games in Marshfield, Massachusetts, was spearheaded by a retired police officer who feared games in public places would lead to crime and drug use, a concern in many towns.30 Some of the strongest fears expressed about video games seized on the dangers of the public space as much or more than the dangers of the medium. But typically the panicked expressions of moral entrepreneurs picked up on both of these sides of the fearful reaction against electronic play.
Those afraid of games themselves would tend to see their violent themes as troublesome. Just as television was thought to inculcate aggression or violence in children, so video games were seen as a likely cause of antisocial behavior and desensitization. While sometimes the interactivity of games was seen as a positive attribute by comparison to TV’s passivity, when thinking of violence, a critic could say that interactivity makes games’ effects potentially more detrimental than television’s. The player is the one committing the acts of shooting and killing, turning leisure-time diversion into practice for war. Many parents and local leaders feared that video games were breeding a violent generation likely to endure “long-term psychological damage” from Space Invaders and its ilk.31 In one article on the controversy over effects of games in the New York Times, a professor of psychiatry, psychology, and pediatrics at Rutgers University, Dr. Michael Lewis, worried about the intensity of the experience and the sense that the violent scenarios in games are real rather than imaginary.32 A 1983 article in Health by a pair of Bethesda, Maryland, psychotherapists, expressed the conviction that like troublesome representations on television, violence in games can lead to violence in real life.33
The Bethesda therapists also expressed concern over a tendency, observed by adults, for games to “isolate” children and harm their socialization and development. CBS News reported on January 29, 1982, that “psychologists are beginning to worry that some youths are becoming spaced out on the space games,” leading to introversion among serious players. Like violence and truancy, isolation could pose a danger to the child’s academic achievement and maturation into a responsible adult. Computers more generally were often seen in these years as being so absorbing as to take the user away from human interaction into a strictly “man–machine” encounter, a quality, as we have seen, that the MIT professor Sherry Turkle called “holding power.”34 This was its own cause for worry. But in enveloping youngsters in violent fantasy worlds, video games in particular were feared as a horrible escape that would ruin the minds of the young and halt their progress toward responsible adulthood.
In some expressions of outrage or concern, many of these themes would be woven together. Television and addiction were already familiar friends, as in the title of Marie Winn’s popular screed The Plug-In Drug, as were television and violence.35 Video games fit into a familiar script of concern among adults around children’s leisure-time pursuits. A school principal’s statement in a 1983 ABC Nightline episode on video games sums up this interweaving of fearful sentiment by moral entrepreneurs:
The problem is that we have television, we have television violence. We now have a video craze where the theme of these games is violence. I think it’s another ripple effect of our standards. We’ve had certain standards and they are slowly eroding. And there are those of us who believe that this is just a symptom, the video games craze is a symptom of another addiction that’s taking place in this country. I use my crystal ball as a father and as a principal of twenty-seven years in the business of education, that if we don’t do something about this it will be another nail in the coffin of our country.36
As in many of the outraged expressions of moral entrepreneurs, it is not entirely clear what effects the principal is attributing to the technology or its specific forms or uses. The fear of decline, of being on a course to disaster, seems to come from a source much deeper than video games.
This sky-is-falling rhetoric can be hard to square with the limited powers exercised by municipal governments in the legal or legislative matters that made news. Since games could not be outlawed from millions of homes, local governments found themselves regulating technology that was going to be part of some citizens’ everyday lives no matter what. Most of the regulatory efforts were aimed at keeping children away from games during school hours, and some were aimed at keeping large game rooms from opening up that might serve as teenage hangouts, while leaving video games in restaurants or stores as they were. Keeping arcades out of town might have a different function from keeping kids away from video games during the day. The arcades were seen as trouble spots for reasons beyond the appeal of the games within them.
In 1983, Morton Grove, a Chicago suburb, opposed the opening of an Aladdin’s Castle emporium with one hundred arcade machines at a main intersection of the village. An assistant village administrator quoted in a Tribune story about the conflict over the arcade conceded that games might not be bringing about “the downfall of civilization.” But he insisted that the business would cause “headaches.”37 The village board voted to bar Bally from proceeding with its plan in February 1983, keeping in place an ordinance forbidding businesses from operating more than ten game machines. The Tribune coverage of this local government action noted that the board “foresaw large groups of teenagers loitering in front of neighborhood stores, possibly drinking or causing disturbances, if game arcades were permitted in the village.”38 In towns such as Westport, Connecticut, zoning and planning officials worried that arcades would attract crowds, create parking problems, and demand the attention of police.39 In well-to-do Cliffside Park, New Jersey, as in many genteel suburban municipalities, city councilors feared youth from out of town congregating at the arcade, “lurking, littering, and leering at shoppers.”40 Local arcade ordinances and other municipal policies and actions were much more geared toward regulating children and teenagers than they were regulating games as such. They aimed to protect the peaceful and calm atmosphere of small towns and suburbs from the potential disturbance of young people at leisure who were perceived as unpleasant or dangerous.
A form of class anxiety at the root of antigame efforts is also evident in the example of successful arcades that adapted to the suburban realm of parental concern, like the plush family fun centers and shopping mall arcades described in chapter 1. Westport was one town where the local Planning and Zoning Commission sought to prevent an arcade from opening. The arcade in question, called Arnie’s Place, was a “luxurious video game palace,” as the New York Times described it. To assuage local fears of youth being corrupted, Arnie’s Place was staffed with attendants in blazers and outfitted with a dozen CCTV cameras and monitors to keep a constant eye on the premises. A public address system was used to announce names of children called home for dinner, and late in the afternoon the time of day was regularly announced to remind young patrons of the hour. Parents of the town trusted that their children were in a wholesome environment free from ruffians and hoodlums, a place where they were unlikely to be exposed to mind-altering substances, sex, or crime. Kids playing at Arnie’s after school would responsibly return home on schedule rather than stay out till all hours doing God knows what. The values of bourgeois suburbia and the business interests of the entrepreneur who opened the game room were made to fit together in Westport.41
Making arcades into safe spaces was one way for entrepreneurs and the coin-operated amusements trade more generally to negotiate concerns about video games and their effects without losing business, but another was to sue the city or town. When video game cases appeared in court, judges often looked favorably on the concerns of municipalities to protect their order and calm. Several cases concerning zoning regulations applied to video games made their way through the courts, which often found that restricting access to minors and keeping large game rooms from opening was not a violation of the First Amendment protections of speech or assembly.42 In a 1982 New York case, America’s Best Family Showplace v. City of New York, the court ruled that the city could regulate games in order to curb noise and congestion, finding that video games are not a form of speech protected under the First Amendment.43 A 1983 Massachusetts decision in Malden Amusement Company v. City of Malden included the defense of a “legitimate objective of maintaining order, preventing crowding, and diminishing the prospects of out-of-town people congregating” in the town.44 The America’s Best Family Showplace decision was cited in the Malden decision as a precedent for denying video games First Amendment protections.
As has been true of other emerging forms of low or popular culture such as cinema and comic books, the Supreme Court was slow to recognize games as a form of protected speech.45 It declined to affirm this protection in the Mesquite, Texas case about restricting the age of unaccompanied patrons at Aladdin’s Castle. The Supreme Court never considered the case’s First Amendment implications; it sent the matter back to the lower court (which had found video games to be protected speech).46 Marshfield, Massachusetts, was one rare locale in the United States to succeed at banning video games altogether from public places, and while Marshfield’s regulation did make its way through the courts, the Supreme Court declined to hear the case, effectively letting the ban stand through inaction.47 Without weighing in on the harms or benefits of playing video games, the federal courts still recognized the rights of local governments to regulate new media and technology in keeping with their rather conservative values, and sometimes conveyed skepticism about the potential of games to be expressive or to convey ideas and information. Perhaps similar regulations of, say, libraries or bookstores would also have been accepted, but the threat associated with video games and youth culture made their public venues much more likely to be magnets for dispute and regulation. Had the concern over games been more widely recognized as an instance of outraged panic rather than part of a balanced debate about the new medium’s effects, perhaps efforts at local regulation would not have been so successful. Yet, however much success they won, moral entrepreneurs and local governments serving their interests ultimately did little to prevent the generation coming of age with Space Invaders, Pac-Man, and Donkey Kong from playing video games and identifying so strongly as their players, as video kids.
The moral entrepreneurs and the press who lavished them with attention did succeed at making games into an object of controversy, though, producing a perception of danger and disrepute in tension with another, more productive and positive reputation.
In framing the new medium as an issue of two embattled sides, stories about video games often looked to experts capable of speaking to their harmless or even salutary qualities. Many news and magazine items also aimed to defuse media panic and paranoia, or to explore the potential of electronic games to be a gateway to more advanced and sophisticated computing.48 Some young people, for instance, would graduate to programming rather than merely playing games. Playing games on computers would be a first step toward learning the inner workings of the new machines, as we saw in chapter 4. These stories appeared as newspaper op-eds, as features or columns in intellectual publications like Smithsonian or popularizing ones like Psychology Today, and in coverage of scholarly research. Several books by academic experts, intended for general readers and expressing hopeful and inspiring ideas about computers and games as technological advances, were also published in the early 1980s. Psychologists, sociologists, education researchers, healthcare professionals, and others with clinical, scientific, or academic credentials spoke and wrote publicly both to diagnose the excessive fears of the moral entrepreneurs as a hysterical overreaction, and to identify benefits accruing to young people playing electronic games.49
To many intellectuals appearing in popular media—in contrast to typical concerned parents, teachers, and local officials—video games fit into a larger narrative of technological development, typically conceived as progress toward a new society taking shape. Rather than regard video games as the latest in a history of problematic public amusements (“Trouble in River City”), these professional thinkers looked at them as the herald of a transformed society. A new day would mean new ways of thinking and being, and machines would show these new ways to us, functioning as aids to human imagination and work in the new society. Children, the main market segment for video games, were poised to learn from them habits, skills, and an understanding of technology that some adults felt sure would lead to a successful future of schooling and work with advanced electronics. Video games might have appeared to some adults like silly, trivial diversions about zapping aliens with a ray gun, but to optimistic experts they were “the first playthings of the information revolution.”50
The idea that video games were one piece of a much larger economic and sociocultural transformation was rooted in at least a decade of public discussion about the development and future of American life. Writers including the sociologist Daniel Bell and the political scientist Zbigniew Brzezinski had written and spoken from their prominent perches as public intellectuals about a shift underway from modern industrial society to its successor, which Bell famously termed “post-industrial society.”51 Industrial society, based on the Fordist production of goods, was by this account giving way to a new service economy in which information would be more central than material production. In his grand scheme of history, Bell saw three periods: agrarian society, based on farming; industrial society, based on the nineteenth- and twentieth-century technologies of factory manufacturing; and postindustrial society, based on knowledge work made possible in part by electronics. Computers were not the only new technologies of the twentieth century to figure into such thinking. For Brzezinski, electronics more generally was the notable innovation, and he called the new age “technetronic.” Cable television (including two-way cable TV), satellite systems, digital telephony, and telecommunications more generally were also central in much of the thinking about large-scale economic shifts during the 1970s. In one utopian futurecasting account, the prefix “tele” did much of the rhetorical work of setting the stage of a new society in which education becomes teleducation, medicine becomes telemedicine, and so on.52 “Cyber” would eventually carry similar rhetorical weight, but “tele” was more familiar at the time.
By the 1980s, computers, another electronic technology, were regarded as most central to the changes underway. Telecommunications could collapse distance and produce a more participatory mode of engagement among users, but computers could enable knowledge work of many kinds and apply systems theory to solving complex problems in large institutions of business and government. The rise and spread of this type of knowledge work, according to Bell, would transform the countries of the West from blue-collar to white-collar societies, with a professional and technical class soon becoming the largest group in the labor force.53 In this new age, “What counts is not raw muscle power, or energy, but information. The central person is the professional, for he is equipped, by his education and training, to provide the kinds of skills which are increasingly demanded in the post-industrial society.”54
The Coming of Post-Industrial Society, Bell’s 1973 book, was the origin point of many ideas about this transformation, and was widely reviewed and discussed. Bell had spoken publicly on its theme first in 1959 and throughout the 1960s, so the ideas were very familiar in learned circles. In many publications in the 1970s and early ’80s, from news items to popular nonfiction books, the coming of the postindustrial society was taken as a given, a fact of life, and the shift toward knowledge work and an information economy was a commonly shared vision of the present as it was becoming the future.55 In this future, the computer—the thinking machine—was making possible a world in which knowledge and technology are the most precious resources. To bring up a workforce adept at using computers, education would need to be expanded, with college becoming more available to more young people, and more necessary for social mobility. Professional labor would demand educated citizens trained to work in fields such as mathematics, engineering, finance, health care, and many others requiring technical skills. The society’s elite would comprise a range of highly educated experts in lines of knowledge-based service work. As one newspaper trend story on the service economy put it, “the future … is in offices, not factories.”56
To the optimists pushing this vision of the new age, the computer would be a strong force for significant change.57 To prepare for and take part in the emerging society, many people were learning computer skills even if they had no clear immediate use for them. “Computer literacy” was a term gaining currency, with its connotation that knowing how to use a computer is comparable to being able to read and write.58 A 1983 newspaper story summarizing many of the key points of John Naisbitt’s bestseller Megatrends, one of the books positing that knowledge work is replacing industry, expressed a commonplace assumption that more teachers of computer skills would be needed in the future.59 Social scientists at the time believed that the microcomputer would be to the information society what the automobile had been to the industrial society, and that video games were teaching children that microcomputers were easy and fun, acquainting them with essential technology.60 A 1983 report from a conference on children and computers discussed video games’ utility as educational technologies in their own right. Headlines used phrases like “computer age” and “information age” interchangeably and routinely, assuming the reader would find them to be familiar shorthand and aptly descriptive of the changing times. For children growing up in a “computerized society,” wrote one psychologist in a letter to the editor of the New York Times—reacting to the media panic around arcades in Centereach, Long Island—video games could turn out to be “the educational tools of tomorrow.”61
With rhetoric like this, the experts defending video games championed their potential to breed familiarity and facility with essential technology. Children by these years were frequently encouraged or assigned to use computers in school or at home.62 Kids attended computer camps and took computer classes, and for middle-class children in particular, using computers was seen as an enriching experience that could develop a useful skill. American schools had 150,000 computers in 1983.63 An ABC News story airing July 8, 1982, reported from a summertime computer camp on a college campus where children were learning from experts and also exploring the technology on their own. A mathematics PhD, one of the camp’s counselors, said of the campers: “They’ll be able to teach me fifteen years from now … they’ll be ready for the computer age.” Many researchers were studying the educational applications of computers, including the potential for games to be integrated into educational software. Computers used for play, what Sherry Turkle called “high-tech rec,” could have the double function of being both a diversion and a pedagogical tool.64 And researchers were eager to extract lessons from video games to understand what makes them so much fun, the better to design effective educational software.65
Even without considering titles written for educational use, the video games popular with young people in the early 1980s were expected to yield long-term benefits to devoted players. The kids spending so many hours mastering Asteroids or Ms. Pac-Man were obviously engaged in learning. A frequently offered example of video game learning in popular discourse was eye-hand coordination, which was believed to develop through regular arcade play.66 Whether this was to be a useful skill in the information age was rarely addressed by those making such claims; indeed, the claims were often made dismissively, as though quick hands were the only benefit of video games to line up against a long list of harms. But on the positive side, in addition to honing reflexes and sustaining attention, games demanded that players develop detailed understanding of their patterns and play structures, inculcating a feeling of mastery over a sophisticated machine. Proficient players were conquering technology.67 Electronic games exploited the child’s natural capacity to learn, to learn through play, as in the theories of John Dewey and Jean Piaget.68 The science-fiction author and technology maven Isaac Asimov said that games are “an important (perhaps unequaled) teaching device.”69 Video games were giving children motivation to educate themselves in the use of computers, since games were fun and rewarding.
Without even intending to, video game players were giving themselves a leg up on peers less familiar with this newly essential technology, gaining an edge in the information age. Young people growing up with video games would naturally have an advantage over their elders thanks to this intimate childhood familiarity. Like learning a language in childhood, learning computing while growing up would come more easily and become more fundamental to one’s habits of thought. It could even be harmful to children to avoid or fear computers, so encountering them at a crucial period of development, in the form of video games, would help lead to future success. Apprehension about games’ negative effects ran up against a notion of their becoming virtually essential to maturation in the information society. As Turkle told People magazine in 1983, “To feel afraid of computers in the next decade is to feel afraid of life.”70
One social scientist at this time doing ethnographic video game research found that affluent families in the San Francisco Bay Area acquired game consoles specifically as a way of accessing computers. Edna Mitchell, an education professor at Mills College in Oakland, studied family dynamics in the home around video games. In her study, supported with funding from Atari Educational Foundation, she observed that children did not become video game “junkies” (though one divorced mother did) and that family life was often enhanced by common experiences of electronic play. “Families reported playing together, interacting both competitively and cooperatively, communicating, and enjoying each other in a new style.” While families’ experience of games might have been mainly for fun, the purpose of the acquisition would have been justified by loftier ambitions. “The video-games, which were often advertised as computer-related, were seen as a way of familiarizing children with a new and important technology. Adults were interested in being part of that learning, and the fact that these were games, rather than real work, made it easier for the adults to participate.” Mitchell also observed that as computers became more widespread in American homes, it was more likely that well-off families like those in her study would upgrade from a game console to a home computer. The more the experience of games approached computing, the more the games would be seen in terms of learning rather than amusement.71
Computer games were no mere alternative to conventional classroom education. In many expert accounts, learning from games or computers could improve in important ways on traditional learning. Rather than passively absorbing information, computers would provide for active engagement with concepts and processes. Video games made learning computing fun, and children needed no encouragement or discipline to engage with them. Games were compelling, inherently gratifying activities. They were intrinsically motivating to their players: mastering them would be its own reward, and it encouraged further learning. With computer games, “you will be learning without even knowing you’re learning,” according to Isaac Asimov, “because we don’t call it learning when we are doing something we want to do.”72
As in many fantasies of computerization, machines would improve on tasks previously performed by people, working for us by being our betters, and these computers would at the same time teach us lessons and help us learn to use them. A college professor told Smithsonian magazine in 1982 that “the arcade machines are skillful teachers … with more variety and individuality than many human teachers.”73 A University of Michigan professor observed in the same story that the kids in the campus arcade seemed much more interested than those in the library. Learning with computers and also learning about computers were regarded as essential to an information-age education. The same commercially booming arcade cabinets seen by moral crusaders as the devil’s playthings were often imagined by their academic champions as being at the center of an emerging new pedagogy. The Smithsonian author summed up this line of thinking with a provocative, not totally incredulous question: “Are these psychologists saying the arcades are the new schools for survival? Is it somehow more valuable to learn Missile Command than to learn English?”74
Programming games was the most obvious productive use of computers to come from playing them. Geoffrey and Elizabeth Loftus’s 1983 book Mind at Play: The Psychology of Video Games emphasized this desirable benefit. The Loftuses were married psychologists, and their account was one of several popular books published within a brief span of time in these years by academic researchers giving their endorsement (however qualified) of the value of the new medium.75 Loftus and Loftus saw video games as “potentially the most powerful educational tools ever invented.”76 They emphasized that video games would be valuable in introducing young players to computers. They also noted that young people by 1983 had already made the transition from playing to programming, and anticipated that some of these young programmers would find employment in this field.77 The Loftuses were also prescient in foreseeing unequal distribution of access and validation for different groups of players or computer users: “Since computer literacy is becoming increasingly essential in most jobs, children who are exposed to computers early in life acquire an advantage over those who are not. The boys who outnumber the girls in the arcades will be boys who outnumber girls in the adult world of computers.”78 And they engaged with the argument against games, showing it to be a shortsighted and panicked product of intergenerational misunderstanding.
Like many admiring accounts of the early ’80s, Mind at Play seized on examples of video games having concrete economic benefits to players. The book identifies one player in particular, a boy from Los Angeles named Mark, who was so enamored of video games that he begged his mother to buy him a computer, enrolled in computer science courses at a junior college, and used his home computer to handle accounting for a window-washing business he started.79 This connection between games, computers, and entrepreneurial work surfaces in a number of popular accounts. Seventeen ran a story in July 1983 describing a twenty-one-year-old Michigan video game player who as a teenager designed games on his home computer, sold them to a California publisher called Sirius Software, earning tens of thousands of dollars, and after several successful sales moved out west to work for Sirius full time. Seventeen also encouraged its female readers to submit their games to Atari and Sirius, who were “eager to accept a marketable submission from a girl.”80 An NBC News segment airing March 8, 1982, profiled Tom McWilliams, a suburban teenager from California who earned $60,000 from the proceeds of his 1981 game Outpost. The report framed his achievement against the initial opposition of his parents, who disapproved of his obsession with video games and refused to buy him a computer (he saved up his own earnings to purchase an Apple II). Now, the news broadcast delighted in informing us, they call him “Tommy McMillions.”
The experts who saw video games as the playthings of the information revolution and a milestone in the evolution of education were no less captured by fantasy than the moral entrepreneurs stoking panic and advocating prohibition. But according to the optimistic vision, young people would be improved rather than endangered by the games. Games would be good not just for personal development, but for success at school and at cutting-edge careers in a changing world. They would help affluent families and children of the professional-managerial class reproduce their social status. To the moral entrepreneurs, video games threatened the future success of middle-class children and young adults, disrupting their education and luring them to vice. But to the academic experts, these same children stood to gain so much from playing video games that they could be understood not as diversions and amusements, not as distracting popular culture, but as essential formative experiences. To them, the new medium was bringing up a technically adept generation prepared to thrive in the computerized environment already flourishing around them.
The tensions between concerned local critics and intellectually engaged advocates of video games can be seen quite clearly in one moment in particular during the early 1980s when hopes and fears about the new medium were crystallized. From May 22 to 24, 1983, a symposium was staged at the Harvard Graduate School of Education on the theme of “Video Games and Human Development: A Research Agenda for the ’80s.”81 The conference attracted researchers in education, criminology, psychology, psychiatry, engineering, and medicine, entrepreneurs in the educational software business, and workers from video game companies. It was funded by a grant of $40,000 from Atari, which undoubtedly invested in hopes not only of supporting academic research but also of generating positive press for the industry and countering the well-publicized position of the moral entrepreneurs. Hundreds of people attended in person, among them journalists covering the event. Time and Newsweek ran stories under the headlines “Donkey Kong Goes to Harvard” and “Video Games Zap Harvard,” respectively.82 News items in several sources conveyed mostly gee-whiz enthusiasm, with some caution from skeptics quoted for balance.83
Reading the conference proceedings years after the establishment of game studies as an academic field, and after the proliferation of humanities and qualitative social science research on games as communication, as art, and as popular culture, one thing that stands out is how much the focus of a games conference was on science. Participants were set on proving the value of games as advanced tools for learning, as therapeutic technologies in clinical settings of various kinds, and as wholesome rather than destructive amusements. Education experts described the utility of the video game as a “sophisticated teaching machine,” capable of innovative applications not only in ordinary classrooms but also with patients with chronic disabilities and mental illnesses.84 Edna Mitchell, the education researcher from the Oakland family ethnography, described how video games gave one girl “fast eyes”: she had learned to read faster by playing games, a benefit of their “cognitive workout.”85 Patricia Marks Greenfield, a cognitive psychologist from UCLA, spoke about the way the activity of games in relation to the passivity of television made games more likely to help young people develop skills. Games, she argued, teach inductive reasoning, spatial cognition, and other skills, though she also decried the violence, racism, and sexism of many games.86 Emanuel Donchin, a psychologist from the University of Illinois, discussed the application of games to military research and training. “From the perspective of one interested in human information processing and in the nature of interaction between humans and computers these video games provide a fantastic setting for research.”87 He argued that a computer game is a good model for understanding how a novice becomes an expert, detailing how this kind of software has been used in training research funded by DARPA (Defense Advanced Research Projects Agency) and the US Air Force.
In a session titled “Video Games and Informal Settings,” many of the points concerned the everyday experience of ordinary players and the positive effects accruing to them. The main presenter was D. N. Perkins of Harvard’s Graduate School of Education, whose paper title was “Educational Heaven.”88 As in the rhetoric of Asimov and the Loftuses, Perkins saw video games overcoming the age-old problem of young people’s insufficient interest in learning. By being intrinsically motivating, games would lead to “strong capture” of students’ minds. He referenced the “Pac-Man Theory of Motivation”—the idea that games are such a formidable stimulus to learning because of their formal features.89 Video games motivate by presenting clear tasks, defining the player’s role and responsibilities, offering graduated levels of challenge, giving immediate and unambiguous feedback, and being a solitary experience rather than a public opportunity for ridicule or shame upon failure. Perkins cited work by another presenter, Xerox PARC researcher Thomas W. Malone, whose experimental work on intrinsic motivation was often used in support of hopeful discourses around computers and games.90 Motivating factors according to Malone were the challenge of a clear goal with an uncertain outcome, the fantasy appeal of the game’s scenario, and the arousal of the player’s curiosity. In response to Perkins, several entrepreneurs making educational software or running computer learning companies amplified his ideas about games appealing to young people while also leading toward greater learning, whether of computers or other topics or skills. In another session on games in formal learning settings, speakers including Malone and Jerry Chafin, a special education professor at the University of Kansas, elaborated on exploiting game design for learning outcomes.
Chafin sounded this particularly hopeful note: “If one could identify the motivational elements of the video arcade game and then integrate educationally relevant content into games utilizing these features, one could go a long way toward solving the learning problems of many of today’s pupils.”91
Many of the symposium speakers sought to dispel common worries, insisting that video games promised a form of redemption through technology. This promise was typically couched in terms of debate with the medium’s detractors. The Music Man was a point of reference, and not only in the title of the keynote speech on the opening night of the conference: “Donkey Kong, Pac-Man, and the Meaning of Life: Reflections in River City.”92 Many of the speakers addressed the situation of video games delivering their benefits and promises of future benefits in the face of such deep and widespread disdain. The keynote speaker, a clinical psychologist and teacher at Harvard named Robert G. Kagan, began: “Social science feels a pressure to deliver some verdict on the wholesomeness, or lack thereof, of video games. A very immediate source of the pressure is the adults who, concerned about their children, wonder if this is not some damaging mania.”93 A brain injury rehabilitation specialist from Palo Alto, William Lynch, presented evidence that playing Atari games had positive effects on patients undergoing “remediation of cognitive and perceptual-motor deficits.”94 Games were helping them improve at “reasoning, memory, and eye-hand coordination.” He also spoke out against parents and civic leaders whose criticisms “reflect incomplete understanding of both youngsters and video games.”95
B. David Brooks, a USC instructor and consultant specializing in juvenile crime, presented a study observing 1,000 hours of arcade play by 937 young people in Los Angeles and Orange County, putting to rest the notion of games as harmful, addictive, money-draining social ills.96 Brooks questioned whether games were addictive or isolating, whether they led to truancy, whether they turned good children into criminals and drug abusers. He argued that arcades were the new ice cream parlors, not dens of vice. Many of the children in his study did well in school and participated in other extracurricular activities. Many had consoles or computers at home. Four out of five spent less than $5 a week at the arcade, and those proficient at video games could milk a single quarter for a long time. Games themselves could even substitute for, not lead to, using drugs; they gave you a feeling of being “loaded,” and anyway, being high while playing would compromise one’s performance. Directly countering the panic, Brooks concluded: “Video games are a form of recreation for kids and do not really pose a serious threat to the morals of American youth.”97
The concluding speaker at the Harvard symposium was its organizer, Inabeth Miller of the Gutman Library at the Graduate School of Education. Her rhetorical questions capture a perspective evidently shared by many of the event’s participants, and more generally by intellectual and academic observers of video games in the early 1980s:
Why are so many people entranced, held captive, by these machines? Is it visual excitement, fantasy, challenge—all the various elements described here during the symposium? Do lights and flashing asteroids transport the participant to a place far from the boredom of everyday lives, much like watching John Travolta on the disco floor? Is escape and attraction so reprehensible that it must be stamped out, as we have seen happen in New York, New Hampshire, and Florida?98
Of course she did not think it should be stamped out, and she assumed that the ubiquity of video games in the home would eventually cause the panic to wither away. Instead, she worried about ensuring that access to this emblematic technology of the new age would be equal and fair. Would computer play be yet another way of dividing haves from have-nots? Would disadvantaged children be excluded "from effectively competing in a computer-age society"? Would video games "offer only a white-oriented, middle-class picture of American life?"99 Only a technology firmly within the grasp of America's more affluent youth, producing cultural capital for the next generation of information age leaders, could be presented in such terms.
Accounts in the national newsmagazines conveyed the positive emphasis of the conference's proceedings, offering tidbits of presentations on educational heaven, the production of new worlds through computer play, the therapeutic benefits of games to psychiatric patients, and the defense of children's pleasures from moral crusaders. But the balanced formula of mainstream news also prompted opportunities for skepticism and doubt. One such point was the symposium's funding with Atari's video game money, which cast suspicion on the real agenda of the proceedings. Any story about video games would have to make reference to the worries of many parents about violence, wasted quarters, and the shady reputation of the arcade. For its kicker, Newsweek ran with this quote from Doris Mathiesen, a high school administrator from Framingham, Massachusetts, and perhaps the only doubting member of the symposium's audience: "I think Harvard has gathered all the wisest people in the kingdom to admire the emperor's new clothes."100
Despite the critical perspective keeping optimism in check, the mere fact that the most elite college in America played host to a conference on video games brought prestige and legitimacy to the new medium, at least opening up the question of whether the video arcade's critics might be overreacting. The contents of the symposium, as filtered through press accounts into popular discussions, were relentlessly upbeat. Surely many observers in the early 1980s would have been unsure of what to make of video games, which seemed at once somehow both seductive and productive, a rebellious new youth culture and good preparation for a high-tech, high-paying career. Whatever observers made of them, seeing elements of good or bad in their potential, the framing of the new medium in many types of public discourse attributed to these amusements great power to chart the younger generation's course through later life.
While their status as harmful or educational was a theme of news reports and columns in papers and magazines, a more fantastical and symbolic register of concern could be found in fictional stories about video games in cinema and television. Several movies released in the 1980s have prominent video game themes, and one motif they tend to share is crossing through the boundary separating the real world and the game. This notion of the player entering into a diegesis is similar to fiction/reality play in movies like Buster Keaton’s Sherlock, Jr. (1924) and Woody Allen’s The Purple Rose of Cairo (1985): characters pass from one realm to another and then return. Dreamlike fantasy is the essence of this device; the game world is an idealized, highly dramatic scenario. For video game narratives, the fantasy is to be doing more than merely controlling patterns of light and sound, a fantasy of identification with the representation in the game. It’s all about the boy’s or man’s mastery and power.
This trope could also be an expression of one common fear around arcades and computers: of the child getting lost, of the isolation of play capturing youth who are no longer present in real space and real life. The illustration on the cover of the 1982 Time magazine issue announcing “Video Games Are Blitzing the World,” which we saw in chapter 1 (fig. 1.1), pictures a male figure with a gun inside the representation in an arcade cabinet screen. This kind of imagery would dovetail with widely circulating ideas about the games being mesmerizing and potent, exerting a force over their user. Films working along these lines include Tron (1982), WarGames (1983), The Last Starfighter (1984), and Cloak and Dagger (1984). They present young male protagonists finding escape or empowerment through technology and adventure. As Carly Kocurek describes them, the heroes of Tron and WarGames are “exceptional embodiments of the technomasculine,” the latest versions of an American pop cultural type, “the bright, capable, mischievous, tech-savvy boy.”101
An early ’80s Saturday morning cartoon television series, Spider-Man and His Amazing Friends (NBC, 1981–83), and the Hollywood film WarGames both present revealing variations on this fiction/reality trope. The Spider-Man episodes involve a fiction/reality transgression similar to Sherlock, Jr. or Tron, enveloping a young man in a computerized environment. WarGames works slightly differently: its player stumbles upon a networked computer program that he believes to be just a game but that turns out to be a real military program. In both Spider-Man and WarGames, we find many of the same hopes and fears expressed in contemporaneous news media. In these two examples, the optimism and pessimism around electronic games are married in a way that suggests that neither hope nor fear but ambivalence about technology was the prevailing discourse of video games at the point when they became a pop culture phenomenon.
Spider-Man and His Amazing Friends was an animated series on NBC that ran a total of twenty-four episodes in its original run (and kept airing repeats for years after that). Following the standard superhero format, the friends Spider-Man, Iceman, and Firestar have alter egos as students at Empire State University by day and transform into crime-fighting heroes to do battle against supervillains. A character introduced in 1981 in the second episode, Video Man, appears three times in the series: first for two episodes as a villain, and then a final time as a hero. Video Man first appears as a two-dimensional, pixelated, hulking foe made to resemble a video game sprite. In his first episode, he is a depersonalized villain, but the association with video games makes him seem like a computerized enemy who is more machine than man. At one point he lets video game characters loose from their game cabinets at Earl's Arcade to wreak havoc, and a Pac-Man-esque character, "Mr. Grabber," munches through a park with his two sharp teeth in pursuit of Spider-Man.
In the series’ seventh episode, “Video Man,” Peter Parker and his friends are shown hanging out like regular teenagers at Earl’s Arcade. From his command center, the supervillain Electro controls the machines in the arcade, and Electro sends Video Man through the electrical lines and arcade cabinets to capture an all-American hunky boy, Flash, a friend of the heroes, who is such a whiz at video games that they distract him from his university studies. Flash is captured by the villains and taken to a physics lab, where he is caught inside a Pong-style game in which he dodges balls bouncing back and forth. When Flash is hit by the one-hundredth ball, Electro will kill him. Eventually Spider-Man’s superhero friends Iceman and Firestar are also trapped inside the villains’ games, one that resembles Asteroids and another that is a vector graphics auto racing game. Somehow they figure out how to enter Flash’s Pong game to defend their friend, and Spidey gets Electro and Video Man to destroy each other. Having vanquished the villains, the heroes return safely from dangerous game space to reality. Flash’s memory of the episode is supposed to be wiped clean, but in a (perhaps ironically) moralizing epilogue, he has a flashback of being tortured by Electro triggered by a seemingly benign game of “Pongle.” The episode concludes on Flash’s lesson learned: “Pongle! I can play anything but Pongle. Excuse me I gotta go to class, lift some weights, run some errands, I gotta get outta here!” He flees the arcade. Angelica (alter ego of Firestar) winks at Peter (Spider-Man) and Bobby (Iceman), and the episode ends on this wholesome turn away from the seductive dangers of electronic play, with the All-American, middle-class, white teenage boy scared straight.
Video Man returns in season 3 of Spider-Man and His Amazing Friends as a good guy in the episode “The Education of a Superhero” (1983). Gamesman is a new video game villain, a maniacal genius preying on the kids at the arcade. He uses the arcade games for mind control of the young players. One nerdy kid, Francis, is better than all others at one game in particular, Zellman Command, and when Gamesman explodes the Zellman cabinet, Francis is sucked into the game to become a superhero, Video Man. Video Man can move through electrical lines in and out of television sets and arcade machines. Gamesman threatens to use telecommunications technology (TV sets, satellite transmission) to exert mind control over the whole world. But of course Spider-Man and Video Man foil this plan and save all of humanity from an awful fate.
Putting aside the minutiae of the convoluted plots in these cartoon episodes, the transformation of Video Man from a villain using video games to cause harm to young players into a hero whose superpowers arise from proficiency at an arcade game (and counteract the power of broadcasting as mass media) speaks to the uncertainty over the value of the new medium. The show at once satirizes and trivializes outcry over games’ effects and reinforces commonplace fears about them. But by shifting the character from evil to good, it also shows a progression from fear of games as a moral hazard to appreciation of games as advanced technology involving complex skills that kids can learn to master, gaining a valuable advantage.
WarGames is similarly the story of a teenage boy of the video game generation, David Lightman (Matthew Broderick). He frequents a typical arcade in his Seattle neighborhood where he plays Galaga, the colorful post–Space Invaders alien invasion shooter. He evidently prefers arcade games to schoolwork, as he races from the Grand Palace arcade to the high school, arriving late and receiving a big red "F" on his biology test. David is also eager to play games after school using a home computer in his bedroom connected by modem and telephone line to a network. After showing off his hacking skills by gaining access to his school's computerized records and changing his failing biology grade to avoid summer school, he begins to play a game he discovers called "Global Thermonuclear War." David assumes this is just a simulation, but it is no such thing. It is really a poorly secured NORAD military program for controlling a nuclear conflict with the Soviet Union, capable of launching missiles at the enemy without human input when under attack. A Cold War nightmare ensues as an accidental nuclear crisis is precipitated by an unsuspecting thrill-seeking teenager. David is both the cause of the crisis and the one to avert World War III through his bold and fearless initiative, computing knowledge, and natural intelligence. (He seems the type not to let school interfere with his education.) Along with the audience, David recognizes that nuclear war means mutually assured destruction, a pat Cold War moral made more appealingly contemporary with the home computer and video game twist.
While perhaps richer in humanist antinuclear and anticomputerization themes, WarGames also expresses the unexamined assumption that video games are an affluent teenage boy's property, having the potential both for personal intellectual enrichment and for extreme mischief and danger. David's adept hacking produces much more serious consequences than the fun he is looking for. The "mere play" of electronic games matters more than anyone could imagine. The crisis he unwittingly triggers is outlandish and implausible, but the notion of video games being hazardous was everywhere in 1983. It's a Hollywood premise that video games could cause World War III, but this was also a hyperbolic expression of commonly held views about the new medium. Champions and detractors alike agreed that games were sure to produce significant effects, in particular on well-off teenage boys like David Lightman. His friend and eventual romantic interest Jennifer (Ally Sheedy) shares in the adventure but has little interest in or proficiency with technology. This gendered pairing of hero and sidekick places video games and computers squarely in the boy's domain. In the climactic scene at the NORAD Command Center, it is David's familiarity with computers and games that helps him find the solution to mutually assured destruction: he teaches the computer controlling the war game about futility by having it play a series of tic-tac-toe games to a draw, and the computer learns that, as in tic-tac-toe, in global thermonuclear war, "the only winning move is not to play." This is the solution that saves the planet from annihilation, from the extinction of humanity. David's knowledge of cutting-edge technology gets him into the conflict that gives WarGames its dramatic interest, but it would certainly also prepare him well for work in the information society if it doesn't kill him, and the rest of us, first.
Stories of young male video game players in early 1980s pop culture exploit the novelty of computers in everyday life and of video arcades defining a generation. They also share some of the same ideas as The Music Man and a longer history of narratives about young people’s leisure. The hysterical excesses of danger in the melodramatic plots of Spider-Man and WarGames are also representations of the never-ending struggle to “keep the young ones moral after school.” Also like The Music Man, video game stories are hardly endorsements of media panic encouraging unwarranted concern. Never mind that these are films or TV shows aimed at children. They express an appropriately complicated, conflicted stance on new media and the generation coming of age along with its emergence, acknowledging its disruptive energy but also seizing on its promise of a better tomorrow.