6 The American Video Game ReNESsance

The world of Nintendo is not simply involved in manufacturing video game players and controllers but is interconnected with larger media and communication systems which have an enormous potential to shape and define our culture. (Provenzo 1991, 27)

Understanding the Super NES’s role, position, and importance in video game history—North American video game history, to be specific—requires us to first understand how Nintendo came to be such a household name piercing the heart of children’s popular culture. To do that, we must take a step back and contextualize the corporation’s presence in the video game industry as it was established through the NES in North America. Nintendo’s first 95 years as a company (1889–1984) will provide us with some of the firm’s DNA and modus operandi. As we will see in this chapter, the NES’s release and success opened a cultural period in video game history, the American Video Game ReNESsance. Accordingly, the Super NES as a cultural platform articulated a transition in Nintendo’s positioning amid the changing landscape of gaming from the late 1980s to the mid- to late 1990s. Spurred in part by the Mortal Kombat fiasco (and more generally by Sega’s successful promotional campaigns), Nintendo veered away from its long history as a family-oriented entertainment provider, as well as its shorter history as a kid-centered firm, and stumbled through the mid-1990s before the great Fall at the dawn of the millennium.

From Family Cards to Electronic Amusement

Nintendo started operating in 1889 as a manufacturer of hanafuda, traditional Japanese playing cards, and progressively cemented its reputation for quality. In 1949, Hiroshi Yamauchi was appointed president of the company at age 21 to replace his dying grandfather. He modernized production by manufacturing Western-style plastic-coated playing cards in 1953. However, in mid-century Japan, playing cards had a bad reputation for being associated with illegal gambling controlled by the yakuza.1 Nintendo’s reputation would have been seriously endangered if not for a timely licensing deal that Hiroshi Yamauchi signed with Walt Disney in 1959 to produce playing cards backed with pictures of Mickey Mouse and other Disney characters. These cards successfully expanded Nintendo’s market to include young people and families, and were even advertised on television. To reach these new customers, Yamauchi structured a new distribution system that would get the cards into larger department and toy stores. Yamauchi’s initiative yielded a doubly positive outcome for Nintendo: Its sales exploded, bringing immediate financial benefits, and the long-term shift in the perception of playing cards it instilled earned the firm a positive image as a provider of domestic family entertainment, as well as some all-important business connections in the toy industry.

From there, Nintendo specialized in developing technological toys. The Ultra Hand (1966), designed by Gunpei Yokoi, was one early success. A second notable invention was the Nintendo Beam Gun, a light gun developed by Yokoi and a collaborator from Sharp Corporation, Masayuki Uemura. That invention allowed Nintendo to enter the electronic entertainment industry by installing shooting galleries operated by optoelectronic devices all around Japan. The Nintendo Beam Gun project proved to be pivotal for two reasons: First, it brought Uemura from Sharp to Nintendo, where he would design the Famicom and Super Famicom. Second, it gave Nintendo expertise in the light gun and electronic entertainment industry, which prompted Magnavox to contact the firm for the development of its own light gun to be included in the Odyssey home video game console (Gorges 2008; Picard 2013).

Light guns and playing cards provide the technological and cultural blueprints for how Nintendo would enter the Japanese and American home video game markets with the Famicom and NES in the 1980s. At the time, Japan’s video arcades (“game centers”) were confronted with a problem of cultural image: High school boys would lurk there and bully younger kids, forcing parents to patrol game centers (DeWinter 2014, 333). For concerned families, investing in a home video game console was a way to avoid these issues. Hiroshi Yamauchi, having already dealt with the negative image associated with playing cards in the past, understood that well. Nintendo responded in the same way it had done for gambling in the 1950s: by featuring content and styles appropriate for children and the whole family, and by redefining the product for them in the domestic space. Nintendo games would enter Japanese homes as Nintendo cards had done some 25 years earlier.

It is not surprising that Hiroshi Yamauchi found the name “Family Computer” to be “in logical continuity” with Nintendo’s tradition of “developing products that can be used by the whole family” (Gorges 2011, 34). As Florent Gorges writes, Nintendo made every effort to present its Famicom as a family product:

The very first television advertisement for the Family Computer, airing in September 1983, did not begin with images from Donkey Kong or Mario Bros., but rather with Mah-Jong and Gomoku Narabe! The message is thus extremely clear: the 30-second spot launches a campaign aimed at winning over the breadwinning fathers. The slogan goes in the same direction: “The whole family together, around the Family Computer.” (Gorges 2011, 41–42; freely translated)

Although the machine was publicly advertised with a focus on the family, behind the scenes Nintendo was targeting its simple and cheap machine at a core audience of kids. This aim was present from the inception of the system: Yamauchi had set the target retail price of the console, which Uemura strove to meet in designing the hardware, at 10,000 yen, an impossible command that ended up at a still impressive price of 14,800 yen (around $65). That price was based on the usual allowance money of children in Japan at the time, which according to polls amounted to 24,000 yen per year. Yamauchi figured that left them enough money to buy cartridges (Gorges 2011, 23, 32). In short, the marketing aimed broadly at the family but targeted children as a priority; because their parents owned the disposable income and controlled the domestic space, Nintendo had to get them interested as well. The Famicom’s success gained Nintendo a 90% share of the 8-bit market in one year and 30% of the Japanese toy market during the mid-1980s (Picard 2013).

After successfully breaking into the United States arcade market with Donkey Kong, and with the success of the Famicom in Japanese homes, pushing the machine into the U.S. home market seemed to be just a matter of time. It turned out to be rather a matter of effort and of micromanaging the marketing to a great degree of precision. Although Nintendo’s sales were calculated as part of the toy market in Japan, Yamauchi unequivocally stated in 1986, “We do not create toys. We provide entertainment. And the world of entertainment does not care to distinguish between children and adult audiences. The only thing that matters is to entertain everyone” (Gorges 2011, 42; freely translated). That approach is easily verified for the Famicom in Japan, where one can find strip Mah-Jong and other erotic or pornographic games (entertainment for everyone, indeed), but it had to be tweaked for the U.S. market.

Thinking of the Children: A Generational Divide

For Nintendo to reach American families, retailers had to accept selling its system first, and retailers were clearly not putting any hope in “video games,” which had become something of a taboo word associated with a cultural practice seen as passé and a market thought to be burnt out amid the video game crash of 1983–1984. Nintendo of America’s first NES version, presented at the Consumer Electronics Shows of 1984, had a keyboard and tape data recorder, which made it look too much like the “serious” home computers that video game hardware manufacturers were then trying to push (notably the one firm caught in the eye of the storm, Atari). One solution, then, was to target another type of consumer, one that would enjoy the colorful characters that were Nintendo’s strong suit after Donkey Kong and Mario Bros.

This is when the decision was made to market the console specifically to children, the core being 8- to 14-year-old boys, in a slightly different fashion than the marketing of the Famicom in Japan. On American shores, the family would quietly slip behind the children, and the Family Computer became a boys’ toy (especially thanks to the publicized Zapper light gun and R.O.B. the robot, tech toys par excellence). First, American parents could, like the Japanese, buy the home console for their children to play at home instead of going to these disreputable arcades. Second, the console could be pushed as an “entertainment system” that did more than play video games: It could be presented as an entertainment machine, like a VHS player or turntable.2 Quite paradoxically, the NES had to look less like a toy and more like a machine on the surface, whereas in truth, at its core, it had to function more like a toy and less like a machine. This makes the NES something of an avatar of Nintendo’s own identity, a material signifier of the firm’s surface-and-core duality.

Spinning the target demographics’ enthusiasm in the right direction would require some delicate positioning on Nintendo’s part: It had to impress the children while appearing as a reasonable and safe investment to their parents, who had experienced the video game crash firsthand. To paraphrase the authors of Digital Play, promoting the NES proved to be “an exquisite balancing act” based on children’s “pester-power” ability to handle the “delicate negotiations” required for parents to accept buying the console for their children; “Parents had to be reassured about the nature of interactive games,” all the while appealing to “children’s rebellion and independence” (Kline, Dyer-Witheford, and de Peuter 2003, 119). This appeal to independence came through marketing games as enablers of power fantasies, with a campaign centered on a “paradigm-shifting” tagline by Nintendo of America’s Gail Tilden (Harris 2014, 55): “Now you’re playing with power!” This new direction went against the antagonistic taunting practices previously prevalent in video game marketing (Therrien 2014, 560–561) and was better suited to reaching children. In true Nintendo duality, however, the surface discourse of “playing with power” hid the core reality that most Nintendo games were punishingly difficult.

Nowadays, claims that video games are “kids’ stuff” or toys can occasionally appear in public discourse but are usually met with eye rolls of annoyance or exasperation, like any cliché or retrograde view. Most people know that some video games are meant for kids but that video games as a whole cannot be reduced to that. But that awareness came progressively. In June 2002, for example, The Economist made a point of it: “Gaming is no longer the province of children and teenagers […]. A generation that grew up with games has simply kept on playing” (The Economist 2002). This is a testament to the impact Nintendo has had on the industry and the cultural image of video games because in the 1980s, the idea of selling video games to kids was not self-evident. As Christopher Paul (2012) notes in a chapter titled “Video Games as ‘Kid’s’ Toys,” one origin point of video games is in research laboratories and their expensive, specialized computer equipment. The commercialization of this technology through leisure occurred with Atari’s Pong and subsequent machines and led to the emergence of the second origin point of video games. As Dmitri Williams writes, “Game play in public spaces began as an adult activity, with games first appearing in bars and nightclubs before the eventual arcade boom. Then, when arcades first took root, they were populated with a wide mixing of ages, classes and ethnicities” (Williams 2006, 199).

This initial surge was quickly and increasingly confined to the young male audience, as Carly Kocurek’s historical account shows; from video game arcades hatched “an easily recognizable technomasculine archetype” of gaming, evidenced by the “video game world record culture” that “present[s] a cohesive picture of gaming: young, male, technologically savvy, bright, and mischievous” (Kocurek 2015, xviii–xix). It must be noted, however, that “young” here does not mean “young children.” Amusement arcades solely dedicated to video games were “a place that parents warned their kids to avoid because of perceptions about their clientele and sometimes seedy locations” (Paul 2012, 39). Video games may have initially been for men and women, but they quickly shifted toward young men and women, then young men, and finally boys. Nevertheless, they were for big boys, old enough to have basic economic sense, handle quarters, read and follow instructions, see the screen and manipulate the controls at an upright cabinet, and be left unsupervised in a public space for longer than would be acceptable for young children. Video games were not for kids, and they were not toys, but rather an introduction to the computing processes of future technology, which the next generation of American workers would need (Kocurek 2015, chapter 1).

Arcades remained the prime revenue driver for the video game industry well after home video games emerged, and especially so through the crash of 1983 and beyond. Kubey writes in 1982, “In the United States alone, consumers spend more on video games—about $9 billion a year, including some $8 billion for coin-op and $1 billion for home games—than on any other form of entertainment, including movies and records” (Kubey 1982, xiv). Much of that money was spent by young adults with disposable income and a taste for social entertainment, in the tradition of pinball parlors and bowling alleys. Home video game systems, as they were developed and sold, were from the beginning marketed to capitalize on the idea of “entertainment for the whole family” (Williams 2006, 197–199). Although some Atari 2600 games appealed to children, or were even specifically developed for them (such as Kool-Aid Man), children were not the primary consumers targeted by the firm.

This is evident when looking at the firm’s advertisements. One of the earliest television commercials, dated December 17, 1977, by a YouTube uploader,3 showcases the 2600 (or rather the VCS, as it was known at the time) and Combat. Everyone seen in the commercial appears to be in their mid-30s and up; white hair is seen in the background crowd, and everyone is wearing a suit and tie. A compilation of Atari commercials, assembled by YouTube user memphiselle14 and lasting more than 30 minutes, provides further illustration; its first few minutes can be described as a series of brief flashes. A boy and his mother play Asteroids together. Three men and three children play a variety of games. A man does business on his Atari home computer. A boy types in musical notation as the voice-over claims it is “simple enough for your child to use.” A family is gathered around their TV playing Atari, while the father explains how it provides good home entertainment. In one of the most widely circulated commercials, a crescendo of people gather around the console, watching excitedly as more and more games are shown, starting with two boys and progressively adding their father, sister and mother, aunt and uncle, grandparents, and eventually a policeman and a pizza delivery guy. Children clearly belong in this marketing campaign, but they usually appear as part of the family unit and in service of social entertainment.

Outside the traditional promotional channels of advertisement was the Atari Club, which sent Club members the Atari Age magazine, a publication that kept them informed about all things Atari. This magazine is clearly meant for adults. Atari Age, vol. 1, no. 1 (May/June 1982, 2), starts off with a “celebrity corner” mock interview with Pac-Man, who reveals he had a “well-rounded education” and “graduated sphera cum laude,” before doing stunt work in an enzyme detergent commercial. Baseball jokes ensue. On page 5, an article is titled “From Abu Dhabi to Venezuela, the World Plays Atari Games!” and discusses the recent South African Atari Tournament, an Atari Robot’s demonstration success in Puerto Rico, and the world Asteroids championships recently held in Washington, DC. The Atari News, starting on page 6, is formatted after traditional newspapers and explains “what’s an EPROM”—a nice case of technoliteracy, as seen in chapter 3—as well as presenting the Atari Computer Camps for “campers 10 to 18 years old”; “your child could be one of them!”, the subtext seems to be whispering as we read. There’s even a reprint of an Atari press release, starting thus: “Reinforcing its leadership position in offering cartridge versions of hit coin video games, Atari has signed an exclusive agreement with Centuri, Inc., for the rights to adapt current and future games created by Centuri, a leading American manufacturer of arcade games.” This is dry enough to fit in The Economist rather than in any magazine aimed at children. Comparing this magazine’s writing to that of Nintendo Power (from chapter 3) brings ample evidence that children figured only peripherally in Atari’s market positioning.

Nintendo’s decision to market the NES to children in America is an important event in video game history, as it created a major generational divide. Kids born from the mid-1970s to the mid-1980s overwhelmingly took to Nintendo’s console, to the extent that Provenzo named them “Nintendo Kids” (and Kline, Dyer-Witheford, and de Peuter, the “Nintendo generation”). As we will see, console video games in North America more or less followed that generation with increasingly mature games. A little more polemically, Sheff affirmed in his 1993 book’s title that Nintendo “enslaved your children.” Although not all games published on Nintendo’s NES were made by the same teams or firms, they all shared a certain feel of unity because of Nintendo’s heavy regulations on content, so that a certain cultural “flair” could be made out (a “Nintendo ethos” I will describe a little later). People born earlier who had played video games during the 1970s and early 1980s either had to play Nintendo and become big kids again (often experiencing social stigma) or seek refuge in the pastures of PC gaming, which could appear as spike-filled technical pits to the uninitiated. Many of them simply stopped playing video games, and a quiet rift started separating the digital play of the Nintendo generation from that of their parents, guardians, or elder siblings:

One cohort effect is relatively easy to isolate: the generations that ignored video games in the late 1970s and early 1980s have continued to stay away. Those who played and stopped rarely returned; by 1984, baby boomers had dramatically decreased their play, probably because of the powerful social messages they were suddenly getting about the shame and deviancy of adult gaming. (Williams 2006, 205)

This generational divide is, in my opinion, obfuscated by the expression classically found in video game histories that “Nintendo resurrected the North American market” (Kline, Dyer-Witheford, and de Peuter 2003, 110; Williams 2006, 199; Harris 2014, 59; etc.). This may be true in the sense that Nintendo brought a financially sustainable market and model to the industry, as there had been before the crash and Nintendo’s arrival. But “resurrection” has too many implications of continuity. The differences in economic and marketing models (see chapters 1 and 2), to say nothing of the cultural definition, role, and impact of video games under Nintendo’s approach, are too profound to speak of a “resurrection” or “rebirth.” Rather, we should think of Nintendo’s North American arrival as the starting point of something new, a Second Coming after the Video Game Apocalypse of 1983–1984 (or so would the biblically themed periodization have it). This is the start of a distinct period in a cultural history of video games, a period that is larger than the NES, although it was born from it: the American Video Game ReNESsance.

The American Video Game ReNESsance

The ReNESsance is a regionally specific cultural period that designates the North American home video game market’s redefinition following Nintendo’s success with the NES after the Crash of 1983. Although “American” in its name and origin, its influence rippled across the larger world. I define it as a period during which the dominant social image of video games equated them with children’s entertainment. Figure 6.1 charts the presence and strength of the period and identifies the four phases that shape it according to certain key events.


Figure 6.1 The American Video Game ReNESsance and its four phases: Appearance (1985), Rise and Apex (1985–1989), Decline (1989–1993), and Resistance (1993–1996).

As a historical period, the European Renaissance is typically characterized positively as a return to the culture and philosophy of antiquity and a flourishing of the fine arts. Moreover, it is often envisioned as a transition toward the Age of Enlightenment with the likes of Spinoza, Voltaire, Hume, Newton, and the Scientific Revolution. The ReNESsance I am describing here has none of these implications. On the contrary, it is built on conservative commercial policies, restrictive licensing and partnership deals, and a top-down, highly hierarchical and authoritarian structuring of the video game industry (as we have seen in chapters 1 and 2). Although the term appears to be positive on the surface, in actuality we are as far away as possible from the strongly positive connotations of the Renaissance and its ideals. This contradiction is conscious wordplay meant to replicate Nintendo’s own two-faced stance across the business-to-consumer and business-to-business spheres. For Nintendo to pull on the velvet glove and seduce consumers required that third-party developers be dealt with by an iron hand.

Phases 1 and 2—Appearance, Rise, and Apex

The ReNESsance was foreshadowed by toy manufacturer Mattel’s 1980 entry into the video game market, which put the Intellivision in its catalog of toys. Nintendo cemented the idea of the ReNESsance with the release and marketing of the NES in 1985 and 1986. The cultural movement progressively rose with the popularity of the console until it reached its apex in 1987 and 1988, when the NES became the most popular toy and the United States was hit by “Nintendo mania”: “In the U.S., ‘playing Nintendo’ replaced ‘playing Atari’ as the linguistic metonym for playing any videogame, not just software exclusive to Nintendo’s console” (Altice 2015, 160).

Although Nintendo reached its apex in no small part thanks to the marketing and technological lock-in mechanisms that coerced developers and publishers, its stringent “content guidelines” played a role in cementing the then-unknown firm’s brand reputation and forced all third-party games to conform to a shared “Nintendo ethos.” In addition to Nintendo testing and approving every game developed by licensees for bugs or operational flaws, all games were prohibited from featuring the following: sexually suggestive or explicit content; sexist language or depictions; random, gratuitous, and/or excessive violence; graphic illustration of death; domestic violence and/or abuse; excessive force in a sports game; ethnic, religious, nationalistic, or sexual stereotypes and symbols; profanity, obscenity, offensive language, and gestures; use or depiction of alcohol, smoking, and illegal drugs; and subliminal political messages or overt political statements (McCullough n.d., compiled from Schwartz and Schwartz 1991).

Much has been written about the effects of these content policies on third-party games and developers (Altice 2015; Arsenault 2012; Crockford 1993) and the absurd cases of censorship they led to—nude art sculptures covered up or entirely removed from games, crosses removed from gravestones, and so on (McCullough n.d.). I won’t go over them here yet again, except to note that these policies proved necessary in accomplishing the NES’s mission of seducing children while reassuring parents. The Nintendo ethos, broadly, revolved around an “epish” treatment of narrative (equal parts epic and childish), which has traditionally been described as indulging in power fantasies (Therrien 2014), and was visually encoded in vibrant, colorful graphics that favored a cartoonish visual style informed by Japanese anime and manga aesthetics (Picard 2008), partly because of the Famicom and NES’s technical affordances.

Translation and localization issues and hiccups made it customary for players to puzzle out important game clues from messages that were almost impossible to decipher. Through Nintendo games, children were also exposed to some elements of Japanese philosophy (honor, tradition, etc.), as well as some unique new discourses. Sheff contrasted Disney’s Mickey Mouse message (“We play fair and we work hard and we’re in harmony…”) with Mario’s new values: “Kill or be killed. Time is running out. You are on your own” (Sheff 1993, 10). Messages aside, more children recognized Super Mario than Mickey Mouse (Sheff 1993, 9).

Phase 3—Decline: Genesis Does What Nintendon’t

The problem with targeting a “Nintendo Generation” of 6- to 14-year-olds, Nintendo soon found out, is that kids grow up pretty fast, and their idols are bound to change just as quickly. The Nintendo ethos was contested and ridiculed by Sega when it targeted teenagers with its Genesis promotions in 1989, precipitating the ReNESsance into its phase of decline. Sega’s “edgy” promotional campaigns garnered attention and defined its personality through at least two tactics: aggressive comparative publicity campaigns, such as the now-iconic “Genesis Does What Nintendon’t” advertisements, and the “Sega shout” signature, a half-shouted, half-shrieked “Sega!” rather than a calm but firm pronunciation of the name. It screamed rebellion with an edgy and cool style.

Edgy, cool, fast, wacky, bizarre, rebellious, trippy: these could all describe Sega’s promotional signature. As the firm gained market share with games such as Altered Beast, insiders of video game culture knew, or would soon come to know, that games could deal with mature subject matter (and had sometimes done so for years, especially on the PC). I call this shift the “Teen Spirit” in reference to the grunge movement that heavily defined the early 1990s in U.S. popular music and culture, and more specifically its origin point, Nirvana’s 1991 hit song “Smells Like Teen Spirit.” That was Sega’s take on the spoony bards of Nintendo culture.5

Sega did not invent the edgy push for game marketing, of course. In the early 1980s, Atari had produced four television commercials exclusively for airing on MTV, the same cultural demographic that Sony and Nintendo would target some 15 years later. A commercial for Pole Position showed a buttoned-up, bowtie-wearing father driving his quiet and clean family around for a “Sunday drive” while an off-screen voice derided his social position as a “corporate executive.” The plans were derailed as the family was shaken from their car and dropped into race cars so they could “play Pole Position.” The rocking soundtrack, dizzying visuals, fast and disorienting action montage, and aftermath of the race showing the traumatized family members slowly walking while clutching car parts in shock and awe are all early embodiments of the rebellious rallying cry for a new kind of video game culture.

Sony’s PlayStation marketing in 1995 would inscribe itself in the trail Atari and Sega had blazed in the 1980s and 1990s and succeed in repositioning games for a wider range of audiences. In this respect, on the level of global video game culture, Sony merely gave the final push to a historical marketing arc that Atari had flashed and Sega had developed in marketing its Genesis to the “Teen Spirit.” Just as the Seattle alternative rock bands’ success had been co-opted by mainstream media and fashion industries that commercialized grunge culture, so would video games enter the spotlight with the “MTV Generation” and the PlayStation. In this light, the American Video Game ReNESsance temporarily put this movement on hold because of Nintendo’s regressive marketing to children.6

The ReNESsance’s influence declined due to Sega’s efficient marketing campaign, especially given Nintendo’s unconvincing attempts at responding to Sega’s attacks: Against the witty “Genesis Does What Nintendon’t,” all Nintendo could muster was “Nintendo Is What Genesisn’t.” Aside from Sega’s influence, a second factor contributed to the decline of the ReNESsance: the release of Tetris as the bundled title for Nintendo’s 1989 Game Boy. Sheff describes the surprising success the Game Boy had with adults:

Grown-ups flocked to Tetris too. Arakawa had predicted correctly; feedback from its customers told NOA that a third to a half of the Tetris players were adults, and Nintendo’s presence in the adult market increased to such a degree that almost half (46 percent) of the Game Boy players in the West were adults. (Sheff 1993, 217)

Although both of these factors undermined the association of video games with children and chipped away at the ReNESsance as a cultural period, that association still remained the dominant social image of video games in the 1990s due, in part, to Nintendo of America’s content guidelines. They had been established for the NES and were still in full force, but as games substantially grew in graphical fidelity and plot complexity, more and more knotty issues showed up. The censorship of religious themes and symbols affected Super Ghouls ’n Ghosts by replacing crosses on gravestones with ankhs and the demon Samael’s name with Sardius.

Capcom got off easy compared with Quintet for ActRaiser, a game whose plot revolves around the player being God and reclaiming the Earth from Satan. Apparently Nintendo of America was fine with the player controlling a delegate “angel” during the city-building phases, but God and Satan had to be renamed “The Master” and “Tanzra” (Tanzra also had his horns edited out). Another thing apparently fine with NoA was the possibility for the player in Blackthorne to fire his shotgun and kill slaves standing innocently (or, worse, chained to the wall) without consequence—but, crucially, without blood. The hemophobic Nintendo wasn’t policing games for ethical or moral ideas but simply for on-screen blood, gore, sex, and religious symbols, which is probably why it had Square alter Final Fantasy II so that Rosa, when captive, is threatened by a suspended wrecking ball rather than by a scythe as in the Japanese original. When Cecil manages to rescue her, they hug rather than kiss (presumably to reduce “sexuality,” which I personally find hilarious). Sprites for partially uncovered female enemies and characters in Final Fantasy III, like the bare-breasted statues in the halls of Super Castlevania IV, got wardrobe upgrades.

As dialogue got increasingly verbose with the rise of story-driven games, direct references to death and other sensitive issues were carefully avoided, with varying degrees of success. In Final Fantasy II, all dialogue bits that hinted at Cecil and Rosa sleeping together were edited out, just like the “Porno mag” item that could be found in the secret programmers’ room—proof, if any was needed, that Japanese games had not been defined as “kids’ games” to the extent they had been in America. In Final Fantasy III, the spell “Death” and the enemy “Death Gaze” were renamed “Doom” and “Doom Gaze”; the spell “holy” was renamed “pearl,” which didn’t help players understand the logic of opposite elements (as in Chrono Trigger, where Crono was keyed to the element “Lightning” in the North American version instead of “Heaven” in the Japanese version, which included both lightning and holy magic). Bars became cafés to avoid depicting alcohol. The inventory process could be endless—lists of alterations and pages dedicated to the topic can be found all over the Internet.7 More than these specific cases, however, what I wanted to illustrate was just how heavily Nintendo’s overbearing attitude hung over third-party licensees and stifled their creative aspirations. One game in particular was about to cause changes so deep that it would create a chasm—or rather a “khasm.” Before we get to that, we need context.

Interlude: CD-ROMs and FMV Games

Ever the technological stalwart of game culture, the PC had by the mid-1990s seen a great adoption rate of CD-ROM drives thanks to “killer apps” such as The 7th Guest, Myst, Star Wars: Rebel Assault, Wing Commander III: Heart of the Tiger, and Phantasmagoria. Interactive movies (also known as “Full-Motion Video” [FMV] games) were one of the newer up-and-coming genres that stood at the edge of current video games and provided a glimpse into what the “Future of Games” might look like. Magazines enthusiastically covered CD-ROM technology and the blending of games and cinema as the way of the future, in part because such a framing provided a road to the cultural legitimization of video games.8 Nintendo Power printed an article in April 1992 titled “Super NES Technology Update—CD-ROM” (Nintendo Power #35, April 1992, 70–71), in which it covered the 1992 Winter CES presentation of Nintendo and Philips’s SNES-CD add-on. Screenshots from The 7th Guest, the hit FMV PC game for which Nintendo reportedly spent $500,000 to obtain the rights, appeared in the magazine, as the game and system were demonstrated in a private showing. The FMV train was moving fast, and it looked like games might integrate into mainstream culture soon.

Star Wars: Rebel Assault featured original footage digitized from the Star Wars films, and its 1995 sequel, Star Wars: Rebel Assault 2: The Hidden Empire, marked the first time the Star Wars universe had seen new live-action footage since Return of the Jedi in 1983. FMV games would try to deploy the film industry’s “star power” as much as possible. Wing Commander III starred Mark Hamill as a starfighter pilot (this time sans lightsaber), with Malcolm McDowell and John Rhys-Davies as supporting cast. Tia Carrere could be seen in The Daedalus Encounter, and the David Duchovny/Gillian Anderson duo appeared in The X-Files Game. Games were on their way to something like “respectability” (i.e., cultural legitimization) thanks to these crossovers. Games were going mainstream (at least in this specific sector; in the larger home console market, the effect of the ReNESsance was still strong, and video games were seen as just that, games—and games for children, specifically).

Hollywood and Silicon Valley, it seemed, were destined to merge—a movement whose detractors were all too happy to prematurely christen “Silliwood.” One thing many gamers, reviewers, and magazine editors noticed was that going mainstream meant going simpler and blander. Their objections concerned the nature of the gameplay experience that CD-ROM technology afforded. The only thing you could do with film clips was start or stop them; once started, they would simply go on, and you would sit there without interacting. It had started in the arcades, where early LaserDisc games such as Dragon’s Lair in 1983 or Mad Dog McCree in 1990 impressed audiences but ultimately fell short on exciting gameplay. Watch a film clip and wait for the right moment, aim quickly and shoot, watch a film clip that acts as reward or punishment; wash, rinse, and repeat.

This basic template is ironically a system that Nintendo had used in a pioneering form of “interactive cinema” way back in 1974 with the electromechanical arcade machine Wild Gunman. A 16-mm projection apparatus would play a film scene on the screen, and when the gunman’s eyes flashed brightly, players had to draw the light gun from the holster and quickly shoot the target. Depending on their speed, one of two film clips would be switched on by the machine, showing the gunman either triumphing or dying. On a purely mechanical level, this simple branching system functioned exactly like a reflex-testing machine that lights up a button and asks the player to press it as quickly as possible, with a certain timed threshold resulting in failure. Of course, playing the game amounted to a much richer experience than simply tapping a button. Here we see a particular graphical regime, one that has since been deployed in quick time event scenes in modern games: the “timed trigger and reward.” The main reason people played these games was to enjoy the images being shown as the conflict or task to be accomplished was set up, and the corresponding reward after successfully accomplishing the task. While the images are noninteractive (the player simply has to do something by some timed point or fail), their presence is key to the game experience and, indeed, is the game’s raison d’être. The CD-ROM’s storage capacity and random access to data provided the technical key for these games to be made in the domestic space.
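To make the mechanics of this regime concrete, here is a minimal sketch of the “timed trigger and reward” loop in Python. It is an illustration only: the clip names, the 0.8-second reaction window, and the keyboard input source are hypothetical stand-ins, not a reconstruction of Wild Gunman’s actual electromechanical design.

```python
import time

REACTION_WINDOW = 0.8  # hypothetical success threshold, in seconds


def play_clip(name):
    # Stand-in for the 16-mm projector switching film reels
    # (or a modern engine streaming a prerecorded video file).
    print(f"[playing clip: {name}]")


def timed_trigger_and_reward(wait_for_input):
    """One round of the regime: noninteractive set-up clip,
    a timed cue, then one of two branch clips."""
    play_clip("gunman_walks_into_frame")  # set-up, no interaction
    print("The gunman's eyes flash!")     # the timed trigger
    start = time.monotonic()
    wait_for_input()                      # block until the player "fires"
    reaction = time.monotonic() - start
    if reaction <= REACTION_WINDOW:
        play_clip("gunman_dies")          # reward branch
    else:
        play_clip("gunman_triumphs")      # punishment branch


if __name__ == "__main__":
    # The Enter key stands in for drawing and firing the light gun.
    timed_trigger_and_reward(lambda: input("Press Enter to shoot: "))
```

The branch point is the only interactive moment; everything before and after is playback, which is precisely why critics found the formula thin.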

Answering the Call of Cinema and TV

Nintendo passed on the opportunities of FMV games, leaving the Philips CD-i Hotel Mario and the Zelda trilogy to die by the wayside, ideally with as little promotion as possible (see chapter 7). Nintendo did, however, partake in the Silliwood program through an ambitious experiment: making a film adaptation of Super Mario Bros. (1993). At first glance, the idea wasn’t so bad. Nintendo characters had graced the small screen through multiple animated series, including Captain N: The Game Master (1989), Super Mario World (1991), and The Adventures of Super Mario Bros. 3 (1990). These were all preceded by the Super Mario Bros. Super Show! (1989), which ran for three seasons and distinguished itself by alternating animation and live-action segments within each episode.

Naturally, the idea of having a live-action Mario jumping around was pretty quirky. Could Mario become live-action material? On the one hand, he was a human; on the other hand, he was the only human thing in the Mushroom Kingdom, a fantasy land rendered in cartoon form anyway. What would goombas, koopas, and Bowser even look like if Mario were an actor in a cap and overalls? The Super Mario Bros. Super Show! supplied as good an answer as any: Mario was Lou Albano, former wrestler and ring manager of Italian-American descent and fitting stature. The live-action segments would show Mario, Luigi, and various visiting celebrities in their Brooklyn plumbing business, depicting their past life in the real world before they took a warp pipe to the Mushroom Kingdom and lived their grand adventures (the latter told in the animated segments). This solution had the advantage of taking the filming completely out of the fairy-tale setting, thereby suspending any questions of accuracy between the live-action show and the games.

The live-action solution would not fly, however, in making the transition from the small screen to the silver screen and from 10-minute comedic skits to a full-blown narrative. The fan site smbmovie.com chronicles the film’s extensive solution-seeking work, which went through seven early script drafts by eight different writers in nine months. Production hit numerous roadblocks typical of the film industry: egos, filming schedules, misadjusted sets and props, and competing visions among creatives and financiers (Reeves 2013; Harris 2014, 317–323). Because producing a movie was squarely outside Nintendo’s creative capability range, the firm had been completely hands-off in the process. The disastrous result, a critical and box-office failure, no doubt reinforced the central Nintendo tenet of “never relinquish control.” Either Nintendo would jump into film production and produce its own movies or it wouldn’t have anything to do with them at all. Following its principles of staying lean in its software orientation, and facing the impossibility of laying a vertical hand over the filmmaking process as it had done with video game production, it quit.

In the end, Nintendo wouldn’t go to the movies, and movies wouldn’t come to Nintendo. However, some form of cinema found its way to a few third-party developers who pushed “cinematic” content on the SNES. If Nintendo couldn’t integrate cinema into its core, then it would paste it over the surface.

The Seeds of Moral Panics

Digitized graphics started appearing in home video games around 1990, following the early push of rotoscoping made famous by Jordan Mechner’s Karateka and Prince of Persia. Animation filmmakers had been using the rotoscope since the 1920s: the device was used to trace over previously filmed actors’ movements, allowing artists to replicate the lines of the silhouette and body exactly as they moved, on a frame-by-frame basis. Mechner had filmed his younger brother performing the basic motions needed for the game and had traced his silhouette for each frame in computer graphics. As a result, movement in Prince of Persia reached considerable fluidity and realism. The game’s success spawned a number of variations, including Another World, Flashback: The Quest for Identity, Blackthorne, Nosferatu, and Lester the Unlikely. These games would eventually be retroactively grouped into a subgenre: the “cinematic platformer” (note the name).

Digitization was about to push that logic further. The process was simple: Rather than having computer artists create graphics by filling in grids of pixels with colors, with one slightly different image for each frame of animation for each character, developers would shoot actors against bluescreen backgrounds, filming them or taking photographs, and digitize the picture frames one by one to make up a game’s sprites and animations. Once the digitized pictures were in, it made no difference for programming: Sprites were sprites, assemblages of colored pixels organized in a grid, no matter what their ultimate origin had been. Individual frames could be touched up and special effects integrated into the animation. This technique avoided the issues with interactivity that FMV games had bumped into. The dissolving of motion into individual frames brought the source material into the realm of animation, which made the pictures as malleable as standard computer graphics.
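To illustrate why “sprites were sprites” no matter their origin, here is a minimal sketch in Python using the Pillow imaging library. The file names, the 64 × 96 sprite size, the 16-color budget (echoing the small palettes of 16-bit hardware), and the crude bluescreen test are all assumptions made for the example, not any studio’s actual pipeline.

```python
from PIL import Image


def key_out_bluescreen(sprite, blue_threshold=128):
    """Make strongly blue pixels transparent, keying out the backdrop."""
    rgba = sprite.convert("RGBA")
    pixels = [
        (r, g, b, 0) if b > blue_threshold and b > r and b > g else (r, g, b, a)
        for (r, g, b, a) in rgba.getdata()
    ]
    rgba.putdata(pixels)
    return rgba


def frame_to_sprite(path, size=(64, 96), colors=16):
    """Reduce one digitized film frame to a small, palettized sprite.
    Past this point the program cannot tell it apart from hand-drawn
    pixel art: it is just indexed colors organized in a grid."""
    frame = Image.open(path).convert("RGB")
    sprite = frame.resize(size)              # shrink the actor to sprite scale
    sprite = sprite.quantize(colors=colors)  # clamp to a small palette
    return key_out_bluescreen(sprite)        # add transparency last


# Hypothetical usage over a filmed punch animation of eight frames:
# frames = [frame_to_sprite(f"punch_{i:02d}.png") for i in range(8)]
```

Once every frame has been reduced this way, the animation is just a list of ordinary sprites, as touch-up-friendly as anything drawn by hand.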

One of the earliest games to use the technique was Atari’s Pit-Fighter, released in arcades in 1990. Martial artists (and cheering spectators) had been filmed, the pictures digitized on computers and animated in the game. Atari’s poster (intended for arcade operators) claimed the game had been “Made entirely of DIGITALLY PROCESSED GRAPHICS for the ultimate in realism!” More interestingly, it claimed a relation of kinship with the seventh art: “Camera ‘zoom’ and side-to-side ‘pan’ for a more cinematic look!” The idea was simple but the execution tricky because the core competencies of video game artists and programmers typically did not cover the various areas of filmmaking expertise required: the obvious issues of camera framing and operation, but also costumes, sets, make-up, lighting, digital photography editing, and so on. Moreover, convincingly integrating the filmed characters into varied digital environments soon appeared all but impossible, especially because of lighting: Characters shot in bright light would appear to be in bright light anywhere in the game, even in environments that were pictured as darker.

However imperfect they were, digitized graphics unquestionably brought video games a step closer to the age-old quest for realism and achieved the literal technical benchmark of photorealism. That proved to be a step too close for some, who started paying closer attention to video games that showed “real people” bleeding and getting dismembered, their digital portraits accompanied by their digitized screams. “Welcome to the Next Level,” as Sega would say.

From ReNESsance to Resistance: The Mortal Kombat Tipping Point

Nintendo’s cultural fight with Sega—and with its own heritage to a degree—is best encapsulated in the Mortal Kombat fiasco. Mortal Kombat took the fighting game genre to new extremes—cranked it up to 11, in colloquial speech. It combined the photorealistic digitized graphics of Pit-Fighter with the supernatural, physics-defying special moves of Street Fighter II and smeared buckets of blood and gore over it all, in the tradition of some particularly violent arcade games such as Smash T.V.9

Porting the game to home consoles was financially inevitable but culturally problematic given the role game consoles played in many American homes as a supervised alternative to the disreputable arcades. Sega’s publishing philosophy was based on consumers’ freedom of choice, and so the company appeared more amenable to this kind of game. Faced with games of increasingly realistic violence, it created the Videogame Rating Council in 1993, a panel of psychology and media experts that would rate Sega games in one of three categories: GA for general audiences, MA-13 for “mature” gamers 13 years or older, and MA-17 for adults. Where would Mortal Kombat land? Well, if a game where digitized actors can rip out the heart or spinal cord of their opponents amid pools of blood doesn’t get the MA-17 rating, what could possibly justify it? However, the game had to get the MA-13 rating to sell to teenagers, Sega’s main target. A wily stratagem let Sega have its cake and eat it too: The Genesis version retained all the blood and gore of the highly violent arcade version, but only if the player entered a “blood code” in the menu. Nintendo’s SNES version was, for its part, irrevocably toned down, with characters losing gray “sweat” when hit instead of blood, and with similarly limited and less gory “fatality moves.”10
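Mechanically, such a code is nothing more than a comparison against a rolling buffer of recent button presses. The sketch below shows the general technique in Python; the famous Genesis sequence was A, B, A, C, A, B, B, but the implementation here is purely illustrative and not the actual game’s code.

```python
from collections import deque

BLOOD_CODE = ["A", "B", "A", "C", "A", "B", "B"]  # the famous Genesis sequence


class CodeListener:
    """Keeps a rolling buffer of button presses and flips a flag
    as soon as the buffer matches the target sequence."""

    def __init__(self, sequence):
        self.sequence = list(sequence)
        self.buffer = deque(maxlen=len(sequence))
        self.unlocked = False

    def on_button(self, button):
        self.buffer.append(button)  # oldest press falls off automatically
        if list(self.buffer) == self.sequence:
            self.unlocked = True    # gore assets enabled for this session


# Usage: feed the listener every press captured on the menu screen.
listener = CodeListener(BLOOD_CODE)
for press in ["B", "A", "B", "A", "C", "A", "B", "B"]:
    listener.on_button(press)
print(listener.unlocked)  # True: blood and fatalities restored
```

The elegance of the stratagem is that nothing ships disabled or hidden from ratings boards; the game simply never draws the gore unless the flag has been flipped.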

The SNES version of Mortal Kombat was largely derided and cemented Nintendo’s image as a “kids’ games” company, which played right into Sega’s marketing strategy. Nintendo wasn’t happy about this because by then it was already trying to shed its heritage from the American Video Game ReNESsance. Gamers sure weren’t happy about it either, as a reader letter from Mike Haney in the Super NES Buyer’s Guide from March 1993 (months before the game was even released) illustrates:

I have read that the mega-hot coin-op Mortal Kombat is going to be done by Acclaim for the Super NES. At first I thought that was great, but since the Big “N” has been known to insist that “excessive” blood and violence be removed from games for their systems, I don’t know if the game is really going to be that great. We have something in this country called “Freedom of choice.” It is our right to choose what we watch on TV, or what games we play. I for one won’t buy the game if it isn’t the best translation possible. And, since it is supposed to be 16 megabit, I would believe that there should be enough memory to include all the characters, moves and even the fatalities. Also, since it is to be a high memory game, I refuse to pay $90 for a game that won’t have the fatalities just because of Nintendo’s archaic “no violence” policy. (EGM #44, March 1993, 6)

These realities took on a wholly new dimension during the 1993–1994 U.S. congressional hearings on the video game industry and offensive content. At the initiative of Senators Herbert Kohl and Joseph Lieberman, the hearings examined games with disturbing content—chiefly, Mortal Kombat and Night Trap, with their digitized graphics and live-action filmed actors—to decide whether they should be banned or regulated. If the video game industry wasn’t willing to self-regulate its content, then Congress would pass bills or form a regulatory agency to do so.

The hearings and the creation of the Entertainment Software Rating Board (ESRB) are often discussed as part of the general history of video games, but they had a major impact on Nintendo, and on the cultural legacy of the NES and the ReNESsance more specifically. Nintendo claimed the moral high ground because it had always shut out all possibly controversial content from its platforms; its version of Mortal Kombat had been sanitized, after all. Nintendo’s approach was in truth already a mixed blessing: Although it preserved its image as a provider of “family-friendly” entertainment to the general public, it also alienated a large portion of its already maturing user base and limited the creative freedom of its third-party developers. Ultimately, Sega and Nintendo joined forces to propose a plan, which led to the creation of the ESRB.

This news was bad for Nintendo because it dismantled the beneficial effects of its strategy in the public sphere. Nintendo had always been identified (and identified itself) as the console of choice for families and young children; Nintendo was a trusted brand, with exhaustive content guidelines that purportedly protected children. Creating the ESRB leveled the playing field because Nintendo could no longer claim the moral high ground; any game for kids could appear on any platform, and with proper age classification, any game with mature content could appear on any platform without harming the platform owner’s reputation. The congressional hearings had crystallized in the public sphere what industry insiders had known for years: that video games were not kids’ stuff. In the following months, Nintendo began shedding its old skin. Seasons had come and gone, and the Nintendo Kids of yesteryear had grown and matured. They had fallen in with the wrong crowd, fallen prey to the bad influence of Sega. It was time to reclaim them.

Phase 4—Resistance: Rebellion in Dream Land

All the transitions between the phases of the American Video Game ReNESsance are fluid and imprecise to a degree, but the one between decline and resistance is particularly so. I would argue that the moment when Nintendo begins to fight against its own ethos marks the beginning of the resistance phase. It must be understood as two simultaneous processes: Nintendo resisting its own heritage, and the ReNESsance resisting and persisting in the public realm despite all attempts to move on, because public perceptions do not change overnight with new marketing campaigns and slogans. Nintendo’s first step was launching its “The Best Play Here” campaign (Elliott 1994), moving the marketing target from children between 6 and 14 years of age to an “MTV generation” of 9- to 24-year-olds (Wesley and Barczak 2010, 20). In the process, Nintendo took a page or two from Sega’s TV advertisement book. Taking a cue from the “Nintendo Is What Genesisn’t” failure, I’d be tempted to describe the attempt as “Nintentries What Genedid.”

A Super Metroid commercial shows this new direction. A scientist-looking young man in “geek chic” attire explains that Nintendo wanted to make sure its latest Metroid game was the best ever before releasing it. He explains this while reining in a menacing Doberman (“Killer”) who, with subdued barks and growls, attempts to chew the camera—us, in first-person, extreme, Sega Shout style. Locked into a playtest room, the dog barks as light comes out of the door’s window slit; an impressive rapid-sequence montage of gameplay against enormous bosses, explosions, and speed running illustrates the young man’s commentary, which accentuates the “24 megs” of content that make up “Nintendo’s biggest game ever.” He then opens the door to reveal that the menacing Doberman has become a frightened Chihuahua, before yelling, “Ship it!” The ad concludes with the tagline, “The best play here. Super Nintendo Entertainment System.”

Fighting fire with fire, Nintendo stepped it up in July 1994 by launching an important promotional campaign around its newfound slogan: “Play It Loud.” Teenagers could conceivably “rebel out” by playing Nintendo games at high volume, hence disturbing their parents’ tranquility. Conceivably, but embarrassingly, this is what good-boy bourgeois rebellion looked like. It was all the more hilariously ineffectual if gamers were engaging in this rebellious attitude while playing their Game Boy with headphones. In 1995, Nintendo remarketed its popular handheld console in a variety of colored cases, christened the Play It Loud! series. Although the innards were the same (and should not be confused with the 1998 Game Boy Color hardware), the marketing embraced attitude—or rather wanted to. Nintendo Power ran an ad with cool dudes (mysterious shades, black hair, yellow sunglasses, green punk hair) and gals (bold and flashy redhead, young woman with shaved head) (Nintendo Power #72, May 1995, 86–87). Surprisingly (and tellingly), the ad was advertising a contest to design an ad for the Game Boy Play It Loud! series. This strategy is as good as any when you have no idea how to market a product to a certain demographic: ask them for ideas. By this point, it seems obvious that Nintendo had no idea what its demographics were, let alone how to speak to them.

Still, the tone and target audience shifted. Many advertisements went into gross-out marketing—a full two-page spread opened many issues of Nintendo Power by showing a huge jar full of toenail clippings. Sometimes it was the iron stare of a grandmother handing out a huge platter of meatloaf, with plenty of texture details thrown in the reader’s face. Nintendo, through its advertisements and games such as Killer Instinct, was combating and resisting the ReNESsance’s legacy, which persisted among entire groups of the general public for whom video games were still kids’ toys.

One of the best illustrations of Nintendo’s newfound coolness can be seen in Donkey Kong Country, a rhetorical incarnation of Nintendo’s will to renew its corporate identity by playing on “edgier” ground while enforcing conservative gameplay modes that capitalized on its own design expertise and history, in line with gamers’ expectations. It all started before the game was even released. When Nintendo Power subscribers received their February 1994 issue of the magazine, they also got treated to a VHS tape titled “Donkey Kong Country Exposed.” It started with a serious-looking “WARNING: The video you are about to see contains scenes of a graphic and animal nature. Anyone who may be offended by such material should leave the room now.” The video then opened with a flash-cut montage of teenagers talking, wearing backward baseball caps, athletic jerseys, or earrings. They went live-reporting behind the scenes for a look at the game, meeting developers who explained the technology. Their discussions ended with them agreeing that “it’s a game that’s ahead of its time,” before a montage of in-game footage and playing teenagers was shown, with the “Play It Loud!” slogan presented to a suitably rocking rhythm of distorted guitars.

It’s all in there: teenagers instead of kids, an edgy look and promotion technique, and attitude conveyed through the slogan and music; everything is set to break away from the “old” image of Nintendo. The game Donkey Kong Country did exactly the same in its introduction screen. An old bearded monkey played a gramophone record—that’s Donkey Kong, the star of the old, quasi-mythical game—before getting promptly ejected by the new, hip, and cool Donkey Kong, who barged in with his stereo player and danced to a new, rocking soundtrack. We could almost hear him say to us, “Play It Loud!” Two of the cutest game heroes or franchises are emblematic of this “Teen Spirit” makeover. Kirby, the quintessential representative of Japan’s kawaii aesthetics,11 went from his traditional smiley, pink, cloud-like marshmallow self to a mean-looking thug (well, as thuggish as a pink puff can possibly be), taking a mug shot at the Metro Police Department with stubble, bandage, angry eyes, and a frown. “He used to be such a good boy,” the title reads, before the text goes on:

Sad. One day you’re cute ‘n cuddly. The next, you’re burying your opponents and spitting on your enemies. Who’s to blame? Bad parenting? One too many sitcoms? Either way, the mutant marshmallow is now on 16-bit in two games. […] Yes, His Flabbiness is back in two new games for SNES. And this time he’s here to separate the men from the cream puffs. (Nintendo of America 1995a)

Baby Mario, co-starring in Super Mario World 2: Yoshi’s Island, also got the Play It Loud! treatment. A two-page ad paints the baby as “outta control,” advising players to “put on a fresh diaper.” Pictured is baby Mario’s nursery, wallpaper torn off, eggs smashed all over, window broken, and underwear drawer half-ripped open. A nice touch of Sega-esque competitor-denigrating marketing accompanies a screenshot: “Kicking, shrieking, crying, tantrums…and that’s just the guys who bought new systems” (Nintendo of America 1995b).

After years of overbearing control and “content guidelines,” mature games finally got their place at the Nintendo table in 1995. The extreme violence of Mortal Kombat II came to the Super NES wholly unaltered, and id Software could finally bring Doom over for dinner, with blood and gore intact—and even a blood-red cartridge. Nintendo did more than simply open the gates, however: It took a game under development at Rare (Nintendo’s trusted second-party developer) and made it into its next poster franchise: Killer Instinct. No more confusion; Nintendo had found something to prove its “street cred”: that it really wasn’t a kids’ games company anymore. Even the black cartridge said so—now this was different.

The new games fed into a renewed marketing circuit in obvious and somewhat entertaining ways. Examining the cover of Nintendo’s Spring 1996 catalog of “Super Power Supplies” reveals some of the different products (clothes, magazines, keychains, watches, etc.) floating around a brain that has been partitioned between hit Nintendo games, all in psychedelic colors and style. Uncharacteristically for Nintendo, Mario is nowhere to be seen, at first. It turns out that he is there after all, only he’s tucked away in the corner of the faceplate of a Yoshi’s Island watch, itself appearing in a small bubble floating around. That’s quite the demotion for a character that, just a few years earlier, was touted as being more famous than Mickey Mouse among children. (“How art thou fallen from heaven, O Mario!” the biblical analogy would now go.) Flipping the catalog open does not reveal the traditional assortment of Mario pajamas, bedsheets, or lunchboxes either, but rather the Killer Instinct products page. The first item listed is a “KI Motorcycle Jacket.” “From Nintendo Kids to Nintendo Bikers” would have made a compelling headline.

After the Fall: The Nintendo Dark Age

Nintendo would rely on Rare to continue its soul-searching over the following years with the Nintendo 64. GoldenEye 007 and Perfect Dark, a pair of celebrated shooters, were soon one-upped by Conker’s Bad Fur Day, a twisted, irreverent take on cutesy cartoon characters gone haywire with guns, gore, profanity, alcohol, and scatological scenes, along with numerous popular culture and movie references to The Matrix and Saving Private Ryan, among others. It wasn’t enough for Rare to put the ESRB Mature rating on the game box; an additional label at the bottom read: “ADVISORY: THIS GAME IS NOT FOR ANYONE UNDER AGE 17.” If Nintendo had ever allowed a licensee to publish a game meant for a mature audience on the NES in 1989 or the Super NES in 1994, it would have insisted on having this label to clearly dissuade parents from buying the game for their children. But that was not the case in 2001 with Conker. Here, the label is meant to intrigue the mature consumer—the 8- to 14-year-old Nintendo Kid, now turned twenty-something—into checking out what the big deal is about this edgy game, not unlike the advisory for explicit lyrics found on music albums in the hip-hop, punk, hard rock, and heavy metal genres.

In hindsight, there’s a clear way to frame these discursive me-toos: “Ninten’s Stuck Where Genewas.” Nintendo waded alone in the murky waters of its Dark Age, as we’ll see in the next chapter, with the GameCube years exacerbating the firm’s will to break away from its image as a “kiddie” games provider. The most telling sign of this can be found in the platform’s game library, with violent and horror games appearing in much larger numbers: a 2002 remake of Resident Evil, Resident Evil 0, Resident Evil 4, Hunter: The Reckoning, and Killer7 were all good indicators, but none as much as Nintendo’s publishing of its first-ever M-rated title, the horror game Eternal Darkness: Sanity’s Requiem (Silicon Knights 2002). Eventually, Nintendo managed to “wake up” and returned to its roots with the Wii. “Entertainment for the whole family, together around the Wii,” the ghosts of Famicom marketing whispered. The ReNESsance had been progressively phased out with the Dark Age, and finally a new day lay ahead: Nintendo’s Wiivival.

Notes