“If my parents won’t get me the new deck,” Justin says, “I’ll probably sell my old deck and games to get the money to buy the new one. […] I’ll be kind of sad to see the old stuff go,” he says, “but the way I look at it is, I’m going to have the same thing back again, only better.” (Guinn 1991)
The Nintendo Economic System proved hugely profitable to Nintendo during the Famicom boom in Japan and Nintendo Mania in America, but it was about to be tested. At the dawn of the 1990s, NEC’s PC-Engine was seeing a measure of success in Japan, and Sega’s Genesis console was also off to a good start in America. Although nothing stellar in themselves, these console sales carried the risk of attracting game developers who could produce quality games for these rivals, build up an increasingly interesting game library, and eventually command a market share that could spiral out of Nintendo’s control and tear down its wall brick by brick. Facing competition from NEC and Sega, Nintendo had to respond quickly. It did so by announcing a new, more powerful console to come: the Super Famicom. Ta-dah! Everything people loved about the Famicom (or the NES) would still be there, only better, or so the name hinted.
This chapter details the context in which the Super Famicom, and the eventual Super NES, were conceived and marketed, and the critical process of launching a video game console, a particular case of technology adoption whose importance cannot be overstated. This discussion will lead us to a new understanding of what a platform may be for consumers and how the Super NES managed to take off in America despite the challenges Nintendo faced.
Content with the Famicom’s success, in 1987 Nintendo had no plans to issue a follow-up system and lose its lucrative revenue stream from game sales. Expansion would come from within the garden’s walls by releasing the Famicom/NES in Europe and South America. On the development side, the two hardware research and development teams were at work on different projects. Masayuki Uemura’s R&D2 team would extend the garden with the Famicom Modem, to be released in 1988. The R&D1 team, headed by Gunpei Yokoi, had completed a handheld prototype that would eventually be released as the Game Boy in 1989 to conquer another garden, that of handheld games.
Whereas Nintendo’s strategy had allowed it to take over the Japanese and American home game markets, its technological, corporate, legal, and cultural walls made future expansion difficult to achieve. As Sheff notes, “Nintendo had reached relative saturation of its largest group of buyers, households with young boys” (Sheff 1993, 233). Rival firms set their sights further, and soon there were “barbarians at the gates” (Harris 2014, 390) massing just outside Nintendo’s precious garden, building arks and ships that would take them across the seas, across the chasms of new technology, where the greener pastures of new markets sprawled—fields ripe for newer, bigger gardens. When Nintendo squinted hard at the horizon, it realized some of those barbarians were its own guests, leaving its garden.
Hudson Soft, the first of Nintendo’s licensed third-party game developers (of Bomberman and Adventure Island fame), had approached Nintendo with a proposal for expanding the Famicom’s graphical capabilities through a custom graphics chip (Gorges 2011, 65). Nintendo had refused, prompting Hudson to seek out alternatives to the increasingly tight technological quarters the Famicom provided. It found a partner in the Nippon Electric Company (NEC), a general manufacturer of electrical goods and computing equipment that can be described as the Japanese equivalent of the American Telephone & Telegraph Company, AT&T. NEC’s technological expertise was matched by a strong advantage in resources: it was one of the world leaders in semiconductors.
When NEC and Hudson announced their jointly developed console in 1987, the PC-Engine, Nintendo was caught flat-footed. NEC’s manufacturing and suppliers, combined with Hudson’s graphical technology and game-making expertise, made for a vertically integrated powerhouse in the industry. Together, they would bring to market a technologically superior hardware-and-software proposition, with a CD-ROM drive to boot—a world first! In the words of Florent Gorges, Nintendo “freaked out” and announced it was developing a new console—a lie meant to diminish the effect of the PC-Engine’s release. In fact, it had nothing in the works because it had intended to keep riding on the success of the Famicom (Gorges et al. 2009).
The PC-Engine was a big success, outselling the Famicom in Japan in 1988 and slowly being imported by some dedicated U.S. gamers. While Nintendo was busy working on a follow-up to the Famicom, Sega released the 16-bit Mega Drive in October 1988 but failed to make any significant headway in the Japanese market. Sega, however, had its sights set on the US market and proceeded quickly: it announced a North American release for January 1989, later revised to August of that year.
Nintendo’s long turnaround gave Sega plenty of time to enter the North American market. The one thing necessary for the launch to succeed was for the system to have great games. To achieve this goal, Sega of America would capitalize on home conversions of its arcade hits and endorsements from sports celebrities. However, that wouldn’t be enough to compete against the entrenched NES, so Sega had drafted a licensing agreement that was a bit more flexible than Nintendo’s while still revolving around high licensing fees and control over cartridge manufacturing. It soon appeared all but impossible to enlist third-party licensees, however, because all NES developers had agreed to an exclusivity contract under Nintendo’s license agreement. Soon the American firm Electronic Arts confronted Sega: it had found a way to bypass Sega’s security measures and was able to make Genesis games without any license. With such leverage, EA negotiated a more favorable deal with Sega of America. It turned out to be a boon to Sega of America because EA contributed to the early success of the Genesis with games such as John Madden Football.
Learning from the incident, Sega eased some of the conditions of its licensing agreement and successfully poached developers and publishers who had been working on Nintendo’s NES. Like its software library, sales of the console picked up at a steady rate—until the release of Sonic the Hedgehog, which Sega bundled with the Genesis in 1991. This move was, as many historical chronicles pun, “Sega’s Sonic Boom,” and it cut into the Super NES’s entry into the market.
Despite the Genesis’s success, however, Sega hadn’t struck Nintendo-level gold. Because it entered a healthy, occupied market with a strong leader, Sega needed every bit of leverage to maximize its first-mover advantage and followed the first principle of orthodox video game marketing: gain market share at any cost. Sega settled on a “classical” razor-and-blades strategy, which was actually the second-phase, post-patent Gillette strategy, as we saw in chapter 1. That gamble was much riskier than Nintendo’s “phase 1” Gillette model, which, like Atari’s, had it making money on hardware as well as software. Sega would subsidize the hardware and give the razors away to inflate the adoption rate. It would go even further in its expansion strategy and give away its best blade, Sonic the Hedgehog, with the razor. Desperate times called for desperate measures. Although the strategy did gain Sega a large share of the American market, it didn’t translate into heaps of gold: Nintendo’s total net income for all markets, from 1992 to 1996, varied between twice and 12 times Sega’s (Grant 2003, 230–231). The focus on market share in discussions of the video game industry, especially for the 16-bit console wars, too often hides such financial realities. For all its market share gains, Sega would never get to swim in a pool of gold like its rival.
When Nintendo caught wind of Hudson and NEC’s unholy alliance and upcoming PC-Engine, set for release on October 30, 1987, it knew something had to be done. It was time for the empire to strike back, but the empire had nothing up its sleeve and nothing to do besides pray that its network of fans, consumers, partners, and imperial subjects would stay faithful and not be swayed by other higher forces. Nintendo did more than pray: it started to actively preach about its own Second Coming.
Nintendo summoned the press on September 8 for a shocking announcement: a new 16-bit console, the Super Famicom, would go on sale the following year (Audureau et al. 2013, 12–13). Behind Nintendo’s closed doors, no such console was in the works; the announcement’s purpose was to lessen the impact and media coverage of NEC’s market entry. Development work on the Super Famicom thus began under Uemura’s lead, and press announcements followed regularly over the next months.
Chris Covell’s website offers two pages titled “Japanese Secrets!” where he summarizes the Japanese press’ coverage of the Super Famicom from its first announcement to its release. The reports first focused on backward compatibility with the Famicom (September 1987) and then on a trade-in program instead (November 1987). Later on, the impending release became the focus instead, with a 1988 release announced in July of that year, then in December a revised launch window of 1989; when July 1989 came, the press informed the public that another year of delay was expected. This was quite a wait for eager Nintendo consumers but within expectations for a game console developed from scratch.
Knowing that the Super Famicom was, in fact, a rushed response to competitors rather than a carefully planned project provides an interesting lens for examining some peculiarities of its development and marketing. The first of these is how the Super Famicom’s presentation through press announcements revolved around the display (or discourse) of technological supremacy “for the first time in Nintendo’s young history” (Audureau et al. 2013, 13). Such a framing of hardware by Nintendo should be noted because it runs against the corporation’s habit of presenting virtually every one of its new consoles as an extension or application of Gunpei Yokoi’s philosophy of “lateral thinking with seasoned technology.” Nintendo’s promotional discourse usually downplays the importance of technological performance and argues instead for game quality, fun factor, or game design experience. The Super Famicom and its follow-up, the Nintendo 64, were both uncharacteristically framed as consoles with “more power” than the competition. For instance, Famicom Hissyoubon’s report following the SFC’s first demonstration to the press in 1988 frames the official announcement as “high performance beyond imagination” (Covell, “Super Famicom: December 1988”).
The reason the Japanese press focused on technical specifications and technological arguments when covering the Super Famicom is simply that this is what Nintendo focused on in its handouts to the press at the November 21, 1988, conference (reproduced in the December 16 issue of Famicom Hissyoubon). The Japanese press’ contents trickled down into the US press, so the technological discourse has mostly been relayed uncritically, as I show later in chapter 3.1 So why did Nintendo forsake its Yokoi principle of lateral thinking with seasoned technology and engage in the arms race for technological power? The simplest explanation is that there was no time to think in the limited time frame Nintendo had to conceive, design, and develop the Super Famicom.
In this light, the next console’s goal is to be just like the Famicom, only “super,” whatever that means. The North American marketing line can be understood entirely differently with this context in mind too. Instead of “Now you’re playing with power … super power!” with the ellipsis marking a dramatic pause to create an impactful punchline, we can understand the ellipsis as a marker of uncertainty while the speaker is looking for something more detailed, or more impressive to say, before quickly settling on a vague and ultimately empty epithet: “Now you’re playing with power, … ummm… super power!”
Understood in this way, then, the Super NES looks like a more-of-the-same, half-hearted effort at making something “new”; it is in fact a conservative console, a souped-up Famicom—a “Famicom 2,” as it was first known internally at Nintendo (Audureau et al. 2013, 11). As I presented in the introduction to this book, it is no coincidence that various publications discussed it as an “upgraded NES” (EGM #2, May 1989, 32). It’s not just a matter of miscomprehension: during its early development, people believed it would be (and Nintendo tried to make it) backward-compatible in some way with the Famicom/NES. B-Young Age reported on November 23, 1987, that Nintendo would take customers’ old Famicom systems in a trade-in program; a year later, on December 16, 1988, Famicom Hissyoubon discussed the same idea, with Nintendo’s handouts to the press indicating that a trade-in program was “being considered” (Covell, “Super Famicom: December 1988”). In the end, the program never materialized.
Still, Nintendo treated the Super Famicom as a Famicom upgrade, most notably in its initial marketing. During the first demonstration to the press on November 21, 1988, Nintendo presented a prototype SFC that could be hooked up to the soon-to-be-released “Famicom Adaptor,” a regular Famicom with a built-in audiovisual output. When connected to the Super Famicom’s audiovisual input, the Famicom Adaptor’s signal would pass through the Super Famicom and into the TV, allowing both consoles to be plugged into the TV at the same time (and eerily foreshadowing Sega’s future Sega-CD and 32X add-ons to the Genesis). The final and most telling sign of this “Famicom upgrade” mentality is that on release day, the Super Famicom was sold in Japan with two controllers and nothing else. By nothing else, I mean no A/C adapter or audiovisual cables. Why? Because the Super Famicom could use the Famicom’s, and Nintendo considered that just about anyone who would want an SFC already owned a Famicom. That a consumer product could ship without the cable needed to power it speaks volumes about the manufacturer’s noncommitment to pursuing a new, expanded audience. Nintendo’s stance in commercializing the Super Famicom was clearly one of continuity, rooted in the mindset of a hardware upgrade; the console was intended as a retention tool for keeping Nintendo consumers in the firm’s lap.
In July 1989, Nintendo organized a press conference to announce that the Super Famicom would not be released for another year due to semiconductor shortages, with supplies tied up by ongoing NES production and the release of the Game Boy in April 1989 in Japan and July 1989 in America. As the next release date of August 1990 came and went and another target was set for November instead, a Super Famicom mania hit Japan, with more than 1.5 million pre-orders for the machine. Doubts were expressed about Nintendo’s capacity to fulfill these orders, and the Yakuza reportedly (or so Nintendo feared) took an interest in the console in hopes of selling it on the black market. Nintendo launched “Operation Midnight Shipping”: “On November 19 at midnight, less than 48 hours before the nationwide launch, a hundred heavyweight trucks went to the company’s secret warehouses, each carrying 3,000 Super Famicom destined for Japanese stores” (Audureau et al. 2013, 21; also covered in Sheff 1993, 232–233). The Super Famicom went on sale on a chaotic Wednesday. It was chaotic because the 300,000 shipped units could only cover 20% of pre-orders. A store owner reported having registered more than 1,500 pre-orders and receiving only 100 packages. Some independent and smaller stores decided not to open at all to avoid the ire of consumers—many of whom had taken a day off work to line up at stores, only to go home empty-handed. The disturbances were serious enough to spur the Japanese government to ask console manufacturers to release new hardware on weekends from then on.
An additional 300,000 consoles were put on sale the next week, which depleted Nintendo’s stocks for months—including the crucial Christmas period of 1990. This explains the lukewarm results published by Nintendo at the end of the fiscal year in March 1991: Super Famicom sales of only 600,000 consoles, compared with Sega’s 900,000 Mega Drives and NEC’s 1.3 million PC-Engines. This slow start dampened analysts’ expectations in the United States, which may have hurt the sales of the Super NES when it launched there later in 1991. Unfortunately, the difference between lukewarm sales and a sold-out, underdistributed product was lost on the American public, which could not fully appraise the situation. As we will see in chapter 3, this situation was compounded by the fact that the leading Nintendo magazine in the United States did not monitor the anticipation building up in Japan for the console.
From the retrospective vantage point of the dominant history of games, which claims that the Super NES beat the Sega Genesis after the advertisement-heavy rivalry between the two (ignoring NEC because of a U.S.-centric bias), it might be difficult to envision just how bad things looked for Nintendo at the time. Most analysts reasoned that Nintendo was in a difficult position and that Sega’s Genesis had taken over the market. After the Consumer Electronics Show (CES) of June 1991—two months before the American launch of the Super NES in August—Time magazine considered the possibility that Nintendo might crash and burn like Atari:
At best, say analysts, over the next five years Nintendo will sell about two-thirds as many of the new systems as it sold of the old. At worst, Nintendo could end up like Atari, which in the early 1980s tried to replace a wildly successful video-game player with one that was more powerful but incompatible. Atari ended up with a mountain of unsold game cartridges that got loaded onto dump trucks and used as landfill. (Elmer-DeWitt 1991, 75)
What the American public had in mind was that this new Super NES system would not be compatible with the NES games parents had purchased for their children over the years (Elmer-DeWitt 1991; Guinn 1991). Japanese consumers had been promised, told, or given to understand that there might be adapters or trade-in programs for their soon-to-be-obsolete Famicom, at launch, soon enough, or eventually. In America, one “rumor” reported by Electronic Gaming Monthly in October 1990 stated that “The American version of the Super Famicom will supposedly attach to the underside of the Nintendo Entertainment System through the expansion port.” But aside from this one odd mention, there were no discussions of backward compatibility. Most American consumers may have owned a variety of dedicated first-generation home video game consoles and then an Atari 2600 (and perhaps a home computer for productivity software, bought cheaply after the 1983 home computer price wars). This “Super Nintendo” was their first contact with the cyclical nature of video game consoles and the upgrade logic of noninteroperable successors—in other words, the lack of backward compatibility.
The Atari 2600 game library had been available both to the system’s competitors (the Intellivision and ColecoVision), in sideward compatibility, and to its successor, the Atari 5200 SuperSystem, in backward compatibility, through adapters. The introduction of the Super NES marked the first time that a platform in good health and with plenty of support was displaced by a newer, incompatible platform from the same firm. It soon became apparent to everybody that, contrary to other consumer electronics and entertainment industries, video game consoles were not a standard-based industry, or at least not exactly.
In chapter 1, I wrote that standard-based industries are usually viewed as playing a zero-sum game of “capture the market share.” This is because standards are usually seen as providing a way for consumers to do something, which is an adequate conception in many technology industries. Users are typically assumed to side with one standard to complete that activity and are not expected to adhere to multiple standards concurrently. For example, having a printer from one manufacturer ties the user to that standard of ink cartridges, and there is typically little reason to have a second printer from another manufacturer, at least in the home consumer market (an office may use a laser printer to print black-and-white text reports in mass quantities and a color ink printer for occasional graphical elements, of course).
This idea of exclusive choosing is at the heart of a standard, to the point where the cost incurred from “breaking out” of the lock-in created by the standard is called a switching cost, implying a move from one standard to another instead of adopting one more and using all of them concurrently. Game consoles, however, do not work this way because each console yields access to a different library of games according to exclusive licensing agreements and “signature” products; the printer analogy breaks down because it is as if only Nintendo manufactured the red ink and only Sony the gray. Hence, many gamers will own more than one console because they want to play games published by both Sony Computer Entertainment and Nintendo, each of which will appear only on its corporation’s console, because noninteroperability is the name of the game. “Playing games” is not a valid category like “printing documents” (which means choosing between Canon or Epson products) or “listening to music” (which may entail choosing among cassettes, CDs, MP3 players, or smartphones). Platforms grant affordances or present resistances to various game types, genres, audiences, scopes, or publishers, directly or indirectly shaping distinct game libraries. Gamers don’t just choose to “play games” when they buy a console; they choose to “play Nintendo games” or “play Sony games,” or they choose to “play action games” or “play strategy games,” and so on.
The Super NES’s coming, like a forecast of dark clouds, carried a bad surprise for consumers: all that time, they thought they had been investing in “Nintendo games,” but in truth they had been choosing to “play 8-bit Nintendo games,” and now they would have to start anew with 16-bit Nintendo games—and a costly $199 SNES console to get aboard. They knew about switching costs, but now they had to face upgrading costs—a pill all the harder to swallow given that Sega was offering a backward compatibility module for the Genesis to play 8-bit Master System games (Schilling 2006, 78–79).
This situation wasn’t like printers and ink, VHS tape decks, or razors and blades (though VHS owners would face a similar situation when the DVD standard picked up). Following Picker (2010), there is a qualitative difference between the complementary goods for razors and game consoles. The razor hinges on disposable blades that are trashed after use, so that no going-forward value remains, leaving the consumer free to switch to another razor. On the contrary, abandoning a console means forfeiting an accrued library of games, whose value would otherwise remain for the player to keep enjoying. Because games are not a disposable commodity, the video game market is one where the accumulation of complementary goods (games) creates lasting value for the consumer that can quickly exceed the value of the primary good (the console). This can be measured through the tie ratio of a console, which compares the sales of a main good with the sales of complementary goods, expressing how many games per system have been sold to consumers. A console that sells for $299 with a tie ratio of 6:1 means that, on average, players owning that console have purchased six games. If those games cost $60 each, then on average, players have accrued a software library valued at $360 for their system, which they would have to forfeit if they switched to a rival.
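To spell out the arithmetic, here is a minimal sketch of the tie-ratio calculation; the $299 console, the 6:1 ratio, and the $60 game price are the illustrative figures from the text, not market data:

```python
# A minimal sketch of the tie-ratio arithmetic described above. All
# figures are the text's illustrative examples, not market data.

def accrued_library_value(tie_ratio, avg_game_price):
    """Average value of the game library a consumer forfeits by switching."""
    return tie_ratio * avg_game_price

console_price = 299.00  # price of the "base good"
tie_ratio = 6.0         # games sold per console, on average
game_price = 60.00      # average price of a complementary good

library_value = accrued_library_value(tie_ratio, game_price)
print(f"Accrued library value: ${library_value:.2f}")             # $360.00
print(f"Exceeds console price: {library_value > console_price}")  # True
```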
Having expensive, noninteroperable consoles has an important drawback: consumers tend to adopt a “wait-and-see” approach to ensure that the console will be adopted before gambling on it. This “hold-out” effect can feed into a vicious cycle and kill a platform’s chance of success in the marketplace: a lack of consumers buying the device makes game developers reluctant to develop games for the underadopted platform, which makes consumers wait further, and so on. This is the single biggest problem with selling game consoles, expensive technology products that depend almost entirely on complementary goods to convince consumers to spend their money upfront. This is why, in the games industry, the launch window of a new console is absolutely critical. More than anything, a successful launch relies on great games, a notion that merits further discussion before getting to the SNES’s launch in particular.
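Before moving on, the hold-out dynamic described above can be made concrete with a toy simulation; every parameter below (the thresholds, rates, and period count) is invented for illustration and carries no empirical weight:

```python
# A toy model of the "hold-out" vicious cycle: installed base and game
# library feed back into each other. All parameters are invented.

def simulate(install_base, games, periods=10):
    """install_base in thousands of consoles; games in available titles."""
    for _ in range(periods):
        # Developers commit only once the installed base looks viable...
        new_games = 0.05 * install_base if install_base >= 300 else 0
        # ...and held-out consumers buy only once the library justifies it.
        new_buyers = 10 * games if games >= 5 else 0
        games += new_games
        install_base += new_buyers
    return round(install_base), round(games)

# A weak launch lineup stalls the cycle; a stronger one ("hit" games
# at launch) tips it into a virtuous circle instead.
print(simulate(install_base=100, games=2))  # stuck at (100, 2)
print(simulate(install_base=100, games=8))  # takes off
```

Below its thresholds, the cycle never starts; above them, each new game attracts buyers and each new buyer attracts developers, the bandwagon effect in miniature.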
Although it is admittedly half provocation and half wishful thinking, the idea that consoles’ technical specifications could be useless is nevertheless productive—as long as we remember that this book asks, “What’s a platform to whom?” This will undoubtedly sound heretical to the economics-oriented literature on the video game industry. When Subramanian, Chai, and Mu (2011), for instance, discuss the collaborative and complementary competencies of Nintendo for the Wii, collaborative competencies exclusively cover the hardware and technology angle of their platform, delving into the interfirm relationships between Nintendo and Datel, Mitsumi Electric, Tabuchi Electric, Analog Devices Inc., and STMicroelectronics, which produce the wired LAN adapter, wireless LAN module and controller parts, AC adapter, and parts and technology for the Wii Remote. The complementary competencies are compared according to a checklist of primary features, determined to be CPU speed, GPU power, RAM, ROM, video resolution, sound channels, and storage media; and secondary features, namely online capabilities, connectivity, backward compatibility, and controller.
Richard Gretz and Suman Basuroy formulate complex economic calculations based on “console quality,” derived from “Graphics processing speed,” CPU speed, total RAM, and maximum program size of games designed for the console (Gretz and Basuroy 2013, table 4). As we’ll see in chapter 4, numbers can (and often do) lie when it comes to “power” or “quality.” For the evaluation of games, they assume consumer homogeneity to compute a “mean utility” value for each published game (287)—an assumption whose limits they recognize (297) but that ignores consumers’ preferences and the heterogeneity of game content, notably through game genres (Marchand and Hennig-Thurau 2013, 145).
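To give a concrete flavor of what such spec-based metrics look like, here is a toy composite “console quality” score. It illustrates the general approach only: the weights are arbitrary, the hardware figures rough approximations, and it is not Gretz and Basuroy’s actual model:

```python
# A toy composite "console quality" score of the kind critiqued here:
# weighted, normalized hardware specs. The weights are arbitrary and
# the spec figures rough; this is not Gretz and Basuroy's actual model.

SPECS = {  # (CPU clock in MHz, work RAM in KB, max cartridge in Mbit)
    "Super NES": (3.58, 128, 48),
    "Genesis":   (7.67, 64, 40),
}
WEIGHTS = (0.4, 0.3, 0.3)  # arbitrary importance of each feature

def quality_scores(specs):
    """Normalize each spec against the best in class, then weight and sum."""
    best = [max(vals[i] for vals in specs.values()) for i in range(3)]
    return {
        name: round(sum(w * v / b for w, v, b in zip(WEIGHTS, vals, best)), 3)
        for name, vals in specs.items()
    }

print(quality_scores(SPECS))  # {'Super NES': 0.787, 'Genesis': 0.8}
```

With these particular weights, the Genesis edges out the Super NES; shift the weight toward RAM and the ranking flips, a small demonstration of how much hidden work arbitrary categorization does in such “quality” numbers.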
Most humanities scholars would probably balk, chuckle, or roll their eyes at these processes of crunching down the aesthetic pleasures that games and consoles provide into hard data points that feed a standardized “quality” metric. Not only is the categorization of these features completely arbitrary, but the games are totally absent from this discussion of a firm’s technological and commercial capability. To me, the positions, methodologies, and preoccupations of business studies and game studies seem so far apart that we can only stand to gain by adopting each other’s perspectives and working to bridge the gap. In this spirit, I will note that, following Ian Bogost’s discussion of interfaces and games as experiences (2008), video game consoles are not toasters. We do not buy them based on their technical specifications alone, with the understanding that we will use them for whatever bread we prefer having toasted, because they are gateways to walled gardens, and they enforce technological standards that define the kinds of game experiences we will be able to enjoy. It makes no sense to abstract games away from the competencies and features of a firm that’s developing and marketing a games console. It may make sense from a strictly marketing position—hardware and software development depend on different processes, suppliers, distributors, and business models—but it makes no sense from a customer’s point of view because the primary reason for investing in a games console is to access a library of games. It makes no sense for Nintendo in particular, given its software orientation.
The case of technical specifications and console launches illustrates the multivalent nature of platforms. As soon as launch titles are published, they make the abstract specs—or at least their effect—concrete and visible for consumers. The role of technology is to attract third-party game developers and publishers and is largely played out by the time the platform reaches consumers. In other words, platform owners never sell technology to consumers. They sell technology to game developers and publishers in a B2B relationship. Game developers use it to create games that they sell to consumers in B2C exchanges. These exchanges feed back into the B2B relationship, with licensing fees returning to the platform owner. Contrary to popular intuition—and to the prevalent paradigm in business studies—platform owners do not sell technology to gamers as a base good; they present technology as a promise of new games to come. In the end, these games are the real base good. As Nintendo of America’s Peter Main put it, “the name of the game is the game” (Harris 2014).
Complementary goods are characterized by the fact that they extend, expand, or otherwise transform the base good’s function(s). They are useless without the base good, and the base good’s utility or duration is severely hampered by a lack of them. From an industrial standpoint, it may be the case that a platform owner finds itself investing most of its production effort, research and development budget, distribution efforts, and/or capital in the production and sale of its game console, which justifies treating it as the base good—along with the fact that it serves as the cement that binds together all the firm’s other goods (games, peripherals, etc.). It may make sense, then, to describe games and peripherals as “complementary” to the platform owner’s main line of business. However, this mental model derives from a production-centric view of the industry. Moreover, it runs the risk of firms falling into what Wesley and Barczak have called the performance trap:
Designers and engineers are often energized by breakthrough technologies that allow them to accomplish tasks they only dreamed were possible. In the process, they often lose sight of the real goal—fulfilling a customer need. They succumb to what we call “the performance trap.” […] (Wesley and Barczak 2010, 5)
The one firm to have remarkably avoided the performance trap throughout the history of video games is Nintendo. As we saw in chapter 1, by refraining from cutting-edge, “next-gen” technology and instead looking for creative applications, in new (lateral) ways, of old or outright obsolete technology, Nintendo has kept game development costs low and has sold hardware both cheaply and profitably, without the need to rely on an influx of third-party licensing revenue to offset console subsidizing.
Nintendo understood that technology is not what consumers want. Consumers want games to play, and only games that they want to play: one or two stand-out titles, not a bunch of “alright” games (Clements and Ohashi 2005). Because the console is merely a means to that end (an inevitable means, I might add), consumers only buy a console if and when there are games they want to play on it. As Electronic Gaming Monthly stated when discussing the upcoming Game Boy, “The worth of any new system, no matter how versatile or technologically advanced, is in the software that the machine runs. After all, why buy a GameBoy if the system can’t play decent games?” (EGM #2, July/August 1989, 41). This is often referred to in the business literature as the need to have a “killer app” for the platform: a software title so hotly anticipated that it creates demand for the hardware. Launch titles are usually tasked with becoming “killer apps” for the up-and-coming platform. They are of the utmost importance in establishing a platform’s ludic promise because they function as an interface between the platform’s underlying technology, game developers, and gamers.
As such, killer apps are the perfect tool for building the all-important confidence in the new platform across the two target audiences of the platform owner: game developers and publishers, and consumers. In this regard, the Super Famicom’s launch provides an exemplary demonstration of the dynamics of “killer apps.”
On release, the Super Famicom was sold alone for 25,000Y or with Super Mario World for 32,000Y (Sheff 1993, 233; $175 and $220, respectively). This price tag made it an expensive console for the Japanese, who had been paying 15,000Y for the Famicom and less than 10,000Y for the Game Boy. Another sore point was the lack of launch titles: only Super Mario World and F-Zero were available, although that initial offering was soon complemented with other releases for a total of around 10 titles by the end of 1990, notably including ActRaiser, Pilotwings, Gradius III, and Final Fight. There was, however, no sign of the third Legend of Zelda game, which had been planned as a launch title for years, ever since the console had been revealed. Sales ramped up following Nintendo’s production schedule, and four notable game releases stood out by creating “impressive scenes of hysteria” (Audureau et al. 2013, 22): Final Fantasy IV (July 1991), The Legend of Zelda: A Link to the Past (November 1991), Street Fighter II (June 1992), and Dragon Quest V (September 1992). Before getting there, however, it’s worth examining closely the contributions of the two launch titles.
The high-profile launch title for the SFC was Super Mario World, fittingly subtitled Super Mario Bros. 4 in its original Japanese release and widely referred to as such during development, both internally at Nintendo and by the press. The development team, in fact, considered that the game was not different and new enough to be a good showcase for the Super Famicom’s increased capabilities, as game director Takashi Tezuka expressed (Audureau et al. 2013, 17). This view is still present today in the Euro-American sphere. When Retro Gamer readers voted Super Mario World as the greatest game of all time in the magazine’s 2015 edition of the yearly poll, the staff’s article noted, “In retrospect, Super Mario World is surprisingly economical with its resources, given its status as a showcase game for a new console,” and “the game makes sparse use of the console’s advanced graphical features” (Retro Gamer #150, 62).
It may not be surprising, then, that Super Mario World’s original North American release box does not mention technology at all and is perfectly satisfied with describing the contents and backstory of the game.2 Contrary to what nostalgia and historical hindsight might lead us to believe, the game was not particularly well received at the time of its release. Florent Gorges notes how, in certain French magazines, reviews of the imported Super Famicom gave F-Zero awesome scores, whereas Super Mario World was described as merely alright, with scores of around 80% (Gorges et al. 2009). Sheff, covering the game from the closer historical vantage point of 1993, was critical as well:
“Super Mario World” wasn’t a sufficient departure from its predecessor. “People don’t know how to write 16-bit software yet,” Greg Fischbach said at the time. “It will be revolutionary, but it will take some time to understand.” There would be more lifelike and emotion-filled games because of 16-bit processors. Miyamoto says, “Wait, and I will learn more about the limits of this machine.” In the meantime, “Super Mario World” was a disappointment, particularly when it was compared to a new game that was released for Sega’s 16-bit system [Sonic the Hedgehog]. (Sheff 1993, 231)
Still, although critics and industry pundits lamented Super Mario Bros. 4’s underwhelming role in promoting the new console, players bought and enjoyed it immensely. The game’s abundance of mosaic effects, scaling and rotation, and scrolling background layers can be read as Nintendo’s way of demonstrating the strengths of the Super NES platform to other developers interested in traditional games. The other launch title, F-Zero, showcased the console’s unique Mode 7 graphics to stir experimentation in other directions.
A racing game with a behind-the-car view, F-Zero was conceptually perfect for creating a convincing illusion. The game relied on a brand-new technology embedded in the Super Famicom, “Mode 7” graphics (detailed later in chapters 4 and 5). This technology allowed game developers to project a playfield (or ground map) in formally correct linear perspective, as if the viewer were standing inside the fictional world, with the ground receding away and distant objects converging toward a horizon line. One of Mode 7’s obvious limits was that it could only project flat surfaces, so anything that had to stand up from the ground, such as houses on the side of a race track or mountains, was out of the question. F-Zero got around that obstacle by presenting its race tracks as elevated highways running atop the surface of planets stretching out below, which we imagine to be far enough down that everything looks small.
Although perspective wasn’t new in the racing genre (various arcade and NES games had used it, including Pole Position, Hang-On, OutRun, and Rad Racer), what F-Zero offered was an incredible sense of speed with unparalleled smoothness and fluidity. The game’s fiction took place in the 26th century, where hovering cars traveling at hundreds of kilometers per hour raised no question of realism. Moreover, because Mode 7 could only project static ground maps, the rail guards that border the track consisted of tiny bulbs forming a dotted line rather than continuous ones. As the player raced through the stretches and curves of the track, these bulbs (vaguely reminiscent, especially through their pushback/electrical-shock behavior, of pinball pegs) zoomed by at high speed, from the center of the horizon line all the way down to either side of the screen’s bottom edge. It appeared that race cars hit pegs and bounced back, but under the hood, the machine did not register these as material objects, only as painted dots on a flat track, with an “invisible wall” delineated exactly on top of them.
Mode 7 allowed smooth scrolling and perspectival effects on a flat surface; through a clever trick of trompe-l’oeil, F-Zero managed to provide the illusion that objects actually existed in the game world and at speeds that defied any other racing game that had been out before—in homes or arcades. Much of that impression came from the simple decision of placing dots on the ground rather than continuous lines. On a macrolevel of video games in general, F-Zero wasn’t an innovative game—it wasn’t even a particularly feature-rich racing game, with its lack of a two-player mode. Yet on the microscale of racing games in perspective view, its speed, smoothness, and fluidity were impressive achievements. More than anything, however, it proved to be a terrific success for Nintendo as a technical demo to attract developers to the platform. By seeing the game in action, developers knew what could be done with “Mode 7” perspective and the unique strengths of the SFC if they wanted to develop games for it. This led to a wave of games that focused on a similar experience, whether they revolved around piloting and shooting (Hyperzone) or racing (Top Gear, Battle Cars). Many games integrated “Mode 7” sequences amid their usual gameplay for traveling (Secret of Mana, Illusion of Gaia, Final Fantasy III) or for action sequences (the Super Star Wars series, Indiana Jones’ Greatest Adventures).
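For readers curious about the mechanics, the principle behind this kind of perspective can be sketched in a few lines of code. The following is a minimal, hypothetical illustration of the per-scanline projection of a flat ground map (the general idea underlying Mode 7), using invented screen and camera parameters rather than the SNES’s actual registers or fixed-point arithmetic:

```python
import math

# A toy illustration of per-scanline perspective projection of a flat
# map, the principle behind Mode 7. Screen size, horizon row, and
# camera values are invented, not actual SNES parameters.

SCREEN_W, SCREEN_H = 256, 224  # pixels
HORIZON = 80                   # screen row where the ground meets the sky
CAM_HEIGHT = 32.0              # camera height above the ground plane
CAM_ANGLE = 0.0                # camera yaw, in radians

def scanline_to_ground(y, x):
    """Map a screen pixel below the horizon to ground-map coordinates.

    Each scanline amounts to one affine transform of the flat map:
    rows near the horizon sample distant (scaled-down) areas, rows
    near the bottom sample nearby (scaled-up) ones, which is what
    produces the perspective.
    """
    depth = y - HORIZON
    if depth <= 0:
        raise ValueError("row is above the horizon; no ground there")
    distance = CAM_HEIGHT * SCREEN_H / depth  # farther for rows near horizon
    scale = distance / SCREEN_H               # horizontal stretch of the row
    sx = (x - SCREEN_W / 2) * scale
    # Rotate by the camera yaw so the view can turn with the vehicle.
    map_x = sx * math.cos(CAM_ANGLE) - distance * math.sin(CAM_ANGLE)
    map_y = sx * math.sin(CAM_ANGLE) + distance * math.cos(CAM_ANGLE)
    return map_x, map_y

# One row near the horizon and one near the bottom edge: the gap in
# sampled distances is what creates the rushing-ground sense of speed.
print(scanline_to_ground(90, 128))   # (0.0, 716.8): far away
print(scanline_to_ground(220, 128))  # (0.0, 51.2): right under the camera
```

Advancing the camera’s map position a little each frame, and feeding the yaw from the player’s steering, would then yield the smooth scrolling and turning that F-Zero exhibited.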
The Super Famicom’s launch illustrates a key but perhaps counterintuitive point: the importance for launch titles to tread paths familiar to gamers. Both of the SFC’s launch titles were additional entries in long lines of established game genres: platformers and racing games. Consumer demand clearly existed for these kinds of games, and it was possible for consumers to evaluate exactly how these games were novel or more sophisticated—in essence, to perceive the added value that the new hardware provided. When a platform is launched with titles that do not easily fit into established generic categories, firms face two challenges at the same time: persuading consumers that the hardware is worth their money (which, as we have seen, they are reluctant to accept), and persuading them that the games are worthy of their time and interest. Launching a console with games from established genres can eliminate the second challenge and indirectly alleviate the first one: consumers can judge how much of an impact the new hardware has on this genre of games.
In the end, the high price of the console, the paucity of games offered at launch, and the limited available quantities that resulted in a chaotic launch could all have seriously impacted the Super Famicom’s initial performance in the Japanese market. But the case illustrates a point made by Clements and Ohashi (2005): the number of software titles on offer during the launch window of a console is only of secondary importance; what matters is for “hit” games to be there.
We have seen in the previous section how “killer games” may drive hardware adoption during a console’s launch period. According to this logic, consumers never desire or demand consoles: they learn to cope and put up with them. Even after finding the money to buy them, consoles remain a hurdle and a liability. They require additional connectors on a TV and occupy additional power outlets in the living room. Cables might be too short and dangle inconveniently. Parents, partners, and roommates may find them bothersome. They take up space, especially with their convex and irregular shapes expressly designed to prevent people from stacking things on top of them. They may break and are costly to repair or replace, as the Xbox 360’s “red ring of death” problems have reminded many gamers. Consoles are not a base good; they are a financial hurdle to be overcome so that consumers can buy the base goods—games.
When consumers exhibit behaviors that may be interpreted as manifesting desire for a platform (e.g., pre-ordering an upcoming console), we should interpret this as a transitional interest in the platform as an inevitable means toward the real desire: getting access to a new game library or a specific killer game. Clements and Ohashi write in a similar vein (2005, 2): “The console itself does not have any value apart from facilitating the use of software.” In other words, the valuation of video game hardware comes from the range of software available for it rather than being an intrinsic valuation, as in other industries. Intrinsic valuation includes features such as anti-skip technology on portable CD players, which increases the base good’s desirability without providing access to a larger library of audio CDs. Waterproofing on digital watches, or increased speeds on early CD-ROM drives, are other examples. None has an equivalent in the landscape of video game consoles, aside from hard drive storage options, for instance.3
Although the game console as a financial hurdle might be a good way to describe a common mindset, we should be careful not to lump all consumers together as if they were a Borg-like monolithic block of desires. If the utility of a game console is not intrinsic but derived from the games it gives access to, then why are hundreds of thousands of people pre-ordering game consoles as soon as they are announced, or buying them as soon as they launch, even with few games available for them?
In fact, a subset of consumers finds genuine value in the console’s technology. Such consumers are typically found among industry analysts, reviewers, and other members of the press; game developers or publishers; or people employed in related technology sectors. Like car enthusiasts, racing fans, or mechanics who might collect cars or car parts, they find intrinsic value in the technology put forth by the platform owner. These I will term techno-fetishists, and I consider that they naturally become early adopters of the platform. As we will see in chapter 3, the discursive strategies found in the magazines that announced and covered the launches of the Sega Genesis, TurboGrafx-16, and Super NES attempted to shape young and impressionable consumers into techno-fetishists for exactly this reason: so that they would adopt new gaming technologies by finding intrinsic value in them. As I will show, we would do well not to underestimate the effect of marketing.
These techno-fetishists, however, form a minority within a minority. A sizable portion of early adopters are not techno-fetishists buying cutting-edge technology but gamers investing in a ludic promise. They invest at the earliest stage rather than adopting a wait-and-see approach for various reasons: because they desire a particular game, to avoid expected shortages, to profit from special measures tailored toward early adopters, or simply because they figure that price cuts won’t come anytime soon and they might as well buy now rather than later. What these consumers actually want are games. They are not buying a base good and waiting for eventual complementary goods to maximize the value they get from it, because the base good provides them no value to begin with; they are facing the financial hurdle of getting equipped with the proper standard right now so they can buy and play the upcoming games as they are released.
What are the components of a console’s ludic promise? I can enumerate a number of them as a starting point, without being exhaustive. The most often circulated and discussed component is technological innovation, advertised through classic promotional means and demonstrated through the console’s launch titles. These launch titles are complemented by a roster of announced upcoming game releases—regardless of whether they actually make it to market in the end or satisfactorily fulfill these promises. At the periphery of these games sits a much larger (and more diffuse) nebula of unannounced but expected game releases: buying a Nintendo console always hinges on the expectation of future Mario, Zelda, Pokémon, Donkey Kong, Metroid, and other games in flagship franchises. Beyond the games themselves, an important component of the ludic promise lies in the third-party firms that have announced support for the platform, even if specific games have not been announced yet. The ludic promise can also benefit from unique distribution or other marketing policies, as when the OUYA announced its principle of providing free demos for any game published on the platform. Finally, other auxiliary ludic services can contribute to the promise, such as the game-sharing or streaming play features of the PlayStation 4, voice chat and support for network play, specific controller features, or achievement and trophy systems.
In all cases, the base good is largely immaterial and oriented toward the future. Because game consoles function as locked standards, there is no way of separating the value of the platform that could theoretically be attributed to the hardware from the worth derived from its library of games: what the hardware contributes has to be concretely expressed in the form of games and in the form of future games. Consumers therefore tend to develop irrationally strong loyalty toward their chosen console because its success in the marketplace—expressed through market share—will determine whether the platform becomes a standard attractive enough for developers to support: whether it will see many games produced for it in a virtuous circle and bandwagon effect, or instead slowly wither and die in a vicious cycle of confidence crises among game developers and consumers.
The market logics of locked, noninteroperable platforms create the conditions that induce high levels of launch pressure and make consumers an indirect part of a console’s success. They push consumers to become spoony bards, foolishly enamored with their chosen packs of circuits, metal, and plastic, ready to sing their praises to whoever crosses their path. That is, when all goes well. Often console launches devolve into console wars, and the fanatical fervor of devotees turns them into evangelists—or, worse, crusaders bent on fighting holy wars. The Super Famicom and Super NES managed to overcome the hurdles that lay before them, maintaining consumer loyalty and confidence in the firm despite the two-year wait for the hardware’s release. This was Nintendo’s true Super Power, deployed through the formidable promotional practices that ran through game magazines and used every trick in the book to maintain a phenomenal ludic promise.