5

INTERFACE

Vincent Mauger

In the object universe of technology, game interface design began as an offshoot of video game development. It remained far removed from the constellations of design research, evolving on its own. Even in the flourishing field of game studies, key design aspects are ignored or belittled, just as video games themselves are by the design studies community, for which they remain a mere grain of sand on the beach of design culture. To bridge this gap, video game interface design will be examined here from a designerly techno-historical perspective.

To start with, the primordial role of the game interface is, just like that of any other interface, to enable information to be provided, accessed, and applied. Acting like a translator, the interface mediates between two parties, making one intelligible to the other in a semantic relationship. Interaction happens through this shared boundary where the user wanting to fulfill a certain task meets the artifact or product enabling them to perform that task; that is, where the player meets the game: through game boards and playing pieces, playfields, screens, joysticks, keypads, and controllers; notwithstanding that today, most video games involve users via screen-based graphical user interfaces (GUIs) that are increasingly mobile, portable, and pervasive. Although their design intricacies are not obvious to the viewer, user, or player from the outside, interface design goes far beyond this external appearance and its layout: the graphic design establishing the arrangement, proportions, and relationships between the individual elements on the page or screen.

Lacking hindsight, it is common throughout the game industry to view the interface as detached from the game’s graphics, with a clearly bounded beginning and end. This makes defining the task easier and the direct application of traditional interface design concepts possible. However, these practices may hinder innovation and experimentation with dynamic game interface elements that enhance the gameplay experience, as interface designers are lured into the false security of customary static or passive constituents, such as a visual frame, timer, life bar, or ammunition count. The design team behind Dead Space (Visceral Games, 2008) did not take this easy way out: their keen interface conveys information through intradiegetic elements such as floating holographic projections and a health meter integrated into the spine of the protagonist’s spacesuit.

Not surprisingly, the development of such visual and special effects has had an important interface application: instead of using text or icons, graphics can communicate information to the player. This explains why the great game studios understood years ago that putting aside an interface design assignment was not the brightest move. For example, the Unreal Tournament (Epic Games, 1999) team developed the game’s engine along with UnrealEd, which used a windowing interface written in Visual Basic (VB). The toolset was old and fragile, and only one team member knew and cared about VB. Inevitably, bugs plagued the team while nobody had the time or inclination to fix them (Reinhart, 2003, p. 102). Technology choices may also eclipse the recognition of the great job artists and designers do, as happened with Diablo II (Blizzard Entertainment, 2000). Players frequently labeled the game’s graphics as “outdated” or “pixelated,” catching the team by surprise (Schaefer, 2003, pp. 87–88). The moral of this is that most players will immediately notice poor visual quality. Likewise, a new game released by an obscure studio will hardly ever receive positive reviews if its graphical interface looks botched, even if the gameplay itself is astonishing.

The most relevant aspect of game interface design is its functionality: “that form ever follows function. This is the law” (Sullivan, 1896, p. 5), claimed the Modernist architects; and indeed, the main purpose of a game interface is always to allow players to interact with the game software. As many counter-examples demonstrate, a poor interface may ruin a video game experience. Conversely, an aesthetic and easy-to-use game interface with a neat visual design can significantly enhance the play experience. Screen design (the organization of information and interactive elements on screen-based interfaces), animation, and motion design are also among the interface designer’s greatest assets, aiding in addressing standard graphic design concerns such as composition, page layout, color usage, and the creation and use of typography and icons. Nevertheless, before entering into the details of game interface design practice, let’s look at its origins.

Making Interaction with Information Possible

The visionary Vannevar Bush, with the publication of his essay “As we may think” (1945), laid the foundations for hypertext with “trails of interest” built into the hypothetical Memex (standing for memory index or extender). He also introduced a major interface design metaphor: by using an ordinary desk as a document administration device, he envisioned the digital desktop.

Other pathbreakers such as Ivan Sutherland (1963) soon experimented with pixel-based displays, paving the way for future raster-based editing programs such as MacPaint (1984) or Photoshop (1990). Yet, it was on December 9, 1968 that Douglas Engelbart presented the demonstration that would define the modern computer interface. His breakthrough was the new paradigm of “direct manipulation” (Shneiderman, 1983), a way to give the user control over displayed text and windows. The multiconsole display used Engelbart’s new tool, the “mouse,” which served as the user’s representative in dataspace and still remains the standard way to intuitively and directly access abstract information displayed on a monitor.

In the 1970s, at the Xerox Palo Alto Research Center (PARC), Alan Kay continued the struggle to transform arcane command-driven modes into a genuine GUI, consisting of layers of windows based on real-world metaphors such as sheets and arrows. The WIMP paradigm (windows, icons, menus, and pointers) was born. Where Sutherland and Engelbart helped equip the computer with space, Kay gave it depth, thus bringing forth the whole idea of imagining the computer as an environment or virtual world (Johnson, 1997, p. 47). But the power of this metaphor was too strong to remain trapped in a lab.

Around the beginning of the 1980s, a co-founder of Apple Computer, the far-sighted Steven Paul Jobs, was searching for the technological advance that would revolutionize personal computing. He found what he was looking for at PARC in an experimental operating system called Smalltalk. After years of development, Apple released the Macintosh in 1984, along with a masterfully planned commercial spot—played once during the Super Bowl broadcast—casting IBM as George Orwell’s Big Brother. Corporate DOS snobs who dismissed the GUI as a child’s toy or a video game opposed the playful character and “look-and-feel” popularized by Mac advocates, and were soon engaged in an aesthetic conflict, while the mere name of the nascent Microsoft Windows confirmed the superiority of the new paradigm. To a certain extent, these battles over desktop usage and platform superiority, which continue today in the video game community, have by a strange twist of fate improved our grasp of digital spaces.

Correlated and Intricate Interface Design Practices

Increased digitization and interactive media development have made information one of the most important resources for the interface designer. Designing interfaces as access points to digital information, where the link between the user and the digital application contains a level of feedback, such as responses to a user’s command, communication, or selection, brings forth particular experiences. Interface design concerns user interactions in a wide array of contexts, such as video gaming, in order to achieve an optimal user interface. Every time players communicate a decision, the system offers new criteria for any new decision players might make within its architecture. There are no objective criteria available to help guarantee this delicate equilibrium. This is precisely where interface design practice comes in, constantly formulating and anticipating future uses. However, different design practices, with different scopes, are involved in the machines and applications surrounding us.

Information design is “the translating [of] complex, unorganized, or unstructured data into valuable, meaningful information” (STC, 2012). “Information architect,” a term coined by Richard Saul Wurman (Wurman & Bradford, 1997), was first used to describe the designer who structured the patterns inherent in data in order to display complex information as clearly as possible so others could find it. Information designers are thus facilitators defining the options for different information spaces. Since information now reacts dynamically to the way it is used, through context-related suggestions, developing the core of an interface requires a dialogue between interface designers, interaction designers, engineers, and users. Interaction with information requires information design to be integrated with interface design.

Interaction design involves “focusing on the fit between human actions and system responses” (Murray, 2012, p. 10) and determines what is brought into motion in relation to the user over time. It describes the use of the interactive product, and thus makes possible content manipulation and the users’ navigation through it, via a choice of appearances or adaptive interfaces that can be customized according to users’ interests and their level of knowledge. These interactions between humans and artifacts are the main research interest in the fields of human–computer interaction and man–machine interaction (MMI). This results in products that have a multitude of operability and usability requirements. These domains relate to diverse aspects such as cognition, perception, semantics, ergonomics, usability, and quality of experience.

Interaction flow thus implies movement through and navigation of a hierarchical structure in which decisions are made, using the linked elements of a digital appliance or hypermedium. One crucial task when planning dramas and dialogue sequences is defining how information can be conveyed at the navigation metalevel, so that the navigation itself already incorporates formerly chosen content. In video game development, the management and structuring of this kind of information is often referred to as “narrative design” and the elaboration of the contents itself, “game writing” (see Mauger, 2012a).

These interrelations explain, on the one hand, why interface, information, and interaction design are often used together to describe an original design concept (for 67 examples, see IIDj, 2005) or unified into design processes and theories such as information interaction design (Shedroff, 2000). On the other hand, digital media scholar Janet Murray has called attention to the fact that interface is a “useful term, though misleading as the focus of digital design since interaction design is more inclusive and has supplanted it as a description of professional practice” (2012, p. 426). Indeed, many designers from the sensorial design disciplines have long worked to make products more “usable,” or been brought in at the end of design processes to make them more “user-friendly.” “This model has been replaced by a more inclusive design process and a focus on the interaction between the human being and the automated system” (Murray, 2012, p. 10).

Discourse concerning experience design (Laurel, 1990, 1991; Shedroff, 2001) or flow (Csikszentmihalyi, 1990) also had a major effect on interface design. According to Nathan Shedroff, interface design “is only one of the many terms used to describe the design of experience.” Thus, we could consider interface design “as encompassing information design, interaction design, and some forms of sensorial design (mostly visual and auditory design, since most computers can only display sights and sounds)” (2001, p. 109).

Interface Design and Game Studies

In 2000, designer Chuck Clanton (p. 301) pointed out that interface designers and game designers were two isolated design communities. He suspected that hardly any software designers attend game design conferences, and that few game designers know much about the human–computer interface (dubiously acronymed as “HCI”) design community:

Almost every game I play has one or more flaws that HCI designers know how to remedy. Yet, I suspect that few HCI designers could design a great game. Likewise, few software applications show any awareness of techniques of game design that could make them easier and more fun to learn and use.

At that time, the human–computer interaction community had already gathered empirical evidence of the value of user testing and iterative design, but these techniques were still meeting some resistance in “serious” software companies. Ironically, playtesting—paired with quality assurance testing—was a well-accepted technique used during video game development. Today, most game designers expect the quality of a game to improve as the design evolves during prototyping, playtesting, and revising (Fullerton, 2008). It can be argued that it is playtesting, not HCI expertise, that eliminates the most crippling user interface mistakes.

The way a game designer accounts for the user within the design of a video game involves a much deeper and riskier process than the one that occurs during the design of utility software products, because making gameplay “fun” is far more intractable to analysis than is productivity. “In software application design, … if the user interface is demonstrably bad, but the functionality is valuable, the product may well succeed, as many major software applications attest. Game design is more brutal” (Clanton, 2000, pp. 333–334).

As an elusive quality and a by-product of the imagination, the experience of fun is still in need of measurement methods to grasp the likelihood of its success. At the same time, few “serious” software businesses give expert attention to the challenges that face a user, or to the user’s need for variety, pacing, and purpose. Difficulty might be a driving force for players: it is not an enemy, but a friend to be sought, pampered, and brought into shape. Without it, no sense of accomplishment can arise. An important category of video games is qualified as “hardcore” by gamers and the specialized press. Given their steep difficulty curve, such games, from the classic Rogue: Exploring the Dungeons of Doom (Artificial Intelligence Design, 1980) to Demon’s Souls (From Software, 2009), seem “harsh at first glance” (Mauger, 2012b) and require players to adapt, given the efforts necessary over time to develop mastery, a phenomenon described by Torben Grodal as an aesthetic of repetition (2003, p. 148). However, there are many other traits characterizing video game interface design practice.

Video Game Interface Design Specificities and Distinctive Goals

Like those who work with other digital and hypermedia applications, video game developers juggle many technical requirements such as file size, disk space, load times, file compression, and online content. Although video game players often interact with buttons, sliders, menus, and other traditional components of graphical user interfaces, video game interface design uses concepts specific to the game industry, involving a particular design practice with its own characteristics, which aims to channel the unique experience of game playing.

The diversity of manual interfaces that provide players with control of a game goes far beyond the usual keyboard-and-mouse duo. Console controllers, besides action buttons, analog joysticks, and directional pads, may have numerous triggers, rumble devices, additional speakers, touch screens, and motion-capture technologies. Specialized hardware such as mock weapons or musical instruments, a steering wheel coupled with pedals, dance pads, and other devices may help reinforce the feel of a game. Keeping in mind that more controllers means more interfaces, video game interface development certainly has a promising future ahead.
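To make “more controllers means more interfaces” concrete, here is a minimal sketch, in Python, of the kind of input-abstraction layer many game engines rely on: each physical device exposes its own controls, yet all of them are translated into the same abstract game actions. The device names and bindings are purely illustrative assumptions, not any particular engine’s API.

```python
from enum import Enum, auto
from typing import Optional

class GameAction(Enum):
    """Abstract actions the game understands, independent of hardware."""
    JUMP = auto()
    FIRE = auto()
    STEER_LEFT = auto()
    STEER_RIGHT = auto()

# Hypothetical bindings: each manual interface maps its own physical
# controls onto the same abstract actions.
BINDINGS = {
    "keyboard_mouse": {"space": GameAction.JUMP, "mouse_left": GameAction.FIRE,
                       "key_a": GameAction.STEER_LEFT, "key_d": GameAction.STEER_RIGHT},
    "console_pad":    {"button_a": GameAction.JUMP, "trigger_right": GameAction.FIRE,
                       "stick_left": GameAction.STEER_LEFT, "stick_right": GameAction.STEER_RIGHT},
    "steering_wheel": {"pedal": GameAction.FIRE,
                       "wheel_left": GameAction.STEER_LEFT, "wheel_right": GameAction.STEER_RIGHT},
}

def translate(device: str, raw_input: str) -> Optional[GameAction]:
    """Translate a raw hardware event into an abstract game action."""
    return BINDINGS.get(device, {}).get(raw_input)

print(translate("console_pad", "button_a"))       # GameAction.JUMP
print(translate("steering_wheel", "wheel_left"))  # GameAction.STEER_LEFT
```

Under this assumption, supporting a dance pad or a mock instrument means writing one more binding table, that is, one more interface, without touching the game’s rules.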

Perspective and camera controls are key elements in video game interface design. Specific choices of camera angle may convey affect, just as a game system taking control of the camera away from the player may create drama, but at a cost: freedom. Cuts also eliminate the traversal of time. These creative choices may impact overarching game design decisions as well as precise interface design characteristics. Letting the player choose between perspective options such as a first-person, third-person, or isometric point of view, split screens, and restricted or hybrid views will inevitably define gameplay elements and generic aspects of a game. This characteristic, closely tied to the actual potential for interactivity, is one that distinguishes the video game medium from other audiovisual media.

Game styles and genre conventions, as cultural frames and cognitive schemas, have an influence on the design because of the habits of players and of designers themselves. They shape aspects of the game systems over which players must have control, as it becomes difficult to change design schemata once conventions are deeply rooted. Specific elements are closely related to players’ goals and tasks, such as a system of rules, strategic depth, the number of units or characters under the player’s command, communication and trading tools, maps and quest journals, as well as the absence, presence, and control of dialogues or character development.

In 2000, Nathan Shedroff suggested a way to consider interactivity by picturing all experiences and products as inhabiting a continuum. An experience’s position on this continuum was determined by the level of interactivity it offered. Two elementary spectra were identified according to the achievement of the experience’s goals: control and feedback (pp. 283–284). Game designers Kevin Saunders and Jeannie Novak follow a similar trail, claiming that a game’s interface has the same two primary goals: control over what happens, through the inputting of information into the game, and feedback, through information received from the game (2007, p. 20). All elements in an interface should take part in larger schemes that empower or inform players, furthering at least one of these goals. Feedback should indicate short-term and long-term progress toward game goals, by teaching players new concepts through direct or implicit instructions, enabling them to develop strategies, or allowing them to perceive duration and degrees of success.

Secondary goals also apply to games. Saunders and Novak also mention immersion, a psychological state that “makes players forget they are playing a game” (2007, p. 26). This psychological state could be encouraged by more transparent interfaces such as those used for Peter Jackson’s King Kong: The Official Game of the Movie (Ubisoft, 2005) or ICO (Team Ico, 2001), in which no GUI objects are displayed as icons on the screen; instead, the interface is well integrated into the diegesis. Still according to Saunders and Novak, an “atmosphere” may also be achieved when the interface is consistent with the nature of the game played, such as the light gun used for simulation in Duck Hunt (Nintendo, 1984).

This vague conception of immersion refers to various mechanisms related to different immersion types, such as sensory, fictional, or systemic ones (Arsenault, 2005). It may also refer to the two strategies of visual representation behind the concept of remediation. Bolter and Grusin (2000) describe the phenomenon of reproducing conventions, content, or both from one medium to another: immediacy (or transparent immediacy), “whose goal is to make the viewer forget the presence of the medium (canvas, photographic film, cinema, and so on) and believe that he is in the presence of the objects of representation,” and hypermediacy, “whose goal is to remind the viewer of the medium” (pp. 272–273). A hypermediated game interface such as the one found in the massively multiplayer space simulation Eve Online (CCP Games, 2003) may make the buttons, menus, motions, or artistic elements of the GUI the focal point, instead of game contents such as actions, graphics, rules, or narrative information, which can be counterintuitive. The interface should never demand more attention than the gameplay itself. Deep immersion within a game will only begin once the user is no longer conscious of the interface during the decision-making processes within the experience of play.

Game Design and Interface Design: Planning for the Game’s Completion

According to game project manager and art director Brent Fox (2005, p. 10), if game designers create games with clear goals and then communicate them clearly to the development team, the design of the interface will be easier. Breaking a general goal into specifics is the idea behind this simple approach. “The point is to define useful goals that will provide direction during development” (2005, p. 12), so that interface planning may help game design. A solid game proposal thus helps the planning of an interface, which will result in an interface design document, usually summarized as part of a larger game design document. This also explains why the interface is one of the first elements needed in a game project, and one of the last that can be tested for usability.

As in any other design project, planning and documentation are essential for the development of a successful interface. The luxury of free experimentation may be possible in a large-budget production, but with a smaller budget, in which your own investment may be at stake, you need to get it right early on. Even if it sounds paradoxical, to complete an interface design quickly, more time needs to be spent planning: it is the heart of design, the ground from which the project will take root. Defining the schedules, the screens, the artwork used or re-used, and the information displayed in the video game are tasks that need to be done in the planning stage, through the generation of asset lists.
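As a purely illustrative sketch of such an asset list, the following Python snippet (with hypothetical screen and file names) records, for each interface screen, the artwork it introduces, the artwork it re-uses, and the information it displays, so the work remaining can be read directly from the plan.

```python
# Hypothetical planning-stage asset list for a small game's interface.
INTERFACE_ASSET_LIST = [
    {"screen": "main_menu",   "artwork": ["logo.png", "space_background.png"],
     "reuses": [],                        "displays": ["title", "play/options/quit buttons"]},
    {"screen": "options",     "artwork": ["panel_frame.png"],
     "reuses": ["space_background.png"],  "displays": ["audio sliders", "key bindings"]},
    {"screen": "in_game_hud", "artwork": ["health_bar.png", "minimap_frame.png"],
     "reuses": [],                        "displays": ["health", "ammunition", "objective text"]},
]

def artwork_to_produce(asset_list):
    """List every artwork file that must be produced (re-used pieces are already covered)."""
    return sorted({art for entry in asset_list for art in entry["artwork"]})

print(artwork_to_produce(INTERFACE_ASSET_LIST))
# ['health_bar.png', 'logo.png', 'minimap_frame.png', 'panel_frame.png', 'space_background.png']
```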

Let’s keep in mind that the making of the assets used in a typical game interface usually requires much less effort than the development of engines, 3-D models, and animation sequences. For that reason, it is worthwhile to expend the right amount of time and energy to properly design an interface’s aesthetics. For example, the thematized visual interfaces in the game StarCraft (Blizzard, 1998), playing the minor dual role of guide and ambience beacon in this franchise, could encourage identification with the race the player has chosen to play.

In a perfect world, the design method would be immutable, allowing a game to be perfectly planned in advance. However, the iterative nature of the game design process—analyze, design, test, then repeat (Zimmerman, 2003, p. 177)—until “satisficing” (Simon, 1956, p. 136; 1969) guarantees that some changes will be necessary. Despite meticulous planning, even the most inspired concepts, such as those behind Age of Empires (Microsoft, 1997), Thief: The Dark Project (Looking Glass Studios, 1998), System Shock 2 (Electronic Arts, 1999), and Black & White (Electronic Arts, 2001), did not pass the test of concrete expression without changes (see Grossman, 2003).

Ingenuity and time must do their work. A beginner’s trap is to use the time allotted for this process (usually scheduled by any skilled project manager) to continue the ideation phase. Procrastinating on the actual realization of the design will often leave a budding designer trying to justify an incomplete work when the deadline comes. Building a solid core is essential: one must only include, assiduously, additions that are useful, meaningful, and justified.

This behavior will seem fairly natural to game designers. It also clarifies why good interface planning and design normally help game design by directing attention to technical issues that would otherwise be considered later in video game development. For example, crafting a heads-up display calls forth gameplay decisions. Even a simple menu screen where the player may click on an environment icon triggers many questions: Will the player be able to choose between different environments? How many levels? Will some be locked until certain tasks are completed? Is there only one precise order that must be followed, or are many others possible? Will those choices be affected by gameplay? And so forth.
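To show how quickly those questions turn into concrete interface decisions, here is a minimal sketch, in Python with invented environment names, of a level-select structure that answers three of them at once: which environments exist, which are locked behind which tasks, and whether more than one order of play is possible.

```python
# Hypothetical level-select data: each environment and the level that must be
# completed before it unlocks (None means available from the start).
LEVELS = [
    {"id": "forest",  "unlock_after": None},
    {"id": "caves",   "unlock_after": "forest"},   # a strict order...
    {"id": "glacier", "unlock_after": "forest"},   # ...or a branching choice
    {"id": "volcano", "unlock_after": "caves"},
]

def selectable(levels, completed):
    """Environment icons the player may click on, given the levels already completed."""
    return [lvl["id"] for lvl in levels
            if lvl["unlock_after"] is None or lvl["unlock_after"] in completed]

print(selectable(LEVELS, completed=set()))        # ['forest']
print(selectable(LEVELS, completed={"forest"}))   # ['forest', 'caves', 'glacier']
```

Every answer encoded here (four environments, one always open, two branches after the first) is a gameplay decision forced into the open by the interface plan.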

It is worth mentioning that the particular menu system that appears before a game really starts, and through which gameplay is launched, is often referred to as the front-end. It must be distinguished from the in-game interface and from the pause menu. To plan and organize those menus, creating flow charts is the best way to structure one’s ideas. In game design classes, adding this clear and simple division to assignments improves a project’s overall quality, as well as students’ comprehension of flowcharting and interface design.
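A flow chart of that division can be captured directly as a small state machine. The sketch below, in Python with illustrative screen and transition names, keeps the front-end, the in-game interface, and the pause menu clearly separated; each dictionary entry corresponds to one arrow of the chart.

```python
# Hypothetical menu flow: each screen lists the player choices it offers and
# the screen each choice leads to (one entry per flow-chart arrow).
MENU_FLOW = {
    "front_end":         {"start_game": "in_game", "options": "front_end_options"},
    "front_end_options": {"back": "front_end"},
    "in_game":           {"pause": "pause_menu"},
    "pause_menu":        {"resume": "in_game", "quit": "front_end"},
}

def next_screen(current, choice):
    """Follow one arrow of the flow chart; stay on the same screen for an invalid choice."""
    return MENU_FLOW.get(current, {}).get(choice, current)

screen = "front_end"
for choice in ("start_game", "pause", "resume"):
    screen = next_screen(screen, choice)
    print(screen)   # in_game, pause_menu, in_game
```

Drafting the chart in this form before moving to a vector or diagramming tool also makes missing arrows (a pause menu with no way back, for instance) easy to spot.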

After sketching and flowcharting, the next step is to use a vector editor (such as Illustrator or Inkscape) or a diagramming application (such as Visio or Omnigraffle) that includes the option of printing on multiple pages and creating one big chart. This allows one to trace and adjust arrows and windows that will show the interaction flows of the interface through site maps, flowcharts, wireframes, and screen designs (see Brown, 2007).

The Many Shapes of Things to Come

Even if layers of documents are still presented in windows, data is still deleted by dropping it in the trash, and digital documents are still archived in files, every day these metaphors lose more and more of their transferability. A real folder cannot hold a vast number of subfolders and sub-subfolders, and disks and storage media are not actually dropped into a wastebasket. For the sake of digital media advancement, new metaphors will inevitably emerge.

In the opposite direction, physical space itself is becoming the domain of digital experience, as a result of new technologies and interactive systems. Mediated, mixed, and augmented reality open brave new hybrid worlds for digital exploration. Interactions with portable equipment, such as smartphones, computer tablets, or handheld game consoles, offer many possibilities, just as do the various applications of a person’s movement, position, and articulation in space captured by peripherals through motion capture, “the process of capturing and recording movements from a real, physical actor or element and then using the translated data to control a digital model” (Mauger, 2012c, p. 421).

The steady stream of new inventions suggests that video games are limitless, as display technologies progress beyond standard monitors and screens, and almost any surface can come to function as a projection screen for information. For better or worse, interfaces will continue to grab our imagination through their efficient illusions and our suspended disbelief.

References

Arsenault, D. (2005). Dark waters: Spotlight on immersion. Game On North America 2005 International Conference Proceedings (pp. 50–52). Eurosis-ETI: Ghent.

Bolter, J. D. & Grusin, R. ([1999] 2000). Remediation: Understanding new media. Cambridge, MA: MIT Press.

Brown, D. M. (2007). Communicating design: Developing web site documentation for design and planning. Berkeley, CA: Peachpit Press, New Riders.

Bush, V. (July, 1945). As we may think. The Atlantic. Reprinted in Life magazine September 10.

Clanton, C. (2000). Lessons from game design. In E. Bergman (Ed.), Information appliances and beyond: Interaction design for consumer product (pp. 300–334). San Francisco: Morgan Kaufmann.

Csikszentmihalyi, M. (1990). Flow: The psychology of optimal experience. New York: Harper and Row.

Fox, B. ([2004] 2005). Game interface design. Thomson Course Technology PTR. Boston: Premier Press.

Fullerton, T. (2008). Game design workshop: A playcentric approach to creating innovative games. Burlington, MA: Morgan Kaufmann.

Grodal, T. (2003) Stories for eye, ear, and muscles: Video games, media, and embodied experiences. In Mark J. P. Wolf & Bernard Perron (Eds.), The video game theory reader (pp. 129–155). New York: Routledge.

Grossman, A. (Ed.) (2003). Postmortems from game developer. San Francisco: CMP Books.

IIDj (Institute for Information Design Japan) ([2003] 2005). Information design source book. Basel: Birkhäuser.

Johnson, S. (1997). Interface culture: How new technology transforms the way we create and communicate. New York: Basic Books.

Laurel, B. (Ed.) (1990). The art of human–computer interface design. Reading, MA: Addison-Wesley.

——. (1991). Computers as theater. Reading, MA: Addison-Wesley.

Mauger, V. (2012a). Game writing. In M. J. P. Wolf (Ed.) Encyclopedia of video games: The culture, technology, and art of gaming (pp. 238–239). Santa Barbara: Greenwood.

——. (2012b). D’innovante à dissidente: l’abord rêche de l’esthétique Rogue. In S. Galand & J. Gauthier (Eds.), Esthétiques numériques vintage. NT2 Laboratory Hypermedia Arts and Literature Directory. Retrieved December 15, 2012 from nt2.uqam.ca/recherches/cahier/article/dinnovante_a_dissidente_labord_reche_de_lesthetique_rogue.

——. (2012c). Motion capture/motion control. In M. J. P. Wolf (Ed.), Encyclopedia of video games: The culture, technology, and art of gaming (pp. 421–423). Santa Barbara, CA: Greenwood.

Murray, J. (2012). Inventing the medium: Principles of interaction design as a cultural practice. Cambridge, MA: MIT Press.

Reinhart, B. (2003). Epic Games’s Unreal Tournament. In A. Grossman (Ed.), Postmortems from Game Developer (pp. 91–102). San Francisco: CMP Books.

Saunders, K. & Novak, J. (2007). Game development essentials: Game interface design. Clifton Park, NY: Thomson Delmar Learning.

Schaefer, E. (2003). Blizzard Entertainment’s Diablo II. In A. Grossman (Ed.), Postmortems from Game Developer (pp. 79–90). San Francisco: CMP Books.

Shedroff, N. (2000). Information interaction design: A unified field theory of design. In R. Jacobson (Ed.), Information design (pp. 267–292). Cambridge, MA: MIT Press.

——. (2001). Experience design 1. Indianapolis: New Riders.

Shneiderman, B. (1983). Direct manipulation: A step beyond programming languages. IEEE Computer, 16(8), 57–69.

Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, 63(2), 129–138.

——. (1969). The sciences of the artificial. Cambridge, MA: MIT Press. First edition.

STC (Society for Technical Communication, Information Design Special Interest Group). (2012). Definitions of information design. Retrieved on September 1, 2012. www.stcsig.org/id/id_definitions.htm.

Sullivan, L. H. (March, 1896). “The tall office building artistically considered,” Lippincott’s Magazine. Public domain. Transcribed article retrieved on November 18, 2012. ocw.mit.edu/courses/architecture/4-205-analysis-of-contemporary-architecture-fall-2009/readings/MIT4_205F09_Sullivan.pdf.

Sutherland, I. E. (1963). SketchPad: A man-machine graphical communication system. AFIPS Conference Proceedings, 23, 323–328.

Wurman, R. S. & Bradford, P. (1997). Information architects. New York: Graphis.

Zimmerman, E. (2003). Play as research: The iterative design process. In B. Laurel (Ed.), Design research: Methods and perspectives (pp. 176–184). Cambridge, MA: MIT Press.