8 
Designing for AI

In chapter 4, I posed the question of why the types of artificial intelligence methods we have seen so many examples of in this book are not used more in games. I outlined a couple of potential reasons: that game development is a surprisingly risk-averse industry because of the hit-driven nature of the business, and that the technology may not be mature enough yet. Now, after spending the past few chapters on AI methods for playing games, modeling players, and generating content, we'll revisit the question. This time we focus on the role of game design in enabling AI and, conversely, of AI in enabling game design.

Back when I was a naive and overenthusiastic PhD student, and even when I was a slightly less naive and overenthusiastic postdoc, I tried rather naively to effect change. When I met a game designer or developer at a conference, I would try to convince her that her company's new game stood to win a whole lot by using some of these fancy new AI methods. Usually the response I would get was that no, in fact, their game did not need my AI at all; it worked perfectly fine as it was. For example, while we could train a neural network to drive a car faster or provide a more challenging opponent in a fighting game, this is unnecessary because it's easier to artificially manipulate the top speed of computer-controlled cars or the hit boxes of nonplayer character (NPC) fighters until they reach the desired performance. Basically, why introduce complex AI when you could simply cheat? And, anyway, the game would get boring if the enemies were too hard, because the fun comes from beating them. It's true that we could use online adaptation, maybe through reinforcement learning, to create a game character that learns from your behavior in a role-playing game and updates its own behavior to match what you do; but this runs the very real risk of ruining the carefully tuned game balance and making the game unplayable. Sure, we could build a level generation algorithm that enables an endless supply of new competitive multiplayer levels for a first-person shooter; but the game already has a couple of good levels, and most players prefer to play the levels they already know.

I found this attitude extremely conservative and annoying, but after a while, I had to admit that in many cases, they were right. Many games would not actually benefit from advanced AI because they were designed to not need any AI.

Let me explain. Most of today’s video game genres have their roots in games developed in the 1980s and early 1990s. These eras saw the development of platformers, role-playing games, puzzle games, turn-based and real-time strategy games, team sports games, first-person and third-person shooters, construction and management simulations, racing games, and so on. While there has certainly been design innovation since 2000—for example, the invention of multiplayer online battle arenas (MOBAs) such as League of Legends and sandbox games such as Minecraft—these new game genres evolved from earlier genres.

Back in the 1980s and early 1990s, artificial intelligence was much less advanced than it is today. While the fundamental algorithm behind modern deep learning, backpropagation, had been invented, it was far less understood than it is today, and many of the inventions that make neural networks work so well had not been made yet. Monte Carlo tree search did not exist, and although evolutionary algorithms were an active field of research, major advances have been made since. Most important, though, was that computer power was very limited then. Depending on what you measure, your current laptop is at least tens of thousands of times as fast as the computers that genre-defining games such as DOOM and Dune 2 were designed to run on, and your smartphone is faster than the fastest supercomputers of the 1980s. On top of that, the ability to run neural networks on graphics cards (GPUs) did not exist back then; its invention has added another few orders of magnitude of speed for deep learning in particular.

When the games that came to define whole genres were developed, incorporating state-of-the-art artificial intelligence was not an option. I don't think that a design goal for early platformers was to have (only) enemies that moved back and forth in predictable patterns. It also seems improbable that it was considered a good thing in early role-playing games that the NPCs said the same canned lines all the time and forced you to navigate cumbersome dialogue trees; and, presumably, the level generation in early roguelikes was not meant to be highly erratic and to disregard the player's skill and preferences. Rather, this is how it had to be because of technical limitations, and then the rest of the game was designed to accommodate these shortcomings.

To take yet another example, in chapter 4 I explained the algorithms behind a typical enemy in a first-person shooter by describing its seven-second "life span." Why only seven seconds? Early first-person shooters were designed with essentially no persistent characters in order to mask their simplicity. If you interacted with a character in DOOM for a minute, its simplistic programming would be painfully obvious to every player. But if the enemy is on screen for only a few seconds, there aren't enough clues as to how intelligent it is (or isn't). And later first-person shooters were heavily influenced by the trailblazers of the genre, such as DOOM (figure 8.1).


Figure 8.1 DOOM (id Software, 1993) was one of the original first-person shooters and a major influence on the development of this game genre.

In other words, video games of that era were designed around the lack of AI. This led to a number of design choices that would not have been made had better AI been available. For example, boss fights were designed around patterns of recurring actions that the player needed to decode, rather than around the boss trying to genuinely outsmart the player; and dialogues in role-playing games were designed around a set of fixed dialogue choices, rather than around NPCs having a dynamic knowledge base about the world that the player character could query in arbitrary ways. For the same reasons, difficulty scaling in games is typically implemented by giving computer-controlled adversaries more or fewer resources, essentially cheating, rather than by modeling the player's skill and adapting the depth of decision making of the computer-controlled characters.

These design choices came to define game genres as other designers copied them and players started expecting them. It is possible to break the genre conventions, but this may involve creating new genres. Creating a role-playing game that does not have fixed dialogue trees, as the AI researcher Michael Mateas and game developer Andrew Stern did in their groundbreaking relationship drama game Façade, has come to be seen as creating a new type of game rather than trying to repair an aspect of role-playing game design that has been broken since the beginning. Given the (justifiable) cautiousness of most large game developers and publishers, it is no wonder that the rather remarkable recent advances in AI methods are barely reflected at all in game development. Existing games do not need advanced AI because they are designed not to need it.

AI-Based Game Design Patterns

For someone like me, who cares deeply about both artificial intelligence and games, the natural question is how to change this. Advances in AI methods promise to make amazing new games possible, but because of conservative design and development practices, this is not yet happening. So how can we design games that actually need advanced AI methods?

That was the question a handful of my colleagues and I posed one cold January day in the attic of Schloss Dagstuhl, a German castle where we were organizing a seminar on the future of AI in games. We decided to investigate the different roles AI can play in games, trying to find examples from well-known or little-known games that use AI in such a way that you need to interact with and understand it to play the game well. We tried to categorize these into design patterns. The design patterns we came up with,1 some of which follow, could serve as inspiration for envisioning even more ways of designing around AI.

AI Is Visualized: In this design pattern, the internal workings of the AI algorithm are exposed to the player, and the player can use that information in game play. In other words, the player can see how one or several NPCs think by looking inside their minds. An example is the stealth game Third Eye Crime, where you are tasked with outsmarting security guards. The guard behavior is driven by an AI technique called occupancy maps, which creates a model of where the guards should explore next as they go looking for you. The trick here is that these occupancy maps are made visible to the player by being overlaid on the game map. In effect, the player can see the state of the guards' minds (figure 8.2). In order to play the game well, the player needs to understand the AI system well enough to predict what the NPCs will do.


Figure 8.2 In Third Eye Crime (Moonshot Games, 2014), the colors on the ground signal to the player both where the guards can currently see and where they are thinking of looking next, offering the player a view into the mind of the enemy.
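To make the idea concrete, here is a minimal sketch of an occupancy map of the kind described above, under my own simplifying assumptions (a grid world and uniform diffusion of belief); it illustrates the general technique rather than the actual code behind Third Eye Crime.

```python
import numpy as np

class OccupancyMap:
    def __init__(self, width, height):
        # Each cell holds the guard's belief that the player is there.
        self.belief = np.full((height, width), 1.0 / (width * height))

    def observe(self, visible_cells):
        # The guard looked at these cells and did not see the player,
        # so the belief there drops to zero and the rest is renormalized.
        for (y, x) in visible_cells:
            self.belief[y, x] = 0.0
        total = self.belief.sum()
        if total > 0:
            self.belief /= total

    def diffuse(self, rate=0.1):
        # The player may have moved, so belief leaks into neighboring cells.
        spread = (np.roll(self.belief, 1, 0) + np.roll(self.belief, -1, 0) +
                  np.roll(self.belief, 1, 1) + np.roll(self.belief, -1, 1)) / 4.0
        self.belief = (1 - rate) * self.belief + rate * spread
        self.belief /= self.belief.sum()

    def next_search_target(self):
        # The guard heads for the cell it currently believes in most.
        return np.unravel_index(np.argmax(self.belief), self.belief.shape)
```

Rendering the belief grid as colors on the floor, as the game does, is all it takes to turn this internal bookkeeping into the kind of readable "mind" that this pattern calls for.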

AI as Role Model: Many of the algorithms that underlie NPC behavior are relatively simple and easy to predict, as we saw in chapter 4. Instead of trying to make these algorithms more human-like, one intriguing game design idea is to make humans behave more like the algorithms. Spy Party is an asymmetric two-player game, where one player has to identify a human player in a group of NPCs and the other player tries to blend in as much as possible so as not to be identified by the first player while carrying out a mission that has been assigned to her. Blending in is best accomplished by trying to copy the NPCs' movement patterns and decision making (figure 8.3). In other words, one player needs to understand, through observation, how the algorithms that drive the NPCs' behavior work in order to copy that behavior, and the other player needs to understand the same behavior in order to discern the interloping human. One way of seeing this game mechanic is as a form of reverse Turing test. The basic concept behind the Turing test is highly appealing, and it's possible that many other interesting game mechanics could be built on it.


Figure 8.3 A scene from Spy Party (Chris Hecker, 2009) features a number of NPCs in a bar, and one player must try to blend in seamlessly with them.

AI as Trainee: The god game (or management simulator game, if you want a more mundane name for this genre) Black and White puts the player in the role of a local deity, influencing in various ways the lives of mostly hapless villagers (figure 8.4). The most important way to influence the villagers is through a giant creature, which acts as your embodied stand-in in the world. You cannot control this creature directly; instead, you must teach it how to interact with the villagers. You do this by rewarding and punishing it for its actions and by showing it by example what to do. The creature's behavior is driven by machine learning algorithms, which learn from your actions in real time as you play the game. To play this game well, you need to master the art of training the creature, which is a little bit like learning to train a dog: you can do it without understanding very much of what actually goes on in the dog's head.


Figure 8.4 The giant creatures in Black and White (Lionhead Studios, 2001) can do your bidding, but only if you train them well.
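As a rough illustration of how such feedback-driven training might look under the hood, here is a minimal sketch in which the creature keeps a learned "desire" for each action in each situation and nudges it whenever the player rewards or punishes it. The situations, actions, and learning rule are my own illustrative assumptions, not Lionhead's actual, considerably more elaborate system.

```python
import random
from collections import defaultdict

class Creature:
    def __init__(self, actions, learning_rate=0.2):
        self.actions = list(actions)
        self.lr = learning_rate
        # Learned desire to perform each action in each situation.
        self.desire = defaultdict(lambda: {a: 0.5 for a in self.actions})
        self.last = None  # (situation, action) awaiting player feedback

    def act(self, situation):
        # Choose an action with probability proportional to its learned desire.
        weights = [self.desire[situation][a] for a in self.actions]
        action = random.choices(self.actions, weights=weights)[0]
        self.last = (situation, action)
        return action

    def feedback(self, reward):
        # A stroke (reward > 0) or a slap (reward < 0) nudges the desire
        # for whatever the creature just did in that situation.
        if self.last is None:
            return
        situation, action = self.last
        d = self.desire[situation][action] + self.lr * reward
        self.desire[situation][action] = min(1.0, max(0.01, d))

# The creature gradually learns not to eat villagers when the village is hungry.
creature = Creature(["eat_villager", "water_field", "throw_rock"])
for _ in range(50):
    chosen = creature.act("village_hungry")
    creature.feedback(-1.0 if chosen == "eat_villager" else 1.0)
```

Teaching by example can be folded into the same scheme by treating a demonstrated action as if the creature had performed it and been rewarded for it.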

Another take on this particular pattern is to build games where you train agents that then compete or fight against each other, a little bit like the training mechanic of the Pokémon series but with actual machine learning instead of a simple role-playing game-style progress mechanic. One example of this is NERO (NeuroEvolution of Robotic Operatives), a research-based game by Ken Stanley, now at the University of Central Florida and Uber AI Labs. In that game, you train an army of miniature soldiers by designing various tasks for them and deciding what kind of behavior to reward them for.2 Another research game from my team, EvoCommander, is based on the same idea of training agents to do the player's bidding, but instead of training multiple agents, you train a number of "brains" (separate neural networks) for a single simulated robot (figure 8.5). When playing against another player, you then control the robot indirectly by selecting which brain it should use at each point in time.3


Figure 8.5 A family tree of brains in EvoCommander (Daniel Jallov, 2015). Before a match, you choose which of your trained brains to bring with you into battle.
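Underneath both games is some form of neuroevolution: a population of candidate "brains" is repeatedly evaluated on the tasks the player sets up, and the better-performing ones are mutated to form the next generation. The sketch below shows the bare skeleton of such a loop; the flat weight vectors, selection scheme, and mutation step are my own simplifications, whereas the actual games use far more sophisticated neuroevolution methods such as NEAT.

```python
import random

def evolve_brain(evaluate, weights_per_brain=40, population_size=50, generations=100):
    """Evolve a single 'brain' (here just a flat list of connection weights).

    evaluate(brain) -> score is the fitness function, i.e., the training
    exercise the player has designed for the agents."""
    population = [[random.uniform(-1.0, 1.0) for _ in range(weights_per_brain)]
                  for _ in range(population_size)]
    for _ in range(generations):
        # Rank brains by how well they do on the current training task.
        ranked = sorted(population, key=evaluate, reverse=True)
        parents = ranked[:population_size // 5]  # keep the best fifth
        # Refill the population with mutated copies of the good brains.
        population = parents + [
            [w + random.gauss(0.0, 0.1) for w in random.choice(parents)]
            for _ in range(population_size - len(parents))
        ]
    return max(population, key=evaluate)
```

In EvoCommander, the product of many such training sessions is a library of brains, and the in-match game then becomes deciding which of them to activate from moment to moment.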

AI Is Editable: You can also design a game around directly editing the instructions for the algorithms that control the behavior of an agent. The board game RoboRally is proof that a very successful game can be built around such a mechanic. In RoboRally, each turn, every player chooses the instructions that her robot will carry out that turn. Although the "programming" here is simplistic, predicting the resultant behavior is very challenging because all players' robots carry out their programs in parallel.
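Here is a minimal sketch of why prediction gets hard: every robot executes its chosen program one instruction at a time, in lockstep with everyone else's. The tiny instruction set and the absence of board elements and collisions are simplifications of mine; the board game itself adds many more complications on top of this.

```python
def run_round(robots, programs):
    """robots: {name: (x, y, (dx, dy))} with position and facing direction.
    programs: {name: list of instructions}, all programs the same length.
    Executes everyone's programs in lockstep, one instruction at a time."""
    steps = len(next(iter(programs.values())))
    for step in range(steps):
        for name, program in programs.items():
            x, y, (dx, dy) = robots[name]
            op = program[step]
            if op == "forward":
                x, y = x + dx, y + dy
            elif op == "turn_left":
                dx, dy = -dy, dx      # rotate facing 90 degrees counterclockwise
            elif op == "turn_right":
                dx, dy = dy, -dx      # rotate facing 90 degrees clockwise
            robots[name] = (x, y, (dx, dy))
    return robots

# Two robots whose simple plans interleave into less obvious trajectories.
print(run_round({"red": (0, 0, (1, 0)), "blue": (3, 0, (-1, 0))},
                {"red": ["forward", "turn_left", "forward"],
                 "blue": ["forward", "forward", "turn_right"]}))
```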

A more advanced example is the network editor mode of Galactic Arms Race, another research-based game by Ken Stanley's team. Galactic Arms Race is a third-person space shooter built around a unique form of search-based procedural content generation (figure 8.6). Weapons in this game are controlled by neural networks, which decide how the particles fired by the player's spaceship behave. Players can collect and discard weapons throughout the game world, and at any point they can switch between several equipped weapons. Weapons are created through a collaborative evolutionary algorithm in which all players of the game act as a fitness function; new weapons are the offspring of the weapons that players choose to use the most. This is in itself a very interesting use of AI techniques in the game, though more in a background role, because players do not need to understand the weapon-generating evolutionary algorithm to play the game. The AI-is-editable design pattern was introduced in an extension to the game that makes it possible to manually edit the neural networks defining the weapons. The structure of a neural network is generally hard for humans to understand, meaning that this editing mode is not for everyone; but for some players, editing the neural network to try to get a desired weapon behavior is an engaging puzzle game in itself.4


Figure 8.6 Evolved weapons in Galactic Arms Race (Evolutionary Games, 2009).
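The core of that collaborative evolutionary loop can be sketched in a few lines: the weapons that players across the game fire the most are the most likely to become parents of the next weapon that spawns. The flat weight-vector encoding below is an illustrative assumption of mine; in the real game, the weapons are neural networks evolved with a variant of the NEAT algorithm.

```python
import random

def spawn_new_weapon(weapon_pool, usage_counts, mutation_strength=0.1):
    """weapon_pool: list of weapons, each a flat list of network weights.
    usage_counts: how often players have fired each weapon; this implicit
    vote stands in for an explicit fitness function."""
    parent = random.choices(weapon_pool, weights=usage_counts)[0]
    # The new weapon is a mutated copy of a popular parent.
    return [w + random.gauss(0.0, mutation_strength) for w in parent]
```

The editing mode then essentially hands the player direct access to the evolved networks that this loop produces.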

AI Is Guided: Yet another idea for how to design a game around AI so that the player needs to interact with and understand it is to have game characters controlled by AI algorithms, but imperfectly so, either because you limit what the algorithms can do or because the tasks the game characters are asked to perform are too complex. The player will then need to act as a guide or manager for the agents, giving them high-level commands or guiding them through operations they cannot perform by themselves. An excellent example of this design pattern is the enormously successful The Sims series of games. These games can best be described as life simulators or virtual dollhouses, where you control a family of characters as they go about their lives. You need to make all the large life decisions for them, such as where to build a house, but in many cases you also need to help out with small tasks, such as making sure there are pots and pans available for cooking. But the characters also have a say. The Sims games feature complex AI systems that control the characters, so that they not only perform autonomous actions such as going to the bathroom and cooking dinner but also strike up friendships and fall in love (figure 8.7). Playing the game is a constant balancing act between the player and the AI system. Crucially, the game frequently communicates the state of its AI systems via little thought bubbles above the characters' heads, allowing the player to understand what goes on.


Figure 8.7 A romantic encounter in The Sims 4 (Maxis, 2014).
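For a sense of how such autonomy can be implemented, here is a minimal sketch of needs-driven action selection: each character tracks a handful of needs, each possible action advertises how much it would relieve them, and the character autonomously picks whichever action scores highest. The specific needs, actions, and scoring rule are my own illustrative assumptions rather than Maxis's actual system.

```python
def choose_action(needs, actions):
    """needs: {"hunger": 0.8, "social": 0.3, ...}, higher means more urgent.
    actions: {"cook dinner": {"hunger": 0.9}, ...} mapping each action to the
    needs it relieves and by how much."""
    def utility(effects):
        # An action is attractive in proportion to how urgent the needs it
        # relieves currently are.
        return sum(needs.get(need, 0.0) * relief
                   for need, relief in effects.items())
    return max(actions, key=lambda a: utility(actions[a]))

# With hunger more pressing than loneliness, the character heads for the stove.
print(choose_action({"hunger": 0.8, "social": 0.3},
                    {"cook dinner": {"hunger": 0.9},
                     "chat with neighbor": {"social": 0.7},
                     "watch tv": {"fun": 0.5}}))
```

Surfacing the chosen action, and the need behind it, as a thought bubble is what turns this internal scoring into something the player can read and plan around.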

Of course, this is just a small subset of the many, many possible ways in which AI can be used in visible roles within video games. And I have mentioned only one pattern involving procedural generation and none building on player modeling. It is pretty clear that there is a vast and underexplored design space out there, with plenty of novel game design ideas available for those who look beyond established genres and preconceptions about what parts AI can and cannot play.

Notes