1
Searching for Bobby Fischer

The definition of the term “video game” remains unsettled as digital entertainment continues to evolve. Broadly speaking, the term encompasses any entertainment experience powered by electronic logic circuits that requires a player to manipulate an input device to interact with objects presented on a display. By this definition, the development of the first video games coincided with the rise of digital computing in the latter half of the twentieth century. While computers had existed for well over a century by that point, their function had been markedly different and unsuited to playing a game. In the eighteenth and early nineteenth centuries, a computer was merely a person doing basic addition and subtraction to complete mathematical tables.1 In the late nineteenth century, human computers were augmented by analog devices like Lord Kelvin’s tide predictor that physically simulated specific phenomena through the aid of mechanical devices like levers, pulleys, and gears. In the early twentieth century, the analog computer largely became the domain of the electrical engineer, who used room-sized machines full of resistors, capacitors, and inductors to simulate the operation of power grids as the developed world rapidly electrified. Analog computing reached its high-water mark in 1931 when MIT professor and electrical engineer Vannevar Bush completed his differential analyzer, which could solve a wide array of differential equations but remained limited to a relatively small set of problems.2

As the first digital computers entered development in the late 1930s, the goal of their creators remained solving specific equations, making them little more than complicated and expensive calculating machines. A computer created under this philosophy merely replaced human computers – laborers undertaking arithmetic tasks that required little independent thought. By the conclusion of World War II, which saw the construction of the first electronic digital computers, there was a small group of engineers, mathematicians, and logicians who felt it should be possible to create a computer that could think for itself. When it came time to prove to the rest of the world this dream was possible, these thinkers felt the most effective way to do so was to craft a computer program able to match a human opponent in a strategy game.

Over the next decade and a half, multiple individuals undertook the challenge of creating an artificial intelligence (AI) that could play a convincing game of checkers or chess, all in aid of a larger goal of creating a computer program that could learn and think for itself. By the early 1950s, these pioneers had determined the basic obstacles they would need to overcome to achieve this goal and had developed shortcuts like decision trees and alpha-beta pruning derived from the nascent field of game theory to overcome these difficulties. Applying these methods to fashion a credible intelligence took longer, with engineers hampered primarily by the capability of the hardware architectures and programming languages with which they were forced to work. These engineers were not striving to create entertainment products, but their research represents some of the earliest instances of developing a computer program to play a game on a computer against an artificial opponent.

***

In early 1935, a recent mathematics graduate of King’s College, Cambridge, named Alan Mathison Turing attended a course taught by King’s College fellow Max Newman regarding certain foundational principles of mathematics and logic while awaiting word on his own application to become a fellow at the college. One topic Newman discussed was systematic procedures that can be accomplished step-by-step by following instructions without any real understanding of how the procedures work. According to Newman, these tasks required so little knowledge or insight that even a machine could undertake them successfully.3

Unlike many of his mathematical peers, Turing was not content to confine his studies to purely philosophical constructs. Possessed of both a love of invention and a desire to explore abstract concepts through the material world, Turing found his imagination fired by Newman’s concept of a machine performing calculations based on instructions. For the next year, he attacked the Entscheidungsproblem originally posed by David Hilbert in 1928 through the lens of a theorized mathematical device inspired by Newman’s lecture.

Hilbert, the dean of the international mathematics community, had long believed that no problem in mathematics was unsolvable, and his Entscheidungsproblem posited that any mathematical formula could be proven through formal logic to be universally valid. To examine this assertion, Turing envisioned a universal machine consisting of a scanner reading an endless roll of paper tape. The tape would be divided into individual squares that could either be blank or contain a symbol. By reading these symbols based on a simple set of hardwired instructions and following any coded instructions conveyed by the symbols themselves, the machine would be able to carry out any calculation possible by a specialized machine, output the results, and even incorporate those results into a new set of calculations. In April 1936, Turing completed his paper, entitled “On Computable Numbers, with an Application to the Entscheidungsproblem” and demonstrated that his universal machine proved Hilbert incorrect. While merely a hypothetical device employed to serve a theoretical purpose, Turing’s universal machine defined the basic attributes of the general-purpose digital computer.4

With the onset of World War II, Turing joined the Government Code and Cypher School at Bletchley Park to decipher high-level German military codes and took part in the construction of increasingly elaborate cryptographic machines – called bombes for the ticking noises they emitted while decrypting – to crack the German military’s Enigma code. With what had once been a hobbyist interest in cryptology now transformed into his primary vocation, Turing increasingly turned to math and logic problems to unwind in what little leisure time his work afforded him. In particular, he began pondering whether there might be a machine method to play a “perfect” game of chess.5

Game theory was still in its infancy in the 1940s, but the concept of “minimaxing,” the idea that in any contest where both participants possess complete information regarding the state of play there exists a pair of strategies allowing each player to minimize his maximum possible loss, had already been articulated in 1928 by one of the founders of the discipline, John von Neumann. In 1941, Turing began discussing with fellow mathematician Jack Good how a device akin to his universal machine might employ minimaxing theory to play a game of chess. Turing and Good went so far as to construct a basic decision tree for chess and to determine that the number of moves the computer would have to “look ahead” needed to be variable so that the computer could accurately identify potential capture moves, but then set the problem aside.6

In early 1943, Turing traveled to Washington, DC to share information with the cryptographers employed by the U.S. Navy. While abroad, he called on Bell Labs, where he discussed his universal machine with Claude Shannon. An electrical engineer with a master’s degree from MIT, Shannon combined a passion for logic with practical experience working with Bush’s differential analyzer to conceptualize a calculating device incorporating Boolean logic not just to solve mathematical problems but also to carry out symbolic logic tasks. Shannon’s work paved the way for machines that could not just perform a rote operation, but actually decide how to tackle problems on their own, for almost any decision, no matter how complex, can be broken down into a basic series of yes or no, on or off, 1 or 0 propositions. Published in his master’s thesis in 1937, Shannon’s framework for the binary digital circuit proved instrumental to transforming Turing’s idea of a universal machine into reality. Turing and Shannon held several discussions during their time together at Bell, which helped crystallize both their desires to build machines that could go beyond the computing devices of the day and actually reason.7

Shortly after returning to Bletchley, Turing struck up a friendship with newly arrived Oxford student Donald Michie, who had begun his course of study in the classics, but ended up at Bletchley Park when he took a cryptology course in the hopes of making himself useful to the war effort and discovered a heretofore unknown aptitude for code breaking. In Michie, Turing discovered a kindred soul interested in chess, probability, and teaching machines to think.8 By now, the Bletchley cryptologists had turned to breaking more complicated German ciphers than Enigma for which even the elaborate bombe machines proved insufficient. In a new section headed by Max Newman, electrical engineers were now building more complicated electronic code-breaking equipment, culminating in the revolutionary Colossus completed by Tommy Flowers in 1944, the world’s first fully functional electronic digital computer.9 Electronic machines were now fully integrated into the daily lives of the Bletchley cryptographers, so talk naturally turned toward what should be done with the technology after the war. From his discussions with Good, Shannon, and Michie, Turing decided the only logical course was to construct a machine that could think for itself.

***

The earliest electronic computers like Colossus were wholly unsuited to serve as the basis for a thinking machine, for they were built for highly specific tasks, and their function was entirely governed by their hardware, so they could only be reprogrammed for a new task by physically rewiring the machine.10 This bottleneck could be eliminated if computer programs dictating the operation of the computer could also be stored in memory alongside the numbers they were manipulating. In theory, the binary circuits envisioned by Shannon could represent these instructions via symbolic logic, but in practice the vacuum tubes found in early computers could only house around 200 characters in memory, which was fine for storing a few five- or ten-digit numbers for calculations, but not for instruction sets that required thousands of characters. In the late 1940s, engineers began developing more expansive memory options to create the first stored-program computers.

One of the first people to whom Turing gave a copy of his landmark 1936 paper was its principal inspiration, Max Newman. Upon reading it, Newman became interested in building a Universal Turing Machine himself. He tried to interest Tommy Flowers in the paper while he was building his Colossi for the Newmanry at Bletchley Park, but Flowers was an engineer, not a mathematician or logician, and by his own admission did not really understand Turing’s theories. As early as 1944, Newman began expressing enthusiasm for applying wartime electronic advances to a project to build a Universal Turing Machine at the war’s conclusion.11

In September 1945, Newman took the Fielden Chair of Mathematics at Manchester University and soon after applied for a grant from the Royal Society to establish the Computing Machine Laboratory at the university. After the grant was approved in May 1946, Newman arranged for portions of the dismantled Colossi to be shipped to Manchester for reference and assembled a team to tackle a stored-program computer project. Perhaps the most important members of the team were electrical engineers Freddie Williams and Tom Kilburn.12

While working on radar during the war, Williams and Kilburn developed a method through which a cathode ray tube (CRT) could “remember” a piece of information by virtue of firing an electron “dot” onto the surface of the tube to create a persistent charge well. By placing a metal plate against the surface of the tube, this data could be “read” via a voltage pulse transferred to the plate whenever a charge well was created or eliminated by drawing or erasing a dot. Originally developed to eliminate stationary background objects from a radar display, a Williams tube could also serve as computer memory and store 1,024 characters. As any dot on the tube could be read at any given time, it functioned as an early form of random access memory (RAM).13

In June 1948, Williams and Kilburn completed the Manchester Small Scale Experimental Machine (SSEM) to test the viability of the Williams tube as a computer memory device. While this computer contained only 550 tubes and was therefore not practical for computing projects, the SSEM was the first device in the world with all the characteristics of a stored-program computer. Building on this work, the team completed the Manchester Mark 1 in October 1949.

Meanwhile, Turing took up residence at the National Physical Laboratory (NPL) after the war to commence his own attempt at building a universal machine, a stored-program computer he named the Automatic Computing Engine (ACE). He remained convinced that chess represented one of the best avenues for pursuing machine learning, so in the summer of 1948 he and economist David Champernowne began developing a chess program in their spare time that could run on the computer. Meanwhile, Michie returned to Oxford to continue his studies, but did not forget his discussions with Turing at Bletchley regarding a computer capable of playing chess. Along with colleague Shaun Wylie, he outlined on paper a basic chess-playing program christened Machiavelli. When Turing learned of Michie’s work in September 1948, he quickly applied the finishing touches to his own program, dubbed Turochamp after its creators, with the intention of pitting the two systems against one another.14

By the time he completed Turochamp, Turing had departed NPL to join Newman at the University of Manchester after becoming increasingly frustrated by bureaucratic obstacles placed in the path of his overly ambitious design for the ACE computer. Therefore, the Manchester Mark 1 served as the new target platform for Turochamp and Machiavelli. Unfortunately, the programs proved too complicated for the computer, and Turing never succeeded in making either one operational before his untimely death in 1954. Nevertheless, these two programs represent the earliest known attempt to harness a digital computer to play a game.

***

Alan Turing may have left NPL in 1947, but his quest to build a universal machine continued in scaled-down fashion in the form of the Pilot ACE computer, completed in 1950 by a team that included Donald Davies. A follower of Turing, Davies also held a fascination for the possibility of playing games on a computer and authored a paper that appeared in the June 1950 edition of Penguin Science News entitled “A Theory of Chess and Noughts and Crosses.”15 This article exerted a profound impact on the first generation of computer programmers in Britain, including a scientist named Christopher Strachey.

Born in 1916, Strachey descended from a distinguished family of artists and puzzle solvers. His father, Oliver, became a cryptographer during World War I and was named a Commander of the Order of the British Empire (CBE) for helping establish the Canadian cryptographic operation during World War II, while his uncle Lytton was a member of the celebrated Bloomsbury Group of writers and intellectuals. Christopher received a fine secondary education and matriculated to Cambridge in 1935, where he socialized with Turing but was most likely not introduced to his theories. An indifferent student, Strachey failed to achieve sufficient marks to earn a desired research position at the college and took a low-paying job as a physicist with Standard Telephones and Cables (STC). After doing radar and radio research for STC during the war, he took a position as a schoolmaster at St. Edmunds in 1945 before moving to the more prestigious Harrow School in 1949.16

A lover of puzzles and games, Strachey became interested in the nascent research into automatic computing devices before the war, but he never had an opportunity to engage in the field himself. In January 1951, he gained an introduction to a member of the Pilot ACE team named Mike Woodger through a mutual friend and spent a day learning the inner workings of the machine. Inspired by Davies’s article and his own love of games, Strachey resolved to gain a greater understanding of the machine by programming it to play draughts.17

That May, Strachey returned to NPL with what he thought was a complete draughts program, but it failed to run due to programming errors. By the time he corrected the defects a month or so later, the Pilot ACE had undergone a hardware upgrade that would have forced a major redesign of the program. By then, Strachey had learned of the Ferranti Mark 1 at the University of Manchester, a commercial update of the original Manchester Mark 1 that included greater storage capacity. In July 1951, he traveled to Manchester to see the computer and consult with Turing, who had recently completed the programmer’s handbook for the machine. Turing encouraged Strachey to switch his programming efforts to the Mark 1, though he also convinced Strachey to put his draughts program aside in favor of other programming pursuits. In November 1951, Strachey accepted a job with the National Research and Development Corporation (NRDC) and returned to his draughts program. He completed it over the next year, with most of the programming occurring in June and July 1952.18

Strachey’s draughts program represents an important milestone in computer programming. The match was played between a human player, who flicked toggle switches on the computer to identify the piece he wanted to move and the space to which he wanted to move it, and the computer itself, which displayed its moves via a teletype. The computer opponent incorporated a heuristic algorithm that looked ahead several moves to determine the best course of action, making it perhaps the first software program capable of thinking for itself. Strachey also manipulated the Mark 1’s CRT storage tubes 3 and 5 so that one would constantly display the current state of the board, while the other could be used to preview moves.19 This is most likely the first graphical display used by a computer program.20

The United Kingdom, so central to the development of computer technology in the 1940s through computers like the Colossus, the Manchester Mark 1, and the Pilot ACE, fell increasingly behind as the 1950s progressed. Conservative British businesses refused to embrace computer technology, while Continental Europe was still recovering from the devastation of the war.21 Consequently, the market for British computers remained limited, allowing domination of the international market by firms in the United States.

***

In America, the major axis for early electronic computer research ran on a line between two Ivy League institutions, the University of Pennsylvania and Princeton. At Pennsylvania, John Mauchly and J. Presper Eckert built the first widely known and influential electronic computer, ENIAC, between 1943 and 1945 to calculate ballistic tables for the U.S. Army. Subsequently, the duo collaborated with Princeton mathematician John von Neumann to tackle the stored-program concept.22 In his “First Draft of a Report on the EDVAC,”23 distributed in June 1945, von Neumann articulated the first logic design for a stored-program computer and defined its basic building blocks as the input, the output, the control unit, the arithmetic unit, and the memory. This so-called “von Neumann architecture,” largely defined by Eckert and Mauchly despite the name, remained the basic architecture of the digital computer for over 50 years.24

After collaborating on the early stages of the EDVAC project, von Neumann began development on his own computer at the Institute for Advanced Study (IAS) at Princeton. Although not completed until 1952, the specifications for von Neumann’s IAS Machine were widely disseminated in the late 1940s, so when the U.S. government began investing heavily in computer technology as defense spending skyrocketed with the onset of the Korean War, over a dozen universities, research laboratories, and government think tanks adopted the IAS Machine architecture as the basis for their own stored-program computers, including the University of Illinois (ILLIAC I, 1952), the Los Alamos National Laboratory (MANIAC I, 1952), and the RAND Corporation (JOHNNIAC, 1953). As in Britain, it would not take long for engineers and mathematicians interested in thinking machines to co-opt these computers for their own research, which often involved programming games.

As Turing set the tone for early research into thinking machines in Britain, his friend at Bell Labs, Claude Shannon, did the same in the United States. Like Turing, Shannon attacked the problem of playing chess on a computer through the lens of game theory. Unlike Turing, who worked on his program only informally and did not publish any of his work until 1953, Shannon built an electromechanical machine called Caissac in 1949 to test some of his theories and released his findings in 1950 in two articles: “A Chess-Playing Machine” in the February edition of Scientific American and “Programming a Computer for Playing Chess” in the March issue of Philosophical Magazine.25

In his Philosophical Magazine article, Shannon articulated both why chess was the perfect game around which to build a thinking machine and how best to craft a program with a reasonable chance of defeating a human opponent. Shannon presented a game of chess as a complex decision tree in which each node represents a specific board layout and each branch outlines a possible move. Theoretically, a computer could play a perfect game of chess by working backwards through the decision tree for any given board state. Unlike simpler games such as tic-tac-toe, in which such a “brute force” approach is relatively simple, a game of chess can unfold in upwards of 10^120 variations, meaning that a computer running 1 million variations per second would need over 10^90 years just to calculate the first move of the game. This was far beyond the capabilities of the computers of the day, so the only way for one to play a game of chess in a reasonable period of time was to react and adapt to a human player through a basic understanding of the value of particular pieces and strategies.

Shannon solved this problem by having the computer only track moves to a certain depth on the tree – in his case, two moves – and then choose the best move under the circumstances. This would be determined by evaluating a series of static factors such as the value and mobility of pieces – weighted based on their importance in the decision-making process of expert chess players – and combining these values with a minimaxing procedure. Although this plausible move method would ultimately be discarded as computers became more sophisticated and could analyze many potential moves in a small amount of time, Shannon’s approach defined the parameters of computer chess research for the next two decades.
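Shannon's scheme, searching to a fixed depth, scoring the frontier positions with a static evaluation, and backing the values up by minimaxing, can be sketched in a few lines. The sketch below is illustrative only: the `moves`, `apply_move`, and `evaluate` callables are hypothetical stand-ins for a real game interface, and Shannon's actual evaluation function weighted piece values and mobility for chess.

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Search the decision tree to a fixed depth, score the frontier
    with a static evaluation, and back values up via minimaxing."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)          # static evaluation at the frontier
    if maximizing:
        return max(minimax(apply_move(state, m), depth - 1, False,
                           moves, apply_move, evaluate) for m in legal)
    return min(minimax(apply_move(state, m), depth - 1, True,
               moves, apply_move, evaluate) for m in legal)
```

Against a toy two-ply tree the routine returns the value of the move whose worst reply is least damaging, exactly the minimaxing idea von Neumann articulated.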

The first person to apply Shannon’s ideas to a working program did so not through chess, but through checkers. Electrical engineer Arthur Samuel became an expert on vacuum tubes after joining Bell Labs in 1928 and worked to improve radar displays during World War II. After the war, he took a teaching position at the University of Illinois and became involved with the nascent ILLIAC computer project. In 1947, when it seemed no one was willing to fund additional work on the machine, Samuel and his colleagues toyed with building a stripped-down version that could perform a few headline-grabbing feats to acquire additional funding. At the time, Shannon was traveling the country delivering lectures on developing a chess-playing computer, and Samuel figured that if Shannon were closing in on solving the chess problem (which he actually was not), then it should be a snap to create a checkers program capable of defeating a human opponent. Coincidentally, a world championship checkers match would be held the next spring in the nearby town of Kankakee, so Samuel decided to throw together a program capable of defeating a champion player to generate headlines at the event and secure crucial funding.26

Building a competent checkers program proved more difficult than Samuel expected, and his design remained unfinished when he left Illinois for International Business Machines (IBM) in 1949. After some delay, the tabulating machine and office equipment giant was preparing to embrace the electronic computer revolution, and Samuel contributed important breakthroughs in CRT storage that allowed the company’s debut 701 computer, yet another offshoot of von Neumann’s IAS Machine, to incorporate memory that was both higher capacity and more reliable than that found in earlier machines. Needing a program to test the new machine’s instruction set, Samuel returned to his checkers program and had it working by the end of 1952.27

Samuel continued refining his program over the next decade, mostly working on his own time because IBM frowned on its engineers engaging in frivolous activities like teaching a computer to play checkers. He earned a small measure of respect in 1955, however, after the program gained the ability to apply past experience to future games, making it the first known program that could learn and improve over time. Like Shannon’s proposed chess program, Samuel’s checkers game evaluated the board by searching a decision tree to a certain depth and made a move decision based on static factors and minimaxing, but it also maintained a record of each board position previously encountered and the value of previous moves made from that position, allowing it to favor more effective moves over time. IBM management was so impressed by this breakthrough that it demonstrated the program on national television on February 24, 1956.28
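The record-keeping that let Samuel's program improve, often called rote learning, can be sketched as a cache of backed-up position values consulted at the search frontier. The sketch below is a loose illustration, not Samuel's actual routine; its game interface (`moves`, `apply_move`, `evaluate`) is hypothetical.

```python
class RoteLearner:
    """Minimax searcher that remembers backed-up position values,
    so a position seen in an earlier game is scored by that deeper
    search rather than by the raw static evaluation."""

    def __init__(self, moves, apply_move, evaluate):
        self.moves = moves
        self.apply_move = apply_move
        self.evaluate = evaluate
        self.memory = {}   # position -> backed-up value from past searches

    def value(self, state, depth, maximizing):
        legal = self.moves(state)
        if depth == 0 or not legal:
            # Prefer a remembered backed-up value: the "learning" step.
            return self.memory.get(state, self.evaluate(state))
        scores = [self.value(self.apply_move(state, m), depth - 1,
                             not maximizing) for m in legal]
        best = max(scores) if maximizing else min(scores)
        self.memory[state] = best      # record for future games
        return best
```

Because the stored value of a position already reflects a look-ahead from that position, reusing it effectively extends the search depth each time the position recurs, which is how the program came to favor more effective moves over time.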

In 1962, Samuel challenged a champion checkers player named Robert Nealey to a match against his program, which emerged victorious.29 At first glance, this feat stands as proof that a thinking computer had mastered its first game, but the truth of the matter is different. Nealey was merely the self-proclaimed “world blind checkers champion,” a title he claimed by default when no other players showed up for a tournament held in Peoria, Illinois.30 Furthermore, analysis of the match between man and machine demonstrates that Nealey executed several poor moves that a true grand master would never make, so the program won more through luck than skill. Indeed, a rematch was arranged the next year that Nealey won handily, and when Samuel took his checkers program to a championship tournament in 1966, it went 0-8 against two actual grand masters. Therefore, while Samuel’s program is noteworthy as the first capable of self-learning, it never achieved its author’s original goal of defeating a true champion player.31

***

While Turing and Shannon were among the first to ponder the ramifications of a thinking machine, a professor named John McCarthy gave it a name and an academic discipline. Born in 1927 to communist parents, McCarthy became interested in science at an early age when he read a Soviet technology book called 100,000 Whys. A precocious student, he studied college-level calculus on his own time while a junior at Belmont High School in Los Angeles, California, and skipped 2 years of math courses when he matriculated to the California Institute of Technology (Caltech) in 1944. Upon graduating with a B.S. in mathematics in 1948, McCarthy did a year of graduate work at Caltech and attended the university’s Hixon Symposium on Cerebral Mechanisms in Behavior in September 1948. During the conference, several eminent psychologists gave lectures on the workings of the human brain, but the highlight of the show for McCarthy was a lecture by game theory and computer pioneer John von Neumann on self-replicating automata – machines capable of creating copies of themselves. While no lectures were delivered regarding thinking machines, von Neumann’s lecture sparked McCarthy’s interest in trying to create an automaton that could model human intelligence.32

In 1949, McCarthy moved to Princeton to continue his studies, where he earned his PhD in mathematics in 1951 and subsequently became an instructor. The next year, one of his graduate students suggested McCarthy collect papers on machine intelligence. McCarthy contacted Shannon, and together they published a book of papers called Automata Studies. Few of the papers related to machine intelligence, however, which disappointed McCarthy.33 After a brief foray to Stanford, McCarthy established himself at Dartmouth in 1955 and attempted to attract interest in machine intelligence again. Inspired by Defense Department workshops, he decided to stage a gathering of eminent thinkers in the field to work on projects and solve problems together.34 Remembering the confusion engendered by the term “automata studies” in 1952, he decided to name the event the Summer Research Project on Artificial Intelligence.

With funding from the Rockefeller Foundation, McCarthy successfully hosted his AI workshop in the summer of 1956 with the aid of Shannon, Harvard professor Marvin Minsky, and Nathaniel Rochester of IBM. While it failed to achieve all its lofty goals, the workshop established AI research as a distinct discipline within computer science. It also exposed McCarthy to the work of Carnegie Institute of Technology (later Carnegie-Mellon University) professors Allen Newell and Herbert Simon.35

At the time of the conference, Newell and Simon were working with RAND employee Cliff Shaw to develop a high-level programming language called Information Processing Language (IPL) on RAND’s JOHNNIAC computer to serve as the heart of a program called the Logic Theory Machine that could complete elementary logic proofs. Unlike other programming languages, IPL was created not to instruct a computer to perform calculations, but to serve as the basis for an AI. Therefore, the language incorporated concepts like dynamic memory allocation and recursion and was the first language built around list processing. McCarthy loved the idea of list processing, but preferred working in FORTRAN, the pioneering high-level language developed by John Backus at IBM in 1956. FORTRAN could not perform recursion, however, which was essential to AI programming, so McCarthy developed his own programming language combining the best features of FORTRAN and IPL called List Processing Language, or LISP.36
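The kind of task that made recursion essential, walking a nested symbolic structure of arbitrary depth, is easy to illustrate. The sketch below uses Python rather than IPL or LISP, purely to show the pattern early FORTRAN, lacking recursion, could not express directly.

```python
def count_atoms(expr):
    """Count the atomic symbols in a nested list, the way a LISP
    program walks an S-expression: handle the atom case directly,
    otherwise recurse into each sublist."""
    if not isinstance(expr, list):
        return 1                            # an atom
    return sum(count_atoms(e) for e in expr)
```

A symbolic expression such as `['plus', 'x', ['times', 'y', 'z']]` nests to no fixed depth, so the routine must call itself on each sublist, which is precisely the capability recursion provides.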

Later in 1956, McCarthy moved to MIT and reunited with Minsky, who had been doing research into neural networks at Harvard. They convinced the electrical engineering department at MIT to establish an AI Lab to further their research with a staff initially consisting of a secretary, two programmers, and six graduate students. In spring 1959, McCarthy decided to offer the first computer course at the university open to freshmen, one of whom was a gifted electrical engineering student named Alan Kotok.37

Raised in the New Jersey suburbs of Philadelphia, Kotok began wiring lamps on his own at the tender age of six. In high school, he participated in a field trip to a Socony-Mobil research facility in Paulsboro, New Jersey, where he not only saw his first computer, but had the opportunity to run through a simple programming exercise on the machine. From that day forward, Kotok knew his destiny lay with computers, so he enrolled at MIT and leapt at the chance to take McCarthy’s freshman programming course. The class only increased Kotok’s desire to program on the department’s IBM 704 computer, so at its conclusion he and friends Elwyn Berlekamp, Michael Lieberman, Charles Niessen, and Robert Wagner approached McCarthy and begged for the chance to do a project for him. McCarthy had just barely started implementing a chess program on the 704, so he turned the project over to the students.38

By 1959, chess programs had made great strides after over half a decade of frustration, but no program had emerged that could play the game competently. In 1956, James Kister, Paul Stein, Stanisław Ulam, William Walden, and Mark Wells completed the first chess program that adhered to Shannon’s methods on the MANIAC I at Los Alamos,39 but only by reducing the size of the board from 8 × 8 squares to 6 × 6 and eliminating bishops. The next year, IBM employees Alex Bernstein, Michael de V. Roberts, Timothy Arbuckle, and Martin Belsky implemented a complete chess program on an IBM 704 by reducing the number of moves examined by the computer through a series of “plausible move generators,” so the computer only considered 2,500 of over 800,000 possible permutations. AI pioneers Newell, Simon, and Shaw took their own crack at the problem in 1958 by developing the NSS chess program on the JOHNNIAC.40 Programmed in the IPL language, the NSS program used a more sophisticated move evaluation routine than Bernstein and managed to defeat a human opponent – a novice taught the rules of the game right before the match – but it was still a weak player.41

After 3 years of work, Kotok and company unveiled a chess program on an IBM 7090 – the successor to the 704 at MIT – that for the first time could play the game on the level of a passable amateur. The advance allowing the program to play a better game of chess than its predecessors was a technique called alpha-beta pruning, in which the computer stops evaluating a line of play as soon as it finds a reply proving that line worse than an option already examined. By employing this procedure, the program could safely ignore large portions of the decision tree without changing the outcome of the search, considering far fewer moves than earlier programs while discarding only poor choices.42 Kotok published an undergraduate thesis describing the program in 1962 before he and his friends set it aside.43
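Alpha-beta pruning itself can be sketched compactly: the search tracks the best score each side can already guarantee (alpha and beta) and abandons a branch the moment it falls outside that window. This sketch is illustrative only; the `moves`, `apply_move`, and `evaluate` callables are hypothetical stand-ins for a real chess interface.

```python
def alphabeta(state, depth, alpha, beta, maximizing,
              moves, apply_move, evaluate):
    """Minimax search that prunes branches which cannot affect
    the final decision."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    if maximizing:
        best = float('-inf')
        for m in legal:
            best = max(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, False,
                                       moves, apply_move, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:
                break   # opponent will never allow this line: prune
        return best
    best = float('inf')
    for m in legal:
        best = min(best, alphabeta(apply_move(state, m), depth - 1,
                                   alpha, beta, True,
                                   moves, apply_move, evaluate))
        beta = min(beta, best)
        if alpha >= beta:
            break       # prune
    return best
```

On the same tree a plain minimax would score every leaf; here, once one reply refutes a candidate move, the remaining replies to that move are never evaluated, yet the backed-up value is identical.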

In 1965, an MIT student named Richard Greenblatt learned about the Kotok group’s work and built his own program incorporating his own vast chess knowledge through a series of 50 heuristics to create what he called MacHack, which played a better game of chess than any program to come before it. In 1967, Greenblatt took MacHack VI to a chess tournament to challenge human opponents. It defeated a player with a rating around 1,400, roughly the level of an average tournament player, though it also lost several matches.44 Although AI research still had a long way to go, the dream of Turing and Shannon that a machine could exhibit sufficient intelligence to play a complex strategy game with some degree of skill had been realized.

In the 1950s and 1960s, programming a computer to play checkers or chess was simply an academic exercise intended to improve the state of AI research and to pave the way for true intelligent machines. These programs were played almost exclusively by a small circle of academics and rarely packaged in a manner suitable for public consumption. Truly, they were not “games” at all, as they just replaced one of the players in a two-player contest. Nevertheless, AI research represented one of the two primary avenues for computer game creation in the 1950s. The other was the military-industrial complex, which created increasingly sophisticated simulations as the decade progressed.