3 Explaining the Ineffable
Margaret A. Boden
Preamble
Creativity was highlighted as a key concern by one of the fathers of artificial intelligence (AI) right at the beginning of his AI career—and right at the end of it, too.
In 1958, Herbert Simon confidently declared in the management journal Operations Research, “It is not my aim to surprise or shock you. … But the simplest way I can summarize is to say that there are now in the world machines that think, that learn and that create” (Simon and Newell 1958, 6; italics added). And in an internal RAND memo, “The Processes of Creative Thinking,” he had recently outlined some specific ideas about problem-solving in seemingly intractable areas where comprehensive search was impossible (Newell, Shaw, and Simon 1962). These were already being implemented, in the pioneering Logic Theorist and General Problem Solver programs (Boden 2006, 6.iii.c).
Almost forty years later, in 1996, he joined in an informal email discussion among some of the Fellows of AAAI (the American Association for AI, today renamed as the Association for the Advancement of AI) about whether AI is really a science—as opposed to a branch of engineering (13.vii.a). In reply, he recommended that his fellow Fellows quell their doubts by reading his paper, “Explaining the Ineffable” (Simon 1995). Its subtitle was rich in intriguing keywords: “Intuition, Insight, and Inspiration.” All three were near synonyms for creativity. Because he chose this paper from the entire research literature to show that AI is, indeed, a science, he clearly felt that AI had gone some way toward fulfilling the promise implicit in the title of his 1958 RAND harbinger (for the harbingers of AI, see 10.i.f).
But that promise had not been taken up explicitly until the 1980s. Some of the midcentury readers of Operations Research would have been eager to see the still-unpublished RAND memo. But they might have been disappointed. The title mentioned creative thinking, to be sure. But the focus was on problem-solving and game playing: creativity in its everyday sense was ignored. Likewise, Nathaniel Rochester’s section on “Originality in Machine Performance” in the proposal for the seminal Dartmouth summer meeting on AI (see 6.iv.b) had not discussed originality in the arts and sciences (McCarthy et al. 2000, 49). In the 1950s, that was not considered a fit topic for AI.
3.1 Creativity Ignored
Creativity was an obvious challenge for AI from the very beginning—or even before it. In the imaginary conversation that launched the Turing test (16.ii), Alan Turing had depicted a computer as interpreting and defending a sonnet, tacitly implying that a computer might compose poetry too. However, once people started writing AI programs, this particular challenge was parked on the sidelines.
When GOFAI was still NewFAI (good old-fashioned AI and newfangled AI; see 10.preamble), creativity was a no-go area. To be sure, Donald MacKay (1951) had described a probabilistic system that he said would show “originality” in a minimal sense—but that had been an aside, not the main object of the exercise. Similarly, problem-solving programs were occasionally described as creative. And Simon, in 1957, had mentioned the theory of creativity he and Allen Newell were developing and had praised the pioneering work of Lejaren Hiller and Leonard Isaacson, whose computer-generated Illiac Suite (a string quartet, first performed in 1957) was, in his estimation, “not trivial and uninteresting” (McCorduck 1979, 188). Nevertheless, creativity in the layman’s sense (i.e., art, music, and science—and maybe jokes) was all but ignored by AI professionals.
Outside the field, this was less true. Konrad Zuse’s vision of automatic carpet design, with deliberate weaving errors to add authenticity, was not yet in the public domain (10.iii.a). But visual computer art was getting started in Europe and the United States by 1963, and interactive computer art had already begun in the 1950s (see 13.vi.c).
In the 1950s, too, Hiller—a professional chemist, but also holding a master’s degree in music—had initiated the Illiac Suite (see Hiller and Isaacson 1959, 182–197). The first three movements were generated from rules defining musical styles (sixteenth-century counterpoint, twelve-tone music, and a range of dynamics and rhythms), sometimes combined with tone pairs chosen by chance; the fourth movement was based not on familiar styles but on Markov chains (Hiller and Isaacson 1959). A competition for computer-composed music was organized at the 1968 IFIP (International Federation for Information Processing) meeting in Edinburgh, which led to the founding of the UK Computer Arts Society. And systematic experiments in computer music, including instrumentation, were done at IRCAM (Institute for Research and Coordination in Acoustics/Music) in Paris from the late 1970s.
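The fourth movement’s technique is simple enough to sketch. The toy Python fragment below is mine, not Hiller and Isaacson’s (whose tables also governed counterpoint, rhythm, and dynamics): it generates a melody from a hand-built first-order Markov transition table, and every pitch and probability in it is invented for illustration.

```python
import random

# A toy first-order Markov chain over pitch names: each note is drawn
# from a distribution conditioned only on the previous note. A minimal
# sketch of the general technique, not Hiller and Isaacson's actual
# tables.
TRANSITIONS = {
    "C": {"D": 0.4, "E": 0.3, "G": 0.3},
    "D": {"C": 0.3, "E": 0.5, "F": 0.2},
    "E": {"D": 0.4, "F": 0.4, "G": 0.2},
    "F": {"E": 0.5, "G": 0.5},
    "G": {"C": 0.6, "E": 0.4},
}

def generate_melody(start="C", length=16, seed=None):
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        notes, weights = zip(*TRANSITIONS[melody[-1]].items())
        melody.append(rng.choices(notes, weights=weights)[0])
    return melody

print(" ".join(generate_melody(seed=1957)))
```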
In general, however, these projects involved the artistic avant-garde with a few art-oriented scientists—not the leaders of GOFAI. Their references to creativity, invention, and discovery in the Dartmouth proposal were not being echoed in AI research (McCarthy et al. 2000, 45, 49–51). Even creative problem-solving was rarely described as such, despite the Logic Theorist team’s use of the term in their late 1950s call to arms.
That is why my AI colleagues were bemused when, in the early 1970s, I told those who asked that I had decided to include a whole chapter on the topic in my book on the field (Boden 1977, chap. 11). Several protested, “But there isn’t any work on creativity!”
In a sense, they were right. Admittedly, the 1960s had produced an intriguing model of analogy (discussed later), and meta-DENDRAL had touched on creativity in chemistry (10.iv.c). Moreover, much NewFAI work was an essential preliminary for tackling creativity as such. For instance, NLP (natural language processing) researchers had asked how memory structures are used in understanding metaphysics (Wilks 1972, 9.x.d), metaphor (Ortony 1979), or stories (Charniak 1972, 1973, 1974; Schank and the Yale AI Project 1975; Rieger 1975a, 1975b). One brave soul was generating story-appropriate syntax (Davey 1978; 9.xi.c), and another was studying rhetorical style (Eisenstadt 1976).
But apart from a handful of painfully crude poets, story writers, or novelists, there were no models of what is normally regarded as creativity (Masterman and McKinnon Wood 1968; Masterman 1971; Meehan 1975, 1981; Klein et al. 1973). The renowned geometry program, which found a superbly elegant proof about the base angles of isosceles triangles, was only an apparent exception (see 10.i.c and Boden 2004, 104–110).
This explains why John Haugeland (1978, 7), when he questioned the plausibility of cognitivism, raised general doubts about GOFAI analyses of “human insight” but did not explicitly criticize any specific models of it. His first worry was that GOFAI systems “preclude any radically new way of understanding things; all new developments would have to be specializations of the antecedent general conditions.” But Galileo, Kepler, and Newton, he said, invented “a totally new way of talking about what happens” in the physical world, and “a new way of rendering it intelligible.” Even to learn to understand the new theory would be beyond a medieval-physics GOFAI system, “unless it had it latently ‘built-in’ all along.” He may have chosen this particular example because he had heard of the about-to-be-published BACON suite (described later) on the grapevine. The underlying difficulty, he suggested (following Herbert Dreyfus: see 11.ii.a), was that “understanding pertains not primarily to symbols or rules for manipulating them, but [to] the world and to living in it.”
Before the 1980s, then, creativity was a dirty word—or anyway, such a huge challenge that most programmers shied away from it. The most important exception, and still one of the most impressive, was due to someone from outside the NewFAI community.
3.2 Help from Outside
Harold Cohen (1928–2016) was a highly acclaimed abstract artist of early 1960s London. Those words “highly acclaimed” are not empty: a few years ago, a major exhibition, The 1960s, was held at London’s Barbican. Even though this was focused on 1960s culture as a whole, not just on the visual arts, two of Cohen’s early paintings were included. He turned toward programmed art in 1968. He spent two years at Stanford University as a visiting scholar with Edward Feigenbaum, in 1973–1975, where he not only found out about AI but also learned to program.
In the four decades that followed, while at the University of California, San Diego, he continuously improved his drawing and coloring program, called AARON (Cohen 1979, 1981, 1995, 2002, 2012; McCorduck 1991; Boden 2004, 150–166, 314–315). Successive versions of AARON were demonstrated at exhibitions in major galleries (and at science fairs) around the world and received huge publicity in the media. An early version is now available as shareware.
Unlike his 1960s contemporaries Roy Ascott and Ernest Edmonds (13.vi.c), Cohen was not using computer technology to found a new artistic genre. Rather, he was investigating the nature of representation. To some extent, he was trying to achieve a better understanding of his own creative processes. But over the years he came to see his early attempts to model human thought as misdirected:
The common, unquestioned bias—which I had shared—towards a human model of cognition proved to be an insurmountable obstacle. It was only after I began to see how fundamentally different an artificial intelligence is from a human intelligence that I was able to make headway. … The difference is becoming increasingly clear now, as I work to make AARON continuously aware of the state of a developing image, as a determinant to how to proceed. How does a machine evaluate pizazz? (personal communication, July 2005)
Cohen asked how he made introspectively “unequivocal” and “unarbitrary” decisions about line, shading, and color. (From the mid-1990s his main focus was on color.) And he studied how these things were perceived—by him and others—as representing neighboring and overlapping surfaces and solid objects. Moving toward increasingly three-dimensional representations, he explored how his/AARON’s internal models of foliage and landscape, and especially of the human body, could be used to generate novel artworks. He was adamant that they were artworks, although some philosophers argue that no computer-generated artifact could properly be classified as art (O’Hear 1995).
Two examples, dating from either side of 1990, are shown in figures 3.1 and 3.2. Notice that the second, drawn by the later version of the program, has more 3-D depth than the first, which, in turn, has more than the drawings of AARON’s early 1980s acrobats-and-balls period (see figure 3.3).

Figure 3.1 An example of AARON’s jungle period, late 1980s; the drawing was done by the program, but the coloring was done by hand. Untitled. 1988, oil on canvas (painted by Harold Cohen), 54″ × 77″, Robert Hendel collection. (Reproduced with permission of the Harold Cohen Trust.)

Figure 3.2 An example of AARON’s early 1990s period; the drawing was done by the program, but the coloring was done by hand. San Francisco People. 1991, oil on canvas (painted by Harold Cohen), 60″ × 84″, collection of the artist. (Reproduced with permission of the Harold Cohen Trust.)

Figure 3.3 An example of AARON’s acrobats-and-balls period, early 1980s. (Reproduced by permission of the Harold Cohen Trust and reprinted from Boden 2004, frontispiece.)
In particular, jungle AARON did not have enough real 3-D data about the body to draw a human figure with its arms overlapping its own body. His early 1990s San Francisco people could have crossed their arms instead of waving, had he wanted them to; his late 1980s jungle dwellers could not.
By 1995, Cohen had at last produced a painting-machine-based version of AARON that could not only draw acceptably but also color to his satisfaction, using water-based dyes and five “brushes” of varying sizes. However, his satisfaction was still limited.
By the summer of 2002, he had made another breakthrough: a digital AARON as colorer, whose images could be printed at any size.
The program was regularly left to run by itself overnight, offering up about sixty new works for inspection in the morning. Even the also-rans were acceptable: one gallery curator who exhibited it told Cohen that “he has not seen AARON make a bad one since it started several weeks ago” (personal communication). One might even say that AARON had now surpassed Cohen as a color artist, much as Arthur Samuel’s program surpassed Samuel as a checkers player nearly half a century before (10.i.e). Cohen regarded this latest incarnation of AARON as “a world-class colorist,” whereas he himself was merely “a first rate colorist” (personal communication).
This is no longer the latest incarnation of AARON. By 2009, a version commissioned by the Carnegie Science Center in Pittsburgh, and scheduled to run there continuously for the next ten years, had achieved a still wider variety of imagery, not least because its images, exhibited electronically on large screens, ceaselessly changed and developed in real time (see Cohen 2012). In his artistry, as in his programming skill, Cohen was indefatigable.
A comparably spectacular, although later, AI artist was Emmy—originally designated EMI (Experiments in Musical Intelligence). This program was written in 1981 by the composer David Cope (1941–), at the University of California, Santa Cruz.
It was not easy for other people to run, or experiment with, the early version. The now-familiar MIDI, or Musical Instrument Digital Interface, which defines musical notes in a way that all computers can use, was not yet available. Indeed, its inventor Dave Smith first had the idea in that very year and did not announce the first specification until August 1983 (Moynihan 2003). Today, Emmy’s successors are based on MIDI and so can be run on any PC equipped with run-of-the-mill musical technology.
This was not the first attempt to formalize musical creativity (neither was the Hiller and Isaacson effort). A system of rules for do-it-yourself hymn composition was penned in the early eleventh century by Guido d’Arezzo, who also invented the basis of tonic sol-fa and of today’s musical notation (A. Gartland-Jones, personal communication). But Cope, almost a thousand years later, managed to do more than formalize his ideas about music: he actually implemented them. Moreover, his program could compose pieces much more complex than hymns, whether within a general musical style (e.g., baroque fugue) or emulating a specific composer (e.g., Antonio Vivaldi or J. S. Bach). It could even mix styles or musicians, such as Thai and jazz or Bach and Joplin, much as the Swingle Singers do.
Emmy gained a wide audience, although not as wide as AARON’s. Part of its notoriety was spread by scandalized gossip: Cope has remarked, “There doesn’t seem to be a single group of people that the program doesn’t annoy in some way” (Cope 2001, 92). But as well as relying on word of mouth, people could read about it and examine some Emmy scores in Cope’s first three books (1991, 2000, 2001). Enthusiasts could even try it out for themselves, following his technical advice, by using one of the cut-down versions, such as ALICE (ALgorithmically Integrated Composing Environment) and SARA (Simple Analytic Recombinant Algorithm), provided on CDs packaged inside his books.
They could listen to Emmy’s compositions, too. Several stand-alone CDs were released (by Centaur Records, Baton Rouge, Louisiana) in the late 1990s. In addition, several live concerts of Emmy’s music were staged for public audiences. These featured human instrumentalists playing Emmy’s scores, because the program did not represent expressive performance: it laid down what notes to play, not how to play them.
However, the concerts were mostly arranged by Cope’s friends: “Since 1980, I have made extraordinary attempts to have [Emmy’s] works performed. Unfortunately, my successes have been few. Performers rarely consider these works seriously” (Cope 2006, 362). The problem, said Cope, was that they (like most people) regarded Emmy’s music as computer output, whereas he had always thought of it, rather, as music. Moreover, being output it was infinitely extensible, which, he found, made people devalue it.
In 2004 he took the drastic decision to destroy Emmy’s historical database: there will be no more “Bach” fugues from the program (Cope 2006, 364). Emmy’s “farewell gift” to the historical-music world was a fifty-page score for a new symphonic movement in the style of Beethoven, which required “several months of data gathering and development as well as several generations of corrections and flawed output” (366, 399–451). From then on, Emmy—or rather, Emmy’s much-improved successor—composed in Cope’s style, as “Emily Howell” (374).
Douglas Hofstadter, a fine amateur musician, found Emmy impressive despite—or rather, because of—his initial confidence that “little of interest could come of [its GOFAI] architecture.” On reading Cope’s 1991 book, he got a shock:
I noticed in its pages an Emmy mazurka supposedly in the Chopin style, and this really drew my attention because, having revered Chopin my whole life long, I felt certain that no one could pull the wool over my eyes in this department. Moreover, I knew all fifty or sixty of the Chopin mazurkas very well, having played them dozens of times on the piano and heard them even more often on recordings. So I went straight to my own piano and sight-read through the Emmy mazurka—once, twice, three times, and more—each time with mounting confusion and surprise. Though I felt there were a few little glitches here and there, I was impressed, for the piece seemed to express something. … [It] did not seem in any way plagiarized. It was new, it was unmistakably Chopin-like in spirit, and it was not emotionally empty. I was truly shaken. How could emotional music be coming out of a program that had never heard a note, never lived a moment of life, never had any emotions whatsoever?
[Emmy was threatening] my oldest and most deeply cherished beliefs about … music being the ultimate inner sanctum of the human spirit, the last thing that would tumble in AI’s headlong rush toward thought, insight, and creativity. (Hofstadter 2001b, 38–39; italics in original)
Hofstadter was allowing himself to be overimpressed. Frederic Bartlett’s “effort after meaning” (5.ii.b) imbues our perception of music as well as of visual patterns and words. The human performer projects emotion into the score-defined notes, much as human readers project meaning into computer-generated haikus (9.x.c). So given that Chopin-like scores had been produced, it was not surprising that Hofstadter interpreted them expressively.
What was surprising was the Chopin-like musicality of the compositions. Simon’s 1957 prediction that a computer would write aesthetically valuable music within ten years had failed and had been mocked accordingly (Dreyfus 1965, 3). But Cope had now achieved this, although fourteen years late.
Emmy’s basic method was described by Cope as “recombinatory” and summarized by Hofstadter as “(1) chop up; (2) reassemble” (Hofstadter 2001b, 44). In fact, Emmy was exploring generative structures as well as recombining motifs. It showed both combinational and exploratory creativity—but not, as Hofstadter (2001a) was quick to point out, transformational creativity (Boden 2004). A new style could appear only as a result of mixing two or more existing styles.
The program’s database was a set of signatures (note patterns of up to ten melodic notes) exemplifying melody, harmony, meter, and ornament, all selected by Cope as being characteristic of the composer concerned. Emmy applied statistical techniques to identify the core features of these snippets and then—guided by general musicological principles—used them to generate new structures. Some results worked less well than others (e.g., Cope 2001, 182–183, 385–390).
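Hofstadter’s “chop up; reassemble” summary can be given a toy rendering. In the sketch below, a few source melodies are cut into short overlapping fragments, indexed by their first pitch, and new material is assembled by chaining fragments whose endpoints agree. All the data and the single continuity constraint are invented for illustration; Emmy’s actual machinery (statistical signature extraction, voice leading, musicological principles) was far richer.

```python
import random

# Chop source melodies into three-note fragments, index them by their
# first pitch, then reassemble by chaining fragments whose endpoints
# match. A drastically simplified recombinatory sketch, not Emmy.
SOURCES = [
    ["C", "E", "G", "E", "C", "D", "E"],
    ["E", "G", "C", "B", "A", "G", "E"],
    ["G", "E", "D", "C", "E", "G", "C"],
]

def build_index(sources, size=3):
    index = {}
    for melody in sources:
        for i in range(len(melody) - size + 1):
            frag = melody[i:i + size]
            index.setdefault(frag[0], []).append(frag)
    return index

def recombine(index, start="C", n_frags=5, seed=None):
    rng = random.Random(seed)
    piece = [start]
    for _ in range(n_frags):
        frag = rng.choice(index[piece[-1]])  # continuity: endpoints agree
        piece.extend(frag[1:])
    return piece

print(" ".join(recombine(build_index(SOURCES), seed=42)))
```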
Strictly, Emmy was not an exercise in “explaining” the ineffable. Cope’s motivation differed from Simon’s (and Hofstadter’s; see later discussion). His aim was not to understand creative thought but to generate musical structures like those produced by human composers. Initially, he had intended EMI to produce new music in his style but soon realized that he was “too close to [his] own music to define its style in meaningful ways,” so he switched to the well-studied classical composers instead (Cope 2001, 93). A quarter century later, having destroyed the historical database, he switched back to computer compositions in his own style (Cope 2006, 372–374 and pt. 3).
In short, he was modeling music, not mind. The task would not necessarily have been easier had he known the psychological details; for instance, limits on short-term memory rule out the use of powerful generative grammars for jazz improvisation (Johnson-Laird 1993).
Nevertheless, the implication of Cope’s writings was that all composers follow some stylistic rules, or algorithms. Sonata form, for example, is a formal structure that was supposed to be rigidly adhered to by everyone composing in that style. This assumption has been questioned. It is known that Haydn, Mozart, and Beethoven (for instance) worked diligently through the exercises given in musical textbooks. But whether they stuck rigidly to those rules in their more original, creative music is quite another matter. Peter Copley and Andrew Gartland-Jones (2005) argue that they did not.
On their view, the formal rules of style are extracted and agreed post hoc and are then followed to the letter only by music students and mediocrities. Once sonata form, or any other musical structure, has been explicitly stated, it tends to lose its creative potential. This explains the paradox of sonatas in the Romantic period being far less free than in the classical period (Copley and Gartland-Jones 2005, 229). But the rules are flexible enough to evolve in use:
It would be tempting to view this process as generally agreed forms, evolving. But … [it] would be more useful to see the general acceptances of common practice as present to provide a frame for musical differences, changes, developments etc. [Footnote: This is the basis for Boden’s transformational creativity]. If there is a constraint it comes organically from within a complex network of practitioners rather than [a] set of stated constraints that are accepted until someone decides they need breaking. This is a complex mechanism indeed, and even if we argue that algorithms might explain certain emergent patterns, it is not at all sure that such patterns stem from a simple, if enormously lengthy, set of rules. (Copley and Gartland-Jones 2005, 229; italics added)
Cope’s Emmy, they admit, can indeed compose many acceptable pieces. But even if it could generate fully “convincing” examples, “without a model of how [the abstracted rules] change we are capturing an incomplete snapshot of musical practice” (229). In other words, Cope is modeling musical creations—not yet musical creativity.
3.3 In Focus at Last
It is no accident that those two highly successful programs, AARON and Emmy, were written by non-AI professionals. They depended on a reliable sense of how to generate and appreciate structures within the conceptual space concerned (Boden 2004, chaps. 3–4). In general, a plausible computer artist stands in need of an expert in art. Even if (like Hiller) the person is not a professional artist, they need (again, like Hiller) to be a very highly knowledgeable amateur.
Often, artist and programmer are different people (e.g., William Latham and Stephen Todd, respectively; Todd and Latham 1992). But some artists are sufficiently computer literate to design their own systems. Edmonds was a professional computer scientist in the 1960s, as well as being an influential artist. Although he has always used his programs to help him understand creative thinking in general, he was less concerned than Cohen or Cope to talk about the program as such: if his viewers needed to realize that there was a program involved, they did not need to think about just how it worked (see 13.vi.c). Today, fifty years later, many young artists are computer literate even if they are not computing professionals. Whether their computer literacy reaches beyond the consumption of (ready-made) programs is another question.
If artist programs need experts in art, much the same is true of programs focused on literature, mathematics, and science. A competent programmer is the sine qua non, but domain expertise is needed too. Because most AI researchers were reasonably proficient in those areas, one and the same person could be programmer and expert. That is why AI work on creativity, once it got started, usually focused on them (rather than on music or, still less, visual art).
The everyday skill of analogy, which features in both literature and science, had been modeled as early as 1963 by Thomas Evans (1934–) at MIT. His program was a huge advance (Evans 1968). Implementing the ideas briefly intimated by Marvin Minsky in his harbinger article “Steps” (see Boden 2006, fig. 10.2), it not only discovered analogies of varying strength but also identified the best.
It could do this because it described the analogies on hierarchical levels of varying generality. Using geometrical diagrams like those featured in IQ tests (see figure 3.4), Evans’s program achieved a success rate comparable to that of a fifteen-year-old child. But it was not followed up. This was an example of the lack of direction in AI research that so infuriated Drew McDermott, in his squib “Artificial Intelligence Meets Natural Stupidity” (1981; 11.iii.a).

Figure 3.4 Analogy problem tackled by Evans’s program (Evans 1968).
By the early 1980s, analogy had returned as a research topic in AI. For instance, an international workshop held in 1983 included five papers explicitly devoted to it, plus several more that could be seen as relevant (Michalski 1983, esp. 2–40).
Evidently, the AI scientists concerned had not read Jerry Fodor’s Modularity of Mind (1983), or at any rate did not accept its pessimism about explaining the higher mental processes. Fodor despaired of any attempt to understand analogy in scientific terms. Despite its undeniable importance, he said, “nobody knows anything about how it works; not even in the dim, in-a-glass-darkly sort of way in which there are some ideas about how [scientific] confirmation works” (Fodor 1983, 107).
And, according to him, they never would: “Fodor’s First Law of the Nonexistence of Cognitive Science [states that] the more global … a cognitive process is, the less anybody understands it.” He was right in arguing that we will never be able to predict or explain every case of analogy in detail but wrong in concluding that nothing of scientific interest can therefore be said about it (7.vi.h).
Fodor’s skepticism notwithstanding, several GOFAI scientists in the 1980s tried to model how conceptual analogies are generated, interpreted, and used (Gentner 1983; Holyoak and Thagard 1989; Thagard et al. 1988; Thagard 1990; Gentner et al. 1997). Like Newell and Simon before them, they sometimes went back to the gestalt psychologists’ 1930s work on problem-solving (5.ii.b). For instance, they asked (taking Karl Duncker’s example) how the notion of a besieging army could help in discovering how to irradiate a tumor without killing the surrounding tissues.
In general, they wanted to know just how an analogous idea could be identified and fruitfully mapped onto the problem at hand. The answer usually given was to show how distinct conceptual structures could be compared and, if necessary, adapted so as to match each other more closely. Sometimes, the problem solver’s goal was allowed to influence the comparison. But the basic process was like that used long before by Patrick Winston’s concept learner (10.iii.d): comparing abstractly defined and preassigned structures.
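A toy example can convey the flavor of that structure-comparing approach, in the spirit of Gentner’s (1983) structure mapping, using the familiar solar-system and atom domains. The relational assertions and the crude counting score below are invented for illustration and bear no detailed resemblance to the systems cited above.

```python
# Two domains as relational assertions; candidate analogies are scored
# by how many base relations survive in the target once objects are
# paired up. A toy flavor of structure mapping, nothing more.
from itertools import permutations

SOLAR = [("attracts", "sun", "planet"),
         ("revolves_around", "planet", "sun"),
         ("more_massive", "sun", "planet")]
ATOM = [("attracts", "nucleus", "electron"),
        ("revolves_around", "electron", "nucleus")]

def score(mapping, base, target):
    """Count base relations preserved in the target under the mapping."""
    mapped = {(r, mapping[a], mapping[b]) for (r, a, b) in base}
    return len(mapped & set(target))

base_objs, target_objs = ["sun", "planet"], ["nucleus", "electron"]
best = max((dict(zip(base_objs, p)) for p in permutations(target_objs)),
           key=lambda m: score(m, SOLAR, ATOM))
print(best, score(best, SOLAR, ATOM))
# {'sun': 'nucleus', 'planet': 'electron'} 2
```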
A blistering critique of these “inflexible” and “semantically empty” approaches was mounted by Hofstadter (1995, 55–193). He complained that they were less interesting than Evans’s work of twenty years before and radically unlike human thinking. Although both these charges were fair, his stinging critique was not entirely justified (for structuralist replies, see Forbus et al. 1998; Gentner et al. 1997).
But Hofstadter did not share Fodor’s gloom about the impossibility of any scientific understanding of analogy. To the contrary, he had already spent many years studying it, using a basically connectionist approach. He had been thinking—and writing—along these lines since the early 1970s (12.x.a).
By the mid-1980s, he and his student Melanie Mitchell had implemented the Copycat program (Hofstadter 1985b, chaps. 13 and 24; 2002; Mitchell 1993; Hofstadter and Mitchell 1993; Hofstadter 1995, chaps. 5–7). This was described in several seminars at MIT in 1984, although without attracting much attention at the time (Hofstadter, personal communication).
Intended as a simulation of human thinking, it modeled the fluid perception of analogies between letter strings. So it would respond to questions like these: “If abc goes to pqr, what does efg go to?” or the much trickier “If abc goes to abd, what does xyz go to?” Lacking a circular alphabet, it could not map xyz onto xya; instead, it suggested xyd, xyzz, xyy, and the especially elegant wyz. Its descriptors were features such as leftmost, rightmost, middle, successor, same, group, and alphabetic predecessor or successor. Like Evans’s program, Copycat could generate a range of analogies and compare their strength. Unlike Evans’s program, it was probabilistic rather than deterministic and could be primed to favor comparisons of one type rather than another.
Whereas Copycat worked on letter strings, Hofstadter’s Letter Spirit project focused on complex visual analogies. Specifically, it concerned the letter likenesses and letter contrasts involved in distinct alphabetic fonts. An a must be recognizable as an a no matter what the font; but seeing other letters in the same font may help one to realize that it is indeed an a. At the same time, all twenty-six letters within any one font must share certain broad similarities, all being members of that font (see figures 3.5 and 3.6).

Figure 3.5 The letter a written in different fonts (Hofstadter 1995, 413). (Reprinted by permission of Basic Books, an imprint of Hachette Book Group, Inc.)

Figure 3.6 Different fonts based on the Letter Spirit matrix (Hofstadter 1995, 418). (Reprinted by permission of Basic Books, an imprint of Hachette Book Group, Inc.)
Hofstadter’s early writings on Letter Spirit had outlined a host of intriguing problems involved in interpreting and designing fonts (e.g., Hofstadter 1985a). And in the mid-1990s he described a program capable of recognizing letters in a variety of styles (Hofstadter 1995, 407–496; McGraw 1995; Hofstadter and McGraw 1995). By the new millennium, the design aspect of Letter Spirit had been partially implemented too (Rehling 2001, 2002). The new program could design an entire alphabet font if given five seed letters (b, c, e, f, g). Today, the group’s aim is to generate an alphabet from only a single seed.
Letter Spirit is the most ambitious AI analogy project and in my view the most interesting. Whether it can readily be applied to other domains, however, is unclear. Even the much simpler Copycat would be difficult to generalize. But if one’s interest is in how it is possible for human beings to engage in subtle and systematic analogical thinking, then Letter Spirit is a significant contribution. It is no simple matter to generate or appreciate an alphabetic font. So much so, indeed, that hardly anyone other than Hofstadter would have dreamed that anything cogent could be said about the computational processes that may be involved.
Turning from analogy to story writing, the best AI storywriter was authored not by a novelist or literary critic but by a computer scientist, now a professional game designer, Scott Turner (1994). He is not to be confused with Mark Turner, a teacher of English at the University of Maryland who turned to cognitive science to illuminate the interpretation and creation of literature (Turner 1991; Fauconnier and Turner 2002).
This program did not produce high-quality literature. Its plots were simplistic tales about knights, princesses, and dragons, and its use of English left a great deal to be desired. But it had three interesting features.
First, it used case-based reasoning (13.ii.c) to create new story plots on the basis of old ones. For example, it generated a concept of suicide (derived from killing) when the story’s plot could not be furthered by third-party fighting. Second, it relied on the latest version of the Yale analysis of motivational and planning schemata to decide what might plausibly be done and how (see 7.i.c and 9.xi.d). And third, Scott Turner had realized what previous AI programmers had not (e.g., Meehan 1975), that a story needs not only goals and plans for each character involved in the plot, but also rhetorical goals and plans for the storyteller. Accordingly, a character’s goals were sometimes rejected by the program or their expression suppressed in the final narrative for reasons of story interest or consistency. (For a sustained critique of Turner’s approach, see Bringsjord and Ferrucci 2000.)
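The suicide-from-killing episode illustrates the retrieve-and-adapt skeleton of case-based reasoning, which can be sketched in a few lines. Every name and structure below is invented for illustration; Turner’s own machinery for transforming and rebinding cases was considerably more elaborate.

```python
# A toy rendering of the retrieve-and-adapt move described above: find
# a stored plot fragment achieving the desired effect, then rebind its
# roles, so that "X kills Y" can yield "X kills X" (suicide).
CASES = [
    {"action": "kill", "agent": "knight", "victim": "dragon",
     "effect": "victim_dead"},
]

def retrieve(desired_effect):
    """Find a stored case that achieves the desired effect."""
    for case in CASES:
        if case["effect"] == desired_effect:
            return dict(case)  # return a copy, leaving the case base intact
    return None

def adapt(case, **bindings):
    """Create a new plot fragment by rebinding roles in an old one."""
    new_case = dict(case)
    new_case.update(bindings)
    return new_case

old = retrieve("victim_dead")
# The princess must die, but no third party is available to fight her:
suicide = adapt(old, agent="princess", victim="princess")
print(suicide)
```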
By the mid-1990s, AI had even made good on Charles Babbage’s suggestion that puns follow principles, including double meanings and similar pronunciation of differently spelled words (3.iv.a). Kim Binsted, at the University of Edinburgh, wrote a program, JAPE, that originated punning riddles fitting nine familiar templates, such as What do you get when you cross an X with a Y? What kind of X has Y? What kind of X can Y? and What is the difference between an X and a Y? (Binsted 1996; Binsted and Ritchie 1997; Binsted, Pain, and Ritchie 1997; Ritchie 2003a). JAPE used a semantic network of over thirty thousand items, marked for syllables, spelling, sound, and syntax as well as for semantics and synonymy. The program would consult the templates (no simple matter) to generate results including these: What do you call a depressed train? A low-comotive. What do you call a strange market? A bizarre bazaar. What kind of murderer has fiber? A cereal killer. Babbage’s triple pun (cane/Cain, a bell/a belle/Abel) could not have been created by JAPE, but its puns were no more detestable than most. In fact, it is the most successful of today’s AI jokers (Ritchie 2001; 2003b, chap. 10).
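JAPE’s template-filling strategy is easy to caricature. The toy generator below fills one riddle schema from a two-item lexicon that links a word, a punning fusion of it, and the trait contributed by the fused-in word. The lexicon entries are hand-built for illustration; JAPE’s real network, as noted, held over thirty thousand items marked for sound, spelling, syntax, and synonymy.

```python
# Fill the template "What do you call a TRAIT TOPIC? A FUSION." from a
# tiny hand-built lexicon. A caricature of JAPE's template-filling, not
# Binsted's actual system.
LEXICON = [
    # (base word, punning fusion, trait of the fused-in word)
    ("locomotive", "low-comotive", "depressed"),
    ("bazaar", "bizarre bazaar", "strange"),
]
SYNONYMS = {"locomotive": "train", "bazaar": "market"}

def punning_riddles():
    for base, fusion, trait in LEXICON:
        topic = SYNONYMS[base]
        yield f"What do you call a {trait} {topic}? A {fusion}."

for riddle in punning_riddles():
    print(riddle)
```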
As for creative mathematics, great excitement (within AI, although not outside it) was caused in the late 1970s by Douglas Lenat’s program AM, or Automated Mathematician (Lenat 1977). This was his doctoral thesis at Stanford. Starting from a few simple concepts of set theory, it used three hundred heuristics for modifying concepts, together with criteria of mathematical interestingness (occasionally supplemented by specific nudges from the programmer), to generate many concepts of number theory. These included integer, addition, multiplication, square root, and prime number. It even generated a historically novel concept, which was later proved as a (minor) theorem, concerning maximally divisible numbers, a class that Lenat himself had not heard of. Later, he recalled how he found out about them:
“[I wondered whether any mathematician had thought of such a thing. Polya seemed to be the only one who knew.] He said, ‘This looks very much like something the student of a friend of mine once did.’ Polya was about 92 at the time. It turned out the friend was [Godfrey] Hardy and the student was Ramanujan” (Shasha and Lazere 1995, 231).
Coming up with something that had been discovered by such a giant as Srinivasa Ramanujan (1887–1920), people felt, was no mean feat.
Whether the excitement was fully justified was another matter. Critics pointed out that Lenat had not made clear just how the interesting concepts were generated and suggested that a heuristic that was crucial for generating the notion of primes had been included, whether consciously or not, so as to make this discovery possible (Ritchie and Hanna 1984). What is more, they said, it may have been used only the once. No detailed trace of the program was available.
Lenat replied that AM’s heuristics were fairly general ones, not special-purpose tricks, and that (on average) each heuristic contributed to two dozen different discoveries and that each discovery involved two dozen heuristics (Lenat and Seely Brown 1984). He admitted, however, that writing AM in LISP had given him a tacit advantage because minor changes to LISP syntax were relatively likely to result in expressions that were mathematically interpretable.
A few years later, yet more excitement was caused by Lenat’s EURISKO, which—satisfying Drew McDermott’s plea for program development (11.iii.a)—incorporated heuristics for modifying heuristics (Lenat 1983). Besides being used to help plan experiments in genetic engineering, it came up with one idea (a battle-fleet design) that won a war game competition against human players and another (a VLSI chip design) that won a US patent (patents are awarded only for ideas not “obvious to a person skilled in the art”). Lenat himself then switched to research on an AI encyclopedia, CYC, whose name comes from en-CYC-lopedia (13.i.c). But his AM program led others to focus on how mathematical interestingness could be used in automating creative mathematics (for a review, see Colton, Bundy, and Walsh 2000).
What of Simon himself? His late-century programming efforts were devoted to explaining creativity in science, although he gestured toward the humanities from time to time (Simon 1994). With Patrick Langley and others at Carnegie Mellon University, he wrote a suite of increasingly powerful programs (BACON, BLACK, GLAUBER, STAHL, and DALTON) intended to model the thought processes of creative scientists such as Francis Bacon, Joseph Black, Johann Glauber, Georg Stahl, and John Dalton. These were initiated in the late 1970s and continually improved thereafter (Langley 1978, 1979, 1981; Langley, Bradshaw, and Simon 1981; Langley et al. 1987).
These inductive systems generated many crucial scientific principles—quantitative, qualitative, and componential. They came up with Archimedes’s principle of volume measurement by liquid displacement, the very origin of Eureka! And they rediscovered Ohm’s law of electrical resistance, Snell’s law of refraction, Black’s law of the conservation of heat, Boyle’s law relating the pressure and volume of a gas, Galileo’s law of uniform acceleration, and Kepler’s third law of planetary motion. Occasionally (e.g., with Snell’s law), they used a symmetry heuristic to choose between more and less elegant, although mathematically equivalent, expressions. Some could produce hypotheses to explain the observed data patterns, whether mathematical (e.g., Black’s law) or qualitative (e.g., the chemical patterns observed by Glauber and componentially explained by Stahl and Dalton). And the later versions could use the real (i.e., messy, imperfect) historical data, not just data doctored to make the sums come out exactly right.
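The BACON style of induction is easy to convey in miniature: given clean observational data, try the simplest invariants first (a constant ratio) and move to constant combinations of powers only if those fail. The sketch below, with illustrative planetary data and an arbitrary tolerance, rediscovers Kepler’s third law in this way; the real programs used far richer heuristics and, in later versions, messy historical data.

```python
# Search for an invariant term d**i / p**j over planetary data, trying
# low powers (the simple linear relation d/p) before higher ones, in
# the spirit of BACON's search priorities. Illustrative data only.
DATA = [  # (planet, mean distance in AU, period in years)
    ("Mercury", 0.387, 0.241), ("Venus", 0.723, 0.615),
    ("Earth", 1.000, 1.000), ("Mars", 1.524, 1.881),
    ("Jupiter", 5.203, 11.862), ("Saturn", 9.539, 29.457),
]

def nearly_constant(values, tol=0.02):
    mean = sum(values) / len(values)
    return (max(values) - min(values)) / abs(mean) < tol

def bacon_search(max_power=3):
    for i in range(1, max_power + 1):        # simplest hypotheses first
        for j in range(1, max_power + 1):
            terms = [d**i / p**j for (_, d, p) in DATA]
            if nearly_constant(terms):
                mean = sum(terms) / len(terms)
                return f"d^{i} / p^{j} is constant (about {mean:.3f})"
    return "no invariant found"

print(bacon_search())  # d^3 / p^2 is constant: Kepler's third law
```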
Simon wanted to explain the ineffable, not just—as in meta-DENDRAL or Emmy—to mimic it. So his group aimed to keep faith with human psychology.
Their programs, accordingly, respected the details recorded in the laboratory notebooks of the scientists concerned. For example, they used the same data that the long-dead authors had used. Or rather, they used the same verbal and mathematical data: they could not accept visual, auditory, or haptic input from the real world or recognize similarities between sensory patterns (see 12.v–vii). And as well as generating the same scientific laws, they tried to match the heuristics that had been used by the human scientists and the temporal order of their hunches and discoveries—and mistakes. Later work by the Carnegie Mellon group focused, for instance, on the general principles of how to suggest and plan experiments (Kulkarni and Simon 1988) and on the use of diagrams in scientific discovery (Cheng and Simon 1995).
At one level, the BACON program and its siblings were highly impressive. They offered successful models of induction and illuminated certain aspects of the way many human scientists go about their work. Despite the common propaganda, some science is not concerned with numbers or with componential structure (7.iii.d). But the creativity involved was exploratory rather than transformational.
In other words, these programs were spoon-fed the relevant questions, even though they found the answers for themselves. What their human namesakes had done was different, because they had viewed the data in new ways. Indeed, they had treated new features as data. To identify mathematical patterns in the visual input from earth or sky had been a hugely creative act when Galileo Galilei first did it. To look for numerical constants was another, and to seek simple linear relationships before ratios or products (search priorities that were built into the heuristics used by the BACON suite), yet another. The Carnegie Mellon programs were deliberately provided with the ways of reasoning that Bacon, Glauber, Stahl, and Dalton had pioneered for themselves. They were roundly criticized by Hofstadter as a result (Hofstadter 1995, 177–179; see also Collins 1989).
This, of course, recalls Haugeland’s principled objection to GOFAI-based models of insight. It also relates to the worries about open-ended evolution (discussed later in this chapter and in chapter 5, and in 15.vi.d). We will see in chapter 5 that totally new types of sensor have been evolved in artificial systems but only by unexpectedly taking advantage of the contingencies of the physical environment or hardware. Compare these biological examples with Haugeland’s psychological claim that understanding pertains to “the world and to living in it.”
In short, Saul Amarel’s (1968) hope that an automated system might come up with a radically new problem representation remained unfulfilled. And not only Amarel’s, for Simon himself had expressed much the same hope in the 1950s. The Logic Theorist team’s harbinger memo had listed four characteristics of creative thought, of which the last was this: “The problem as initially posed was vague and ill-defined, so that part of the task [is] to formulate the problem itself” (Simon and Newell 1958).
That is still beyond the state of the art. Langley (1998) reviewed seven cases of AI-aided discovery that were sufficiently novel and interesting to be published in the relevant scientific journals. He pointed out that, in every case, the programmers had been crucial in formulating the problem or manipulating the data or interpreting the results. So Ada Lovelace’s futuristic vision of science by machine has been realized: many new answers have been found automatically and some new questions too (e.g., new experiments). But fundamental reformulations of old problems, still less radically new ones, have not. Lovelace would not have been at all surprised by that. On the contrary, it is just what she expected (3.iv.b).
In one sense, it is what Simon himself expected, too. Machines, at present, lie outside the cooperative loop (2.ii.b–c). Simon described scientific discovery as a matter of “social psychology, or even sociology” (1997a, 171). His main reason was that open scientific publication provides a “blackboard” that hugely extends the individual scientist’s memory (172). Machine discovery systems “are still relatively marginal participants in the social system of science.” We might be able to hook them up to data-mining programs to reduce their reliance on human beings for providing data and problems. However, “even this is a far cry from giving machines access to the papers, written in a combination of natural, formal and diagrammatic language, that constitute a large part of the blackboard contents” (173).
As for negotiations about the value of supposed “discoveries” (1.iii.f; Boden 1997), Simon’s view was that “the machine (augmented now by the computer) has already, for perhaps a hundred years, been a member of the society of negotiators” (Simon 1997b, 226). Here, he cited the prescient Henry Adams, who had been so deeply troubled by his visit to the dynamo hall in an industrial exhibition (Adams 1918). Future AI programs, said Simon, might persuade us to value their discoveries above our own previous judgments (1997b, 225). He added that this was already happening in the area of mathematical proof (but that is not straightforward; see MacKenzie 1995).
Disputes about what counts as a discovery are especially likely in cases of transformational creativity, in which previously valued criteria are challenged. This type of creativity was eventually modeled, up to a point, by evolutionary programs. Some were focused on art (Sims 1991; Todd and Latham 1992), some on music (Gartland-Jones and Copley 2003; Hodgson 2002, 2005), and some on science or engineering (Goldberg 1987; Ijspeert, Hallam, and Willshaw 1997).
To some extent, genetic algorithms (Boden 2006, 15.vi; and see chapter 5) could transform the conceptual space being explored by the program. For example, Paul Hodgson, an accomplished jazz saxophonist, wrote several programs designed to improvise Charlie Parker–style jazz in real time. The first two, IMPROVISER and VIRTUAL BIRD, used brief melodic motifs as primitives; these were not statistically culled, as in Cope’s work, but were based on a systematic theoretical analysis of music (Narmour 1989, 1992). VIRTUAL BIRD played well enough, early in the new century, that the world-famous Courtney Pine was willing to perform alongside it. Another, an evolutionary version called EARLY BIRD, used only dyadic (two-note) primitives (Hodgson 2005). It explored Bird space even more adventurously than its predecessor, partly because of the transformations it generated and partly because its primitives were less highly structured. Even so, it was not transformational in the sense of generating a recognizably different musical style. Hodgson felt that this would be possible but only if he himself added a great deal more musical information to constrain the changes allowed (Hodgson, personal communication).
That is hardly surprising, for in all such evolutionary programs the fitness function was due to the human being, whether built into the program or provided interactively. Peter Cariani (1997) has criticized current genetic algorithms accordingly, arguing that they are incapable of the sort of open-ended creativity seen in biological evolution. Wholly new sensory organs, however, have been evolved in artificial systems, by a combination of genetic algorithms and environmental or hardware contingencies (15.vi.d). Perhaps future research on psychological creativity will take a leaf out of this biological book.
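The point about fitness functions can be made concrete with a bare-bones genetic algorithm over four-note motifs, sketched below. Everything substantive sits in the function called human_fitness, a stand-in for judgments that, in all the systems just discussed, came from a person, whether coded into the program or supplied interactively; the representation and operators are invented for illustration.

```python
import random

# A bare-bones genetic algorithm over four-note motifs. The fitness
# function is not discovered by the system but supplied from outside:
# human_fitness below is a placeholder for a human listener's (or the
# programmer's) aesthetic judgment.
PITCHES = list(range(60, 72))  # MIDI note numbers, one octave

def human_fitness(motif):
    # Stand-in for interactive judgment: here, reward small melodic steps.
    return -sum(abs(a - b) for a, b in zip(motif, motif[1:]))

def evolve(pop_size=20, generations=50, seed=None):
    rng = random.Random(seed)
    pop = [[rng.choice(PITCHES) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=human_fitness, reverse=True)
        parents = pop[: pop_size // 2]          # selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, 4)
            child = a[:cut] + b[cut:]           # crossover
            if rng.random() < 0.3:              # mutation
                child[rng.randrange(4)] = rng.choice(PITCHES)
            children.append(child)
        pop = parents + children
    return max(pop, key=human_fitness)

print(evolve(seed=7))
```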
Biological evolution is not only open ended but unpredictable. And most creative ideas are unpredictable, too. There are various reasons why that is so (Boden 2004, chap. 9; see also Boden 2006, chap. 17). But that does not mean that AI creativity researchers were wasting their time. One may be able to explain something—to show how it is possible—without also being able to predict it (7.iii.d). Much as a theoretical psychology could never predict every passing fancy or every suicidal thought within Joe Blow’s mind, so it could never predict every creative idea. It might be able to say a great deal, however, about the general types of ideas that were likely or unlikely and why. That, you will remember, had been Cohen’s aim when he embarked on AARON in the first place; and it was Simon’s aim, too.
By the turn of the millennium, then, AI research on creativity had at last become respectable. Several books on the topic had appeared, written or edited by long-standing members of the community (Michie and Johnston 1984; Boden 2004; Shrager and Langley 1990; Partridge and Rowe 1994; Hofstadter 1995). The interest was spreading way beyond a few enthusiasts. The Stanford Humanities Review published two book-length special issues on AI and creativity, especially in relation to literature (Guzeldere and Franchi 1994; Franchi and Guzeldere 1995).
The respectability suddenly snowballed into a range of professional meetings. IJCAI-97 (International Joint Conference on Artificial Intelligence) commissioned a keynote presentation on the topic (Boden 1998). And a flurry of creativity conferences and workshops were mounted by AI and artificial-life researchers. These included the continuing series Discovery Science, which complemented the Creativity and Cognition meetings on computer art that had been organized in the UK for many years past by Edmonds (13.vi.c).
If the ineffable had not yet been fully explained, that it was truly ineffable was now highly doubtful.
______________
This chapter sketches the history of AI work on creativity. The many cross-references refer not to other parts of this volume but to Margaret Boden’s book Mind as Machine: A History of Cognitive Science (2006), from which it is a lightly edited extract (13.iv). The cross-references have been left intact here because they indicate closely relevant areas of AI and computational psychology that may be of interest to readers of this volume. (The next part of the book—13.v–vi—outlines the history of personal computing and virtual reality, including some brief comments on the rise of computer art in 13.vi.c.) The cross-references to Boden 2006 are structured as chapter.section.subsection; so 9.xi.d, for example, denotes chapter 9, section xi, subsection d.
References
Adams, H. 1918. “The Dynamo and the Virgin.” In The Education of Henry Adams, 379–390. Boston: Houghton and Mifflin.
Amarel, S. 1968. “On Representations of Problems of Reasoning about Actions.” In Machine Intelligence 3, edited by D. M. Michie, 131–172. Edinburgh: Edinburgh University Press.
Binsted, K. 1996. “Machine Humour: An Implemented Model of Puns.” PhD diss., University of Edinburgh.
Binsted, K., H. Pain, and G. D. Ritchie. 1997. “Children’s Evaluation of Computer-Generated Punning Riddles.” Pragmatics and Cognition 5:305–354.
Binsted, K., and G. D. Ritchie. 1997. “Computational Rules for Punning Riddles.” Humor: International Journal of Humor Research 10:25–76.
Boden, M. A. 1977. Artificial Intelligence and Natural Man. New York: Basic Books.
Boden, M. A. 1997. “Commentary on Simon’s Paper on ‘Machine Discovery.’” In Machine Discovery, edited by J. Zytkow, 201–203. London: Kluwer Academic.
Boden, M. A. 1998. “Creativity and Artificial Intelligence.” Artificial Intelligence 103:347–356.
Boden, M. A. 2004. The Creative Mind: Myths and Mechanisms. 2nd ed. London: Routledge.
Boden, M. A. 2006. Mind as Machine: A History of Cognitive Science. 2 vols. Oxford: Clarendon/Oxford University Press.
Bringsjord, S., and D. A. Ferrucci. 2000. Artificial Intelligence and Literary Creativity: Inside the Mind of BRUTUS, a Storytelling Machine. Mahwah, NJ: Lawrence Erlbaum.
Cariani, P. 1997. “Emergence of New Signal-Primitives in Neural Systems.” Intellectica 2:95–143.
Charniak, E. 1972. Toward a Model of Children’s Story Comprehension. AI-TR-266. Cambridge, MA: MIT AI Lab.
Charniak, E. 1973. “Jack and Janet in Search of a Theory of Knowledge.” Proceedings of the Third International Joint Conference on Artificial Intelligence. Los Angeles, 337–343.
Charniak, E. 1974. “He Will Make You Take It Back”: A Study in the Pragmatics of Language. Castagnola, Switzerland: Istituto per gli Studi Semantici e Cognitivi.
Cheng, P., and H. A. Simon. 1995. “Scientific Discovery and Creative Reasoning with Diagrams.” In The Creative Cognition Approach, edited by S. M. Smith, T. B. Ward, and R. A. Finke, 205–228. Cambridge, MA: MIT Press.
Cohen, H. 1979. Harold Cohen: Drawing. San Francisco: San Francisco Museum of Modern Art. Exhibition catalog.
Cohen, H. 1981. On the Modelling of Creative Behavior. RAND Paper P-6681. Santa Monica, CA: RAND.
Cohen, H. 1995. “The Further Exploits of AARON Painter.” In “Constructions of the Mind: Artificial Intelligence and the Humanities,” edited by S. Franchi and G. Guzeldere. Special issue, Stanford Humanities Review 4 (2): 141–160.
Cohen, H. 2002. “A Million Millennial Medicis.” In Explorations in Art and Technology, edited by L. Candy and E. A. Edmonds, 91–104. New York: Springer.
Cohen, H. 2012. “Evaluation of Creative Aesthetics.” In Computers and Creativity, edited by J. McCormack and M. d’Inverno, 95–111. New York: Springer.
Collins, H. M. 1989. “Computers and the Sociology of Scientific Knowledge.” Social Studies of Science 19:613–624.
Colton, S., A. M. Bundy, and T. Walsh. 2000. “On the Notion of Interestingness in Automated Mathematical Discovery.” International Journal of Human-Computer Studies 53:351–375.
Cope, D. 1991. Computers and Musical Style. Oxford: Oxford University Press.
Cope, D. 2000. The Algorithmic Composer. Madison, WI: A-R Editions.
Cope, D. 2001. Virtual Music: Computer Synthesis of Musical Style. Cambridge, MA: MIT Press.
Cope, D. 2006. Computer Models of Musical Creativity. Cambridge, MA: MIT Press.
Copley, P., and A. Gartland-Jones. 2005. “Musical Form and Algorithmic Solutions.” In Creativity and Cognition 2005: Proceedings of the Fifth Conference on Creativity and Cognition, edited by L. Candy, 226–231. New York: ACM Press.
Davey, A. C. 1978. Discourse Production: A Computer Model of Some Aspects of a Speaker. Edinburgh: Edinburgh University Press.
Dreyfus, H. L. 1965. Alchemy and Artificial Intelligence. Research Report P-3244. Santa Monica, CA: RAND.
Eisenstadt, M. 1976. “Processing Newspaper Stories: Some Thoughts on Fighting and Stylistics.” Proceedings of the AISB Summer Conference. Edinburgh, July, 104–117.
Evans, T. G. 1968. “A Program for the Solution of Geometric-Analogy Intelligence Test Questions.” In Semantic Information Processing, edited by M. L. Minsky, 271–353. Cambridge, MA: MIT Press.
Fauconnier, G. R., and M. Turner. 2002. The Way We Think: Conceptual Blending and the Mind’s Hidden Complexities. New York: Basic Books.
Fodor, J. A. 1983. The Modularity of Mind: An Essay in Faculty Psychology. Cambridge, MA: MIT Press.
Forbus, K. D., D. Gentner, A. B. Markman, and R. W. Ferguson. 1998. “Analogy Just Looks like High Level Perception: Why a Domain-General Approach to Analogical Mapping Is Right.” Journal of Experimental and Theoretical AI 10:231–257.
Franchi, S., and G. Guzeldere, eds. 1995. “Constructions of the Mind: Artificial Intelligence and the Humanities.” Special issue, Stanford Humanities Review 4 (2): 1–345.
Gartland-Jones, A., and P. Copley. 2003. “The Suitability of Genetic Algorithms for Musical Composition.” Contemporary Music Review 22 (3): 43–55.
Gentner, D. 1983. “Structure-Mapping: A Theoretical Framework for Analogy.” Cognitive Science 7:155–170.
Gentner, D., S. Brem, R. W. Ferguson, A. B. Markman, B. B. Levidow, P. Wolff, and K. D. Forbus. 1997. “Conceptual Change via Analogical Reasoning: A Case Study of Johannes Kepler.” Journal of the Learning Sciences 6:3–40.
Goldberg, D. 1987. “Computer-Aided Pipeline Operation Using Genetic Algorithms and Rule Learning. Part I: Genetic Algorithms in Pipeline Optimization.” Engineering with Computers 3:35–45.
Guzeldere, G., and S. Franchi, eds. 1994. “Bridging the Gap: Where Cognitive Science Meets Literary Criticism (Herbert Simon and Respondents).” Special issue, Stanford Humanities Review 4 (1): 1–164.
Haugeland, J. 1978. “The Nature and Plausibility of Cognitivism.” Behavioral and Brain Sciences 1:215–226.
Hiller, L. A., and L. M. Isaacson. 1959. Experimental Music: Composition with an Electronic Computer. New York: McGraw-Hill.
Hodgson, P. W. 2002. “Artificial Evolution, Music and Methodology.” Proceedings of the Seventh International Conference on Music Perception and Cognition (Sydney), 244–247. Adelaide, Australia: Causal Productions.
Hodgson, P. W. 2005. “Modeling Cognition in Creative Musical Improvisation.” PhD diss., University of Sussex.
Hofstadter, D. R. 1985a. “Metafont, Metamathematics, and Metaphysics.” In Metamagical Themas: Questing for the Essence of Mind and Pattern, edited by D. R. Hofstadter, 260–296. New York: Viking.
Hofstadter, D. R. 1985b. Metamagical Themas: Questing for the Essence of Mind and Pattern. New York: Viking.
Hofstadter, D. R. 2001a. “A Few Standard Questions and Answers.” In Virtual Music: Computer Synthesis of Musical Style, edited by D. Cope, 293–305. Cambridge, MA: MIT Press.
Hofstadter, D. R. 2001b. “Staring Emmy Straight in the Eye—and Doing My Best Not to Flinch.” In Virtual Music: Computer Synthesis of Musical Style, edited by D. Cope, 33–82. Cambridge, MA: MIT Press.
Hofstadter, D. R. 2002. “How Could a COPYCAT Ever Be Creative?” In Creativity, Cognition, and Knowledge: An Interaction, edited by T. Dartnall, 405–424. London: Praeger.
Hofstadter, D. R., and FARG (The Fluid Analogies Research Group). 1995. Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought. New York: Basic Books.
Hofstadter, D. R., and G. McGraw. 1995. “Letter Spirit: Esthetic Perception and Creative Play in the Rich Microcosm of the Roman Alphabet.” In Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought, edited by D. R. Hofstadter and FARG, 407–466. New York: Basic Books.
Hofstadter, D. R., and M. Mitchell. 1993. “The Copycat Project: A Model of Mental Fluidity and Analogy-Making.” In Advances in Connectionist and Neural Computation Theory. Vol. 2, Analogical Connections, edited by K. Holyoak and J. Barnden, 31–112. Norwood, NJ: Ablex.
Holyoak, K. J., and P. Thagard. 1989. “Analogical Mapping by Constraint Satisfaction.” Cognitive Science 13:295–356.
Ijspeert, A. J., J. Hallam, and D. Willshaw. 1997. “Artificial Lampreys: Comparing Naturally and Artificially Evolved Swimming Controllers.” In Fourth European Conference on Artificial Life, edited by P. Husbands and I. Harvey, 256–265. Cambridge, MA: MIT Press.
Johnson-Laird, P. N. 1993. “Jazz Improvisation: A Theory at the Computational Level.” In Representing Musical Structure, edited by P. Howell, R. West, and I. J. Cross, 291–326. London: Academic Press.
Klein, S., J. F. Aeschlimann, D. F. Balsiger, S. L. Converse, C. Court, M. Foster, R. Lao, J. D. Oakley, and J. Smith. 1973. “Automatic Novel Writing: A Status Report.” Technical Report 186. Madison: University of Wisconsin Computer Science Department.
Kulkarni, D., and H. A. Simon. 1988. “The Processes of Scientific Discovery: The Strategy of Experimentation.” Cognitive Science 12:139–175.
Langley, P. W. 1978. “BACON.1: A General Discovery System.” Proceedings of the Second National Conference of the Canadian Society for Computational Studies of Intelligence. Toronto, 173–180.
Langley, P. W. 1979. “Descriptive Discovery Processes: Experiments in Baconian Science.” PhD diss., Carnegie Mellon University.
Langley, P. W. 1981. “Data-Driven Discovery of Physical Laws.” Cognitive Science 5:31–54.
Langley, P. W. 1998. “The Computer-Aided Discovery of Scientific Knowledge.” In Discovery Science, Proceedings of the First International Conference on Discovery Science, edited by S. Arikawa and H. Motoda, 25–39. Berlin: Springer.
Langley, P. W., G. L. Bradshaw, and H. A. Simon. 1981. “BACON.5: The Discovery of Conservation Laws.” Proceedings of the Seventh International Joint Conference on Artificial Intelligence. Vancouver, 121–126.
Langley, P. W., H. A. Simon, G. L. Bradshaw, and J. M. Zytkow. 1987. Scientific Discovery: Computational Explorations of the Creative Process. Cambridge, MA: MIT Press.
Lenat, D. B. 1977. “The Ubiquity of Discovery.” Artificial Intelligence 9:257–286.
Lenat, D. B. 1983. “The Role of Heuristics in Learning by Discovery: Three Case Studies.” In Machine Learning: An Artificial Intelligence Approach, edited by R. S. Michalski, J. G. Carbonell, and T. M. Mitchell, 243–306. Palo Alto, CA: Tioga.
Lenat, D. B., and J. Seely Brown. 1984. “Why AM and EURISKO Appear to Work.” Artificial Intelligence 23:269–294.
Mackay, D. M. 1951. “Mindlike Behaviour in Artefacts.” British Journal for the Philosophy of Science 2:105–121.
Mackenzie, D. 1995. “The Automation of Proof: A Historical and Sociological Exploration.” Annals of the History of Computing 17 (3): 7–29.
Masterman, M. M. 1971. “Computerized Haiku.” In Cybernetics, Art, and Ideas, edited by J. Reichardt, 175–183. London: Studio Vista.
Masterman, M. M., and R. McKinnon Wood. 1968. “Computerized Japanese Haiku.” In Cybernetic Serendipity, edited by J. Reichardt, 54–55. London: Studio International.
McCarthy, J., M. L. Minsky, N. Rochester, and C. E. Shannon. 2000. “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.” In Artificial Intelligence: Critical Concepts. Vol. 2, edited by R. A. Chrisley, 44–53. London: Routledge.
McCorduck, P. 1979. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. San Francisco: W. H. Freeman.
McDermott, D. V. 1981. “Artificial Intelligence Meets Natural Stupidity.” In Mind Design: Philosophy, Psychology, Artificial Intelligence, edited by J. Haugeland, 143–160. Cambridge, MA: MIT Press.
McGraw, G. 1995. “Letter Spirit (Part One): Emergent High-Level Perception of Letters Using Fluid Concepts.” PhD diss., Indiana University, Bloomington.
Meehan, J. 1975. “Using Planning Structures to Generate Stories.” American Journal of Computational Linguistics 33:77–93.
Meehan, J. 1981. “ ‘TALE-SPIN’ and ‘Micro TALE-SPIN.’ ” In Inside Computer Understanding: Five Programs plus Miniatures, edited by R. C. Schank and C. K. Riesbeck, 197–258. Hillsdale, NJ: Lawrence Erlbaum.
Michalski, R. S., ed. 1983. Proceedings of the International Machine Learning Workshop. June, Monticello, IL.
Michie, D. M., and R. Johnston. 1984. The Creative Computer: Machine Intelligence and Human Knowledge. London: Viking-Penguin.
Minsky, M. L., ed. 1968. Semantic Information Processing. Cambridge, MA: MIT Press.
Mitchell, M. 1993. Analogy-Making as Perception. Cambridge, MA: MIT Press.
Moynihan, T. 2003. “The Sweet Sounds of MIDI.” May 21. Available at http://
Narmour, E. 1989. The Analysis and Cognition of Basic Melodic Structures: The Implication-Realization Model. Chicago: University of Chicago Press.
Narmour, E. 1992. The Analysis and Cognition of Melodic Complexity: The Implication-Realization Model. Chicago: University of Chicago Press.
Newell, A., J. C. Shaw, and H. A. Simon. 1962. “The Processes of Creative Thinking.” In Contemporary Approaches to Creative Thinking, edited by H. E. Gruber, G. Terrell, and M. Wertheimer, 63–119. New York: Atherton Press.
O’Hear, A. 1995. “Art and Technology: An Old Tension.” In Philosophy and Technology, edited by R. Fellows, 143–158. Cambridge: Cambridge University Press.
Ortony, A., ed. 1979. Metaphor and Thought. Cambridge: Cambridge University Press.
Partridge, D. A., and J. Rowe. 1994. Computers and Creativity. Oxford: Intellect Books.
Rehling, J. A. 2001. “Letter Spirit (Part Two): Modeling Creativity in a Visual Domain.” PhD diss., Indiana University, Bloomington.
Rehling, J. A. 2002. “Results in the Letter Spirit Project.” In Creativity, Cognition, and Knowledge: An Interaction, edited by T. Dartnall, 273–282. London: Praeger.
Rieger, C. J. 1975a. “Conceptual Overlays: A Mechanism for the Interpretation of Sentence Meaning in Context.” Proceedings of the Fourth International Joint Conference on Artificial Intelligence. Los Angeles, 143–150.
Rieger, C. J. 1975b. “The Commonsense Algorithm as a Basis for Computer Models of Human Memory, Inference, Belief, and Contextual Language Comprehension.” Theoretical Issues in Natural Language Processing (Proceedings of a Workshop of the Association for Computational Linguistics). June, Cambridge, MA, 199–214.
Ritchie, G. D. 2001. “Current Directions in Computational Humour.” Artificial Intelligence Review 16:119–135.
Ritchie, G. D. 2003a. The JAPE Riddle Generator: Technical Specification. Informatics Research Report EDI-INF-RR-0158. Edinburgh: University of Edinburgh, School of Informatics, February.
Ritchie, G. D. 2003b. The Linguistic Analysis of Jokes. London: Routledge.
Ritchie, G. D., and F. K. Hanna. 1984. “AM: A Case Study in AI Methodology.” Artificial Intelligence 23:249–268.
Schank, R. C., and the Yale AI Project. 1975. SAM: A Story Understander. Research Report 43. New Haven, CT: Yale University, Department of Computer Science, August.
Shasha, D., and C. Lazere. 1995. Out of Their Minds: The Lives and Discoveries of 15 Great Computer Scientists. New York: Copernicus.
Shrager, J., and P. Langley, eds. 1990. Computational Models of Discovery and Theory Formation. San Mateo, CA: Morgan Kaufmann.
Simon, H. A. 1994. “Literary Criticism: A Cognitive Approach.” In “Bridging the Gap: Where Cognitive Science Meets Literary Criticism (Herbert Simon and Respondents),” edited by G. Guzeldere and S. Franchi. Special issue, Stanford Humanities Review 4 (1): 1–27.
Simon, H. A. 1995. “Explaining the Ineffable: AI on the Topics of Intuition, Insight, and Inspiration.” Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence. Vol. 1, Montreal, 939–948.
Simon, H. A. 1997a. “Machine Discovery.” In Machine Discovery, edited by J. Zytkow, 171–200. London: Kluwer Academic.
Simon, H. A. 1997b. “Machine Discovery: Reply to Comments.” In Machine Discovery, edited by J. Zytkow, 225–232. London: Kluwer Academic.
Simon, H. A., and A. Newell. 1958. “Heuristic Problem Solving: The Next Advance in Operations Research.” Operations Research 6:1–10.
Sims, K. 1991. “Artificial Evolution for Computer Graphics.” Computer Graphics 25 (4): 319–328.
Thagard, P. 1990. “Concepts and Conceptual Change.” Synthese 82:255–274.
Thagard, P., K. J. Holyoak, G. Nelson, and D. Gochfeld. 1988. “Analog Retrieval by Constraint Satisfaction.” Research paper, Cognitive Science Laboratory, Princeton University, November.
Todd, S. C., and W. Latham. 1992. Evolutionary Art and Computers. London: Academic Press.
Turner, M. 1991. Reading Minds: The Study of Literature in an Age of Cognitive Science. Princeton, NJ: Princeton University Press.
Turner, S. R. 1994. The Creative Process: A Computer Model of Storytelling and Creativity. Hillsdale, NJ: Lawrence Erlbaum.
Wilks, Y. A. 1972. Grammar, Meaning, and the Machine Analysis of Natural Language. London: Routledge and Kegan Paul.